[llvm] Empty test (just a test...) (PR #65439)

Mehdi Amini via llvm-commits llvm-commits at lists.llvm.org
Fri Sep 8 17:33:06 PDT 2023


https://github.com/joker-eph updated https://github.com/llvm/llvm-project/pull/65439:

From 2117b7902c08bb172d7287f1ebeac99562aaa8cb Mon Sep 17 00:00:00 2001
From: Mehdi Amini <joker.eph at gmail.com>
Date: Fri, 8 Sep 2023 12:58:19 -0700
Subject: [PATCH 1/2] Update the developer policy information on "Stay
 Informed" to refer to GitHub teams

---
 llvm/docs/DeveloperPolicy.rst | 34 +++++++++++++++++++++++-----------
 1 file changed, 23 insertions(+), 11 deletions(-)

diff --git a/llvm/docs/DeveloperPolicy.rst b/llvm/docs/DeveloperPolicy.rst
index a453601d7ad93d1..3d25ce9488f126e 100644
--- a/llvm/docs/DeveloperPolicy.rst
+++ b/llvm/docs/DeveloperPolicy.rst
@@ -46,21 +46,33 @@ quality.
 Stay Informed
 -------------
 
-Developers should stay informed by reading the `LLVM Discourse forums`_.
-If you are doing anything more than just casual work on LLVM, it is suggested that you also
-subscribe to the "commits" mailing list for the subproject you're interested in,
+Developers should stay informed by reading the `LLVM Discourse forums`_ and subscribing
+to the categories of interest to receive notifications.
+
+Paying attention to changes being made by others is a good way to see what other people
+are interested in and to watch the flow of the project as a whole.
+
+Contributions to the project are made through :ref:`GitHub Pull Requests <github-reviews>`.
+You can subscribe to notifications for areas of the codebase by joining
+one of the `pr-subscribers-* <https://github.com/orgs/llvm/teams?query=pr-subscribers>`_
+GitHub teams. This `mapping <https://github.com/llvm/llvm-project/blob/main/.github/new-prs-labeler.yml>`_
+indicates which team is associated with particular paths in the repository.
+
+You can also subscribe to the "commits" mailing list for a subproject you're interested in,
 such as `llvm-commits
 <http://lists.llvm.org/mailman/listinfo/llvm-commits>`_, `cfe-commits
 <http://lists.llvm.org/mailman/listinfo/cfe-commits>`_, or `lldb-commits
-<http://lists.llvm.org/mailman/listinfo/lldb-commits>`_.  Reading the
-"commits" list and paying attention to changes being made by others is a good
-way to see what other people are interested in and watching the flow of the
-project as a whole.
-
-We recommend that active developers monitor incoming issues to our `GitHub issue tracker <https://github.com/llvm/llvm-project/issues>`_ and preferably subscribe to the `llvm-bugs
+<http://lists.llvm.org/mailman/listinfo/lldb-commits>`_.
+
+Missing features and bugs are tracked through our `GitHub issue tracker <https://github.com/llvm/llvm-project/issues>`_
+where they are assigned labels. We recommend that active developers monitor incoming issues.
+You can subscribe to notifications for specific components by joining
+one of the `issue-subscribers-* <https://github.com/orgs/llvm/teams?query=issue-subscribers>`_
+teams.
+You may also subscribe to the `llvm-bugs
 <http://lists.llvm.org/mailman/listinfo/llvm-bugs>`_ email list to keep track
-of bugs and enhancements occurring in LLVM.  We really appreciate people who are
-proactive at catching incoming bugs in their components and dealing with them
+of bugs and enhancements occurring in the entire project.  We really appreciate people
+who are proactive at catching incoming bugs in their components and dealing with them
 promptly.
 
 Please be aware that all public LLVM mailing lists and discourse forums are public and archived, and

From 80b50d349fed5c8352b820f04cbfea561810a563 Mon Sep 17 00:00:00 2001
From: Mehdi Amini <joker.eph at gmail.com>
Date: Fri, 8 Sep 2023 17:32:07 -0700
Subject: [PATCH 2/2] random format

---
 llvm/lib/Analysis/AliasAnalysis.cpp           |   20 +-
 llvm/lib/Analysis/AliasAnalysisEvaluator.cpp  |   18 +-
 llvm/lib/Analysis/AliasSetTracker.cpp         |   41 +-
 llvm/lib/Analysis/Analysis.cpp                |    6 +-
 llvm/lib/Analysis/AssumeBundleQueries.cpp     |    8 +-
 llvm/lib/Analysis/BasicAliasAnalysis.cpp      |   92 +-
 llvm/lib/Analysis/BlockFrequencyInfo.cpp      |   65 +-
 llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp  |   32 +-
 llvm/lib/Analysis/BranchProbabilityInfo.cpp   |   73 +-
 llvm/lib/Analysis/CFG.cpp                     |   31 +-
 llvm/lib/Analysis/CFGPrinter.cpp              |    4 +-
 llvm/lib/Analysis/CGSCCPassManager.cpp        |   11 +-
 llvm/lib/Analysis/CMakeLists.txt              |  228 +-
 llvm/lib/Analysis/CallGraph.cpp               |   13 +-
 llvm/lib/Analysis/CallGraphSCCPass.cpp        |  159 +-
 llvm/lib/Analysis/CallPrinter.cpp             |   13 +-
 llvm/lib/Analysis/CaptureTracking.cpp         |  208 +-
 llvm/lib/Analysis/CmpInstAnalysis.cpp         |   77 +-
 llvm/lib/Analysis/ConstantFolding.cpp         |  327 +-
 llvm/lib/Analysis/ConstraintSystem.cpp        |    2 +-
 llvm/lib/Analysis/CostModel.cpp               |   89 +-
 llvm/lib/Analysis/DemandedBits.cpp            |   68 +-
 llvm/lib/Analysis/DependenceAnalysis.cpp      |  517 ++-
 .../Analysis/DevelopmentModeInlineAdvisor.cpp |    2 +-
 llvm/lib/Analysis/DomPrinter.cpp              |    5 +-
 llvm/lib/Analysis/DominanceFrontier.cpp       |   13 +-
 llvm/lib/Analysis/GlobalsModRef.cpp           |   31 +-
 llvm/lib/Analysis/GuardUtils.cpp              |    4 +-
 llvm/lib/Analysis/HeatUtils.cpp               |    5 +-
 llvm/lib/Analysis/IRSimilarityIdentifier.cpp  |   85 +-
 llvm/lib/Analysis/IVDescriptors.cpp           |   51 +-
 llvm/lib/Analysis/IVUsers.cpp                 |   10 +-
 .../Analysis/InlineSizeEstimatorAnalysis.cpp  |    3 +-
 .../InstructionPrecedenceTracking.cpp         |    5 +-
 llvm/lib/Analysis/InstructionSimplify.cpp     |   14 +-
 llvm/lib/Analysis/Interval.cpp                |    2 +-
 llvm/lib/Analysis/IntervalPartition.cpp       |    8 +-
 llvm/lib/Analysis/LazyCallGraph.cpp           |    7 +-
 llvm/lib/Analysis/LazyValueInfo.cpp           |  391 +--
 llvm/lib/Analysis/Lint.cpp                    |    6 +-
 llvm/lib/Analysis/Loads.cpp                   |   50 +-
 llvm/lib/Analysis/LoopAccessAnalysis.cpp      |  163 +-
 llvm/lib/Analysis/LoopAnalysisManager.cpp     |    2 +-
 llvm/lib/Analysis/LoopCacheAnalysis.cpp       |   52 +-
 llvm/lib/Analysis/LoopNestAnalysis.cpp        |    4 +-
 llvm/lib/Analysis/LoopPass.cpp                |   24 +-
 llvm/lib/Analysis/MemoryBuiltins.cpp          |   87 +-
 .../lib/Analysis/MemoryDependenceAnalysis.cpp |   20 +-
 llvm/lib/Analysis/MemoryLocation.cpp          |   19 +-
 llvm/lib/Analysis/MemorySSA.cpp               |   34 +-
 llvm/lib/Analysis/MemorySSAUpdater.cpp        |   10 +-
 llvm/lib/Analysis/ModuleSummaryAnalysis.cpp   |   12 +-
 llvm/lib/Analysis/MustExecute.cpp             |   57 +-
 llvm/lib/Analysis/PHITransAddr.cpp            |   51 +-
 llvm/lib/Analysis/PhiValues.cpp               |    8 +-
 llvm/lib/Analysis/PostDominators.cpp          |    7 +-
 llvm/lib/Analysis/PtrUseVisitor.cpp           |    6 +-
 llvm/lib/Analysis/RegionInfo.cpp              |   66 +-
 llvm/lib/Analysis/RegionPass.cpp              |   31 +-
 llvm/lib/Analysis/RegionPrinter.cpp           |   50 +-
 llvm/lib/Analysis/ScalarEvolution.cpp         | 1292 +++----
 .../Analysis/ScalarEvolutionAliasAnalysis.cpp |    4 +-
 llvm/lib/Analysis/StackLifetime.cpp           |    6 +-
 llvm/lib/Analysis/StackSafetyAnalysis.cpp     |   23 +-
 llvm/lib/Analysis/TFLiteUtils.cpp             |    2 +-
 llvm/lib/Analysis/TargetLibraryInfo.cpp       |   47 +-
 llvm/lib/Analysis/TargetTransformInfo.cpp     |   17 +-
 llvm/lib/Analysis/Trace.cpp                   |    8 +-
 llvm/lib/Analysis/TypeBasedAliasAnalysis.cpp  |   32 +-
 llvm/lib/Analysis/ValueLattice.cpp            |    7 +-
 llvm/lib/Analysis/ValueTracking.cpp           |  636 ++--
 llvm/lib/Analysis/VectorUtils.cpp             |   40 +-
 .../models/gen-inline-oz-test-model.py        |    9 +-
 .../gen-regalloc-eviction-test-model.py       |    4 +-
 .../gen-regalloc-priority-test-model.py       |    6 +-
 llvm/lib/Analysis/models/interactive_host.py  |    2 +-
 .../Analysis/models/saved-model-to-tflite.py  |   76 +-
 llvm/lib/AsmParser/CMakeLists.txt             |   21 +-
 llvm/lib/AsmParser/LLLexer.cpp                |  398 ++-
 llvm/lib/AsmParser/LLParser.cpp               |  663 ++--
 .../BinaryFormat/AMDGPUMetadataVerifier.cpp   |   86 +-
 llvm/lib/BinaryFormat/CMakeLists.txt          |   30 +-
 llvm/lib/BinaryFormat/Dwarf.cpp               |   12 +-
 llvm/lib/BinaryFormat/MsgPackDocumentYAML.cpp |    1 -
 llvm/lib/Bitcode/CMakeLists.txt               |    3 +-
 llvm/lib/Bitcode/Reader/BitReader.cpp         |   10 +-
 llvm/lib/Bitcode/Reader/BitcodeAnalyzer.cpp   |    1 -
 llvm/lib/Bitcode/Reader/BitcodeReader.cpp     |  914 ++---
 llvm/lib/Bitcode/Reader/CMakeLists.txt        |   24 +-
 llvm/lib/Bitcode/Reader/MetadataLoader.h      |    2 +-
 llvm/lib/Bitcode/Reader/ValueList.h           |   12 +-
 llvm/lib/Bitcode/Writer/BitWriter.cpp         |    1 -
 llvm/lib/Bitcode/Writer/BitcodeWriter.cpp     |  534 +--
 llvm/lib/Bitcode/Writer/BitcodeWriterPass.cpp |   75 +-
 llvm/lib/Bitcode/Writer/CMakeLists.txt        |   20 +-
 llvm/lib/Bitcode/Writer/ValueEnumerator.cpp   |   52 +-
 llvm/lib/Bitcode/Writer/ValueEnumerator.h     |   15 +-
 llvm/lib/Bitstream/CMakeLists.txt             |    2 +-
 llvm/lib/Bitstream/Reader/BitstreamReader.cpp |   19 +-
 llvm/lib/Bitstream/Reader/CMakeLists.txt      |   13 +-
 llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp |  137 +-
 llvm/lib/CodeGen/AggressiveAntiDepBreaker.h   |  281 +-
 llvm/lib/CodeGen/Analysis.cpp                 |  140 +-
 llvm/lib/CodeGen/AsmPrinter/ARMException.cpp  |   17 +-
 llvm/lib/CodeGen/AsmPrinter/AddressPool.h     |    4 +-
 llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp    |  164 +-
 .../CodeGen/AsmPrinter/AsmPrinterDwarf.cpp    |   20 +-
 .../AsmPrinter/AsmPrinterInlineAsm.cpp        |   64 +-
 llvm/lib/CodeGen/AsmPrinter/ByteStreamer.h    |   20 +-
 llvm/lib/CodeGen/AsmPrinter/CMakeLists.txt    |   58 +-
 llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp |  232 +-
 llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.h   |   20 +-
 llvm/lib/CodeGen/AsmPrinter/DIE.cpp           |  103 +-
 llvm/lib/CodeGen/AsmPrinter/DIEHash.cpp       |    7 +-
 llvm/lib/CodeGen/AsmPrinter/DIEHash.h         |    2 +-
 .../AsmPrinter/DbgEntityHistoryCalculator.cpp |    4 +-
 llvm/lib/CodeGen/AsmPrinter/DebugLocEntry.h   |   17 +-
 llvm/lib/CodeGen/AsmPrinter/DebugLocStream.h  |   14 +-
 .../CodeGen/AsmPrinter/DwarfCFIException.cpp  |    4 +-
 .../CodeGen/AsmPrinter/DwarfCompileUnit.cpp   |   45 +-
 .../lib/CodeGen/AsmPrinter/DwarfCompileUnit.h |   25 +-
 llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp    |  156 +-
 llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h      |   37 +-
 .../CodeGen/AsmPrinter/DwarfExpression.cpp    |    4 +-
 llvm/lib/CodeGen/AsmPrinter/DwarfFile.h       |    4 +-
 llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp     |   46 +-
 llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h       |   15 +-
 llvm/lib/CodeGen/AsmPrinter/EHStreamer.cpp    |   43 +-
 llvm/lib/CodeGen/AsmPrinter/WinException.cpp  |   32 +-
 llvm/lib/CodeGen/AsmPrinter/WinException.h    |    4 +-
 llvm/lib/CodeGen/AtomicExpandPass.cpp         |    5 +-
 llvm/lib/CodeGen/BasicBlockSections.cpp       |    3 +-
 llvm/lib/CodeGen/BasicTargetTransformInfo.cpp |    6 +-
 llvm/lib/CodeGen/BranchFolding.cpp            |  243 +-
 llvm/lib/CodeGen/BranchFolding.h              |  312 +-
 llvm/lib/CodeGen/BranchRelaxation.cpp         |   24 +-
 llvm/lib/CodeGen/BreakFalseDeps.cpp           |   18 +-
 llvm/lib/CodeGen/CFIFixup.cpp                 |    3 +-
 llvm/lib/CodeGen/CFIInstrInserter.cpp         |   17 +-
 llvm/lib/CodeGen/CMakeLists.txt               |  433 +--
 llvm/lib/CodeGen/CalcSpillWeights.cpp         |   10 +-
 llvm/lib/CodeGen/CallingConvLower.cpp         |   23 +-
 llvm/lib/CodeGen/CodeGenCommonISel.cpp        |   13 +-
 llvm/lib/CodeGen/CodeGenPrepare.cpp           |   52 +-
 llvm/lib/CodeGen/CommandFlags.cpp             |   24 +-
 llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp   |  121 +-
 llvm/lib/CodeGen/CriticalAntiDepBreaker.h     |  141 +-
 llvm/lib/CodeGen/DFAPacketizer.cpp            |   13 +-
 .../CodeGen/DeadMachineInstructionElim.cpp    |   40 +-
 llvm/lib/CodeGen/DetectDeadLanes.cpp          |    9 +-
 llvm/lib/CodeGen/EarlyIfConversion.cpp        |   58 +-
 llvm/lib/CodeGen/EdgeBundles.cpp              |   15 +-
 llvm/lib/CodeGen/ExpandLargeFpConvert.cpp     |   38 +-
 llvm/lib/CodeGen/ExpandMemCmp.cpp             |   35 +-
 llvm/lib/CodeGen/ExpandPostRAPseudos.cpp      |   10 +-
 llvm/lib/CodeGen/ExpandReductions.cpp         |   14 +-
 llvm/lib/CodeGen/ExpandVectorPredication.cpp  |    2 +-
 llvm/lib/CodeGen/FEntryInserter.cpp           |    2 +-
 llvm/lib/CodeGen/FinalizeISel.cpp             |   22 +-
 llvm/lib/CodeGen/FuncletLayout.cpp            |    6 +-
 llvm/lib/CodeGen/GCMetadata.cpp               |    3 +-
 llvm/lib/CodeGen/GCRootLowering.cpp           |    9 +-
 llvm/lib/CodeGen/GlobalISel/CMakeLists.txt    |   56 +-
 llvm/lib/CodeGen/GlobalISel/CallLowering.cpp  |   10 +-
 .../lib/CodeGen/GlobalISel/CombinerHelper.cpp |   83 +-
 .../GlobalISel/GISelChangeObserver.cpp        |    4 +-
 .../lib/CodeGen/GlobalISel/GISelKnownBits.cpp |   32 +-
 llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp  |  343 +-
 .../GlobalISel/LegacyLegalizerInfo.cpp        |   14 +-
 .../CodeGen/GlobalISel/LegalityPredicates.cpp |    2 +-
 llvm/lib/CodeGen/GlobalISel/Legalizer.cpp     |    5 +-
 .../CodeGen/GlobalISel/LegalizerHelper.cpp    |  259 +-
 llvm/lib/CodeGen/GlobalISel/LegalizerInfo.cpp |   15 +-
 llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp  |   13 +-
 .../CodeGen/GlobalISel/MachineIRBuilder.cpp   |   74 +-
 llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp |   17 +-
 llvm/lib/CodeGen/GlobalISel/Utils.cpp         |   18 +-
 llvm/lib/CodeGen/GlobalMerge.cpp              |  193 +-
 llvm/lib/CodeGen/HardwareLoops.cpp            |  312 +-
 llvm/lib/CodeGen/IfConversion.cpp             |  901 +++--
 llvm/lib/CodeGen/ImplicitNullChecks.cpp       |   26 +-
 llvm/lib/CodeGen/InlineSpiller.cpp            |   66 +-
 llvm/lib/CodeGen/InterferenceCache.cpp        |   19 +-
 llvm/lib/CodeGen/InterferenceCache.h          |   22 +-
 llvm/lib/CodeGen/InterleavedAccessPass.cpp    |   16 +-
 .../CodeGen/InterleavedLoadCombinePass.cpp    |   18 +-
 llvm/lib/CodeGen/IntrinsicLowering.cpp        |  177 +-
 llvm/lib/CodeGen/JMCInstrumenter.cpp          |    3 +-
 llvm/lib/CodeGen/LLVMTargetMachine.cpp        |    4 +-
 llvm/lib/CodeGen/LatencyPriorityQueue.cpp     |   26 +-
 .../CodeGen/LazyMachineBlockFrequencyInfo.cpp |    4 +-
 llvm/lib/CodeGen/LexicalScopes.cpp            |   16 +-
 .../LiveDebugValues/InstrRefBasedImpl.cpp     |   30 +-
 .../LiveDebugValues/InstrRefBasedImpl.h       |   20 +-
 .../LiveDebugValues/VarLocBasedImpl.cpp       |   70 +-
 llvm/lib/CodeGen/LiveDebugVariables.cpp       |   84 +-
 llvm/lib/CodeGen/LiveInterval.cpp             |  106 +-
 llvm/lib/CodeGen/LiveIntervalUnion.cpp        |   16 +-
 llvm/lib/CodeGen/LiveIntervals.cpp            |  162 +-
 llvm/lib/CodeGen/LivePhysRegs.cpp             |   15 +-
 llvm/lib/CodeGen/LiveRangeCalc.cpp            |   79 +-
 llvm/lib/CodeGen/LiveRangeEdit.cpp            |   19 +-
 llvm/lib/CodeGen/LiveRangeUtils.h             |   13 +-
 llvm/lib/CodeGen/LiveRegMatrix.cpp            |   22 +-
 llvm/lib/CodeGen/LiveStacks.cpp               |   14 +-
 llvm/lib/CodeGen/LiveVariables.cpp            |   71 +-
 llvm/lib/CodeGen/LocalStackSlotAllocation.cpp |  119 +-
 llvm/lib/CodeGen/LowerEmuTLS.cpp              |   20 +-
 llvm/lib/CodeGen/MBFIWrapper.cpp              |   17 +-
 llvm/lib/CodeGen/MIRCanonicalizerPass.cpp     |    4 +-
 llvm/lib/CodeGen/MIRFSDiscriminator.cpp       |    2 +-
 llvm/lib/CodeGen/MIRParser/CMakeLists.txt     |   26 +-
 llvm/lib/CodeGen/MIRParser/MILexer.cpp        |   11 +-
 llvm/lib/CodeGen/MIRParser/MIParser.cpp       |   73 +-
 llvm/lib/CodeGen/MIRParser/MIRParser.cpp      |   47 +-
 llvm/lib/CodeGen/MIRPrinter.cpp               |   47 +-
 llvm/lib/CodeGen/MIRVRegNamerUtils.h          |    2 +-
 llvm/lib/CodeGen/MLRegAllocEvictAdvisor.cpp   |    2 +-
 .../lib/CodeGen/MLRegAllocPriorityAdvisor.cpp |    3 +-
 llvm/lib/CodeGen/MachineBasicBlock.cpp        |   92 +-
 .../lib/CodeGen/MachineBlockFrequencyInfo.cpp |   22 +-
 llvm/lib/CodeGen/MachineBlockPlacement.cpp    |  496 ++-
 llvm/lib/CodeGen/MachineCSE.cpp               |  203 +-
 llvm/lib/CodeGen/MachineCombiner.cpp          |   66 +-
 llvm/lib/CodeGen/MachineCopyPropagation.cpp   |   35 +-
 llvm/lib/CodeGen/MachineDominanceFrontier.cpp |   11 +-
 llvm/lib/CodeGen/MachineDominators.cpp        |    7 +-
 llvm/lib/CodeGen/MachineFrameInfo.cpp         |   15 +-
 llvm/lib/CodeGen/MachineFunction.cpp          |  176 +-
 .../CodeGen/MachineFunctionPrinterPass.cpp    |   10 +-
 llvm/lib/CodeGen/MachineInstr.cpp             |   97 +-
 llvm/lib/CodeGen/MachineInstrBundle.cpp       |   56 +-
 llvm/lib/CodeGen/MachineLICM.cpp              |  420 ++-
 llvm/lib/CodeGen/MachineLateInstrsCleanup.cpp |   11 +-
 llvm/lib/CodeGen/MachineLoopInfo.cpp          |   14 +-
 llvm/lib/CodeGen/MachineLoopUtils.cpp         |    3 +-
 llvm/lib/CodeGen/MachineModuleInfo.cpp        |   12 +-
 llvm/lib/CodeGen/MachineOperand.cpp           |   15 +-
 .../MachineOptimizationRemarkEmitter.cpp      |    4 +-
 llvm/lib/CodeGen/MachineOutliner.cpp          |   13 +-
 llvm/lib/CodeGen/MachinePipeliner.cpp         |   40 +-
 llvm/lib/CodeGen/MachinePostDominators.cpp    |    2 +-
 llvm/lib/CodeGen/MachineRegionInfo.cpp        |   18 +-
 llvm/lib/CodeGen/MachineRegisterInfo.cpp      |   60 +-
 llvm/lib/CodeGen/MachineSSAUpdater.cpp        |   90 +-
 llvm/lib/CodeGen/MachineScheduler.cpp         |  569 +--
 llvm/lib/CodeGen/MachineSink.cpp              |  364 +-
 llvm/lib/CodeGen/MachineSizeOpts.cpp          |    5 +-
 llvm/lib/CodeGen/MachineTraceMetrics.cpp      |  160 +-
 llvm/lib/CodeGen/MachineVerifier.cpp          |  665 ++--
 llvm/lib/CodeGen/MacroFusion.cpp              |   39 +-
 llvm/lib/CodeGen/ModuloSchedule.cpp           |    7 +-
 llvm/lib/CodeGen/OptimizePHIs.cpp             |   50 +-
 llvm/lib/CodeGen/PHIElimination.cpp           |  164 +-
 llvm/lib/CodeGen/PHIEliminationUtils.cpp      |    4 +-
 llvm/lib/CodeGen/PHIEliminationUtils.h        |   16 +-
 llvm/lib/CodeGen/PatchableFunction.cpp        |    4 +-
 llvm/lib/CodeGen/PeepholeOptimizer.cpp        |  586 ++-
 llvm/lib/CodeGen/PostRAHazardRecognizer.cpp   |   29 +-
 llvm/lib/CodeGen/PostRASchedulerList.cpp      |  315 +-
 llvm/lib/CodeGen/ProcessImplicitDefs.cpp      |   13 +-
 llvm/lib/CodeGen/PrologEpilogInserter.cpp     |   35 +-
 llvm/lib/CodeGen/PseudoSourceValue.cpp        |   13 +-
 llvm/lib/CodeGen/RDFGraph.cpp                 |    2 +-
 llvm/lib/CodeGen/RDFLiveness.cpp              |    2 +-
 llvm/lib/CodeGen/RDFRegisters.cpp             |    2 +-
 llvm/lib/CodeGen/README.txt                   |  264 +-
 llvm/lib/CodeGen/ReachingDefAnalysis.cpp      |   47 +-
 llvm/lib/CodeGen/RegAllocBase.h               |    6 +-
 llvm/lib/CodeGen/RegAllocBasic.cpp            |   34 +-
 llvm/lib/CodeGen/RegAllocFast.cpp             |  488 ++-
 llvm/lib/CodeGen/RegAllocGreedy.cpp           |   98 +-
 llvm/lib/CodeGen/RegAllocPBQP.cpp             |   90 +-
 llvm/lib/CodeGen/RegUsageInfoCollector.cpp    |   18 +-
 llvm/lib/CodeGen/RegUsageInfoPropagate.cpp    |   14 +-
 llvm/lib/CodeGen/RegisterBankInfo.cpp         |    8 +-
 llvm/lib/CodeGen/RegisterClassInfo.cpp        |    4 +-
 llvm/lib/CodeGen/RegisterCoalescer.cpp        |  846 ++---
 llvm/lib/CodeGen/RegisterCoalescer.h          |  128 +-
 llvm/lib/CodeGen/RegisterPressure.cpp         |  180 +-
 llvm/lib/CodeGen/RegisterScavenging.cpp       |   22 +-
 llvm/lib/CodeGen/RegisterUsageInfo.cpp        |    6 +-
 llvm/lib/CodeGen/RenameIndependentSubregs.cpp |   57 +-
 llvm/lib/CodeGen/ResetMachineFunctionPass.cpp |   87 +-
 llvm/lib/CodeGen/SafeStack.cpp                |   16 +-
 llvm/lib/CodeGen/ScheduleDAG.cpp              |  105 +-
 llvm/lib/CodeGen/ScheduleDAGInstrs.cpp        |  129 +-
 llvm/lib/CodeGen/ScheduleDAGPrinter.cpp       |   86 +-
 .../CodeGen/ScoreboardHazardRecognizer.cpp    |   25 +-
 llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt  |   52 +-
 llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp | 3127 +++++++++--------
 llvm/lib/CodeGen/SelectionDAG/FastISel.cpp    |  204 +-
 .../SelectionDAG/FunctionLoweringInfo.cpp     |   20 +-
 .../lib/CodeGen/SelectionDAG/InstrEmitter.cpp |  175 +-
 llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h  |   33 +-
 llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp |  807 +++--
 .../SelectionDAG/LegalizeFloatTypes.cpp       | 1896 +++++-----
 .../SelectionDAG/LegalizeIntegerTypes.cpp     | 1578 +++++----
 .../CodeGen/SelectionDAG/LegalizeTypes.cpp    |  349 +-
 llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h |  292 +-
 .../SelectionDAG/LegalizeTypesGeneric.cpp     |  119 +-
 .../SelectionDAG/LegalizeVectorOps.cpp        |   62 +-
 .../SelectionDAG/LegalizeVectorTypes.cpp      |  883 +++--
 .../SelectionDAG/ResourcePriorityQueue.cpp    |  158 +-
 .../lib/CodeGen/SelectionDAG/SDNodeDbgValue.h |    3 +-
 .../CodeGen/SelectionDAG/ScheduleDAGFast.cpp  |  144 +-
 .../SelectionDAG/ScheduleDAGRRList.cpp        |  444 ++-
 .../SelectionDAG/ScheduleDAGSDNodes.cpp       |  113 +-
 .../CodeGen/SelectionDAG/ScheduleDAGSDNodes.h |  273 +-
 .../CodeGen/SelectionDAG/ScheduleDAGVLIW.cpp  |   29 +-
 .../lib/CodeGen/SelectionDAG/SelectionDAG.cpp | 1233 ++++---
 .../SelectionDAGAddressAnalysis.cpp           |   14 +-
 .../SelectionDAG/SelectionDAGBuilder.cpp      | 1330 +++----
 .../SelectionDAG/SelectionDAGBuilder.h        |   36 +-
 .../SelectionDAG/SelectionDAGDumper.cpp       | 1260 ++++---
 .../CodeGen/SelectionDAG/SelectionDAGISel.cpp |  652 ++--
 .../SelectionDAG/SelectionDAGPrinter.cpp      |  218 +-
 .../SelectionDAG/StatepointLowering.cpp       |   70 +-
 .../CodeGen/SelectionDAG/TargetLowering.cpp   |  730 ++--
 llvm/lib/CodeGen/ShadowStackGCLowering.cpp    |   16 +-
 llvm/lib/CodeGen/ShrinkWrap.cpp               |   10 +-
 llvm/lib/CodeGen/SjLjEHPrepare.cpp            |   16 +-
 llvm/lib/CodeGen/SlotIndexes.cpp              |   11 +-
 llvm/lib/CodeGen/SpillPlacement.cpp           |   32 +-
 llvm/lib/CodeGen/SplitKit.cpp                 |   86 +-
 llvm/lib/CodeGen/SplitKit.h                   |   24 +-
 llvm/lib/CodeGen/StackColoring.cpp            |  100 +-
 llvm/lib/CodeGen/StackMapLivenessAnalysis.cpp |    9 +-
 llvm/lib/CodeGen/StackMaps.cpp                |   25 +-
 llvm/lib/CodeGen/StackProtector.cpp           |   20 +-
 llvm/lib/CodeGen/StackSlotColoring.cpp        |  233 +-
 llvm/lib/CodeGen/SwiftErrorValueTracking.cpp  |   15 +-
 llvm/lib/CodeGen/SwitchLoweringUtils.cpp      |    8 +-
 llvm/lib/CodeGen/TailDuplication.cpp          |   13 +-
 llvm/lib/CodeGen/TailDuplicator.cpp           |   47 +-
 llvm/lib/CodeGen/TargetFrameLoweringImpl.cpp  |   11 +-
 llvm/lib/CodeGen/TargetInstrInfo.cpp          |  140 +-
 llvm/lib/CodeGen/TargetLoweringBase.cpp       |  325 +-
 .../CodeGen/TargetLoweringObjectFileImpl.cpp  |  274 +-
 llvm/lib/CodeGen/TargetPassConfig.cpp         |  155 +-
 llvm/lib/CodeGen/TargetRegisterInfo.cpp       |   67 +-
 llvm/lib/CodeGen/TargetSchedule.cpp           |   50 +-
 llvm/lib/CodeGen/TargetSubtargetInfo.cpp      |   18 +-
 .../lib/CodeGen/TwoAddressInstructionPass.cpp |  130 +-
 llvm/lib/CodeGen/TypePromotion.cpp            |    6 +-
 llvm/lib/CodeGen/UnreachableBlockElim.cpp     |   39 +-
 llvm/lib/CodeGen/ValueTypes.cpp               |   81 +-
 llvm/lib/CodeGen/VirtRegMap.cpp               |   32 +-
 llvm/lib/CodeGen/WinEHPrepare.cpp             |   36 +-
 llvm/lib/DWARFLinker/CMakeLists.txt           |   29 +-
 llvm/lib/DWARFLinker/DWARFLinker.cpp          |   10 +-
 llvm/lib/DWARFLinkerParallel/CMakeLists.txt   |   36 +-
 llvm/lib/DWP/CMakeLists.txt                   |   20 +-
 llvm/lib/DebugInfo/BTF/CMakeLists.txt         |   15 +-
 llvm/lib/DebugInfo/CMakeLists.txt             |   11 +-
 .../CodeView/AppendingTypeTableBuilder.cpp    |    2 +-
 llvm/lib/DebugInfo/CodeView/CMakeLists.txt    |   67 +-
 llvm/lib/DebugInfo/CodeView/CVTypeVisitor.cpp |    3 +-
 .../CodeView/ContinuationRecordBuilder.cpp    |    4 +-
 .../CodeView/DebugChecksumsSubsection.cpp     |    4 +-
 .../CodeView/DebugCrossImpSubsection.cpp      |    6 +-
 .../CodeView/DebugInlineeLinesSubsection.cpp  |    4 +-
 llvm/lib/DebugInfo/CodeView/Formatters.cpp    |   14 +-
 .../CodeView/LazyRandomTypeCollection.cpp     |    3 +-
 llvm/lib/DebugInfo/CodeView/RecordName.cpp    |    3 +-
 llvm/lib/DebugInfo/CodeView/SymbolDumper.cpp  |    5 +-
 .../CodeView/SymbolRecordMapping.cpp          |   26 +-
 .../DebugInfo/CodeView/TypeDumpVisitor.cpp    |   15 +-
 .../DebugInfo/CodeView/TypeIndexDiscovery.cpp |    2 +-
 .../DebugInfo/CodeView/TypeRecordMapping.cpp  |    3 +-
 .../DebugInfo/CodeView/TypeStreamMerger.cpp   |    4 +-
 llvm/lib/DebugInfo/DWARF/CMakeLists.txt       |   54 +-
 .../DWARF/DWARFAbbreviationDeclaration.cpp    |    4 +-
 .../DebugInfo/DWARF/DWARFAcceleratorTable.cpp |   11 +-
 llvm/lib/DebugInfo/DWARF/DWARFContext.cpp     |  180 +-
 llvm/lib/DebugInfo/DWARF/DWARFDebugAbbrev.cpp |    4 +-
 llvm/lib/DebugInfo/DWARF/DWARFDebugAddr.cpp   |    3 +-
 .../DebugInfo/DWARF/DWARFDebugArangeSet.cpp   |    3 +-
 .../lib/DebugInfo/DWARF/DWARFDebugAranges.cpp |    4 +-
 llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp  |   19 +-
 llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp   |   24 +-
 llvm/lib/DebugInfo/DWARF/DWARFDebugLoc.cpp    |    2 +-
 .../DebugInfo/DWARF/DWARFDebugPubTable.cpp    |    2 +-
 .../DebugInfo/DWARF/DWARFDebugRangeList.cpp   |    7 +-
 llvm/lib/DebugInfo/DWARF/DWARFDie.cpp         |    2 +-
 llvm/lib/DebugInfo/DWARF/DWARFExpression.cpp  |   13 +-
 llvm/lib/DebugInfo/DWARF/DWARFFormValue.cpp   |   19 +-
 llvm/lib/DebugInfo/DWARF/DWARFListTable.cpp   |   34 +-
 llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp        |   59 +-
 llvm/lib/DebugInfo/DWARF/DWARFUnitIndex.cpp   |   48 +-
 llvm/lib/DebugInfo/DWARF/DWARFVerifier.cpp    |   71 +-
 llvm/lib/DebugInfo/GSYM/CMakeLists.txt        |   34 +-
 llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp  |   41 +-
 llvm/lib/DebugInfo/GSYM/FileWriter.cpp        |   11 +-
 llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp      |  128 +-
 llvm/lib/DebugInfo/GSYM/GsymCreator.cpp       |   58 +-
 llvm/lib/DebugInfo/GSYM/GsymReader.cpp        |  153 +-
 llvm/lib/DebugInfo/GSYM/Header.cpp            |   31 +-
 llvm/lib/DebugInfo/GSYM/InlineInfo.cpp        |   26 +-
 llvm/lib/DebugInfo/GSYM/LineTable.cpp         |   88 +-
 .../DebugInfo/GSYM/ObjectFileTransformer.cpp  |    2 +-
 llvm/lib/DebugInfo/LogicalView/CMakeLists.txt |   68 +-
 llvm/lib/DebugInfo/MSF/CMakeLists.txt         |   15 +-
 llvm/lib/DebugInfo/MSF/MappedBlockStream.cpp  |    2 +-
 llvm/lib/DebugInfo/PDB/CMakeLists.txt         |  277 +-
 llvm/lib/DebugInfo/PDB/PDBContext.cpp         |    2 +-
 llvm/lib/DebugInfo/PDB/PDBExtras.cpp          |  124 +-
 llvm/lib/DebugInfo/PDB/PDBSymbolFunc.cpp      |    2 +-
 .../PDB/PDBSymbolTypeFunctionSig.cpp          |    2 +-
 llvm/lib/DebugInfo/PDB/UDTLayout.cpp          |   10 +-
 llvm/lib/DebugInfo/Symbolize/CMakeLists.txt   |   25 +-
 .../Symbolize/SymbolizableObjectFile.cpp      |    7 +-
 llvm/lib/Debuginfod/CMakeLists.txt            |   48 +-
 llvm/lib/Debuginfod/HTTPServer.cpp            |    8 +-
 llvm/lib/Demangle/CMakeLists.txt              |   13 +-
 llvm/lib/Demangle/ItaniumDemangle.cpp         |   76 +-
 llvm/lib/Demangle/MicrosoftDemangle.cpp       |    2 +-
 llvm/lib/ExecutionEngine/CMakeLists.txt       |   76 +-
 llvm/lib/ExecutionEngine/ExecutionEngine.cpp  |  463 +--
 .../ExecutionEngineBindings.cpp               |  107 +-
 .../GDBRegistrationListener.cpp               |   87 +-
 .../IntelJITEvents/IntelJITEventListener.cpp  |   34 +-
 .../IntelJITEvents/IntelJITEventsWrapper.h    |   10 +-
 .../IntelJITEvents/ittnotify_config.h         |  584 +--
 .../IntelJITEvents/ittnotify_types.h          |   80 +-
 .../IntelJITEvents/jitprofiling.c             |  610 ++--
 .../IntelJITEvents/jitprofiling.h             |  270 +-
 .../Interpreter/CMakeLists.txt                |   21 +-
 .../ExecutionEngine/Interpreter/Execution.cpp | 1097 +++---
 .../Interpreter/ExternalFunctions.cpp         |  309 +-
 .../Interpreter/Interpreter.cpp               |   17 +-
 .../ExecutionEngine/Interpreter/Interpreter.h |   44 +-
 .../ExecutionEngine/JITLink/CMakeLists.txt    |   91 +-
 .../ExecutionEngine/JITLink/COFFOptions.td    |    7 +-
 .../JITLink/EHFrameSupport.cpp                |    6 +-
 .../JITLink/EHFrameSupportImpl.h              |    1 -
 .../JITLink/ELFLinkGraphBuilder.h             |    8 +-
 .../ExecutionEngine/JITLink/ELF_aarch32.cpp   |    4 +-
 .../lib/ExecutionEngine/JITLink/ELF_riscv.cpp |    8 +-
 .../ExecutionEngine/JITLink/JITLinkGeneric.h  |    2 +-
 .../JITLink/MachOLinkGraphBuilder.cpp         |   17 +-
 .../JITLink/MachOLinkGraphBuilder.h           |    3 +-
 .../JITLink/PerGraphGOTAndPLTStubsBuilder.h   |    3 +-
 llvm/lib/ExecutionEngine/JITLink/aarch32.cpp  |    4 +-
 llvm/lib/ExecutionEngine/MCJIT/CMakeLists.txt |   16 +-
 llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp      |   90 +-
 llvm/lib/ExecutionEngine/MCJIT/MCJIT.h        |   33 +-
 .../OProfileJIT/CMakeLists.txt                |   14 +-
 .../OProfileJIT/OProfileJITEventListener.cpp  |   12 +-
 .../OProfileJIT/OProfileWrapper.cpp           |   78 +-
 llvm/lib/ExecutionEngine/Orc/CMakeLists.txt   |  113 +-
 llvm/lib/ExecutionEngine/Orc/COFFPlatform.cpp |   18 +-
 .../Orc/CompileOnDemandLayer.cpp              |   92 +-
 llvm/lib/ExecutionEngine/Orc/Core.cpp         |  111 +-
 .../Orc/DebugObjectManagerPlugin.cpp          |   43 +-
 .../ExecutionEngine/Orc/ELFNixPlatform.cpp    |    5 +-
 .../Orc/EPCGenericJITLinkMemoryManager.cpp    |    1 -
 .../ExecutionEngine/Orc/ExecutionUtils.cpp    |   17 +-
 .../Orc/ExecutorProcessControl.cpp            |    3 +-
 .../ExecutionEngine/Orc/IndirectionUtils.cpp  |  210 +-
 .../Orc/JITTargetMachineBuilder.cpp           |    6 +-
 llvm/lib/ExecutionEngine/Orc/LLJIT.cpp        |    9 +-
 .../lib/ExecutionEngine/Orc/MachOPlatform.cpp |    5 +-
 llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp |    3 +-
 .../Orc/ObjectLinkingLayer.cpp                |    2 +-
 .../lib/ExecutionEngine/Orc/OrcABISupport.cpp |  570 +--
 .../ExecutionEngine/Orc/OrcV2CBindings.cpp    |    7 +-
 .../Orc/RTDyldObjectLinkingLayer.cpp          |    3 +-
 .../ExecutionEngine/Orc/SimpleRemoteEPC.cpp   |    3 +-
 .../ExecutionEngine/Orc/ThreadSafeModule.cpp  |    2 +-
 .../PerfJITEvents/CMakeLists.txt              |   15 +-
 .../PerfJITEvents/PerfJITEventListener.cpp    |   22 +-
 .../RuntimeDyld/CMakeLists.txt                |   26 +-
 .../ExecutionEngine/RuntimeDyld/JITSymbol.cpp |    3 +-
 .../RuntimeDyld/RTDyldMemoryManager.cpp       |  157 +-
 .../RuntimeDyld/RuntimeDyld.cpp               |  169 +-
 .../RuntimeDyld/RuntimeDyldCOFF.cpp           |    5 +-
 .../RuntimeDyld/RuntimeDyldChecker.cpp        |    9 +-
 .../RuntimeDyld/RuntimeDyldCheckerImpl.h      |    6 +-
 .../RuntimeDyld/RuntimeDyldELF.cpp            |  152 +-
 .../RuntimeDyld/RuntimeDyldELF.h              |    6 +-
 .../RuntimeDyld/RuntimeDyldImpl.h             |   32 +-
 .../RuntimeDyld/RuntimeDyldMachO.cpp          |   69 +-
 .../RuntimeDyld/RuntimeDyldMachO.h            |   24 +-
 llvm/lib/ExecutionEngine/TargetSelect.cpp     |   10 +-
 llvm/lib/Extensions/CMakeLists.txt            |    7 +-
 llvm/lib/Extensions/Extensions.cpp            |   14 +-
 llvm/lib/FileCheck/CMakeLists.txt             |   10 +-
 llvm/lib/FileCheck/FileCheck.cpp              |   24 +-
 llvm/lib/Frontend/CMakeLists.txt              |    4 +-
 llvm/lib/Frontend/HLSL/CMakeLists.txt         |   18 +-
 llvm/lib/Frontend/OpenACC/CMakeLists.txt      |   17 +-
 llvm/lib/Frontend/OpenMP/CMakeLists.txt       |   28 +-
 llvm/lib/FuzzMutate/CMakeLists.txt            |   48 +-
 llvm/lib/Fuzzer/README.txt                    |    2 +-
 llvm/lib/IR/AbstractCallSite.cpp              |    3 +-
 llvm/lib/IR/AsmWriter.cpp                     |  446 ++-
 llvm/lib/IR/AttributeImpl.h                   |   12 +-
 llvm/lib/IR/Attributes.cpp                    |  122 +-
 llvm/lib/IR/AutoUpgrade.cpp                   | 1354 ++++---
 llvm/lib/IR/BasicBlock.cpp                    |   45 +-
 llvm/lib/IR/BuiltinGCs.cpp                    |    2 +-
 llvm/lib/IR/CMakeLists.txt                    |  108 +-
 llvm/lib/IR/ConstantFold.cpp                  |  334 +-
 llvm/lib/IR/ConstantRange.cpp                 |  184 +-
 llvm/lib/IR/Constants.cpp                     |  319 +-
 llvm/lib/IR/ConstantsContext.h                |   17 +-
 llvm/lib/IR/Core.cpp                          |  775 ++--
 llvm/lib/IR/DIBuilder.cpp                     |    9 +-
 llvm/lib/IR/DataLayout.cpp                    |   69 +-
 llvm/lib/IR/DebugInfo.cpp                     |  472 ++-
 llvm/lib/IR/DebugInfoMetadata.cpp             |   20 +-
 llvm/lib/IR/DiagnosticHandler.cpp             |    2 +-
 llvm/lib/IR/DiagnosticInfo.cpp                |    6 +-
 llvm/lib/IR/DiagnosticPrinter.cpp             |   12 +-
 llvm/lib/IR/Dominators.cpp                    |    5 +-
 llvm/lib/IR/Function.cpp                      |  612 ++--
 llvm/lib/IR/IRBuilder.cpp                     |  110 +-
 llvm/lib/IR/InlineAsm.cpp                     |   65 +-
 llvm/lib/IR/Instruction.cpp                   |  237 +-
 llvm/lib/IR/Instructions.cpp                  | 1477 ++++----
 llvm/lib/IR/IntrinsicInst.cpp                 |   13 +-
 llvm/lib/IR/LLVMContext.cpp                   |   40 +-
 llvm/lib/IR/LLVMContextImpl.cpp               |   10 +-
 llvm/lib/IR/LLVMRemarkStreamer.cpp            |    4 +-
 llvm/lib/IR/LegacyPassManager.cpp             |  163 +-
 llvm/lib/IR/MDBuilder.cpp                     |   11 +-
 llvm/lib/IR/Mangler.cpp                       |    7 +-
 llvm/lib/IR/Metadata.cpp                      |   10 +-
 llvm/lib/IR/Module.cpp                        |   15 +-
 llvm/lib/IR/ModuleSummaryIndex.cpp            |    7 +-
 llvm/lib/IR/Operator.cpp                      |    7 +-
 llvm/lib/IR/Pass.cpp                          |   24 +-
 llvm/lib/IR/PassTimingInfo.cpp                |   24 +-
 llvm/lib/IR/ProfileSummary.cpp                |    3 +-
 llvm/lib/IR/SafepointIRVerifier.cpp           |  126 +-
 llvm/lib/IR/SymbolTableListTraitsImpl.h       |    9 +-
 llvm/lib/IR/Type.cpp                          |  185 +-
 llvm/lib/IRPrinter/CMakeLists.txt             |   17 +-
 llvm/lib/IRReader/CMakeLists.txt              |   19 +-
 llvm/lib/IRReader/IRReader.cpp                |    4 +-
 llvm/lib/InterfaceStub/CMakeLists.txt         |   14 +-
 llvm/lib/MC/MCDisassembler/CMakeLists.txt     |   15 +-
 llvm/lib/MC/MCDisassembler/Disassembler.cpp   |   72 +-
 llvm/lib/MC/MCDisassembler/MCDisassembler.cpp |    2 +-
 .../MCDisassembler/MCExternalSymbolizer.cpp   |   51 +-
 llvm/lib/MC/MCParser/AsmLexer.cpp             |  110 +-
 llvm/lib/MCA/HardwareUnits/LSUnit.cpp         |   12 +-
 .../MCA/HardwareUnits/RetireControlUnit.cpp   |    8 +-
 llvm/lib/MCA/Stages/InstructionTables.cpp     |    3 +-
 550 files changed, 30017 insertions(+), 28217 deletions(-)

diff --git a/llvm/lib/Analysis/AliasAnalysis.cpp b/llvm/lib/Analysis/AliasAnalysis.cpp
index 75fe68a80ab6085..579087857b7eb0e 100644
--- a/llvm/lib/Analysis/AliasAnalysis.cpp
+++ b/llvm/lib/Analysis/AliasAnalysis.cpp
@@ -56,8 +56,8 @@
 
 using namespace llvm;
 
-STATISTIC(NumNoAlias,   "Number of NoAlias results");
-STATISTIC(NumMayAlias,  "Number of MayAlias results");
+STATISTIC(NumNoAlias, "Number of NoAlias results");
+STATISTIC(NumMayAlias, "Number of MayAlias results");
 STATISTIC(NumMustAlias, "Number of MustAlias results");
 
 namespace llvm {
@@ -116,8 +116,8 @@ AliasResult AAResults::alias(const MemoryLocation &LocA,
   if (EnableAATrace) {
     for (unsigned I = 0; I < AAQI.Depth; ++I)
       dbgs() << "  ";
-    dbgs() << "Start " << *LocA.Ptr << " @ " << LocA.Size << ", "
-           << *LocB.Ptr << " @ " << LocB.Size << "\n";
+    dbgs() << "Start " << *LocA.Ptr << " @ " << LocA.Size << ", " << *LocB.Ptr
+           << " @ " << LocB.Size << "\n";
   }
 
   AAQI.Depth++;
@@ -131,8 +131,8 @@ AliasResult AAResults::alias(const MemoryLocation &LocA,
   if (EnableAATrace) {
     for (unsigned I = 0; I < AAQI.Depth; ++I)
       dbgs() << "  ";
-    dbgs() << "End " << *LocA.Ptr << " @ " << LocA.Size << ", "
-           << *LocB.Ptr << " @ " << LocB.Size << " = " << Result << "\n";
+    dbgs() << "End " << *LocA.Ptr << " @ " << LocA.Size << ", " << *LocB.Ptr
+           << " @ " << LocB.Size << " = " << Result << "\n";
   }
 
   if (AAQI.Depth == 0) {
@@ -584,7 +584,8 @@ ModRefInfo AAResults::getModRefInfo(const AtomicCmpXchgInst *CX,
 ModRefInfo AAResults::getModRefInfo(const AtomicRMWInst *RMW,
                                     const MemoryLocation &Loc,
                                     AAQueryInfo &AAQI) {
-  // Acquire/Release atomicrmw has properties that matter for arbitrary addresses.
+  // Acquire/Release atomicrmw has properties that matter for arbitrary
+  // addresses.
   if (isStrongerThanMonotonic(RMW->getOrdering()))
     return ModRefInfo::ModRef;
 
@@ -646,8 +647,7 @@ ModRefInfo AAResults::getModRefInfo(const Instruction *I,
 /// with a smarter AA in place, this test is just wasting compile time.
 ModRefInfo AAResults::callCapturesBefore(const Instruction *I,
                                          const MemoryLocation &MemLoc,
-                                         DominatorTree *DT,
-                                         AAQueryInfo &AAQI) {
+                                         DominatorTree *DT, AAQueryInfo &AAQI) {
   if (!DT)
     return ModRefInfo::ModRef;
 
@@ -718,7 +718,7 @@ bool AAResults::canInstructionRangeModRef(const Instruction &I1,
          "Instructions not in same basic block!");
   BasicBlock::const_iterator I = I1.getIterator();
   BasicBlock::const_iterator E = I2.getIterator();
-  ++E;  // Convert from inclusive to exclusive range.
+  ++E; // Convert from inclusive to exclusive range.
 
   for (; I != E; ++I) // Check every instruction in range
     if (isModOrRefSet(getModRefInfo(&*I, Loc) & Mode))
diff --git a/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp b/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
index a551ea6b69c5a40..9811af81642db4f 100644
--- a/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
+++ b/llvm/lib/Analysis/AliasAnalysisEvaluator.cpp
@@ -24,7 +24,8 @@ static cl::opt<bool> PrintAll("print-all-alias-modref-info", cl::ReallyHidden);
 
 static cl::opt<bool> PrintNoAlias("print-no-aliases", cl::ReallyHidden);
 static cl::opt<bool> PrintMayAlias("print-may-aliases", cl::ReallyHidden);
-static cl::opt<bool> PrintPartialAlias("print-partial-aliases", cl::ReallyHidden);
+static cl::opt<bool> PrintPartialAlias("print-partial-aliases",
+                                       cl::ReallyHidden);
 static cl::opt<bool> PrintMustAlias("print-must-aliases", cl::ReallyHidden);
 
 static cl::opt<bool> PrintNoModRef("print-no-modref", cl::ReallyHidden);
@@ -68,9 +69,9 @@ static void PrintResults(AliasResult AR, bool P,
   }
 }
 
-static inline void PrintModRefResults(
-    const char *Msg, bool P, Instruction *I,
-    std::pair<const Value *, Type *> Loc, Module *M) {
+static inline void PrintModRefResults(const char *Msg, bool P, Instruction *I,
+                                      std::pair<const Value *, Type *> Loc,
+                                      Module *M) {
   if (PrintAll || P) {
     errs() << "  " << Msg << ":  Ptr: ";
     Loc.second->print(errs(), false, /* NoDetails */ true);
@@ -115,8 +116,8 @@ void AAEvaluator::runInternal(Function &F, AAResults &AA) {
       Pointers.insert({LI->getPointerOperand(), LI->getType()});
       Loads.insert(LI);
     } else if (auto *SI = dyn_cast<StoreInst>(&Inst)) {
-      Pointers.insert({SI->getPointerOperand(),
-                       SI->getValueOperand()->getType()});
+      Pointers.insert(
+          {SI->getPointerOperand(), SI->getValueOperand()->getType()});
       Stores.insert(SI);
     } else if (auto *CB = dyn_cast<CallBase>(&Inst))
       Calls.insert(CB);
@@ -171,7 +172,8 @@ void AAEvaluator::runInternal(Function &F, AAResults &AA) {
           ++MayAliasCount;
           break;
         case AliasResult::PartialAlias:
-          PrintLoadStoreResults(AR, PrintPartialAlias, Load, Store, F.getParent());
+          PrintLoadStoreResults(AR, PrintPartialAlias, Load, Store,
+                                F.getParent());
           ++PartialAliasCount;
           break;
         case AliasResult::MustAlias:
@@ -349,7 +351,7 @@ class AAEvalLegacyPass : public FunctionPass {
     return false;
   }
 };
-}
+} // namespace llvm
 
 char AAEvalLegacyPass::ID = 0;
 INITIALIZE_PASS_BEGIN(AAEvalLegacyPass, "aa-eval",
diff --git a/llvm/lib/Analysis/AliasSetTracker.cpp b/llvm/lib/Analysis/AliasSetTracker.cpp
index 91b889116dfa2d3..8c65c001d2fe311 100644
--- a/llvm/lib/Analysis/AliasSetTracker.cpp
+++ b/llvm/lib/Analysis/AliasSetTracker.cpp
@@ -49,7 +49,7 @@ void AliasSet::mergeSetIn(AliasSet &AS, AliasSetTracker &AST,
   bool WasMustAlias = (Alias == SetMustAlias);
   // Update the alias and access types of this set...
   Access |= AS.Access;
-  Alias  |= AS.Alias;
+  Alias |= AS.Alias;
 
   if (Alias == SetMustAlias) {
     // Check that these two merged sets really are must aliases.  Since both
@@ -73,7 +73,7 @@ void AliasSet::mergeSetIn(AliasSet &AS, AliasSetTracker &AST,
   }
 
   bool ASHadUnknownInsts = !AS.UnknownInsts.empty();
-  if (UnknownInsts.empty()) {            // Merge call sites...
+  if (UnknownInsts.empty()) { // Merge call sites...
     if (ASHadUnknownInsts) {
       std::swap(UnknownInsts, AS.UnknownInsts);
       addRef();
@@ -107,8 +107,8 @@ void AliasSetTracker::removeAliasSet(AliasSet *AS) {
     Fwd->dropRef(*this);
     AS->Forward = nullptr;
   } else // Update TotalMayAliasSetSize only if not forwarding.
-      if (AS->Alias == AliasSet::SetMayAlias)
-        TotalMayAliasSetSize -= AS->size();
+    if (AS->Alias == AliasSet::SetMayAlias)
+      TotalMayAliasSetSize -= AS->size();
 
   AliasSets.erase(AS);
   // If we've removed the saturated alias set, set saturated marker back to
@@ -170,8 +170,9 @@ void AliasSet::addUnknownInst(Instruction *I, BatchAAResults &AA) {
   // Guards are marked as modifying memory for control flow modelling purposes,
   // but don't actually modify any specific memory location.
   using namespace PatternMatch;
-  bool MayWriteMemory = I->mayWriteToMemory() && !isGuard(I) &&
-    !(I->use_empty() && match(I, m_Intrinsic<Intrinsic::invariant_start>()));
+  bool MayWriteMemory =
+      I->mayWriteToMemory() && !isGuard(I) &&
+      !(I->use_empty() && match(I, m_Intrinsic<Intrinsic::invariant_start>()));
   if (!MayWriteMemory) {
     Alias = SetMayAlias;
     Access |= RefAccess;
@@ -318,7 +319,7 @@ AliasSet *AliasSetTracker::findAliasSetForUnknownInst(Instruction *Inst) {
 
 AliasSet &AliasSetTracker::getAliasSetFor(const MemoryLocation &MemLoc) {
 
-  Value * const Pointer = const_cast<Value*>(MemLoc.Ptr);
+  Value *const Pointer = const_cast<Value *>(MemLoc.Ptr);
   const LocationSize Size = MemLoc.Size;
   const AAMDNodes &AAInfo = MemLoc.AATags;
 
@@ -565,22 +566,32 @@ AliasSet &AliasSetTracker::addPointer(MemoryLocation Loc,
 //===----------------------------------------------------------------------===//
 
 void AliasSet::print(raw_ostream &OS) const {
-  OS << "  AliasSet[" << (const void*)this << ", " << RefCount << "] ";
+  OS << "  AliasSet[" << (const void *)this << ", " << RefCount << "] ";
   OS << (Alias == SetMustAlias ? "must" : "may") << " alias, ";
   switch (Access) {
-  case NoAccess:     OS << "No access "; break;
-  case RefAccess:    OS << "Ref       "; break;
-  case ModAccess:    OS << "Mod       "; break;
-  case ModRefAccess: OS << "Mod/Ref   "; break;
-  default: llvm_unreachable("Bad value for Access!");
+  case NoAccess:
+    OS << "No access ";
+    break;
+  case RefAccess:
+    OS << "Ref       ";
+    break;
+  case ModAccess:
+    OS << "Mod       ";
+    break;
+  case ModRefAccess:
+    OS << "Mod/Ref   ";
+    break;
+  default:
+    llvm_unreachable("Bad value for Access!");
   }
   if (Forward)
-    OS << " forwarding to " << (void*)Forward;
+    OS << " forwarding to " << (void *)Forward;
 
   if (!empty()) {
     OS << "Pointers: ";
     for (iterator I = begin(), E = end(); I != E; ++I) {
-      if (I != begin()) OS << ", ";
+      if (I != begin())
+        OS << ", ";
       I.getPointer()->printAsOperand(OS << "(");
       if (I.getSize() == LocationSize::afterPointer())
         OS << ", unknown after)";
diff --git a/llvm/lib/Analysis/Analysis.cpp b/llvm/lib/Analysis/Analysis.cpp
index 5461ce07af0b9f4..9f1d8c594e232b7 100644
--- a/llvm/lib/Analysis/Analysis.cpp
+++ b/llvm/lib/Analysis/Analysis.cpp
@@ -97,9 +97,9 @@ LLVMBool LLVMVerifyModule(LLVMModuleRef M, LLVMVerifierFailureAction Action,
 }
 
 LLVMBool LLVMVerifyFunction(LLVMValueRef Fn, LLVMVerifierFailureAction Action) {
-  LLVMBool Result = verifyFunction(
-      *unwrap<Function>(Fn), Action != LLVMReturnStatusAction ? &errs()
-                                                              : nullptr);
+  LLVMBool Result =
+      verifyFunction(*unwrap<Function>(Fn),
+                     Action != LLVMReturnStatusAction ? &errs() : nullptr);
 
   if (Action == LLVMAbortProcessAction && Result)
     report_fatal_error("Broken function found, compilation aborted!");
diff --git a/llvm/lib/Analysis/AssumeBundleQueries.cpp b/llvm/lib/Analysis/AssumeBundleQueries.cpp
index 7c6a7bad33b5c97..13d3b811d8c0f06 100644
--- a/llvm/lib/Analysis/AssumeBundleQueries.cpp
+++ b/llvm/lib/Analysis/AssumeBundleQueries.cpp
@@ -44,8 +44,8 @@ bool llvm::hasAttributeInAssume(AssumeInst &Assume, Value *IsOn,
                                 StringRef AttrName, uint64_t *ArgVal) {
   assert(Attribute::isExistingAttribute(AttrName) &&
          "this attribute doesn't exist");
-  assert((ArgVal == nullptr || Attribute::isIntAttrKind(
-                                   Attribute::getAttrKindFromName(AttrName))) &&
+  assert((ArgVal == nullptr ||
+          Attribute::isIntAttrKind(Attribute::getAttrKindFromName(AttrName))) &&
          "requested value for an attribute that has no argument");
   if (Assume.bundle_op_infos().empty())
     return false;
@@ -140,7 +140,7 @@ static CallInst::BundleOpInfo *getBundleFromUse(const Use *U) {
 RetainedKnowledge
 llvm::getKnowledgeFromUse(const Use *U,
                           ArrayRef<Attribute::AttrKind> AttrKinds) {
-  CallInst::BundleOpInfo* Bundle = getBundleFromUse(U);
+  CallInst::BundleOpInfo *Bundle = getBundleFromUse(U);
   if (!Bundle)
     return RetainedKnowledge::none();
   RetainedKnowledge RK =
@@ -179,7 +179,7 @@ llvm::getKnowledgeForValue(const Value *V,
     return RetainedKnowledge::none();
   }
   for (const auto &U : V->uses()) {
-    CallInst::BundleOpInfo* Bundle = getBundleFromUse(&U);
+    CallInst::BundleOpInfo *Bundle = getBundleFromUse(&U);
     if (!Bundle)
       continue;
     if (RetainedKnowledge RK =
diff --git a/llvm/lib/Analysis/BasicAliasAnalysis.cpp b/llvm/lib/Analysis/BasicAliasAnalysis.cpp
index c162b8f6edc1905..e94fddfac239ea8 100644
--- a/llvm/lib/Analysis/BasicAliasAnalysis.cpp
+++ b/llvm/lib/Analysis/BasicAliasAnalysis.cpp
@@ -102,8 +102,7 @@ bool BasicAAResult::invalidate(Function &Fn, const PreservedAnalyses &PA,
 
 /// Returns the size of the object specified by V or UnknownSize if unknown.
 static uint64_t getObjectSize(const Value *V, const DataLayout &DL,
-                              const TargetLibraryInfo &TLI,
-                              bool NullIsValidLoc,
+                              const TargetLibraryInfo &TLI, bool NullIsValidLoc,
                               bool RoundToAlign = false) {
   uint64_t Size;
   ObjectSizeOpts Opts;
@@ -170,7 +169,7 @@ static uint64_t getMinimalExtentFrom(const Value &V,
   // access after free would be undefined behavior.
   bool CanBeNull, CanBeFreed;
   uint64_t DerefBytes =
-    V.getPointerDereferenceableBytes(DL, CanBeNull, CanBeFreed);
+      V.getPointerDereferenceableBytes(DL, CanBeNull, CanBeFreed);
   DerefBytes = (CanBeNull && NullIsValidLoc) ? 0 : DerefBytes;
   // If queried with a precise location size, we assume that location size to be
   // accessed, thus valid.
@@ -284,18 +283,24 @@ struct CastedValue {
   APInt evaluateWith(APInt N) const {
     assert(N.getBitWidth() == V->getType()->getPrimitiveSizeInBits() &&
            "Incompatible bit width");
-    if (TruncBits) N = N.trunc(N.getBitWidth() - TruncBits);
-    if (SExtBits) N = N.sext(N.getBitWidth() + SExtBits);
-    if (ZExtBits) N = N.zext(N.getBitWidth() + ZExtBits);
+    if (TruncBits)
+      N = N.trunc(N.getBitWidth() - TruncBits);
+    if (SExtBits)
+      N = N.sext(N.getBitWidth() + SExtBits);
+    if (ZExtBits)
+      N = N.zext(N.getBitWidth() + ZExtBits);
     return N;
   }
 
   ConstantRange evaluateWith(ConstantRange N) const {
     assert(N.getBitWidth() == V->getType()->getPrimitiveSizeInBits() &&
            "Incompatible bit width");
-    if (TruncBits) N = N.truncate(N.getBitWidth() - TruncBits);
-    if (SExtBits) N = N.signExtend(N.getBitWidth() + SExtBits);
-    if (ZExtBits) N = N.zeroExtend(N.getBitWidth() + ZExtBits);
+    if (TruncBits)
+      N = N.truncate(N.getBitWidth() - TruncBits);
+    if (SExtBits)
+      N = N.signExtend(N.getBitWidth() + SExtBits);
+    if (ZExtBits)
+      N = N.zeroExtend(N.getBitWidth() + ZExtBits);
     return N;
   }
 
@@ -338,13 +343,14 @@ struct LinearExpression {
     return LinearExpression(Val, Scale * Other, Offset * Other, NSW);
   }
 };
-}
+} // namespace
 
 /// Analyzes the specified value as a linear expression: "A*V + B", where A and
 /// B are constant integers.
-static LinearExpression GetLinearExpression(
-    const CastedValue &Val,  const DataLayout &DL, unsigned Depth,
-    AssumptionCache *AC, DominatorTree *DT) {
+static LinearExpression GetLinearExpression(const CastedValue &Val,
+                                            const DataLayout &DL,
+                                            unsigned Depth, AssumptionCache *AC,
+                                            DominatorTree *DT) {
   // Limit our recursion depth.
   if (Depth == 6)
     return Val;
@@ -426,13 +432,13 @@ static LinearExpression GetLinearExpression(
 
   if (isa<ZExtInst>(Val.V))
     return GetLinearExpression(
-        Val.withZExtOfValue(cast<CastInst>(Val.V)->getOperand(0)),
-        DL, Depth + 1, AC, DT);
+        Val.withZExtOfValue(cast<CastInst>(Val.V)->getOperand(0)), DL,
+        Depth + 1, AC, DT);
 
   if (isa<SExtInst>(Val.V))
     return GetLinearExpression(
-        Val.withSExtOfValue(cast<CastInst>(Val.V)->getOperand(0)),
-        DL, Depth + 1, AC, DT);
+        Val.withSExtOfValue(cast<CastInst>(Val.V)->getOperand(0)), DL,
+        Depth + 1, AC, DT);
 
   return Val;
 }
@@ -477,16 +483,13 @@ struct VariableGEPIndex {
     dbgs() << "\n";
   }
   void print(raw_ostream &OS) const {
-    OS << "(V=" << Val.V->getName()
-       << ", zextbits=" << Val.ZExtBits
-       << ", sextbits=" << Val.SExtBits
-       << ", truncbits=" << Val.TruncBits
-       << ", scale=" << Scale
-       << ", nsw=" << IsNSW
-       << ", negated=" << IsNegated << ")";
+    OS << "(V=" << Val.V->getName() << ", zextbits=" << Val.ZExtBits
+       << ", sextbits=" << Val.SExtBits << ", truncbits=" << Val.TruncBits
+       << ", scale=" << Scale << ", nsw=" << IsNSW << ", negated=" << IsNegated
+       << ")";
   }
 };
-}
+} // namespace
 
 // Represents the internal structure of a GEP, decomposed into a base pointer,
 // constant offsets, and variable scaled indices.
@@ -506,8 +509,7 @@ struct BasicAAResult::DecomposedGEP {
     dbgs() << "\n";
   }
   void print(raw_ostream &OS) const {
-    OS << "(DecomposedGEP Base=" << Base->getName()
-       << ", Offset=" << Offset
+    OS << "(DecomposedGEP Base=" << Base->getName() << ", Offset=" << Offset
        << ", VarIndices=[";
     for (size_t i = 0; i < VarIndices.size(); i++) {
       if (i != 0)
@@ -518,7 +520,6 @@ struct BasicAAResult::DecomposedGEP {
   }
 };
 
-
 /// If V is a symbolic pointer expression, decompose it into a base pointer
 /// with a constant offset and a number of scaled symbolic offsets.
 ///
@@ -1023,10 +1024,12 @@ static bool isBaseOfObject(const Value *V) {
 /// We know that V1 is a GEP, but we don't know anything about V2.
 /// UnderlyingV1 is getUnderlyingObject(GEP1), UnderlyingV2 is the same for
 /// V2.
-AliasResult BasicAAResult::aliasGEP(
-    const GEPOperator *GEP1, LocationSize V1Size,
-    const Value *V2, LocationSize V2Size,
-    const Value *UnderlyingV1, const Value *UnderlyingV2, AAQueryInfo &AAQI) {
+AliasResult BasicAAResult::aliasGEP(const GEPOperator *GEP1,
+                                    LocationSize V1Size, const Value *V2,
+                                    LocationSize V2Size,
+                                    const Value *UnderlyingV1,
+                                    const Value *UnderlyingV2,
+                                    AAQueryInfo &AAQI) {
   if (!V1Size.hasValue() && !V2Size.hasValue()) {
     // TODO: This limitation exists for compile-time reasons. Relax it if we
     // can avoid exponential pathological cases.
@@ -1155,11 +1158,10 @@ AliasResult BasicAAResult::aliasGEP(
 
     ConstantRange CR = computeConstantRange(Index.Val.V, /* ForSigned */ false,
                                             true, &AC, Index.CxtI);
-    KnownBits Known =
-        computeKnownBits(Index.Val.V, DL, 0, &AC, Index.CxtI, DT);
-    CR = CR.intersectWith(
-        ConstantRange::fromKnownBits(Known, /* Signed */ true),
-        ConstantRange::Signed);
+    KnownBits Known = computeKnownBits(Index.Val.V, DL, 0, &AC, Index.CxtI, DT);
+    CR =
+        CR.intersectWith(ConstantRange::fromKnownBits(Known, /* Signed */ true),
+                         ConstantRange::Signed);
     CR = Index.Val.evaluateWith(CR).sextOrTrunc(OffsetRange.getBitWidth());
 
     assert(OffsetRange.getBitWidth() == Scale.getBitWidth() &&
@@ -1279,10 +1281,9 @@ static AliasResult MergeAliasResults(AliasResult A, AliasResult B) {
 
 /// Provides a bunch of ad-hoc rules to disambiguate a Select instruction
 /// against another.
-AliasResult
-BasicAAResult::aliasSelect(const SelectInst *SI, LocationSize SISize,
-                           const Value *V2, LocationSize V2Size,
-                           AAQueryInfo &AAQI) {
+AliasResult BasicAAResult::aliasSelect(const SelectInst *SI,
+                                       LocationSize SISize, const Value *V2,
+                                       LocationSize V2Size, AAQueryInfo &AAQI) {
   // If the values are Selects with the same condition, we can do a more precise
   // check: just check for aliases between the values on corresponding arms.
   if (const SelectInst *SI2 = dyn_cast<SelectInst>(V2))
@@ -1420,8 +1421,8 @@ AliasResult BasicAAResult::aliasPHI(const PHINode *PN, LocationSize PNSize,
   for (unsigned i = 1, e = V1Srcs.size(); i != e; ++i) {
     Value *V = V1Srcs[i];
 
-    AliasResult ThisAlias = AAQI.AAR.alias(
-        MemoryLocation(V, PNSize), MemoryLocation(V2, V2Size), AAQI);
+    AliasResult ThisAlias = AAQI.AAR.alias(MemoryLocation(V, PNSize),
+                                           MemoryLocation(V2, V2Size), AAQI);
     Alias = MergeAliasResults(ThisAlias, Alias);
     if (Alias == AliasResult::MayAlias)
       break;
@@ -1627,8 +1628,7 @@ AliasResult BasicAAResult::aliasCheck(const Value *V1, LocationSize V1Size,
 }
 
 AliasResult BasicAAResult::aliasCheckRecursive(
-    const Value *V1, LocationSize V1Size,
-    const Value *V2, LocationSize V2Size,
+    const Value *V1, LocationSize V1Size, const Value *V2, LocationSize V2Size,
     AAQueryInfo &AAQI, const Value *O1, const Value *O2) {
   if (const GEPOperator *GV1 = dyn_cast<GEPOperator>(V1)) {
     AliasResult Result = aliasGEP(GV1, V1Size, V2, V2Size, O1, O2, AAQI);
@@ -1790,7 +1790,7 @@ bool BasicAAResult::constantOffsetHeuristic(const DecomposedGEP &GEP,
   APInt MinDiff = E0.Offset - E1.Offset, Wrapped = -MinDiff;
   MinDiff = APIntOps::umin(MinDiff, Wrapped);
   APInt MinDiffBytes =
-    MinDiff.zextOrTrunc(Var0.Scale.getBitWidth()) * Var0.Scale.abs();
+      MinDiff.zextOrTrunc(Var0.Scale.getBitWidth()) * Var0.Scale.abs();
 
   // We can't definitely say whether GEP1 is before or after V2 due to wrapping
   // arithmetic (i.e. for some values of GEP1 and V2 GEP1 < V2, and for other
diff --git a/llvm/lib/Analysis/BlockFrequencyInfo.cpp b/llvm/lib/Analysis/BlockFrequencyInfo.cpp
index b18d04cc73dbca0..f22d3c79a5c6a25 100644
--- a/llvm/lib/Analysis/BlockFrequencyInfo.cpp
+++ b/llvm/lib/Analysis/BlockFrequencyInfo.cpp
@@ -43,8 +43,9 @@ static cl::opt<GVDAGType> ViewBlockFreqPropagationDAG(
                clEnumValN(GVDT_Integer, "integer",
                           "display a graph using the raw "
                           "integer fractional block frequency representation."),
-               clEnumValN(GVDT_Count, "count", "display a graph using the real "
-                                               "profile count if available.")));
+               clEnumValN(GVDT_Count, "count",
+                          "display a graph using the real "
+                          "profile count if available.")));
 
 namespace llvm {
 cl::opt<std::string>
@@ -62,25 +63,25 @@ cl::opt<unsigned>
                                 "function multiplied by this percent."));
 
 // Command line option to turn on CFG dot or text dump after profile annotation.
-cl::opt<PGOViewCountsType> PGOViewCounts(
-    "pgo-view-counts", cl::Hidden,
-    cl::desc("A boolean option to show CFG dag or text with "
-             "block profile counts and branch probabilities "
-             "right after PGO profile annotation step. The "
-             "profile counts are computed using branch "
-             "probabilities from the runtime profile data and "
-             "block frequency propagation algorithm. To view "
-             "the raw counts from the profile, use option "
-             "-pgo-view-raw-counts instead. To limit graph "
-             "display to only one function, use filtering option "
-             "-view-bfi-func-name."),
-    cl::values(clEnumValN(PGOVCT_None, "none", "do not show."),
-               clEnumValN(PGOVCT_Graph, "graph", "show a graph."),
-               clEnumValN(PGOVCT_Text, "text", "show in text.")));
-
-static cl::opt<bool> PrintBlockFreq(
-    "print-bfi", cl::init(false), cl::Hidden,
-    cl::desc("Print the block frequency info."));
+cl::opt<PGOViewCountsType>
+    PGOViewCounts("pgo-view-counts", cl::Hidden,
+                  cl::desc("A boolean option to show CFG dag or text with "
+                           "block profile counts and branch probabilities "
+                           "right after PGO profile annotation step. The "
+                           "profile counts are computed using branch "
+                           "probabilities from the runtime profile data and "
+                           "block frequency propagation algorithm. To view "
+                           "the raw counts from the profile, use option "
+                           "-pgo-view-raw-counts instead. To limit graph "
+                           "display to only one function, use filtering option "
+                           "-view-bfi-func-name."),
+                  cl::values(clEnumValN(PGOVCT_None, "none", "do not show."),
+                             clEnumValN(PGOVCT_Graph, "graph", "show a graph."),
+                             clEnumValN(PGOVCT_Text, "text", "show in text.")));
+
+static cl::opt<bool>
+    PrintBlockFreq("print-bfi", cl::init(false), cl::Hidden,
+                   cl::desc("Print the block frequency info."));
 
 cl::opt<std::string> PrintBlockFreqFuncName(
     "print-bfi-func-name", cl::Hidden,
@@ -96,8 +97,7 @@ static GVDAGType getGVDT() {
   return ViewBlockFreqPropagationDAG;
 }
 
-template <>
-struct GraphTraits<BlockFrequencyInfo *> {
+template <> struct GraphTraits<BlockFrequencyInfo *> {
   using NodeRef = const BasicBlock *;
   using ChildIteratorType = const_succ_iterator;
   using nodes_iterator = pointer_iterator<Function::const_iterator>;
@@ -193,9 +193,8 @@ void BlockFrequencyInfo::calculate(const Function &F,
        F.getName().equals(ViewBlockFreqFuncName))) {
     view();
   }
-  if (PrintBlockFreq &&
-      (PrintBlockFreqFuncName.empty() ||
-       F.getName().equals(PrintBlockFreqFuncName))) {
+  if (PrintBlockFreq && (PrintBlockFreqFuncName.empty() ||
+                         F.getName().equals(PrintBlockFreqFuncName))) {
     print(dbgs());
   }
 }
@@ -266,14 +265,14 @@ const BranchProbabilityInfo *BlockFrequencyInfo::getBPI() const {
   return BFI ? &BFI->getBPI() : nullptr;
 }
 
-raw_ostream &BlockFrequencyInfo::
-printBlockFreq(raw_ostream &OS, const BlockFrequency Freq) const {
+raw_ostream &
+BlockFrequencyInfo::printBlockFreq(raw_ostream &OS,
+                                   const BlockFrequency Freq) const {
   return BFI ? BFI->printBlockFreq(OS, Freq) : OS;
 }
 
-raw_ostream &
-BlockFrequencyInfo::printBlockFreq(raw_ostream &OS,
-                                   const BasicBlock *BB) const {
+raw_ostream &BlockFrequencyInfo::printBlockFreq(raw_ostream &OS,
+                                                const BasicBlock *BB) const {
   return BFI ? BFI->printBlockFreq(OS, BB) : OS;
 }
 
@@ -340,8 +339,8 @@ BlockFrequencyInfo BlockFrequencyAnalysis::run(Function &F,
   return BFI;
 }
 
-PreservedAnalyses
-BlockFrequencyPrinterPass::run(Function &F, FunctionAnalysisManager &AM) {
+PreservedAnalyses BlockFrequencyPrinterPass::run(Function &F,
+                                                 FunctionAnalysisManager &AM) {
   OS << "Printing analysis results of BFI for function "
      << "'" << F.getName() << "':"
      << "\n";
diff --git a/llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp b/llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp
index 82b1e3b9eede709..6bbc91af99fad70 100644
--- a/llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp
+++ b/llvm/lib/Analysis/BlockFrequencyInfoImpl.cpp
@@ -42,8 +42,7 @@ using namespace llvm::bfi_detail;
 
 namespace llvm {
 cl::opt<bool> CheckBFIUnknownBlockQueries(
-    "check-bfi-unknown-block-queries",
-    cl::init(false), cl::Hidden,
+    "check-bfi-unknown-block-queries", cl::init(false), cl::Hidden,
     cl::desc("Check if block frequency is queried for an unknown block "
              "for debugging missed BFI updates"));
 
@@ -265,8 +264,8 @@ void Distribution::normalize() {
     // sum of the weights, but let's double-check.
     assert(Total == std::accumulate(Weights.begin(), Weights.end(), UINT64_C(0),
                                     [](uint64_t Sum, const Weight &W) {
-                      return Sum + W.Amount;
-                    }) &&
+                                      return Sum + W.Amount;
+                                    }) &&
            "Expected total to be correct");
     return;
   }
@@ -586,18 +585,14 @@ BlockFrequencyInfoImplBase::getBlockFreq(const BlockNode &Node) const {
   return Freqs[Node.Index].Integer;
 }
 
-std::optional<uint64_t>
-BlockFrequencyInfoImplBase::getBlockProfileCount(const Function &F,
-                                                 const BlockNode &Node,
-                                                 bool AllowSynthetic) const {
+std::optional<uint64_t> BlockFrequencyInfoImplBase::getBlockProfileCount(
+    const Function &F, const BlockNode &Node, bool AllowSynthetic) const {
   return getProfileCountFromFreq(F, getBlockFreq(Node).getFrequency(),
                                  AllowSynthetic);
 }
 
-std::optional<uint64_t>
-BlockFrequencyInfoImplBase::getProfileCountFromFreq(const Function &F,
-                                                    uint64_t Freq,
-                                                    bool AllowSynthetic) const {
+std::optional<uint64_t> BlockFrequencyInfoImplBase::getProfileCountFromFreq(
+    const Function &F, uint64_t Freq, bool AllowSynthetic) const {
   auto EntryCount = F.getEntryCount(AllowSynthetic);
   if (!EntryCount)
     return std::nullopt;
@@ -612,8 +607,7 @@ BlockFrequencyInfoImplBase::getProfileCountFromFreq(const Function &F,
   return BlockCount.getLimitedValue();
 }
 
-bool
-BlockFrequencyInfoImplBase::isIrrLoopHeader(const BlockNode &Node) {
+bool BlockFrequencyInfoImplBase::isIrrLoopHeader(const BlockNode &Node) {
   if (!Node.isValid())
     return false;
   return IsIrrLoopHeader.test(Node.Index);
@@ -711,8 +705,7 @@ template <> struct GraphTraits<IrreducibleGraph> {
 /// Find entry blocks and other blocks with backedges, which exist when \c G
 /// contains irreducible sub-SCCs.
 static void findIrreducibleHeaders(
-    const BlockFrequencyInfoImplBase &BFI,
-    const IrreducibleGraph &G,
+    const BlockFrequencyInfoImplBase &BFI, const IrreducibleGraph &G,
     const std::vector<const IrreducibleGraph::IrrNode *> &SCC,
     LoopData::NodeList &Headers, LoopData::NodeList &Others) {
   // Map from nodes in the SCC to whether it's an entry block.
@@ -821,8 +814,8 @@ BlockFrequencyInfoImplBase::analyzeIrreducible(
   return make_range(Loops.begin(), Insert);
 }
 
-void
-BlockFrequencyInfoImplBase::updateLoopWithIrreducible(LoopData &OuterLoop) {
+void BlockFrequencyInfoImplBase::updateLoopWithIrreducible(
+    LoopData &OuterLoop) {
   OuterLoop.Exits.clear();
   for (auto &Mass : OuterLoop.BackedgeMass)
     Mass = BlockMass::getEmpty();
@@ -870,7 +863,8 @@ void BlockFrequencyInfoImplBase::adjustLoopHeaderMass(LoopData &Loop) {
   }
 }
 
-void BlockFrequencyInfoImplBase::distributeIrrLoopHeaderMass(Distribution &Dist) {
+void BlockFrequencyInfoImplBase::distributeIrrLoopHeaderMass(
+    Distribution &Dist) {
   BlockMass LoopMass = BlockMass::getFull();
   DitheringDistributer D(Dist, LoopMass);
   for (const Weight &W : Dist.Weights) {
diff --git a/llvm/lib/Analysis/BranchProbabilityInfo.cpp b/llvm/lib/Analysis/BranchProbabilityInfo.cpp
index b45deccd913db47..15bf56d9a00da08 100644
--- a/llvm/lib/Analysis/BranchProbabilityInfo.cpp
+++ b/llvm/lib/Analysis/BranchProbabilityInfo.cpp
@@ -51,9 +51,9 @@ using namespace llvm;
 
 #define DEBUG_TYPE "branch-prob"
 
-static cl::opt<bool> PrintBranchProb(
-    "print-bpi", cl::init(false), cl::Hidden,
-    cl::desc("Print the branch probability info."));
+static cl::opt<bool>
+    PrintBranchProb("print-bpi", cl::init(false), cl::Hidden,
+                    cl::desc("Print the branch probability info."));
 
 cl::opt<std::string> PrintBranchProbFuncName(
     "print-bpi-func-name", cl::Hidden,
@@ -135,16 +135,18 @@ static const BranchProbability
 
 /// Integer compares with 0:
 static const ProbabilityTable ICmpWithZeroTable{
-    {CmpInst::ICMP_EQ, {ZeroUntakenProb, ZeroTakenProb}},  /// X == 0 -> Unlikely
-    {CmpInst::ICMP_NE, {ZeroTakenProb, ZeroUntakenProb}},  /// X != 0 -> Likely
-    {CmpInst::ICMP_SLT, {ZeroUntakenProb, ZeroTakenProb}}, /// X < 0  -> Unlikely
+    {CmpInst::ICMP_EQ, {ZeroUntakenProb, ZeroTakenProb}}, /// X == 0 -> Unlikely
+    {CmpInst::ICMP_NE, {ZeroTakenProb, ZeroUntakenProb}}, /// X != 0 -> Likely
+    {CmpInst::ICMP_SLT,
+     {ZeroUntakenProb, ZeroTakenProb}}, /// X < 0  -> Unlikely
     {CmpInst::ICMP_SGT, {ZeroTakenProb, ZeroUntakenProb}}, /// X > 0  -> Likely
 };
 
 /// Integer compares with -1:
 static const ProbabilityTable ICmpWithMinusOneTable{
-    {CmpInst::ICMP_EQ, {ZeroUntakenProb, ZeroTakenProb}},  /// X == -1 -> Unlikely
-    {CmpInst::ICMP_NE, {ZeroTakenProb, ZeroUntakenProb}},  /// X != -1 -> Likely
+    {CmpInst::ICMP_EQ,
+     {ZeroUntakenProb, ZeroTakenProb}}, /// X == -1 -> Unlikely
+    {CmpInst::ICMP_NE, {ZeroTakenProb, ZeroUntakenProb}}, /// X != -1 -> Likely
     // InstCombine canonicalizes X >= 0 into X > -1
     {CmpInst::ICMP_SGT, {ZeroTakenProb, ZeroUntakenProb}}, /// X >= 0  -> Likely
 };
@@ -152,7 +154,8 @@ static const ProbabilityTable ICmpWithMinusOneTable{
 /// Integer compares with 1:
 static const ProbabilityTable ICmpWithOneTable{
     // InstCombine canonicalizes X <= 0 into X < 1
-    {CmpInst::ICMP_SLT, {ZeroUntakenProb, ZeroTakenProb}}, /// X <= 0 -> Unlikely
+    {CmpInst::ICMP_SLT,
+     {ZeroUntakenProb, ZeroTakenProb}}, /// X <= 0 -> Unlikely
 };
 
 /// strcmp and similar functions return zero, negative, or positive, if the
@@ -189,8 +192,10 @@ static const BranchProbability
 
 /// Floating-Point compares:
 static const ProbabilityTable FCmpTable{
-    {FCmpInst::FCMP_ORD, {FPOrdTakenProb, FPOrdUntakenProb}}, /// !isnan -> Likely
-    {FCmpInst::FCMP_UNO, {FPOrdUntakenProb, FPOrdTakenProb}}, /// isnan -> Unlikely
+    {FCmpInst::FCMP_ORD,
+     {FPOrdTakenProb, FPOrdUntakenProb}}, /// !isnan -> Likely
+    {FCmpInst::FCMP_UNO,
+     {FPOrdUntakenProb, FPOrdTakenProb}}, /// isnan -> Unlikely
 };
 
 /// Set of dedicated "absolute" execution weights for a block. These weights are
@@ -436,7 +441,7 @@ bool BranchProbabilityInfo::calcMetadataWeights(const BasicBlock *BB) {
   // Set the probability.
   SmallVector<BranchProbability, 2> BP;
   for (unsigned I = 0, E = TI->getNumSuccessors(); I != E; ++I)
-    BP.push_back({ Weights[I], static_cast<uint32_t>(WeightSum) });
+    BP.push_back({Weights[I], static_cast<uint32_t>(WeightSum)});
 
   // Examine the metadata against unreachable heuristic.
   // If the unreachable heuristic is more strong then we use it for this edge.
@@ -541,7 +546,7 @@ bool BranchProbabilityInfo::calcPointerHeuristics(const BasicBlock *BB) {
 // UnlikelyBlocks set.
 static void
 computeUnlikelySuccessors(const BasicBlock *BB, Loop *L,
-                          SmallPtrSetImpl<const BasicBlock*> &UnlikelyBlocks) {
+                          SmallPtrSetImpl<const BasicBlock *> &UnlikelyBlocks) {
   // Sometimes in a loop we have a branch whose condition is made false by
   // taking it. This is typically something like
   //  int n = 0;
@@ -595,8 +600,8 @@ computeUnlikelySuccessors(const BasicBlock *BB, Loop *L,
     return;
 
   // Trace the phi node to find all values that come from successors of BB
-  SmallPtrSet<PHINode*, 8> VisitedInsts;
-  SmallVector<PHINode*, 8> WorkList;
+  SmallPtrSet<PHINode *, 8> VisitedInsts;
+  SmallVector<PHINode *, 8> WorkList;
   WorkList.push_back(CmpPHI);
   VisitedInsts.insert(CmpPHI);
   while (!WorkList.empty()) {
@@ -634,9 +639,8 @@ computeUnlikelySuccessors(const BasicBlock *BB, Loop *L,
                                                   CmpLHSConst, CmpConst, true);
       // If the result means we don't branch to the block then that block is
       // unlikely.
-      if (Result &&
-          ((Result->isZeroValue() && B == BI->getSuccessor(0)) ||
-           (Result->isOneValue() && B == BI->getSuccessor(1))))
+      if (Result && ((Result->isZeroValue() && B == BI->getSuccessor(0)) ||
+                     (Result->isOneValue() && B == BI->getSuccessor(1))))
         UnlikelyBlocks.insert(B);
     }
   }
@@ -998,12 +1002,9 @@ bool BranchProbabilityInfo::calcZeroHeuristics(const BasicBlock *BB,
         TLI->getLibFunc(*CalledFn, Func);
 
   ProbabilityTable::const_iterator Search;
-  if (Func == LibFunc_strcasecmp ||
-      Func == LibFunc_strcmp ||
-      Func == LibFunc_strncasecmp ||
-      Func == LibFunc_strncmp ||
-      Func == LibFunc_memcmp ||
-      Func == LibFunc_bcmp) {
+  if (Func == LibFunc_strcasecmp || Func == LibFunc_strcmp ||
+      Func == LibFunc_strncasecmp || Func == LibFunc_strncmp ||
+      Func == LibFunc_memcmp || Func == LibFunc_bcmp) {
     Search = ICmpWithLibCallTable.find(CI->getPredicate());
     if (Search == ICmpWithLibCallTable.end())
       return false;
@@ -1040,10 +1041,11 @@ bool BranchProbabilityInfo::calcFloatingPointHeuristics(const BasicBlock *BB) {
   ProbabilityList ProbList;
   if (FCmp->isEquality()) {
     ProbList = !FCmp->isTrueWhenEqual() ?
-      // f1 == f2 -> Unlikely
-      ProbabilityList({FPTakenProb, FPUntakenProb}) :
-      // f1 != f2 -> Likely
-      ProbabilityList({FPUntakenProb, FPTakenProb});
+                                        // f1 == f2 -> Unlikely
+                   ProbabilityList({FPTakenProb, FPUntakenProb})
+                                        :
+                                        // f1 != f2 -> Likely
+                   ProbabilityList({FPUntakenProb, FPTakenProb});
   } else {
     auto Search = FCmpTable.find(FCmp->getPredicate());
     if (Search == FCmpTable.end())
@@ -1080,8 +1082,8 @@ void BranchProbabilityInfo::print(raw_ostream &OS) const {
   }
 }
 
-bool BranchProbabilityInfo::
-isEdgeHot(const BasicBlock *Src, const BasicBlock *Dst) const {
+bool BranchProbabilityInfo::isEdgeHot(const BasicBlock *Src,
+                                      const BasicBlock *Dst) const {
   // Hot probability is at least 4/5 = 80%
   // FIXME: Compare against a static "hot" BranchProbability.
   return getEdgeProbability(Src, Dst) > BranchProbability(4, 5);
@@ -1183,10 +1185,8 @@ void BranchProbabilityInfo::swapSuccEdgesProbabilities(const BasicBlock *Src) {
   std::swap(Probs[std::make_pair(Src, 0)], Probs[std::make_pair(Src, 1)]);
 }
 
-raw_ostream &
-BranchProbabilityInfo::printEdgeProbability(raw_ostream &OS,
-                                            const BasicBlock *Src,
-                                            const BasicBlock *Dst) const {
+raw_ostream &BranchProbabilityInfo::printEdgeProbability(
+    raw_ostream &OS, const BasicBlock *Src, const BasicBlock *Dst) const {
   const BranchProbability Prob = getEdgeProbability(Src, Dst);
   OS << "edge " << Src->getName() << " -> " << Dst->getName()
      << " probability is " << Prob
@@ -1270,9 +1270,8 @@ void BranchProbabilityInfo::calculate(const Function &F, const LoopInfo &LoopI,
   EstimatedBlockWeight.clear();
   SccI.reset();
 
-  if (PrintBranchProb &&
-      (PrintBranchProbFuncName.empty() ||
-       F.getName().equals(PrintBranchProbFuncName))) {
+  if (PrintBranchProb && (PrintBranchProbFuncName.empty() ||
+                          F.getName().equals(PrintBranchProbFuncName))) {
     print(dbgs());
   }
 }
diff --git a/llvm/lib/Analysis/CFG.cpp b/llvm/lib/Analysis/CFG.cpp
index 8528aa9f77e0223..db535fa7f4d5d17 100644
--- a/llvm/lib/Analysis/CFG.cpp
+++ b/llvm/lib/Analysis/CFG.cpp
@@ -31,15 +31,17 @@ static cl::opt<unsigned> DefaultMaxBBsToExplore(
 /// (compared to computing dominators and loop info) analysis.
 ///
 /// The output is added to Result, as pairs of <from,to> edge info.
-void llvm::FindFunctionBackedges(const Function &F,
-     SmallVectorImpl<std::pair<const BasicBlock*,const BasicBlock*> > &Result) {
+void llvm::FindFunctionBackedges(
+    const Function &F,
+    SmallVectorImpl<std::pair<const BasicBlock *, const BasicBlock *>>
+        &Result) {
   const BasicBlock *BB = &F.getEntryBlock();
   if (succ_empty(BB))
     return;
 
-  SmallPtrSet<const BasicBlock*, 8> Visited;
+  SmallPtrSet<const BasicBlock *, 8> Visited;
   SmallVector<std::pair<const BasicBlock *, const_succ_iterator>, 8> VisitStack;
-  SmallPtrSet<const BasicBlock*, 8> InStack;
+  SmallPtrSet<const BasicBlock *, 8> InStack;
 
   Visited.insert(BB);
   VisitStack.push_back(std::make_pair(BB, succ_begin(BB)));
@@ -77,12 +79,12 @@ void llvm::FindFunctionBackedges(const Function &F,
 /// successors.  It is an error to call this with a block that is not a
 /// successor.
 unsigned llvm::GetSuccessorNumber(const BasicBlock *BB,
-    const BasicBlock *Succ) {
+                                  const BasicBlock *Succ) {
   const Instruction *Term = BB->getTerminator();
 #ifndef NDEBUG
   unsigned e = Term->getNumSuccessors();
 #endif
-  for (unsigned i = 0; ; ++i) {
+  for (unsigned i = 0;; ++i) {
     assert(i != e && "Didn't find edge?");
     if (Term->getSuccessor(i) == Succ)
       return i;
@@ -101,7 +103,8 @@ bool llvm::isCriticalEdge(const Instruction *TI, unsigned SuccNum,
 bool llvm::isCriticalEdge(const Instruction *TI, const BasicBlock *Dest,
                           bool AllowIdenticalEdges) {
   assert(TI->isTerminator() && "Must be a terminator to have successors!");
-  if (TI->getNumSuccessors() == 1) return false;
+  if (TI->getNumSuccessors() == 1)
+    return false;
 
   assert(is_contained(predecessors(Dest), TI->getParent()) &&
          "No edge between TI's block and Dest.");
@@ -111,7 +114,7 @@ bool llvm::isCriticalEdge(const Instruction *TI, const BasicBlock *Dest,
   // If there is more than one predecessor, this is a critical edge...
   assert(I != E && "No preds, but we have an edge to the block?");
   const BasicBlock *FirstPred = *I;
-  ++I;        // Skip one edge due to the incoming arc from TI.
+  ++I; // Skip one edge due to the incoming arc from TI.
   if (!AllowIdenticalEdges)
     return I != E;
 
@@ -158,7 +161,7 @@ bool llvm::isPotentiallyReachableFromMany(
   const Loop *StopLoop = LI ? getOutermostLoop(LI, StopBB) : nullptr;
 
   unsigned Limit = DefaultMaxBBsToExplore;
-  SmallPtrSet<const BasicBlock*, 32> Visited;
+  SmallPtrSet<const BasicBlock *, 32> Visited;
   do {
     BasicBlock *BB = Worklist.pop_back_val();
     if (!Visited.insert(BB).second)
@@ -222,8 +225,8 @@ bool llvm::isPotentiallyReachable(
     }
   }
 
-  SmallVector<BasicBlock*, 32> Worklist;
-  Worklist.push_back(const_cast<BasicBlock*>(A));
+  SmallVector<BasicBlock *, 32> Worklist;
+  Worklist.push_back(const_cast<BasicBlock *>(A));
 
   return isPotentiallyReachableFromMany(Worklist, B, ExclusionSet, DT, LI);
 }
@@ -258,7 +261,7 @@ bool llvm::isPotentiallyReachable(
       return false;
 
     // Otherwise, continue doing the normal per-BB CFG walk.
-    SmallVector<BasicBlock*, 32> Worklist;
+    SmallVector<BasicBlock *, 32> Worklist;
     Worklist.append(succ_begin(BB), succ_end(BB));
     if (Worklist.empty()) {
       // We've proven that there's no path!
@@ -269,6 +272,6 @@ bool llvm::isPotentiallyReachable(
                                           ExclusionSet, DT, LI);
   }
 
-  return isPotentiallyReachable(
-      A->getParent(), B->getParent(), ExclusionSet, DT, LI);
+  return isPotentiallyReachable(A->getParent(), B->getParent(), ExclusionSet,
+                                DT, LI);
 }
diff --git a/llvm/lib/Analysis/CFGPrinter.cpp b/llvm/lib/Analysis/CFGPrinter.cpp
index f05dd6852d6dc93..b5a4a1a0259214f 100644
--- a/llvm/lib/Analysis/CFGPrinter.cpp
+++ b/llvm/lib/Analysis/CFGPrinter.cpp
@@ -290,8 +290,8 @@ FunctionPass *llvm::createCFGOnlyPrinterLegacyPassPass() {
   return new CFGOnlyPrinterLegacyPass();
 }
 
-/// Find all blocks on the paths which terminate with a deoptimize or 
-/// unreachable (i.e. all blocks which are post-dominated by a deoptimize 
+/// Find all blocks on the paths which terminate with a deoptimize or
+/// unreachable (i.e. all blocks which are post-dominated by a deoptimize
 /// or unreachable). These paths are hidden if the corresponding cl::opts
 /// are enabled.
 void DOTGraphTraits<DOTFuncInfo *>::computeDeoptOrUnreachablePaths(
diff --git a/llvm/lib/Analysis/CGSCCPassManager.cpp b/llvm/lib/Analysis/CGSCCPassManager.cpp
index 2246887afe68a6d..61d100994459f71 100644
--- a/llvm/lib/Analysis/CGSCCPassManager.cpp
+++ b/llvm/lib/Analysis/CGSCCPassManager.cpp
@@ -694,7 +694,8 @@ bool FunctionAnalysisManagerCGSCCProxy::Result::invalidate(
   // forcibly cleared. When preserved, this proxy will only invalidate results
   // cached on functions *still in the module* at the end of the module pass.
   auto PAC = PA.getChecker<FunctionAnalysisManagerCGSCCProxy>();
-  if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<LazyCallGraph::SCC>>()) {
+  if (!PAC.preserved() &&
+      !PAC.preservedSet<AllAnalysesOn<LazyCallGraph::SCC>>()) {
     for (LazyCallGraph::Node &N : C)
       FAM->invalidate(N.getFunction(), PA);
 
@@ -957,8 +958,8 @@ static LazyCallGraph::SCC &updateCGAndAnalysisManagerForPass(
     (void)TargetRC;
     // TODO: This only allows trivial edges to be added for now.
 #ifdef EXPENSIVE_CHECKS
-    assert((RC == &TargetRC ||
-           RC->isAncestorOf(TargetRC)) && "New ref edge is not trivial!");
+    assert((RC == &TargetRC || RC->isAncestorOf(TargetRC)) &&
+           "New ref edge is not trivial!");
 #endif
     RC->insertTrivialRefEdge(N, *RefTarget);
   }
@@ -970,8 +971,8 @@ static LazyCallGraph::SCC &updateCGAndAnalysisManagerForPass(
     (void)TargetRC;
     // TODO: This only allows trivial edges to be added for now.
 #ifdef EXPENSIVE_CHECKS
-    assert((RC == &TargetRC ||
-           RC->isAncestorOf(TargetRC)) && "New call edge is not trivial!");
+    assert((RC == &TargetRC || RC->isAncestorOf(TargetRC)) &&
+           "New call edge is not trivial!");
 #endif
     // Add a trivial ref edge to be promoted later on alongside
     // PromotedRefTargets.
diff --git a/llvm/lib/Analysis/CMakeLists.txt b/llvm/lib/Analysis/CMakeLists.txt
index 9d8c9cfda66c921..cd32731fcb06a2d 100644
--- a/llvm/lib/Analysis/CMakeLists.txt
+++ b/llvm/lib/Analysis/CMakeLists.txt
@@ -1,163 +1,83 @@
 if (DEFINED LLVM_HAVE_TF_AOT OR LLVM_HAVE_TFLITE)
-  include(TensorFlowCompile)
-  set(LLVM_INLINER_MODEL_PATH_DEFAULT "models/inliner-Oz")
+include(TensorFlowCompile) set(LLVM_INLINER_MODEL_PATH_DEFAULT
+                               "models/inliner-Oz")
 
-  set(LLVM_INLINER_MODEL_CURRENT_URL "<UNSPECIFIED>" CACHE STRING "URL to download the LLVM inliner model")
+    set(LLVM_INLINER_MODEL_CURRENT_URL "<UNSPECIFIED>" CACHE STRING
+                                       "URL to download the LLVM inliner model")
 
-  if (DEFINED LLVM_HAVE_TF_AOT)
-    tf_find_and_compile(
-      ${LLVM_INLINER_MODEL_PATH}
-      ${LLVM_INLINER_MODEL_CURRENT_URL}
-      ${LLVM_INLINER_MODEL_PATH_DEFAULT}
-      "models/gen-inline-oz-test-model.py"
-      serve
-      action
-      InlinerSizeModel
-      llvm::InlinerSizeModel
-    )
-  endif()
+        if (DEFINED LLVM_HAVE_TF_AOT) tf_find_and_compile(
+            ${LLVM_INLINER_MODEL_PATH} ${LLVM_INLINER_MODEL_CURRENT_URL} $ {
+              LLVM_INLINER_MODEL_PATH_DEFAULT
+            } "models/gen-inline-oz-test-model.py" serve action InlinerSizeModel
+                llvm::InlinerSizeModel) endif()
 
-  if (LLVM_HAVE_TFLITE)
-    list(APPEND MLLinkDeps
-      tensorflow-lite::tensorflow-lite)
-  endif()
-endif()
+            if (LLVM_HAVE_TFLITE) list(APPEND MLLinkDeps tensorflow -
+                                       lite::tensorflow - lite) endif() endif()
 
-add_llvm_component_library(LLVMAnalysis
-  AliasAnalysis.cpp
-  AliasAnalysisEvaluator.cpp
-  AliasSetTracker.cpp
-  Analysis.cpp
-  AssumeBundleQueries.cpp
-  AssumptionCache.cpp
-  BasicAliasAnalysis.cpp
-  BlockFrequencyInfo.cpp
-  BlockFrequencyInfoImpl.cpp
-  BranchProbabilityInfo.cpp
-  CFG.cpp
-  CFGPrinter.cpp
-  CFGSCCPrinter.cpp
-  CGSCCPassManager.cpp
-  CallGraph.cpp
-  CallGraphSCCPass.cpp
-  CallPrinter.cpp
-  CaptureTracking.cpp
-  CmpInstAnalysis.cpp
-  CostModel.cpp
-  CodeMetrics.cpp
-  ConstantFolding.cpp
-  CycleAnalysis.cpp
-  DDG.cpp
-  DDGPrinter.cpp
-  ConstraintSystem.cpp
-  Delinearization.cpp
-  DemandedBits.cpp
-  DependenceAnalysis.cpp
-  DependenceGraphBuilder.cpp
-  DevelopmentModeInlineAdvisor.cpp
-  DomPrinter.cpp
-  DomTreeUpdater.cpp
-  DominanceFrontier.cpp
-  FunctionPropertiesAnalysis.cpp
-  GlobalsModRef.cpp
-  GuardUtils.cpp
-  HeatUtils.cpp
-  IRSimilarityIdentifier.cpp
-  IVDescriptors.cpp
-  IVUsers.cpp
-  ImportedFunctionsInliningStatistics.cpp
-  IndirectCallPromotionAnalysis.cpp
-  InlineCost.cpp
-  InlineAdvisor.cpp
-  InlineOrder.cpp
-  InlineSizeEstimatorAnalysis.cpp
-  InstCount.cpp
-  InstructionPrecedenceTracking.cpp
-  InstructionSimplify.cpp
-  InteractiveModelRunner.cpp
-  Interval.cpp
-  IntervalPartition.cpp
-  LazyBranchProbabilityInfo.cpp
-  LazyBlockFrequencyInfo.cpp
-  LazyCallGraph.cpp
-  LazyValueInfo.cpp
-  Lint.cpp
-  Loads.cpp
-  Local.cpp
-  LoopAccessAnalysis.cpp
-  LoopAnalysisManager.cpp
-  LoopCacheAnalysis.cpp
-  LoopNestAnalysis.cpp
-  LoopUnrollAnalyzer.cpp
-  LoopInfo.cpp
-  LoopPass.cpp
-  MLInlineAdvisor.cpp
-  MemDerefPrinter.cpp
-  MemoryBuiltins.cpp
-  MemoryDependenceAnalysis.cpp
-  MemoryLocation.cpp
-  MemoryProfileInfo.cpp
-  MemorySSA.cpp
-  MemorySSAUpdater.cpp
-  ModelUnderTrainingRunner.cpp
-  ModuleDebugInfoPrinter.cpp
-  ModuleSummaryAnalysis.cpp
-  MustExecute.cpp
-  NoInferenceModelRunner.cpp
-  ObjCARCAliasAnalysis.cpp
-  ObjCARCAnalysisUtils.cpp
-  ObjCARCInstKind.cpp
-  OptimizationRemarkEmitter.cpp
-  OverflowInstAnalysis.cpp
-  PHITransAddr.cpp
-  PhiValues.cpp
-  PostDominators.cpp
-  ProfileSummaryInfo.cpp
-  PtrUseVisitor.cpp
-  RegionInfo.cpp
-  RegionPass.cpp
-  RegionPrinter.cpp
-  ReplayInlineAdvisor.cpp
-  ScalarEvolution.cpp
-  ScalarEvolutionAliasAnalysis.cpp
-  ScalarEvolutionDivision.cpp
-  ScalarEvolutionNormalization.cpp
-  StackLifetime.cpp
-  StackSafetyAnalysis.cpp
-  StructuralHash.cpp
-  SyntheticCountsUtils.cpp
-  TFLiteUtils.cpp
-  TargetLibraryInfo.cpp
-  TargetTransformInfo.cpp
-  TensorSpec.cpp
-  Trace.cpp
-  TrainingLogger.cpp
-  TypeBasedAliasAnalysis.cpp
-  TypeMetadataUtils.cpp
-  UniformityAnalysis.cpp
-  ScopedNoAliasAA.cpp
-  ValueLattice.cpp
-  ValueLatticeUtils.cpp
-  ValueTracking.cpp
-  VectorUtils.cpp
-  VFABIDemangling.cpp
-  ${GeneratedMLSources}
+                add_llvm_component_library(
+                    LLVMAnalysis AliasAnalysis.cpp AliasAnalysisEvaluator
+                        .cpp AliasSetTracker.cpp Analysis
+                        .cpp AssumeBundleQueries.cpp AssumptionCache
+                        .cpp BasicAliasAnalysis.cpp BlockFrequencyInfo
+                        .cpp BlockFrequencyInfoImpl.cpp BranchProbabilityInfo
+                        .cpp CFG.cpp CFGPrinter.cpp CFGSCCPrinter
+                        .cpp CGSCCPassManager.cpp CallGraph.cpp CallGraphSCCPass
+                        .cpp CallPrinter.cpp CaptureTracking.cpp CmpInstAnalysis
+                        .cpp CostModel.cpp CodeMetrics.cpp ConstantFolding
+                        .cpp CycleAnalysis.cpp DDG.cpp DDGPrinter
+                        .cpp ConstraintSystem.cpp Delinearization
+                        .cpp DemandedBits.cpp DependenceAnalysis
+                        .cpp DependenceGraphBuilder
+                        .cpp DevelopmentModeInlineAdvisor.cpp DomPrinter
+                        .cpp DomTreeUpdater.cpp DominanceFrontier
+                        .cpp FunctionPropertiesAnalysis.cpp GlobalsModRef
+                        .cpp GuardUtils.cpp HeatUtils.cpp IRSimilarityIdentifier
+                        .cpp IVDescriptors.cpp IVUsers
+                        .cpp ImportedFunctionsInliningStatistics
+                        .cpp IndirectCallPromotionAnalysis.cpp InlineCost
+                        .cpp InlineAdvisor.cpp InlineOrder
+                        .cpp InlineSizeEstimatorAnalysis.cpp InstCount
+                        .cpp InstructionPrecedenceTracking
+                        .cpp InstructionSimplify.cpp InteractiveModelRunner
+                        .cpp Interval.cpp IntervalPartition
+                        .cpp LazyBranchProbabilityInfo
+                        .cpp LazyBlockFrequencyInfo.cpp LazyCallGraph
+                        .cpp LazyValueInfo.cpp Lint.cpp Loads.cpp Local
+                        .cpp LoopAccessAnalysis.cpp LoopAnalysisManager
+                        .cpp LoopCacheAnalysis.cpp LoopNestAnalysis
+                        .cpp LoopUnrollAnalyzer.cpp LoopInfo.cpp LoopPass
+                        .cpp MLInlineAdvisor.cpp MemDerefPrinter
+                        .cpp MemoryBuiltins.cpp MemoryDependenceAnalysis
+                        .cpp MemoryLocation.cpp MemoryProfileInfo.cpp MemorySSA
+                        .cpp MemorySSAUpdater.cpp ModelUnderTrainingRunner
+                        .cpp ModuleDebugInfoPrinter.cpp ModuleSummaryAnalysis
+                        .cpp MustExecute.cpp NoInferenceModelRunner
+                        .cpp ObjCARCAliasAnalysis.cpp ObjCARCAnalysisUtils
+                        .cpp ObjCARCInstKind.cpp OptimizationRemarkEmitter
+                        .cpp OverflowInstAnalysis.cpp PHITransAddr.cpp PhiValues
+                        .cpp PostDominators.cpp ProfileSummaryInfo
+                        .cpp PtrUseVisitor.cpp RegionInfo.cpp RegionPass
+                        .cpp RegionPrinter.cpp ReplayInlineAdvisor
+                        .cpp ScalarEvolution.cpp ScalarEvolutionAliasAnalysis
+                        .cpp ScalarEvolutionDivision
+                        .cpp ScalarEvolutionNormalization.cpp StackLifetime
+                        .cpp StackSafetyAnalysis.cpp StructuralHash
+                        .cpp SyntheticCountsUtils.cpp TFLiteUtils
+                        .cpp TargetLibraryInfo.cpp TargetTransformInfo
+                        .cpp TensorSpec.cpp Trace.cpp TrainingLogger
+                        .cpp TypeBasedAliasAnalysis.cpp TypeMetadataUtils
+                        .cpp UniformityAnalysis.cpp ScopedNoAliasAA
+                        .cpp ValueLattice.cpp ValueLatticeUtils
+                        .cpp ValueTracking.cpp VectorUtils.cpp VFABIDemangling
+                        .cpp ${GeneratedMLSources}
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Analysis
+                    ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR} /
+                    llvm /
+                    Analysis
 
-  DEPENDS
-  intrinsics_gen
-  ${MLDeps}
+                        DEPENDS intrinsics_gen ${MLDeps}
 
-  LINK_LIBS
-  ${MLLinkDeps}
+                    LINK_LIBS ${MLLinkDeps}
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Core
-  Object
-  ProfileData
-  Support
-  TargetParser
-  )
+                    LINK_COMPONENTS BinaryFormat Core Object ProfileData Support
+                        TargetParser)
diff --git a/llvm/lib/Analysis/CallGraph.cpp b/llvm/lib/Analysis/CallGraph.cpp
index 58ccf2bd664be0d..d39b005c5cfdfd9 100644
--- a/llvm/lib/Analysis/CallGraph.cpp
+++ b/llvm/lib/Analysis/CallGraph.cpp
@@ -156,7 +156,7 @@ void CallGraph::ReplaceExternalCallEdge(CallGraphNode *Old,
 //
 Function *CallGraph::removeFunctionFromModule(CallGraphNode *CGN) {
   assert(CGN->empty() && "Cannot remove function from call "
-         "graph if it references other functions!");
+                         "graph if it references other functions!");
   Function *F = CGN->getFunction(); // Get the function for the call graph node
   FunctionMap.erase(F);             // Remove the call graph node from the map
 
@@ -192,7 +192,7 @@ void CallGraphNode::print(raw_ostream &OS) const {
   for (const auto &I : *this) {
     OS << "  CS<" << I.first << "> calls ";
     if (Function *FI = I.second->getFunction())
-      OS << "function '" << FI->getName() <<"'\n";
+      OS << "function '" << FI->getName() << "'\n";
     else
       OS << "external node\n";
   }
@@ -207,7 +207,7 @@ LLVM_DUMP_METHOD void CallGraphNode::dump() const { print(dbgs()); }
 /// specified call site.  Note that this method takes linear time, so it
 /// should be used sparingly.
 void CallGraphNode::removeCallEdgeFor(CallBase &Call) {
-  for (CalledFunctionsVector::iterator I = CalledFunctions.begin(); ; ++I) {
+  for (CalledFunctionsVector::iterator I = CalledFunctions.begin();; ++I) {
     assert(I != CalledFunctions.end() && "Cannot find callsite to remove!");
     if (I->first && *I->first == &Call) {
       I->second->DropRef();
@@ -232,14 +232,15 @@ void CallGraphNode::removeAnyCallEdgeTo(CallGraphNode *Callee) {
       Callee->DropRef();
       CalledFunctions[i] = CalledFunctions.back();
       CalledFunctions.pop_back();
-      --i; --e;
+      --i;
+      --e;
     }
 }
 
 /// removeOneAbstractEdgeTo - Remove one edge associated with a null callsite
 /// from this node to the specified callee function.
 void CallGraphNode::removeOneAbstractEdgeTo(CallGraphNode *Callee) {
-  for (CalledFunctionsVector::iterator I = CalledFunctions.begin(); ; ++I) {
+  for (CalledFunctionsVector::iterator I = CalledFunctions.begin();; ++I) {
     assert(I != CalledFunctions.end() && "Cannot find callee to remove!");
     CallRecord &CR = *I;
     if (CR.second == Callee && !CR.first) {
@@ -256,7 +257,7 @@ void CallGraphNode::removeOneAbstractEdgeTo(CallGraphNode *Callee) {
 /// time, so it should be used sparingly.
 void CallGraphNode::replaceCallEdge(CallBase &Call, CallBase &NewCall,
                                     CallGraphNode *NewNode) {
-  for (CalledFunctionsVector::iterator I = CalledFunctions.begin(); ; ++I) {
+  for (CalledFunctionsVector::iterator I = CalledFunctions.begin();; ++I) {
     assert(I != CalledFunctions.end() && "Cannot find callsite to remove!");
     if (I->first && *I->first == &Call) {
       I->second->DropRef();
diff --git a/llvm/lib/Analysis/CallGraphSCCPass.cpp b/llvm/lib/Analysis/CallGraphSCCPass.cpp
index 307dddd51ece057..bc2604c7313346e 100644
--- a/llvm/lib/Analysis/CallGraphSCCPass.cpp
+++ b/llvm/lib/Analysis/CallGraphSCCPass.cpp
@@ -67,8 +67,8 @@ class CGPassManager : public ModulePass, public PMDataManager {
   /// whether any of the passes modifies the module, and if so, return true.
   bool runOnModule(Module &M) override;
 
-  using ModulePass::doInitialization;
   using ModulePass::doFinalization;
+  using ModulePass::doInitialization;
 
   bool doInitialization(CallGraph &CG);
   bool doFinalization(CallGraph &CG);
@@ -87,11 +87,11 @@ class CGPassManager : public ModulePass, public PMDataManager {
 
   // Print passes managed by this manager
   void dumpPassStructure(unsigned Offset) override {
-    errs().indent(Offset*2) << "Call Graph SCC Pass Manager\n";
+    errs().indent(Offset * 2) << "Call Graph SCC Pass Manager\n";
     for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
       Pass *P = getContainedPass(Index);
       P->dumpPassStructure(Offset + 1);
-      dumpLastUses(P, Offset+1);
+      dumpLastUses(P, Offset + 1);
     }
   }
 
@@ -108,9 +108,8 @@ class CGPassManager : public ModulePass, public PMDataManager {
   bool RunAllPassesOnSCC(CallGraphSCC &CurSCC, CallGraph &CG,
                          bool &DevirtualizedCall);
 
-  bool RunPassOnSCC(Pass *P, CallGraphSCC &CurSCC,
-                    CallGraph &CG, bool &CallGraphUpToDate,
-                    bool &DevirtualizedCall);
+  bool RunPassOnSCC(Pass *P, CallGraphSCC &CurSCC, CallGraph &CG,
+                    bool &CallGraphUpToDate, bool &DevirtualizedCall);
   bool RefreshCallGraph(const CallGraphSCC &CurSCC, CallGraph &CG,
                         bool IsCheckingMode);
 };
@@ -119,8 +118,8 @@ class CGPassManager : public ModulePass, public PMDataManager {
 
 char CGPassManager::ID = 0;
 
-bool CGPassManager::RunPassOnSCC(Pass *P, CallGraphSCC &CurSCC,
-                                 CallGraph &CG, bool &CallGraphUpToDate,
+bool CGPassManager::RunPassOnSCC(Pass *P, CallGraphSCC &CurSCC, CallGraph &CG,
+                                 bool &CallGraphUpToDate,
                                  bool &DevirtualizedCall) {
   bool Changed = false;
   PMDataManager *PM = P->getAsPMDataManager();
@@ -169,7 +168,7 @@ bool CGPassManager::RunPassOnSCC(Pass *P, CallGraphSCC &CurSCC,
 
   assert(PM->getPassManagerType() == PMT_FunctionPassManager &&
          "Invalid CGPassManager member");
-  FPPassManager *FPP = (FPPassManager*)P;
+  FPPassManager *FPP = (FPPassManager *)P;
 
   // Run pass P on all functions in the current SCC.
   for (CallGraphNode *CGN : CurSCC) {
@@ -220,7 +219,8 @@ bool CGPassManager::RefreshCallGraph(const CallGraphSCC &CurSCC, CallGraph &CG,
        SCCIdx != E; ++SCCIdx, ++FunctionNo) {
     CallGraphNode *CGN = *SCCIdx;
     Function *F = CGN->getFunction();
-    if (!F || F->isDeclaration()) continue;
+    if (!F || F->isDeclaration())
+      continue;
 
     // Walk the function body looking for call sites.  Sync up the call sites in
     // CGN with those actually in the function.
@@ -407,15 +407,16 @@ bool CGPassManager::RefreshCallGraph(const CallGraphSCC &CurSCC, CallGraph &CG,
       Calls.clear();
   }
 
-  LLVM_DEBUG(if (MadeChange) {
-    dbgs() << "CGSCCPASSMGR: Refreshed SCC is now:\n";
-    for (CallGraphNode *CGN : CurSCC)
-      CGN->dump();
-    if (DevirtualizedCall)
-      dbgs() << "CGSCCPASSMGR: Refresh devirtualized a call!\n";
-  } else {
-    dbgs() << "CGSCCPASSMGR: SCC Refresh didn't change call graph.\n";
-  });
+  LLVM_DEBUG(
+      if (MadeChange) {
+        dbgs() << "CGSCCPASSMGR: Refreshed SCC is now:\n";
+        for (CallGraphNode *CGN : CurSCC)
+          CGN->dump();
+        if (DevirtualizedCall)
+          dbgs() << "CGSCCPASSMGR: Refresh devirtualized a call!\n";
+      } else {
+        dbgs() << "CGSCCPASSMGR: SCC Refresh didn't change call graph.\n";
+      });
   (void)MadeChange;
 
   return DevirtualizedCall;
@@ -438,15 +439,15 @@ bool CGPassManager::RunAllPassesOnSCC(CallGraphSCC &CurSCC, CallGraph &CG,
   bool CallGraphUpToDate = true;
 
   // Run all passes on current SCC.
-  for (unsigned PassNo = 0, e = getNumContainedPasses();
-       PassNo != e; ++PassNo) {
+  for (unsigned PassNo = 0, e = getNumContainedPasses(); PassNo != e;
+       ++PassNo) {
     Pass *P = getContainedPass(PassNo);
 
     // If we're in -debug-pass=Executions mode, construct the SCC node list,
     // otherwise avoid constructing this string as it is expensive.
     if (isPassDebuggingExecutionsOrMore()) {
       std::string Functions;
-  #ifndef NDEBUG
+#ifndef NDEBUG
       raw_string_ostream OS(Functions);
       ListSeparator LS;
       for (const CallGraphNode *CGN : CurSCC) {
@@ -454,7 +455,7 @@ bool CGPassManager::RunAllPassesOnSCC(CallGraphSCC &CurSCC, CallGraph &CG,
         CGN->print(OS);
       }
       OS.flush();
-  #endif
+#endif
       dumpPassInfo(P, EXECUTION_MSG, ON_CG_MSG, Functions);
     }
     dumpRequiredSet(P);
@@ -503,7 +504,7 @@ bool CGPassManager::runOnModule(Module &M) {
   bool Changed = doInitialization(CG);
 
   // Walk the callgraph in bottom-up SCC order.
-  scc_iterator<CallGraph*> CGI = scc_begin(&CG);
+  scc_iterator<CallGraph *> CGI = scc_begin(&CG);
 
   CallGraphSCC CurSCC(CG, &CGI);
   while (!CGI.isAtEnd()) {
@@ -553,9 +554,10 @@ bool CGPassManager::doInitialization(CallGraph &CG) {
     if (PMDataManager *PM = getContainedPass(i)->getAsPMDataManager()) {
       assert(PM->getPassManagerType() == PMT_FunctionPassManager &&
              "Invalid CGPassManager member");
-      Changed |= ((FPPassManager*)PM)->doInitialization(CG.getModule());
+      Changed |= ((FPPassManager *)PM)->doInitialization(CG.getModule());
     } else {
-      Changed |= ((CallGraphSCCPass*)getContainedPass(i))->doInitialization(CG);
+      Changed |=
+          ((CallGraphSCCPass *)getContainedPass(i))->doInitialization(CG);
     }
   }
   return Changed;
@@ -568,9 +570,9 @@ bool CGPassManager::doFinalization(CallGraph &CG) {
     if (PMDataManager *PM = getContainedPass(i)->getAsPMDataManager()) {
       assert(PM->getPassManagerType() == PMT_FunctionPassManager &&
              "Invalid CGPassManager member");
-      Changed |= ((FPPassManager*)PM)->doFinalization(CG.getModule());
+      Changed |= ((FPPassManager *)PM)->doFinalization(CG.getModule());
     } else {
-      Changed |= ((CallGraphSCCPass*)getContainedPass(i))->doFinalization(CG);
+      Changed |= ((CallGraphSCCPass *)getContainedPass(i))->doFinalization(CG);
     }
   }
   return Changed;
@@ -584,9 +586,10 @@ bool CGPassManager::doFinalization(CallGraph &CG) {
 /// Old node has been deleted, and New is to be used in its place.
 void CallGraphSCC::ReplaceNode(CallGraphNode *Old, CallGraphNode *New) {
   assert(Old != New && "Should not replace node with self");
-  for (unsigned i = 0; ; ++i) {
+  for (unsigned i = 0;; ++i) {
     assert(i != Nodes.size() && "Node not in SCC");
-    if (Nodes[i] != Old) continue;
+    if (Nodes[i] != Old)
+      continue;
     if (New)
       Nodes[i] = New;
     else
@@ -596,7 +599,7 @@ void CallGraphSCC::ReplaceNode(CallGraphNode *Old, CallGraphNode *New) {
 
   // Update the active scc_iterator so that it doesn't contain dangling
   // pointers to the old CallGraphNode.
-  scc_iterator<CallGraph*> *CGI = (scc_iterator<CallGraph*>*)Context;
+  scc_iterator<CallGraph *> *CGI = (scc_iterator<CallGraph *> *)Context;
   CGI->ReplaceNode(Old, New);
 }
 
@@ -620,7 +623,7 @@ void CallGraphSCCPass::assignPassManager(PMStack &PMS,
   CGPassManager *CGP;
 
   if (PMS.top()->getPassManagerType() == PMT_CallGraphPassManager)
-    CGP = (CGPassManager*)PMS.top();
+    CGP = (CGPassManager *)PMS.top();
   else {
     // Create new Call Graph SCC Pass Manager if it does not exist.
     assert(!PMS.empty() && "Unable to create Call Graph Pass Manager");
@@ -659,63 +662,63 @@ void CallGraphSCCPass::getAnalysisUsage(AnalysisUsage &AU) const {
 
 namespace {
 
-  /// PrintCallGraphPass - Print a Module corresponding to a call graph.
-  ///
-  class PrintCallGraphPass : public CallGraphSCCPass {
-    std::string Banner;
-    raw_ostream &OS;       // raw_ostream to print on.
+/// PrintCallGraphPass - Print a Module corresponding to a call graph.
+///
+class PrintCallGraphPass : public CallGraphSCCPass {
+  std::string Banner;
+  raw_ostream &OS; // raw_ostream to print on.
 
-  public:
-    static char ID;
+public:
+  static char ID;
 
-    PrintCallGraphPass(const std::string &B, raw_ostream &OS)
+  PrintCallGraphPass(const std::string &B, raw_ostream &OS)
       : CallGraphSCCPass(ID), Banner(B), OS(OS) {}
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesAll();
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesAll();
+  }
 
-    bool runOnSCC(CallGraphSCC &SCC) override {
-      bool BannerPrinted = false;
-      auto PrintBannerOnce = [&]() {
-        if (BannerPrinted)
-          return;
-        OS << Banner;
-        BannerPrinted = true;
-      };
-
-      bool NeedModule = llvm::forcePrintModuleIR();
-      if (isFunctionInPrintList("*") && NeedModule) {
-        PrintBannerOnce();
-        OS << "\n";
-        SCC.getCallGraph().getModule().print(OS, nullptr);
-        return false;
-      }
-      bool FoundFunction = false;
-      for (CallGraphNode *CGN : SCC) {
-        if (Function *F = CGN->getFunction()) {
-          if (!F->isDeclaration() && isFunctionInPrintList(F->getName())) {
-            FoundFunction = true;
-            if (!NeedModule) {
-              PrintBannerOnce();
-              F->print(OS);
-            }
+  bool runOnSCC(CallGraphSCC &SCC) override {
+    bool BannerPrinted = false;
+    auto PrintBannerOnce = [&]() {
+      if (BannerPrinted)
+        return;
+      OS << Banner;
+      BannerPrinted = true;
+    };
+
+    bool NeedModule = llvm::forcePrintModuleIR();
+    if (isFunctionInPrintList("*") && NeedModule) {
+      PrintBannerOnce();
+      OS << "\n";
+      SCC.getCallGraph().getModule().print(OS, nullptr);
+      return false;
+    }
+    bool FoundFunction = false;
+    for (CallGraphNode *CGN : SCC) {
+      if (Function *F = CGN->getFunction()) {
+        if (!F->isDeclaration() && isFunctionInPrintList(F->getName())) {
+          FoundFunction = true;
+          if (!NeedModule) {
+            PrintBannerOnce();
+            F->print(OS);
           }
-        } else if (isFunctionInPrintList("*")) {
-          PrintBannerOnce();
-          OS << "\nPrinting <null> Function\n";
         }
-      }
-      if (NeedModule && FoundFunction) {
+      } else if (isFunctionInPrintList("*")) {
         PrintBannerOnce();
-        OS << "\n";
-        SCC.getCallGraph().getModule().print(OS, nullptr);
+        OS << "\nPrinting <null> Function\n";
       }
-      return false;
     }
+    if (NeedModule && FoundFunction) {
+      PrintBannerOnce();
+      OS << "\n";
+      SCC.getCallGraph().getModule().print(OS, nullptr);
+    }
+    return false;
+  }
 
-    StringRef getPassName() const override { return "Print CallGraph IR"; }
-  };
+  StringRef getPassName() const override { return "Print CallGraph IR"; }
+};
 
 } // end anonymous namespace.
 
diff --git a/llvm/lib/Analysis/CallPrinter.cpp b/llvm/lib/Analysis/CallPrinter.cpp
index 65e3184fad91a36..ae6b0c6c49adbfb 100644
--- a/llvm/lib/Analysis/CallPrinter.cpp
+++ b/llvm/lib/Analysis/CallPrinter.cpp
@@ -40,11 +40,11 @@ static cl::opt<bool> ShowHeatColors("callgraph-heat-colors", cl::init(false),
 
 static cl::opt<bool>
     ShowEdgeWeight("callgraph-show-weights", cl::init(false), cl::Hidden,
-                       cl::desc("Show edges labeled with weights"));
+                   cl::desc("Show edges labeled with weights"));
 
-static cl::opt<bool>
-    CallMultiGraph("callgraph-multigraph", cl::init(false), cl::Hidden,
-            cl::desc("Show call-multigraph (do not remove parallel edges)"));
+static cl::opt<bool> CallMultiGraph(
+    "callgraph-multigraph", cl::init(false), cl::Hidden,
+    cl::desc("Show call-multigraph (do not remove parallel edges)"));
 
 static cl::opt<std::string> CallGraphDotFilenamePrefix(
     "callgraph-dot-filename-prefix", cl::Hidden,
@@ -189,8 +189,7 @@ struct DOTGraphTraits<CallGraphDOTInfo *> : public DefaultDOTGraphTraits {
       return "";
 
     uint64_t Counter = getNumOfCalls(*Caller, *Callee);
-    double Width =
-        1 + 2 * (double(Counter) / CGInfo->getMaxFreq());
+    double Width = 1 + 2 * (double(Counter) / CGInfo->getMaxFreq());
     std::string Attrs = "label=\"" + std::to_string(Counter) +
                         "\" penwidth=" + std::to_string(Width);
     return Attrs;
@@ -215,7 +214,7 @@ struct DOTGraphTraits<CallGraphDOTInfo *> : public DefaultDOTGraphTraits {
   }
 };
 
-} // end llvm namespace
+} // namespace llvm
 
 namespace {
 void doCallGraphDOTPrinting(
diff --git a/llvm/lib/Analysis/CaptureTracking.cpp b/llvm/lib/Analysis/CaptureTracking.cpp
index 2827eb705ac58c7..684de41edf4d1be 100644
--- a/llvm/lib/Analysis/CaptureTracking.cpp
+++ b/llvm/lib/Analysis/CaptureTracking.cpp
@@ -33,9 +33,9 @@ using namespace llvm;
 
 #define DEBUG_TYPE "capture-tracking"
 
-STATISTIC(NumCaptured,          "Number of pointers maybe captured");
-STATISTIC(NumNotCaptured,       "Number of pointers not captured");
-STATISTIC(NumCapturedBefore,    "Number of pointers maybe captured before");
+STATISTIC(NumCaptured, "Number of pointers maybe captured");
+STATISTIC(NumNotCaptured, "Number of pointers not captured");
+STATISTIC(NumCapturedBefore, "Number of pointers maybe captured before");
 STATISTIC(NumNotCapturedBefore, "Number of pointers not captured before");
 
 /// The default value for MaxUsesToExplore argument. It's relatively small to
@@ -73,140 +73,140 @@ bool CaptureTracker::isDereferenceableOrNull(Value *O, const DataLayout &DL) {
 }
 
 namespace {
-  struct SimpleCaptureTracker : public CaptureTracker {
-    explicit SimpleCaptureTracker(
+struct SimpleCaptureTracker : public CaptureTracker {
+  explicit SimpleCaptureTracker(
 
-        const SmallPtrSetImpl<const Value *> &EphValues, bool ReturnCaptures)
-        : EphValues(EphValues), ReturnCaptures(ReturnCaptures) {}
+      const SmallPtrSetImpl<const Value *> &EphValues, bool ReturnCaptures)
+      : EphValues(EphValues), ReturnCaptures(ReturnCaptures) {}
 
-    void tooManyUses() override {
-      LLVM_DEBUG(dbgs() << "Captured due to too many uses\n");
-      Captured = true;
-    }
+  void tooManyUses() override {
+    LLVM_DEBUG(dbgs() << "Captured due to too many uses\n");
+    Captured = true;
+  }
 
-    bool captured(const Use *U) override {
-      if (isa<ReturnInst>(U->getUser()) && !ReturnCaptures)
-        return false;
+  bool captured(const Use *U) override {
+    if (isa<ReturnInst>(U->getUser()) && !ReturnCaptures)
+      return false;
 
-      if (EphValues.contains(U->getUser()))
-        return false;
+    if (EphValues.contains(U->getUser()))
+      return false;
 
-      LLVM_DEBUG(dbgs() << "Captured by: " << *U->getUser() << "\n");
+    LLVM_DEBUG(dbgs() << "Captured by: " << *U->getUser() << "\n");
 
-      Captured = true;
-      return true;
-    }
+    Captured = true;
+    return true;
+  }
 
-    const SmallPtrSetImpl<const Value *> &EphValues;
+  const SmallPtrSetImpl<const Value *> &EphValues;
 
-    bool ReturnCaptures;
+  bool ReturnCaptures;
 
-    bool Captured = false;
-  };
+  bool Captured = false;
+};
 
-  /// Only find pointer captures which happen before the given instruction. Uses
-  /// the dominator tree to determine whether one instruction is before another.
-  /// Only support the case where the Value is defined in the same basic block
-  /// as the given instruction and the use.
-  struct CapturesBefore : public CaptureTracker {
+/// Only find pointer captures which happen before the given instruction. Uses
+/// the dominator tree to determine whether one instruction is before another.
+/// Only support the case where the Value is defined in the same basic block
+/// as the given instruction and the use.
+struct CapturesBefore : public CaptureTracker {
 
-    CapturesBefore(bool ReturnCaptures, const Instruction *I,
-                   const DominatorTree *DT, bool IncludeI, const LoopInfo *LI)
-        : BeforeHere(I), DT(DT), ReturnCaptures(ReturnCaptures),
-          IncludeI(IncludeI), LI(LI) {}
+  CapturesBefore(bool ReturnCaptures, const Instruction *I,
+                 const DominatorTree *DT, bool IncludeI, const LoopInfo *LI)
+      : BeforeHere(I), DT(DT), ReturnCaptures(ReturnCaptures),
+        IncludeI(IncludeI), LI(LI) {}
 
-    void tooManyUses() override { Captured = true; }
+  void tooManyUses() override { Captured = true; }
 
-    bool isSafeToPrune(Instruction *I) {
-      if (BeforeHere == I)
-        return !IncludeI;
+  bool isSafeToPrune(Instruction *I) {
+    if (BeforeHere == I)
+      return !IncludeI;
 
-      // We explore this usage only if the usage can reach "BeforeHere".
-      // If use is not reachable from entry, there is no need to explore.
-      if (!DT->isReachableFromEntry(I->getParent()))
-        return true;
+    // We explore this usage only if the usage can reach "BeforeHere".
+    // If use is not reachable from entry, there is no need to explore.
+    if (!DT->isReachableFromEntry(I->getParent()))
+      return true;
 
-      // Check whether there is a path from I to BeforeHere.
-      return !isPotentiallyReachable(I, BeforeHere, nullptr, DT, LI);
-    }
+    // Check whether there is a path from I to BeforeHere.
+    return !isPotentiallyReachable(I, BeforeHere, nullptr, DT, LI);
+  }
 
-    bool captured(const Use *U) override {
-      Instruction *I = cast<Instruction>(U->getUser());
-      if (isa<ReturnInst>(I) && !ReturnCaptures)
-        return false;
+  bool captured(const Use *U) override {
+    Instruction *I = cast<Instruction>(U->getUser());
+    if (isa<ReturnInst>(I) && !ReturnCaptures)
+      return false;
 
-      // Check isSafeToPrune() here rather than in shouldExplore() to avoid
-      // an expensive reachability query for every instruction we look at.
-      // Instead we only do one for actual capturing candidates.
-      if (isSafeToPrune(I))
-        return false;
+    // Check isSafeToPrune() here rather than in shouldExplore() to avoid
+    // an expensive reachability query for every instruction we look at.
+    // Instead we only do one for actual capturing candidates.
+    if (isSafeToPrune(I))
+      return false;
 
-      Captured = true;
-      return true;
-    }
+    Captured = true;
+    return true;
+  }
 
-    const Instruction *BeforeHere;
-    const DominatorTree *DT;
+  const Instruction *BeforeHere;
+  const DominatorTree *DT;
 
-    bool ReturnCaptures;
-    bool IncludeI;
+  bool ReturnCaptures;
+  bool IncludeI;
 
-    bool Captured = false;
+  bool Captured = false;
 
-    const LoopInfo *LI;
-  };
+  const LoopInfo *LI;
+};
 
-  /// Find the 'earliest' instruction before which the pointer is known not to
-  /// be captured. Here an instruction A is considered earlier than instruction
-  /// B, if A dominates B. If 2 escapes do not dominate each other, the
-  /// terminator of the common dominator is chosen. If not all uses cannot be
-  /// analyzed, the earliest escape is set to the first instruction in the
-  /// function entry block.
-  // NOTE: Users have to make sure instructions compared against the earliest
-  // escape are not in a cycle.
-  struct EarliestCaptures : public CaptureTracker {
-
-    EarliestCaptures(bool ReturnCaptures, Function &F, const DominatorTree &DT,
-                     const SmallPtrSetImpl<const Value *> &EphValues)
-        : EphValues(EphValues), DT(DT), ReturnCaptures(ReturnCaptures), F(F) {}
-
-    void tooManyUses() override {
-      Captured = true;
-      EarliestCapture = &*F.getEntryBlock().begin();
-    }
+/// Find the 'earliest' instruction before which the pointer is known not to
+/// be captured. Here an instruction A is considered earlier than instruction
+/// B, if A dominates B. If 2 escapes do not dominate each other, the
+/// terminator of the common dominator is chosen. If not all uses cannot be
+/// analyzed, the earliest escape is set to the first instruction in the
+/// function entry block.
+// NOTE: Users have to make sure instructions compared against the earliest
+// escape are not in a cycle.
+struct EarliestCaptures : public CaptureTracker {
 
-    bool captured(const Use *U) override {
-      Instruction *I = cast<Instruction>(U->getUser());
-      if (isa<ReturnInst>(I) && !ReturnCaptures)
-        return false;
+  EarliestCaptures(bool ReturnCaptures, Function &F, const DominatorTree &DT,
+                   const SmallPtrSetImpl<const Value *> &EphValues)
+      : EphValues(EphValues), DT(DT), ReturnCaptures(ReturnCaptures), F(F) {}
 
-      if (EphValues.contains(I))
-        return false;
+  void tooManyUses() override {
+    Captured = true;
+    EarliestCapture = &*F.getEntryBlock().begin();
+  }
 
-      if (!EarliestCapture)
-        EarliestCapture = I;
-      else
-        EarliestCapture = DT.findNearestCommonDominator(EarliestCapture, I);
-      Captured = true;
+  bool captured(const Use *U) override {
+    Instruction *I = cast<Instruction>(U->getUser());
+    if (isa<ReturnInst>(I) && !ReturnCaptures)
+      return false;
 
-      // Return false to continue analysis; we need to see all potential
-      // captures.
+    if (EphValues.contains(I))
       return false;
-    }
 
-    const SmallPtrSetImpl<const Value *> &EphValues;
+    if (!EarliestCapture)
+      EarliestCapture = I;
+    else
+      EarliestCapture = DT.findNearestCommonDominator(EarliestCapture, I);
+    Captured = true;
+
+    // Return false to continue analysis; we need to see all potential
+    // captures.
+    return false;
+  }
 
-    Instruction *EarliestCapture = nullptr;
+  const SmallPtrSetImpl<const Value *> &EphValues;
 
-    const DominatorTree &DT;
+  Instruction *EarliestCapture = nullptr;
 
-    bool ReturnCaptures;
+  const DominatorTree &DT;
 
-    bool Captured = false;
+  bool ReturnCaptures;
 
-    Function &F;
-  };
-}
+  bool Captured = false;
+
+  Function &F;
+};
+} // namespace
 
 /// PointerMayBeCaptured - Return true if this pointer value may be captured
 /// by the enclosing function (which is required to exist).  This routine can
@@ -440,7 +440,7 @@ void llvm::PointerMayBeCaptured(const Value *V, CaptureTracker *Tracker,
     for (const Use &U : V->uses()) {
       // If there are lots of uses, conservatively say that the value
       // is captured to avoid taking too much compile time.
-      if (Visited.size()  >= MaxUsesToExplore) {
+      if (Visited.size() >= MaxUsesToExplore) {
         Tracker->tooManyUses();
         return false;
       }
diff --git a/llvm/lib/Analysis/CmpInstAnalysis.cpp b/llvm/lib/Analysis/CmpInstAnalysis.cpp
index d6407e8750737bc..34f4d2c08a869a6 100644
--- a/llvm/lib/Analysis/CmpInstAnalysis.cpp
+++ b/llvm/lib/Analysis/CmpInstAnalysis.cpp
@@ -20,37 +20,60 @@ using namespace llvm;
 
 unsigned llvm::getICmpCode(CmpInst::Predicate Pred) {
   switch (Pred) {
-      // False -> 0
-    case ICmpInst::ICMP_UGT: return 1;  // 001
-    case ICmpInst::ICMP_SGT: return 1;  // 001
-    case ICmpInst::ICMP_EQ:  return 2;  // 010
-    case ICmpInst::ICMP_UGE: return 3;  // 011
-    case ICmpInst::ICMP_SGE: return 3;  // 011
-    case ICmpInst::ICMP_ULT: return 4;  // 100
-    case ICmpInst::ICMP_SLT: return 4;  // 100
-    case ICmpInst::ICMP_NE:  return 5;  // 101
-    case ICmpInst::ICMP_ULE: return 6;  // 110
-    case ICmpInst::ICMP_SLE: return 6;  // 110
-      // True -> 7
-    default:
-      llvm_unreachable("Invalid ICmp predicate!");
+    // False -> 0
+  case ICmpInst::ICMP_UGT:
+    return 1; // 001
+  case ICmpInst::ICMP_SGT:
+    return 1; // 001
+  case ICmpInst::ICMP_EQ:
+    return 2; // 010
+  case ICmpInst::ICMP_UGE:
+    return 3; // 011
+  case ICmpInst::ICMP_SGE:
+    return 3; // 011
+  case ICmpInst::ICMP_ULT:
+    return 4; // 100
+  case ICmpInst::ICMP_SLT:
+    return 4; // 100
+  case ICmpInst::ICMP_NE:
+    return 5; // 101
+  case ICmpInst::ICMP_ULE:
+    return 6; // 110
+  case ICmpInst::ICMP_SLE:
+    return 6; // 110
+    // True -> 7
+  default:
+    llvm_unreachable("Invalid ICmp predicate!");
   }
 }
 
 Constant *llvm::getPredForICmpCode(unsigned Code, bool Sign, Type *OpTy,
                                    CmpInst::Predicate &Pred) {
   switch (Code) {
-    default: llvm_unreachable("Illegal ICmp code!");
-    case 0: // False.
-      return ConstantInt::get(CmpInst::makeCmpResultType(OpTy), 0);
-    case 1: Pred = Sign ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT; break;
-    case 2: Pred = ICmpInst::ICMP_EQ; break;
-    case 3: Pred = Sign ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE; break;
-    case 4: Pred = Sign ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT; break;
-    case 5: Pred = ICmpInst::ICMP_NE; break;
-    case 6: Pred = Sign ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE; break;
-    case 7: // True.
-      return ConstantInt::get(CmpInst::makeCmpResultType(OpTy), 1);
+  default:
+    llvm_unreachable("Illegal ICmp code!");
+  case 0: // False.
+    return ConstantInt::get(CmpInst::makeCmpResultType(OpTy), 0);
+  case 1:
+    Pred = Sign ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT;
+    break;
+  case 2:
+    Pred = ICmpInst::ICMP_EQ;
+    break;
+  case 3:
+    Pred = Sign ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE;
+    break;
+  case 4:
+    Pred = Sign ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT;
+    break;
+  case 5:
+    Pred = ICmpInst::ICMP_NE;
+    break;
+  case 6:
+    Pred = Sign ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE;
+    break;
+  case 7: // True.
+    return ConstantInt::get(CmpInst::makeCmpResultType(OpTy), 1);
   }
   return nullptr;
 }
@@ -74,8 +97,8 @@ Constant *llvm::getPredForFCmpCode(unsigned Code, Type *OpTy,
 }
 
 bool llvm::decomposeBitTestICmp(Value *LHS, Value *RHS,
-                                CmpInst::Predicate &Pred,
-                                Value *&X, APInt &Mask, bool LookThruTrunc) {
+                                CmpInst::Predicate &Pred, Value *&X,
+                                APInt &Mask, bool LookThruTrunc) {
   using namespace PatternMatch;
 
   const APInt *C;
diff --git a/llvm/lib/Analysis/ConstantFolding.cpp b/llvm/lib/Analysis/ConstantFolding.cpp
index ac846ce42c7080d..83caa7bda2c1618 100644
--- a/llvm/lib/Analysis/ConstantFolding.cpp
+++ b/llvm/lib/Analysis/ConstantFolding.cpp
@@ -115,8 +115,8 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
       unsigned NumSrcElts = cast<FixedVectorType>(VTy)->getNumElements();
       Type *SrcEltTy = VTy->getElementType();
 
-      // If the vector is a vector of floating point, convert it to vector of int
-      // to simplify things.
+      // If the vector is a vector of floating point, convert it to vector of
+      // int to simplify things.
       if (SrcEltTy->isFloatingPointTy()) {
         unsigned FPWidth = SrcEltTy->getPrimitiveSizeInBits();
         auto *SrcIVTy = FixedVectorType::get(
@@ -126,8 +126,8 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
       }
 
       APInt Result(DL.getTypeSizeInBits(DestTy), 0);
-      if (Constant *CE = foldConstVectorToAPInt(Result, DestTy, C,
-                                                SrcEltTy, NumSrcElts, DL))
+      if (Constant *CE = foldConstVectorToAPInt(Result, DestTy, C, SrcEltTy,
+                                                NumSrcElts, DL))
         return CE;
 
       if (isa<IntegerType>(DestTy))
@@ -194,7 +194,7 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
     // Ask IR to do the conversion now that #elts line up.
     C = ConstantExpr::getBitCast(C, SrcIVTy);
     // If IR wasn't able to fold it, bail out.
-    if (!isa<ConstantVector>(C) &&  // FIXME: Remove ConstantVector.
+    if (!isa<ConstantVector>(C) && // FIXME: Remove ConstantVector.
         !isa<ConstantDataVector>(C))
       return C;
   }
@@ -205,17 +205,17 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
   // more elements.
   bool isLittleEndian = DL.isLittleEndian();
 
-  SmallVector<Constant*, 32> Result;
+  SmallVector<Constant *, 32> Result;
   if (NumDstElt < NumSrcElt) {
     // Handle: bitcast (<4 x i32> <i32 0, i32 1, i32 2, i32 3> to <2 x i64>)
     Constant *Zero = Constant::getNullValue(DstEltTy);
-    unsigned Ratio = NumSrcElt/NumDstElt;
+    unsigned Ratio = NumSrcElt / NumDstElt;
     unsigned SrcBitSize = SrcEltTy->getPrimitiveSizeInBits();
     unsigned SrcElt = 0;
     for (unsigned i = 0; i != NumDstElt; ++i) {
       // Build each element of the result.
       Constant *Elt = Zero;
-      unsigned ShiftAmt = isLittleEndian ? 0 : SrcBitSize*(Ratio-1);
+      unsigned ShiftAmt = isLittleEndian ? 0 : SrcBitSize * (Ratio - 1);
       for (unsigned j = 0; j != Ratio; ++j) {
         Constant *Src = C->getAggregateElement(SrcElt++);
         if (Src && isa<UndefValue>(Src))
@@ -223,7 +223,7 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
               cast<VectorType>(C->getType())->getElementType());
         else
           Src = dyn_cast_or_null<ConstantInt>(Src);
-        if (!Src)  // Reject constantexpr elements.
+        if (!Src) // Reject constantexpr elements.
           return ConstantExpr::getBitCast(C, DestTy);
 
         // Zero extend the element to the right size.
@@ -244,7 +244,7 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
   }
 
   // Handle: bitcast (<2 x i64> <i64 0, i64 1> to <4 x i32>)
-  unsigned Ratio = NumDstElt/NumSrcElt;
+  unsigned Ratio = NumDstElt / NumSrcElt;
   unsigned DstBitSize = DL.getTypeSizeInBits(DstEltTy);
 
   // Loop over each source value, expanding into multiple results.
@@ -264,12 +264,12 @@ Constant *FoldBitCast(Constant *C, Type *DestTy, const DataLayout &DL) {
     if (!Src)
       return ConstantExpr::getBitCast(C, DestTy);
 
-    unsigned ShiftAmt = isLittleEndian ? 0 : DstBitSize*(Ratio-1);
+    unsigned ShiftAmt = isLittleEndian ? 0 : DstBitSize * (Ratio - 1);
     for (unsigned j = 0; j != Ratio; ++j) {
       // Shift the piece of the value into the right place, depending on
       // endianness.
-      Constant *Elt = ConstantExpr::getLShr(Src,
-                                  ConstantInt::get(Src->getType(), ShiftAmt));
+      Constant *Elt = ConstantExpr::getLShr(
+          Src, ConstantInt::get(Src->getType(), ShiftAmt));
       ShiftAmt += isLittleEndian ? DstBitSize : -DstBitSize;
 
       // Truncate the element to an integer with the same pointer size and
@@ -317,7 +317,8 @@ bool llvm::IsConstantOffsetFromGlobal(Constant *C, GlobalValue *&GV,
 
   // Otherwise, if this isn't a constant expr, bail out.
   auto *CE = dyn_cast<ConstantExpr>(C);
-  if (!CE) return false;
+  if (!CE)
+    return false;
 
   // Look through ptr->int and ptr->ptr casts.
   if (CE->getOpcode() == Instruction::PtrToInt ||
@@ -347,7 +348,7 @@ bool llvm::IsConstantOffsetFromGlobal(Constant *C, GlobalValue *&GV,
 }
 
 Constant *llvm::ConstantFoldLoadThroughBitcast(Constant *C, Type *DestTy,
-                                         const DataLayout &DL) {
+                                               const DataLayout &DL) {
   do {
     Type *SrcTy = C->getType();
     if (SrcTy == DestTy)
@@ -433,7 +434,7 @@ bool ReadDataFromGlobal(Constant *C, uint64_t ByteOffset, unsigned char *CurPtr,
     if ((CI->getBitWidth() & 7) != 0)
       return false;
     const APInt &Val = CI->getValue();
-    unsigned IntBytes = unsigned(CI->getBitWidth()/8);
+    unsigned IntBytes = unsigned(CI->getBitWidth() / 8);
 
     for (unsigned i = 0; i != BytesLeft && ByteOffset != IntBytes; ++i) {
       unsigned n = ByteOffset;
@@ -450,11 +451,11 @@ bool ReadDataFromGlobal(Constant *C, uint64_t ByteOffset, unsigned char *CurPtr,
       C = FoldBitCast(C, Type::getInt64Ty(C->getContext()), DL);
       return ReadDataFromGlobal(C, ByteOffset, CurPtr, BytesLeft, DL);
     }
-    if (CFP->getType()->isFloatTy()){
+    if (CFP->getType()->isFloatTy()) {
       C = FoldBitCast(C, Type::getInt32Ty(C->getContext()), DL);
       return ReadDataFromGlobal(C, ByteOffset, CurPtr, BytesLeft, DL);
     }
-    if (CFP->getType()->isHalfTy()){
+    if (CFP->getType()->isHalfTy()) {
       C = FoldBitCast(C, Type::getInt16Ty(C->getContext()), DL);
       return ReadDataFromGlobal(C, ByteOffset, CurPtr, BytesLeft, DL);
     }
@@ -509,8 +510,8 @@ bool ReadDataFromGlobal(Constant *C, uint64_t ByteOffset, unsigned char *CurPtr,
     } else {
       NumElts = cast<FixedVectorType>(C->getType())->getNumElements();
       EltTy = cast<FixedVectorType>(C->getType())->getElementType();
-      // TODO: For non-byte-sized vectors, current implementation assumes there is
-      // padding to the next byte boundary between elements.
+      // TODO: For non-byte-sized vectors, current implementation assumes there
+      // is padding to the next byte boundary between elements.
       if (!DL.typeSizeEqualsStoreSize(EltTy))
         return false;
 
@@ -573,15 +574,18 @@ Constant *FoldReinterpretLoadFromConst(Constant *C, Type *LoadTy,
           !LoadTy->isX86_AMXTy())
         // Materializing a zero can be done trivially without a bitcast
         return Constant::getNullValue(LoadTy);
-      Type *CastTy = LoadTy->isPtrOrPtrVectorTy() ? DL.getIntPtrType(LoadTy) : LoadTy;
+      Type *CastTy =
+          LoadTy->isPtrOrPtrVectorTy() ? DL.getIntPtrType(LoadTy) : LoadTy;
       Res = FoldBitCast(Res, CastTy, DL);
       if (LoadTy->isPtrOrPtrVectorTy()) {
-        // For vector of pointer, we needed to first convert to a vector of integer, then do vector inttoptr
+        // For vector of pointer, we needed to first convert to a vector of
+        // integer, then do vector inttoptr
         if (Res->isNullValue() && !LoadTy->isX86_MMXTy() &&
             !LoadTy->isX86_AMXTy())
           return Constant::getNullValue(LoadTy);
         if (DL.isNonIntegralPointerType(LoadTy->getScalarType()))
-          // Be careful not to replace a load of an addrspace value with an inttoptr here
+          // Be careful not to replace a load of an addrspace value with an
+          // inttoptr here
           return nullptr;
         Res = ConstantExpr::getCast(Instruction::IntToPtr, Res, LoadTy);
       }
@@ -741,11 +745,11 @@ Constant *llvm::ConstantFoldLoadFromConstPtr(Constant *C, Type *Ty,
     return nullptr;
 
   C = cast<Constant>(C->stripAndAccumulateConstantOffsets(
-          DL, Offset, /* AllowNonInbounds */ true));
+      DL, Offset, /* AllowNonInbounds */ true));
 
   if (C == GV)
-    if (Constant *Result = ConstantFoldLoadFromConst(GV->getInitializer(), Ty,
-                                                     Offset, DL))
+    if (Constant *Result =
+            ConstantFoldLoadFromConst(GV->getInitializer(), Ty, Offset, DL))
       return Result;
 
   // If this load comes from anywhere in a uniform constant global, the value
@@ -817,7 +821,7 @@ Constant *SymbolicallyEvaluateBinop(unsigned Opc, Constant *Op0, Constant *Op1,
         // PtrToInt may change the bitwidth so we have convert to the right size
         // first.
         return ConstantInt::get(Op0->getType(), Offs1.zextOrTrunc(OpSize) -
-                                                Offs2.zextOrTrunc(OpSize));
+                                                    Offs2.zextOrTrunc(OpSize));
       }
   }
 
@@ -834,21 +838,17 @@ Constant *CastGEPIndices(Type *SrcElemTy, ArrayRef<Constant *> Ops,
   Type *IntIdxScalarTy = IntIdxTy->getScalarType();
 
   bool Any = false;
-  SmallVector<Constant*, 32> NewIdxs;
+  SmallVector<Constant *, 32> NewIdxs;
   for (unsigned i = 1, e = Ops.size(); i != e; ++i) {
-    if ((i == 1 ||
-         !isa<StructType>(GetElementPtrInst::getIndexedType(
-             SrcElemTy, Ops.slice(1, i - 1)))) &&
+    if ((i == 1 || !isa<StructType>(GetElementPtrInst::getIndexedType(
+                       SrcElemTy, Ops.slice(1, i - 1)))) &&
         Ops[i]->getType()->getScalarType() != IntIdxScalarTy) {
       Any = true;
-      Type *NewType = Ops[i]->getType()->isVectorTy()
-                          ? IntIdxTy
-                          : IntIdxScalarTy;
-      NewIdxs.push_back(ConstantExpr::getCast(CastInst::getCastOpcode(Ops[i],
-                                                                      true,
-                                                                      NewType,
-                                                                      true),
-                                              Ops[i], NewType));
+      Type *NewType =
+          Ops[i]->getType()->isVectorTy() ? IntIdxTy : IntIdxScalarTy;
+      NewIdxs.push_back(ConstantExpr::getCast(
+          CastInst::getCastOpcode(Ops[i], true, NewType, true), Ops[i],
+          NewType));
     } else
       NewIdxs.push_back(Ops[i]);
   }
@@ -856,8 +856,8 @@ Constant *CastGEPIndices(Type *SrcElemTy, ArrayRef<Constant *> Ops,
   if (!Any)
     return nullptr;
 
-  Constant *C = ConstantExpr::getGetElementPtr(
-      SrcElemTy, Ops[0], NewIdxs, InBounds, InRangeIndex);
+  Constant *C = ConstantExpr::getGetElementPtr(SrcElemTy, Ops[0], NewIdxs,
+                                               InBounds, InRangeIndex);
   return ConstantFoldConstant(C, DL, TLI);
 }
 
@@ -875,9 +875,8 @@ Constant *SymbolicallyEvaluateGEP(const GEPOperator *GEP,
   if (!SrcElemTy->isSized() || isa<ScalableVectorType>(SrcElemTy))
     return nullptr;
 
-  if (Constant *C = CastGEPIndices(SrcElemTy, Ops, ResTy,
-                                   GEP->isInBounds(), GEP->getInRangeIndex(),
-                                   DL, TLI))
+  if (Constant *C = CastGEPIndices(SrcElemTy, Ops, ResTy, GEP->isInBounds(),
+                                   GEP->getInRangeIndex(), DL, TLI))
     return C;
 
   Constant *Ptr = Ops[0];
@@ -1045,7 +1044,8 @@ Constant *ConstantFoldInstOperandsImpl(const Value *InstOrCE, unsigned Opcode,
   }
 
   switch (Opcode) {
-  default: return nullptr;
+  default:
+    return nullptr;
   case Instruction::ICmp:
   case Instruction::FCmp: {
     auto *C = cast<CmpInst>(InstOrCE);
@@ -1192,9 +1192,11 @@ Constant *llvm::ConstantFoldInstOperands(Instruction *I,
   return ConstantFoldInstOperandsImpl(I, I->getOpcode(), Ops, DL, TLI);
 }
 
-Constant *llvm::ConstantFoldCompareInstOperands(
-    unsigned IntPredicate, Constant *Ops0, Constant *Ops1, const DataLayout &DL,
-    const TargetLibraryInfo *TLI, const Instruction *I) {
+Constant *llvm::ConstantFoldCompareInstOperands(unsigned IntPredicate,
+                                                Constant *Ops0, Constant *Ops1,
+                                                const DataLayout &DL,
+                                                const TargetLibraryInfo *TLI,
+                                                const Instruction *I) {
   CmpInst::Predicate Predicate = (CmpInst::Predicate)IntPredicate;
   // fold: icmp (inttoptr x), null         -> icmp x, 0
   // fold: icmp null, (inttoptr x)         -> icmp 0, x
@@ -1212,8 +1214,8 @@ Constant *llvm::ConstantFoldCompareInstOperands(
         Type *IntPtrTy = DL.getIntPtrType(CE0->getType());
         // Convert the integer value to the right size to ensure we get the
         // proper extension or truncation.
-        Constant *C = ConstantExpr::getIntegerCast(CE0->getOperand(0),
-                                                   IntPtrTy, false);
+        Constant *C =
+            ConstantExpr::getIntegerCast(CE0->getOperand(0), IntPtrTy, false);
         Constant *Null = Constant::getNullValue(C->getType());
         return ConstantFoldCompareInstOperands(Predicate, C, Null, DL, TLI);
       }
@@ -1237,10 +1239,10 @@ Constant *llvm::ConstantFoldCompareInstOperands(
 
           // Convert the integer value to the right size to ensure we get the
           // proper extension or truncation.
-          Constant *C0 = ConstantExpr::getIntegerCast(CE0->getOperand(0),
-                                                      IntPtrTy, false);
-          Constant *C1 = ConstantExpr::getIntegerCast(CE1->getOperand(0),
-                                                      IntPtrTy, false);
+          Constant *C0 =
+              ConstantExpr::getIntegerCast(CE0->getOperand(0), IntPtrTy, false);
+          Constant *C1 =
+              ConstantExpr::getIntegerCast(CE1->getOperand(0), IntPtrTy, false);
           return ConstantFoldCompareInstOperands(Predicate, C0, C1, DL, TLI);
         }
 
@@ -1461,7 +1463,7 @@ Constant *llvm::ConstantFoldCastOperand(unsigned Opcode, Constant *C,
   case Instruction::FPToUI:
   case Instruction::FPToSI:
   case Instruction::AddrSpaceCast:
-      return ConstantExpr::getCast(Opcode, C, DestTy);
+    return ConstantExpr::getCast(Opcode, C, DestTy);
   case Instruction::BitCast:
     return FoldBitCast(C, DestTy, DL);
   }
@@ -1628,7 +1630,8 @@ bool llvm::canConstantFoldCallTo(const CallBase *Call, const Function *F) {
     return true;
   default:
     return false;
-  case Intrinsic::not_intrinsic: break;
+  case Intrinsic::not_intrinsic:
+    break;
   }
 
   if (!F->hasName() || Call->isStrictFP())
@@ -1642,41 +1645,33 @@ bool llvm::canConstantFoldCallTo(const CallBase *Call, const Function *F) {
   default:
     return false;
   case 'a':
-    return Name == "acos" || Name == "acosf" ||
-           Name == "asin" || Name == "asinf" ||
-           Name == "atan" || Name == "atanf" ||
+    return Name == "acos" || Name == "acosf" || Name == "asin" ||
+           Name == "asinf" || Name == "atan" || Name == "atanf" ||
            Name == "atan2" || Name == "atan2f";
   case 'c':
-    return Name == "ceil" || Name == "ceilf" ||
-           Name == "cos" || Name == "cosf" ||
-           Name == "cosh" || Name == "coshf";
+    return Name == "ceil" || Name == "ceilf" || Name == "cos" ||
+           Name == "cosf" || Name == "cosh" || Name == "coshf";
   case 'e':
-    return Name == "exp" || Name == "expf" ||
-           Name == "exp2" || Name == "exp2f";
+    return Name == "exp" || Name == "expf" || Name == "exp2" || Name == "exp2f";
   case 'f':
-    return Name == "fabs" || Name == "fabsf" ||
-           Name == "floor" || Name == "floorf" ||
-           Name == "fmod" || Name == "fmodf";
+    return Name == "fabs" || Name == "fabsf" || Name == "floor" ||
+           Name == "floorf" || Name == "fmod" || Name == "fmodf";
   case 'l':
-    return Name == "log" || Name == "logf" ||
-           Name == "log2" || Name == "log2f" ||
-           Name == "log10" || Name == "log10f";
+    return Name == "log" || Name == "logf" || Name == "log2" ||
+           Name == "log2f" || Name == "log10" || Name == "log10f";
   case 'n':
     return Name == "nearbyint" || Name == "nearbyintf";
   case 'p':
     return Name == "pow" || Name == "powf";
   case 'r':
-    return Name == "remainder" || Name == "remainderf" ||
-           Name == "rint" || Name == "rintf" ||
-           Name == "round" || Name == "roundf";
+    return Name == "remainder" || Name == "remainderf" || Name == "rint" ||
+           Name == "rintf" || Name == "round" || Name == "roundf";
   case 's':
-    return Name == "sin" || Name == "sinf" ||
-           Name == "sinh" || Name == "sinhf" ||
-           Name == "sqrt" || Name == "sqrtf";
+    return Name == "sin" || Name == "sinf" || Name == "sinh" ||
+           Name == "sinhf" || Name == "sqrt" || Name == "sqrtf";
   case 't':
-    return Name == "tan" || Name == "tanf" ||
-           Name == "tanh" || Name == "tanhf" ||
-           Name == "trunc" || Name == "truncf";
+    return Name == "tan" || Name == "tanf" || Name == "tanh" ||
+           Name == "tanhf" || Name == "trunc" || Name == "truncf";
   case '_':
     // Check for various function names that get used for the math functions
     // when the header files are preprocessed with the macro
@@ -1844,11 +1839,10 @@ Constant *ConstantFoldSSEConvertToInt(const APFloat &Val, bool roundTowardZero,
 
   uint64_t UIntVal;
   bool isExact = false;
-  APFloat::roundingMode mode = roundTowardZero? APFloat::rmTowardZero
-                                              : APFloat::rmNearestTiesToEven;
-  APFloat::opStatus status =
-      Val.convertToInteger(MutableArrayRef(UIntVal), ResultWidth,
-                           IsSigned, mode, &isExact);
+  APFloat::roundingMode mode =
+      roundTowardZero ? APFloat::rmTowardZero : APFloat::rmNearestTiesToEven;
+  APFloat::opStatus status = Val.convertToInteger(
+      MutableArrayRef(UIntVal), ResultWidth, IsSigned, mode, &isExact);
   if (status != APFloat::opOK &&
       (!roundTowardZero || status != APFloat::opInexact))
     return nullptr;
@@ -1973,8 +1967,7 @@ static Constant *constantFoldCanonicalize(const Type *Ty, const CallBase *CI,
 }
 
 static Constant *ConstantFoldScalarCall1(StringRef Name,
-                                         Intrinsic::ID IntrinsicID,
-                                         Type *Ty,
+                                         Intrinsic::ID IntrinsicID, Type *Ty,
                                          ArrayRef<Constant *> Operands,
                                          const TargetLibraryInfo *TLI,
                                          const CallBase *Call) {
@@ -1999,8 +1992,7 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
     // cosine(arg) is between -1 and 1. cosine(invalid arg) is NaN.
     // ctpop() is between 0 and bitwidth, pick 0 for undef.
     // fptoui.sat and fptosi.sat can always fold to zero (for a zero input).
-    if (IntrinsicID == Intrinsic::cos ||
-        IntrinsicID == Intrinsic::ctpop ||
+    if (IntrinsicID == Intrinsic::cos || IntrinsicID == Intrinsic::ctpop ||
         IntrinsicID == Intrinsic::fptoui_sat ||
         IntrinsicID == Intrinsic::fptosi_sat ||
         IntrinsicID == Intrinsic::canonicalize)
@@ -2019,8 +2011,7 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
       // If instruction is not yet put in a basic block (e.g. when cloning
       // a function during inlining), Call's caller may not be available.
       // So check Call's BB first before querying Call->getCaller.
-      const Function *Caller =
-          Call->getParent() ? Call->getCaller() : nullptr;
+      const Function *Caller = Call->getParent() ? Call->getCaller() : nullptr;
       if (Caller &&
           !NullPointerIsDefined(
               Caller, Operands[0]->getType()->getPointerAddressSpace())) {
@@ -2187,51 +2178,52 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
     const APFloat &APF = Op->getValueAPF();
 
     switch (IntrinsicID) {
-      default: break;
-      case Intrinsic::log:
-        return ConstantFoldFP(log, APF, Ty);
-      case Intrinsic::log2:
-        // TODO: What about hosts that lack a C99 library?
-        return ConstantFoldFP(log2, APF, Ty);
-      case Intrinsic::log10:
-        // TODO: What about hosts that lack a C99 library?
-        return ConstantFoldFP(log10, APF, Ty);
-      case Intrinsic::exp:
-        return ConstantFoldFP(exp, APF, Ty);
-      case Intrinsic::exp2:
-        // Fold exp2(x) as pow(2, x), in case the host lacks a C99 library.
-        return ConstantFoldBinaryFP(pow, APFloat(2.0), APF, Ty);
-      case Intrinsic::exp10:
-        // Fold exp10(x) as pow(10, x), in case the host lacks a C99 library.
-        return ConstantFoldBinaryFP(pow, APFloat(10.0), APF, Ty);
-      case Intrinsic::sin:
-        return ConstantFoldFP(sin, APF, Ty);
-      case Intrinsic::cos:
-        return ConstantFoldFP(cos, APF, Ty);
-      case Intrinsic::sqrt:
-        return ConstantFoldFP(sqrt, APF, Ty);
-      case Intrinsic::amdgcn_cos:
-      case Intrinsic::amdgcn_sin: {
-        double V = getValueAsDouble(Op);
-        if (V < -256.0 || V > 256.0)
-          // The gfx8 and gfx9 architectures handle arguments outside the range
-          // [-256, 256] differently. This should be a rare case so bail out
-          // rather than trying to handle the difference.
-          return nullptr;
-        bool IsCos = IntrinsicID == Intrinsic::amdgcn_cos;
-        double V4 = V * 4.0;
-        if (V4 == floor(V4)) {
-          // Force exact results for quarter-integer inputs.
-          const double SinVals[4] = { 0.0, 1.0, 0.0, -1.0 };
-          V = SinVals[((int)V4 + (IsCos ? 1 : 0)) & 3];
-        } else {
-          if (IsCos)
-            V = cos(V * 2.0 * numbers::pi);
-          else
-            V = sin(V * 2.0 * numbers::pi);
-        }
-        return GetConstantFoldFPValue(V, Ty);
+    default:
+      break;
+    case Intrinsic::log:
+      return ConstantFoldFP(log, APF, Ty);
+    case Intrinsic::log2:
+      // TODO: What about hosts that lack a C99 library?
+      return ConstantFoldFP(log2, APF, Ty);
+    case Intrinsic::log10:
+      // TODO: What about hosts that lack a C99 library?
+      return ConstantFoldFP(log10, APF, Ty);
+    case Intrinsic::exp:
+      return ConstantFoldFP(exp, APF, Ty);
+    case Intrinsic::exp2:
+      // Fold exp2(x) as pow(2, x), in case the host lacks a C99 library.
+      return ConstantFoldBinaryFP(pow, APFloat(2.0), APF, Ty);
+    case Intrinsic::exp10:
+      // Fold exp10(x) as pow(10, x), in case the host lacks a C99 library.
+      return ConstantFoldBinaryFP(pow, APFloat(10.0), APF, Ty);
+    case Intrinsic::sin:
+      return ConstantFoldFP(sin, APF, Ty);
+    case Intrinsic::cos:
+      return ConstantFoldFP(cos, APF, Ty);
+    case Intrinsic::sqrt:
+      return ConstantFoldFP(sqrt, APF, Ty);
+    case Intrinsic::amdgcn_cos:
+    case Intrinsic::amdgcn_sin: {
+      double V = getValueAsDouble(Op);
+      if (V < -256.0 || V > 256.0)
+        // The gfx8 and gfx9 architectures handle arguments outside the range
+        // [-256, 256] differently. This should be a rare case so bail out
+        // rather than trying to handle the difference.
+        return nullptr;
+      bool IsCos = IntrinsicID == Intrinsic::amdgcn_cos;
+      double V4 = V * 4.0;
+      if (V4 == floor(V4)) {
+        // Force exact results for quarter-integer inputs.
+        const double SinVals[4] = {0.0, 1.0, 0.0, -1.0};
+        V = SinVals[((int)V4 + (IsCos ? 1 : 0)) & 3];
+      } else {
+        if (IsCos)
+          V = cos(V * 2.0 * numbers::pi);
+        else
+          V = sin(V * 2.0 * numbers::pi);
       }
+      return GetConstantFoldFPValue(V, Ty);
+    }
     }
 
     if (!TLI)
@@ -2416,7 +2408,8 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
   }
 
   switch (IntrinsicID) {
-  default: break;
+  default:
+    break;
   case Intrinsic::vector_reduce_add:
   case Intrinsic::vector_reduce_mul:
   case Intrinsic::vector_reduce_and:
@@ -2436,7 +2429,8 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
       isa<ConstantDataVector>(Operands[0])) {
     auto *Op = cast<Constant>(Operands[0]);
     switch (IntrinsicID) {
-    default: break;
+    default:
+      break;
     case Intrinsic::x86_sse_cvtss2si:
     case Intrinsic::x86_sse_cvtss2si64:
     case Intrinsic::x86_sse2_cvtsd2si:
@@ -2445,7 +2439,7 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/false, Ty,
-                                           /*IsSigned*/true);
+                                           /*IsSigned*/ true);
       break;
     case Intrinsic::x86_sse_cvttss2si:
     case Intrinsic::x86_sse_cvttss2si64:
@@ -2455,7 +2449,7 @@ static Constant *ConstantFoldScalarCall1(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/true, Ty,
-                                           /*IsSigned*/true);
+                                           /*IsSigned*/ true);
       break;
     }
   }
@@ -2482,8 +2476,7 @@ static Constant *evaluateCompare(const APFloat &Op1, const APFloat &Op2,
 }
 
 static Constant *ConstantFoldScalarCall2(StringRef Name,
-                                         Intrinsic::ID IntrinsicID,
-                                         Type *Ty,
+                                         Intrinsic::ID IntrinsicID, Type *Ty,
                                          ArrayRef<Constant *> Operands,
                                          const TargetLibraryInfo *TLI,
                                          const CallBase *Call) {
@@ -2630,16 +2623,18 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
       case Intrinsic::is_fpclass: {
         FPClassTest Mask = static_cast<FPClassTest>(Op2C->getZExtValue());
         bool Result =
-          ((Mask & fcSNan) && Op1V.isNaN() && Op1V.isSignaling()) ||
-          ((Mask & fcQNan) && Op1V.isNaN() && !Op1V.isSignaling()) ||
-          ((Mask & fcNegInf) && Op1V.isNegInfinity()) ||
-          ((Mask & fcNegNormal) && Op1V.isNormal() && Op1V.isNegative()) ||
-          ((Mask & fcNegSubnormal) && Op1V.isDenormal() && Op1V.isNegative()) ||
-          ((Mask & fcNegZero) && Op1V.isZero() && Op1V.isNegative()) ||
-          ((Mask & fcPosZero) && Op1V.isZero() && !Op1V.isNegative()) ||
-          ((Mask & fcPosSubnormal) && Op1V.isDenormal() && !Op1V.isNegative()) ||
-          ((Mask & fcPosNormal) && Op1V.isNormal() && !Op1V.isNegative()) ||
-          ((Mask & fcPosInf) && Op1V.isPosInfinity());
+            ((Mask & fcSNan) && Op1V.isNaN() && Op1V.isSignaling()) ||
+            ((Mask & fcQNan) && Op1V.isNaN() && !Op1V.isSignaling()) ||
+            ((Mask & fcNegInf) && Op1V.isNegInfinity()) ||
+            ((Mask & fcNegNormal) && Op1V.isNormal() && Op1V.isNegative()) ||
+            ((Mask & fcNegSubnormal) && Op1V.isDenormal() &&
+             Op1V.isNegative()) ||
+            ((Mask & fcNegZero) && Op1V.isZero() && Op1V.isNegative()) ||
+            ((Mask & fcPosZero) && Op1V.isZero() && !Op1V.isNegative()) ||
+            ((Mask & fcPosSubnormal) && Op1V.isDenormal() &&
+             !Op1V.isNegative()) ||
+            ((Mask & fcPosNormal) && Op1V.isNormal() && !Op1V.isNegative()) ||
+            ((Mask & fcPosInf) && Op1V.isPosInfinity());
         return ConstantInt::get(Ty, Result);
       }
       default:
@@ -2685,7 +2680,8 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
       return nullptr;
 
     switch (IntrinsicID) {
-    default: break;
+    default:
+      break;
     case Intrinsic::smax:
     case Intrinsic::smin:
     case Intrinsic::umax:
@@ -2733,7 +2729,8 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
       APInt Res;
       bool Overflow;
       switch (IntrinsicID) {
-      default: llvm_unreachable("Invalid case");
+      default:
+        llvm_unreachable("Invalid case");
       case Intrinsic::sadd_with_overflow:
         Res = C0->sadd_ov(*C1, Overflow);
         break;
@@ -2754,9 +2751,8 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
         break;
       }
       Constant *Ops[] = {
-        ConstantInt::get(Ty->getContext(), Res),
-        ConstantInt::get(Type::getInt1Ty(Ty->getContext()), Overflow)
-      };
+          ConstantInt::get(Ty->getContext(), Res),
+          ConstantInt::get(Type::getInt1Ty(Ty->getContext()), Overflow)};
       return ConstantStruct::get(cast<StructType>(Ty), Ops);
     }
     case Intrinsic::uadd_sat:
@@ -2833,7 +2829,8 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
       cast<ConstantInt>(Operands[1])->getValue() == 4) {
     auto *Op = cast<Constant>(Operands[0]);
     switch (IntrinsicID) {
-    default: break;
+    default:
+      break;
     case Intrinsic::x86_avx512_vcvtss2si32:
     case Intrinsic::x86_avx512_vcvtss2si64:
     case Intrinsic::x86_avx512_vcvtsd2si32:
@@ -2842,7 +2839,7 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/false, Ty,
-                                           /*IsSigned*/true);
+                                           /*IsSigned*/ true);
       break;
     case Intrinsic::x86_avx512_vcvtss2usi32:
     case Intrinsic::x86_avx512_vcvtss2usi64:
@@ -2852,7 +2849,7 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/false, Ty,
-                                           /*IsSigned*/false);
+                                           /*IsSigned*/ false);
       break;
     case Intrinsic::x86_avx512_cvttss2si:
     case Intrinsic::x86_avx512_cvttss2si64:
@@ -2862,7 +2859,7 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/true, Ty,
-                                           /*IsSigned*/true);
+                                           /*IsSigned*/ true);
       break;
     case Intrinsic::x86_avx512_cvttss2usi:
     case Intrinsic::x86_avx512_cvttss2usi64:
@@ -2872,7 +2869,7 @@ static Constant *ConstantFoldScalarCall2(StringRef Name,
               dyn_cast_or_null<ConstantFP>(Op->getAggregateElement(0U)))
         return ConstantFoldSSEConvertToInt(FPOp->getValueAPF(),
                                            /*roundTowardZero=*/true, Ty,
-                                           /*IsSigned*/false);
+                                           /*IsSigned*/ false);
       break;
     }
   }
@@ -2975,8 +2972,7 @@ static Constant *ConstantFoldAMDGCNPermIntrinsic(ArrayRef<Constant *> Operands,
 }
 
 static Constant *ConstantFoldScalarCall3(StringRef Name,
-                                         Intrinsic::ID IntrinsicID,
-                                         Type *Ty,
+                                         Intrinsic::ID IntrinsicID, Type *Ty,
                                          ArrayRef<Constant *> Operands,
                                          const TargetLibraryInfo *TLI,
                                          const CallBase *Call) {
@@ -3008,7 +3004,8 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
         }
 
         switch (IntrinsicID) {
-        default: break;
+        default:
+          break;
         case Intrinsic::amdgcn_fma_legacy: {
           // The legacy behaviour is that multiplying +/- 0.0 by anything, even
           // NaN or infinity, gives +0.0.
@@ -3112,8 +3109,7 @@ static Constant *ConstantFoldScalarCall3(StringRef Name,
 }
 
 static Constant *ConstantFoldScalarCall(StringRef Name,
-                                        Intrinsic::ID IntrinsicID,
-                                        Type *Ty,
+                                        Intrinsic::ID IntrinsicID, Type *Ty,
                                         ArrayRef<Constant *> Operands,
                                         const TargetLibraryInfo *TLI,
                                         const CallBase *Call) {
@@ -3440,7 +3436,6 @@ bool llvm::isMathLibCallNoop(const CallBase *Call,
         // Per POSIX, this MAY fail if Op is denormal. We choose not failing.
         return true;
 
-
       case LibFunc_asinl:
       case LibFunc_asin:
       case LibFunc_asinf:
diff --git a/llvm/lib/Analysis/ConstraintSystem.cpp b/llvm/lib/Analysis/ConstraintSystem.cpp
index 8a802515b6f4fb9..496d913522d3f25 100644
--- a/llvm/lib/Analysis/ConstraintSystem.cpp
+++ b/llvm/lib/Analysis/ConstraintSystem.cpp
@@ -8,10 +8,10 @@
 
 #include "llvm/Analysis/ConstraintSystem.h"
 #include "llvm/ADT/SmallVector.h"
-#include "llvm/Support/MathExtras.h"
 #include "llvm/ADT/StringExtras.h"
 #include "llvm/IR/Value.h"
 #include "llvm/Support/Debug.h"
+#include "llvm/Support/MathExtras.h"
 
 #include <string>
 
diff --git a/llvm/lib/Analysis/CostModel.cpp b/llvm/lib/Analysis/CostModel.cpp
index 1782b399e7fd093..ce97de0eedae80d 100644
--- a/llvm/lib/Analysis/CostModel.cpp
+++ b/llvm/lib/Analysis/CostModel.cpp
@@ -20,27 +20,28 @@
 #include "llvm/Analysis/Passes.h"
 #include "llvm/Analysis/TargetTransformInfo.h"
 #include "llvm/IR/Function.h"
+#include "llvm/IR/IntrinsicInst.h"
 #include "llvm/IR/PassManager.h"
 #include "llvm/InitializePasses.h"
 #include "llvm/Pass.h"
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/raw_ostream.h"
-#include "llvm/IR/IntrinsicInst.h"
 using namespace llvm;
 
-static cl::opt<TargetTransformInfo::TargetCostKind> CostKind(
-    "cost-kind", cl::desc("Target cost kind"),
-    cl::init(TargetTransformInfo::TCK_RecipThroughput),
-    cl::values(clEnumValN(TargetTransformInfo::TCK_RecipThroughput,
-                          "throughput", "Reciprocal throughput"),
-               clEnumValN(TargetTransformInfo::TCK_Latency,
-                          "latency", "Instruction latency"),
-               clEnumValN(TargetTransformInfo::TCK_CodeSize,
-                          "code-size", "Code size"),
-               clEnumValN(TargetTransformInfo::TCK_SizeAndLatency,
-                          "size-latency", "Code size and latency")));
-
-static cl::opt<bool> TypeBasedIntrinsicCost("type-based-intrinsic-cost",
+static cl::opt<TargetTransformInfo::TargetCostKind>
+    CostKind("cost-kind", cl::desc("Target cost kind"),
+             cl::init(TargetTransformInfo::TCK_RecipThroughput),
+             cl::values(clEnumValN(TargetTransformInfo::TCK_RecipThroughput,
+                                   "throughput", "Reciprocal throughput"),
+                        clEnumValN(TargetTransformInfo::TCK_Latency, "latency",
+                                   "Instruction latency"),
+                        clEnumValN(TargetTransformInfo::TCK_CodeSize,
+                                   "code-size", "Code size"),
+                        clEnumValN(TargetTransformInfo::TCK_SizeAndLatency,
+                                   "size-latency", "Code size and latency")));
+
+static cl::opt<bool> TypeBasedIntrinsicCost(
+    "type-based-intrinsic-cost",
     cl::desc("Calculate intrinsics cost based only on argument types"),
     cl::init(false));
 
@@ -48,52 +49,49 @@ static cl::opt<bool> TypeBasedIntrinsicCost("type-based-intrinsic-cost",
 #define DEBUG_TYPE CM_NAME
 
 namespace {
-  class CostModelAnalysis : public FunctionPass {
+class CostModelAnalysis : public FunctionPass {
 
-  public:
-    static char ID; // Class identification, replacement for typeinfo
-    CostModelAnalysis() : FunctionPass(ID) {
-      initializeCostModelAnalysisPass(
-        *PassRegistry::getPassRegistry());
-    }
+public:
+  static char ID; // Class identification, replacement for typeinfo
+  CostModelAnalysis() : FunctionPass(ID) {
+    initializeCostModelAnalysisPass(*PassRegistry::getPassRegistry());
+  }
 
-  private:
-    void getAnalysisUsage(AnalysisUsage &AU) const override;
-    bool runOnFunction(Function &F) override;
-    void print(raw_ostream &OS, const Module*) const override;
+private:
+  void getAnalysisUsage(AnalysisUsage &AU) const override;
+  bool runOnFunction(Function &F) override;
+  void print(raw_ostream &OS, const Module *) const override;
 
-    /// The function that we analyze.
-    Function *F = nullptr;
-    /// Target information.
-    const TargetTransformInfo *TTI = nullptr;
-  };
-}  // End of anonymous namespace
+  /// The function that we analyze.
+  Function *F = nullptr;
+  /// Target information.
+  const TargetTransformInfo *TTI = nullptr;
+};
+} // End of anonymous namespace
 
 // Register this pass.
 char CostModelAnalysis::ID = 0;
 static const char cm_name[] = "Cost Model Analysis";
 INITIALIZE_PASS_BEGIN(CostModelAnalysis, CM_NAME, cm_name, false, true)
-INITIALIZE_PASS_END  (CostModelAnalysis, CM_NAME, cm_name, false, true)
+INITIALIZE_PASS_END(CostModelAnalysis, CM_NAME, cm_name, false, true)
 
 FunctionPass *llvm::createCostModelAnalysisPass() {
   return new CostModelAnalysis();
 }
 
-void
-CostModelAnalysis::getAnalysisUsage(AnalysisUsage &AU) const {
+void CostModelAnalysis::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
 }
 
-bool
-CostModelAnalysis::runOnFunction(Function &F) {
- this->F = &F;
- auto *TTIWP = getAnalysisIfAvailable<TargetTransformInfoWrapperPass>();
- TTI = TTIWP ? &TTIWP->getTTI(F) : nullptr;
+bool CostModelAnalysis::runOnFunction(Function &F) {
+  this->F = &F;
+  auto *TTIWP = getAnalysisIfAvailable<TargetTransformInfoWrapperPass>();
+  TTI = TTIWP ? &TTIWP->getTTI(F) : nullptr;
 
- return false;
+  return false;
 }
 
-void CostModelAnalysis::print(raw_ostream &OS, const Module*) const {
+void CostModelAnalysis::print(raw_ostream &OS, const Module *) const {
   if (!F)
     return;
 
@@ -105,8 +103,7 @@ void CostModelAnalysis::print(raw_ostream &OS, const Module*) const {
         IntrinsicCostAttributes ICA(II->getIntrinsicID(), *II,
                                     InstructionCost::getInvalid(), true);
         Cost = TTI->getIntrinsicInstrCost(ICA, CostKind);
-      }
-      else {
+      } else {
         Cost = TTI->getInstructionCost(&Inst, CostKind);
       }
 
@@ -123,7 +120,8 @@ void CostModelAnalysis::print(raw_ostream &OS, const Module*) const {
 PreservedAnalyses CostModelPrinterPass::run(Function &F,
                                             FunctionAnalysisManager &AM) {
   auto &TTI = AM.getResult<TargetIRAnalysis>(F);
-  OS << "Printing analysis 'Cost Model Analysis' for function '" << F.getName() << "':\n";
+  OS << "Printing analysis 'Cost Model Analysis' for function '" << F.getName()
+     << "':\n";
   for (BasicBlock &B : F) {
     for (Instruction &Inst : B) {
       // TODO: Use a pass parameter instead of cl::opt CostKind to determine
@@ -134,8 +132,7 @@ PreservedAnalyses CostModelPrinterPass::run(Function &F,
         IntrinsicCostAttributes ICA(II->getIntrinsicID(), *II,
                                     InstructionCost::getInvalid(), true);
         Cost = TTI.getIntrinsicInstrCost(ICA, CostKind);
-      }
-      else {
+      } else {
         Cost = TTI.getInstructionCost(&Inst, CostKind);
       }
 
diff --git a/llvm/lib/Analysis/DemandedBits.cpp b/llvm/lib/Analysis/DemandedBits.cpp
index c5017bf52498e44..47668076b0f0362 100644
--- a/llvm/lib/Analysis/DemandedBits.cpp
+++ b/llvm/lib/Analysis/DemandedBits.cpp
@@ -51,10 +51,12 @@ static bool isAlwaysLive(Instruction *I) {
          I->mayHaveSideEffects();
 }
 
-void DemandedBits::determineLiveOperandBits(
-    const Instruction *UserI, const Value *Val, unsigned OperandNo,
-    const APInt &AOut, APInt &AB, KnownBits &Known, KnownBits &Known2,
-    bool &KnownBitsComputed) {
+void DemandedBits::determineLiveOperandBits(const Instruction *UserI,
+                                            const Value *Val,
+                                            unsigned OperandNo,
+                                            const APInt &AOut, APInt &AB,
+                                            KnownBits &Known, KnownBits &Known2,
+                                            bool &KnownBitsComputed) {
   unsigned BitWidth = AB.getBitWidth();
 
   // We're called once per operand, but for some instructions, we need to
@@ -63,29 +65,31 @@ void DemandedBits::determineLiveOperandBits(
   // however, want to do this twice, so we cache the result in APInts that live
   // in the caller. For the two-relevant-operands case, both operand values are
   // provided here.
-  auto ComputeKnownBits =
-      [&](unsigned BitWidth, const Value *V1, const Value *V2) {
-        if (KnownBitsComputed)
-          return;
-        KnownBitsComputed = true;
-
-        const DataLayout &DL = UserI->getModule()->getDataLayout();
-        Known = KnownBits(BitWidth);
-        computeKnownBits(V1, Known, DL, 0, &AC, UserI, &DT);
-
-        if (V2) {
-          Known2 = KnownBits(BitWidth);
-          computeKnownBits(V2, Known2, DL, 0, &AC, UserI, &DT);
-        }
-      };
+  auto ComputeKnownBits = [&](unsigned BitWidth, const Value *V1,
+                              const Value *V2) {
+    if (KnownBitsComputed)
+      return;
+    KnownBitsComputed = true;
+
+    const DataLayout &DL = UserI->getModule()->getDataLayout();
+    Known = KnownBits(BitWidth);
+    computeKnownBits(V1, Known, DL, 0, &AC, UserI, &DT);
+
+    if (V2) {
+      Known2 = KnownBits(BitWidth);
+      computeKnownBits(V2, Known2, DL, 0, &AC, UserI, &DT);
+    }
+  };
 
   switch (UserI->getOpcode()) {
-  default: break;
+  default:
+    break;
   case Instruction::Call:
   case Instruction::Invoke:
     if (const auto *II = dyn_cast<IntrinsicInst>(UserI)) {
       switch (II->getIntrinsicID()) {
-      default: break;
+      default:
+        break;
       case Intrinsic::bswap:
         // The alive bits of the input are the swapped alive bits of
         // the output.
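As a rough, self-contained illustration of the bswap rule in the comment above (a toy model fixed to a 32-bit lane; names are hypothetical, not LLVM's APInt-based code):

```cpp
#include <cstdint>

// Toy model of the bswap rule: the demanded (alive) bits of the input are
// the byte-swapped demanded bits of the output. Hypothetical helper, fixed
// to 32 bits for illustration only.
uint32_t bswapDemandedInputBits(uint32_t aOut) {
  return ((aOut & 0x000000FFu) << 24) | ((aOut & 0x0000FF00u) << 8) |
         ((aOut & 0x00FF0000u) >> 8) | ((aOut & 0xFF000000u) >> 24);
}
```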
@@ -102,8 +106,8 @@ void DemandedBits::determineLiveOperandBits(
           // input to the left of, and including, the leftmost bit
           // known to be one.
           ComputeKnownBits(BitWidth, Val, nullptr);
-          AB = APInt::getHighBitsSet(BitWidth,
-                 std::min(BitWidth, Known.countMaxLeadingZeros()+1));
+          AB = APInt::getHighBitsSet(
+              BitWidth, std::min(BitWidth, Known.countMaxLeadingZeros() + 1));
         }
         break;
       case Intrinsic::cttz:
@@ -112,8 +116,8 @@ void DemandedBits::determineLiveOperandBits(
           // input to the right of, and including, the rightmost bit
           // known to be one.
           ComputeKnownBits(BitWidth, Val, nullptr);
-          AB = APInt::getLowBitsSet(BitWidth,
-                 std::min(BitWidth, Known.countMaxTrailingZeros()+1));
+          AB = APInt::getLowBitsSet(
+              BitWidth, std::min(BitWidth, Known.countMaxTrailingZeros() + 1));
         }
         break;
       case Intrinsic::fshl:
@@ -182,7 +186,7 @@ void DemandedBits::determineLiveOperandBits(
         // (because we've promised that they *must* be zero).
         const auto *S = cast<ShlOperator>(UserI);
         if (S->hasNoSignedWrap())
-          AB |= APInt::getHighBitsSet(BitWidth, ShiftAmt+1);
+          AB |= APInt::getHighBitsSet(BitWidth, ShiftAmt + 1);
         else if (S->hasNoUnsignedWrap())
           AB |= APInt::getHighBitsSet(BitWidth, ShiftAmt);
       }
@@ -211,8 +215,7 @@ void DemandedBits::determineLiveOperandBits(
         // Because the high input bit is replicated into the
         // high-order bits of the result, if we need any of those
         // bits, then we must keep the highest input bit.
-        if ((AOut & APInt::getHighBitsSet(BitWidth, ShiftAmt))
-            .getBoolValue())
+        if ((AOut & APInt::getHighBitsSet(BitWidth, ShiftAmt)).getBoolValue())
           AB.setSignBit();
 
         // If the shift is exact, then the low bits are not dead
@@ -265,7 +268,7 @@ void DemandedBits::determineLiveOperandBits(
     // bits, then we must keep the highest input bit.
     if ((AOut & APInt::getHighBitsSet(AOut.getBitWidth(),
                                       AOut.getBitWidth() - BitWidth))
-        .getBoolValue())
+            .getBoolValue())
       AB.setSignBit();
     break;
   case Instruction::Select:
@@ -294,7 +297,7 @@ void DemandedBits::performAnalysis() {
   AliveBits.clear();
   DeadUses.clear();
 
-  SmallSetVector<Instruction*, 16> Worklist;
+  SmallSetVector<Instruction *, 16> Worklist;
 
   // Collect the set of "root" instructions that are known live.
   for (Instruction &I : instructions(F)) {
@@ -477,7 +480,8 @@ void DemandedBits::print(raw_ostream &OS) {
     OS << *I << '\n';
   };
 
-  OS << "Printing analysis 'Demanded Bits Analysis' for function '" << F.getName() << "':\n";
+  OS << "Printing analysis 'Demanded Bits Analysis' for function '"
+     << F.getName() << "':\n";
   performAnalysis();
   for (auto &KV : AliveBits) {
     Instruction *I = KV.first;
@@ -572,7 +576,7 @@ APInt DemandedBits::determineLiveOperandBitsSub(unsigned OperandNo,
 AnalysisKey DemandedBitsAnalysis::Key;
 
 DemandedBits DemandedBitsAnalysis::run(Function &F,
-                                             FunctionAnalysisManager &AM) {
+                                       FunctionAnalysisManager &AM) {
   auto &AC = AM.getResult<AssumptionAnalysis>(F);
   auto &DT = AM.getResult<DominatorTreeAnalysis>(F);
   return DemandedBits(F, AC, DT);
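For reference, the ctlz rule in the DemandedBits hunk above can be sketched on a fixed 32-bit lane (hypothetical standalone helper; the real code works on arbitrary-width APInts and gets the bound from `Known.countMaxLeadingZeros()`):

```cpp
#include <algorithm>
#include <cstdint>

// If any output bit of ctlz(x) is demanded, the live input bits are the top
// min(BitWidth, maxLeadingZeros + 1) bits: everything to the left of, and
// including, the leftmost bit that could be one.
uint32_t ctlzDemandedInputBits(unsigned maxLeadingZeros) {
  unsigned n = std::min(32u, maxLeadingZeros + 1);
  return n >= 32 ? ~0u : ~0u << (32 - n); // high n bits set
}
```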
diff --git a/llvm/lib/Analysis/DependenceAnalysis.cpp b/llvm/lib/Analysis/DependenceAnalysis.cpp
index 1bce9aae09bb26b..a3e0585dbb1d83f 100644
--- a/llvm/lib/Analysis/DependenceAnalysis.cpp
+++ b/llvm/lib/Analysis/DependenceAnalysis.cpp
@@ -182,15 +182,15 @@ static void dumpExampleDependence(raw_ostream &OS, DependenceInfo *DA,
   for (inst_iterator SrcI = inst_begin(F), SrcE = inst_end(F); SrcI != SrcE;
        ++SrcI) {
     if (SrcI->mayReadOrWriteMemory()) {
-      for (inst_iterator DstI = SrcI, DstE = inst_end(F);
-           DstI != DstE; ++DstI) {
+      for (inst_iterator DstI = SrcI, DstE = inst_end(F); DstI != DstE;
+           ++DstI) {
         if (DstI->mayReadOrWriteMemory()) {
           OS << "Src:" << *SrcI << " --> Dst:" << *DstI << "\n";
           OS << "  da analyze - ";
           if (auto D = DA->depends(&*SrcI, &*DstI, true)) {
             // Normalize negative direction vectors if required by clients.
             if (NormalizeResults && D->normalize(&SE))
-                OS << "normalized - ";
+              OS << "normalized - ";
             D->dump(OS);
             for (unsigned Level = 1; Level <= D->getLevels(); Level++) {
               if (D->isSplitable(Level)) {
@@ -199,8 +199,7 @@ static void dumpExampleDependence(raw_ostream &OS, DependenceInfo *DA,
                 OS << "!\n";
               }
             }
-          }
-          else
+          } else
             OS << "none!\n";
         }
       }
@@ -210,8 +209,8 @@ static void dumpExampleDependence(raw_ostream &OS, DependenceInfo *DA,
 
 void DependenceAnalysisWrapperPass::print(raw_ostream &OS,
                                           const Module *) const {
-  dumpExampleDependence(OS, info.get(),
-                        getAnalysis<ScalarEvolutionWrapperPass>().getSE(), false);
+  dumpExampleDependence(
+      OS, info.get(), getAnalysis<ScalarEvolutionWrapperPass>().getSE(), false);
 }
 
 PreservedAnalyses
@@ -231,33 +230,26 @@ bool Dependence::isInput() const {
   return Src->mayReadFromMemory() && Dst->mayReadFromMemory();
 }
 
-
 // Returns true if this is an output dependence.
 bool Dependence::isOutput() const {
   return Src->mayWriteToMemory() && Dst->mayWriteToMemory();
 }
 
-
 // Returns true if this is a flow (aka true) dependence.
 bool Dependence::isFlow() const {
   return Src->mayWriteToMemory() && Dst->mayReadFromMemory();
 }
 
-
 // Returns true if this is an anti dependence.
 bool Dependence::isAnti() const {
   return Src->mayReadFromMemory() && Dst->mayWriteToMemory();
 }
 
-
 // Returns true if a particular level is scalar; that is,
 // if no subscript in the source or destination mentions the induction
 // variable associated with the loop at this level.
 // Leave this out of line, so it will serve as a virtual method anchor
-bool Dependence::isScalar(unsigned level) const {
-  return false;
-}
-
+bool Dependence::isScalar(unsigned level) const { return false; }
 
 //===----------------------------------------------------------------------===//
 // FullDependence methods
@@ -319,8 +311,7 @@ bool FullDependence::normalize(ScalarEvolution *SE) {
     DV[Level - 1].Direction = RevDirection;
     // Reverse the dependence distance as well.
     if (DV[Level - 1].Distance != nullptr)
-      DV[Level - 1].Distance =
-          SE->getNegativeSCEV(DV[Level - 1].Distance);
+      DV[Level - 1].Distance = SE->getNegativeSCEV(DV[Level - 1].Distance);
   }
 
   LLVM_DEBUG(dbgs() << "After normalizing negative direction vectors:\n";
@@ -336,14 +327,12 @@ unsigned FullDependence::getDirection(unsigned Level) const {
   return DV[Level - 1].Direction;
 }
 
-
 // Returns the distance (or NULL) associated with a particular level.
 const SCEV *FullDependence::getDistance(unsigned Level) const {
   assert(0 < Level && Level <= Levels && "Level out of range");
   return DV[Level - 1].Distance;
 }
 
-
 // Returns true if a particular level is scalar; that is,
 // if no subscript in the source or destination mentions the induction
 // variable associated with the loop at this level.
@@ -352,7 +341,6 @@ bool FullDependence::isScalar(unsigned Level) const {
   return DV[Level - 1].Scalar;
 }
 
-
 // Returns true if peeling the first iteration from this loop
 // will break this dependence.
 bool FullDependence::isPeelFirst(unsigned Level) const {
@@ -360,7 +348,6 @@ bool FullDependence::isPeelFirst(unsigned Level) const {
   return DV[Level - 1].PeelFirst;
 }
 
-
 // Returns true if peeling the last iteration from this loop
 // will break this dependence.
 bool FullDependence::isPeelLast(unsigned Level) const {
@@ -368,14 +355,12 @@ bool FullDependence::isPeelLast(unsigned Level) const {
   return DV[Level - 1].PeelLast;
 }
 
-
 // Returns true if splitting this loop will break the dependence.
 bool FullDependence::isSplitable(unsigned Level) const {
   assert(0 < Level && Level <= Levels && "Level out of range");
   return DV[Level - 1].Splitable;
 }
 
-
 //===----------------------------------------------------------------------===//
 // DependenceInfo::Constraint methods
 
@@ -386,7 +371,6 @@ const SCEV *DependenceInfo::Constraint::getX() const {
   return A;
 }
 
-
 // If constraint is a point <X, Y>, returns Y.
 // Otherwise assert.
 const SCEV *DependenceInfo::Constraint::getY() const {
@@ -394,7 +378,6 @@ const SCEV *DependenceInfo::Constraint::getY() const {
   return B;
 }
 
-
 // If constraint is a line AX + BY = C, returns A.
 // Otherwise assert.
 const SCEV *DependenceInfo::Constraint::getA() const {
@@ -403,7 +386,6 @@ const SCEV *DependenceInfo::Constraint::getA() const {
   return A;
 }
 
-
 // If constraint is a line AX + BY = C, returns B.
 // Otherwise assert.
 const SCEV *DependenceInfo::Constraint::getB() const {
@@ -412,7 +394,6 @@ const SCEV *DependenceInfo::Constraint::getB() const {
   return B;
 }
 
-
 // If constraint is a line AX + BY = C, returns C.
 // Otherwise assert.
 const SCEV *DependenceInfo::Constraint::getC() const {
@@ -421,7 +402,6 @@ const SCEV *DependenceInfo::Constraint::getC() const {
   return C;
 }
 
-
 // If constraint is a distance, returns D.
 // Otherwise assert.
 const SCEV *DependenceInfo::Constraint::getD() const {
@@ -429,7 +409,6 @@ const SCEV *DependenceInfo::Constraint::getD() const {
   return SE->getNegativeSCEV(C);
 }
 
-
 // Returns the loop associated with this constraint.
 const Loop *DependenceInfo::Constraint::getAssociatedLoop() const {
   assert((Kind == Distance || Kind == Line || Kind == Point) &&
@@ -480,17 +459,16 @@ LLVM_DUMP_METHOD void DependenceInfo::Constraint::dump(raw_ostream &OS) const {
   else if (isPoint())
     OS << " Point is <" << *getX() << ", " << *getY() << ">\n";
   else if (isDistance())
-    OS << " Distance is " << *getD() <<
-      " (" << *getA() << "*X + " << *getB() << "*Y = " << *getC() << ")\n";
+    OS << " Distance is " << *getD() << " (" << *getA() << "*X + " << *getB()
+       << "*Y = " << *getC() << ")\n";
   else if (isLine())
-    OS << " Line is " << *getA() << "*X + " <<
-      *getB() << "*Y = " << *getC() << "\n";
+    OS << " Line is " << *getA() << "*X + " << *getB() << "*Y = " << *getC()
+       << "\n";
   else
     llvm_unreachable("unknown constraint type in Constraint::dump");
 }
 #endif
 
-
 // Updates X with the intersection
 // of the Constraints X and Y. Returns true if X has changed.
 // Corresponds to Figure 4 from the paper
@@ -572,15 +550,14 @@ bool DependenceInfo::intersectConstraints(Constraint *X, const Constraint *Y) {
       const SCEV *A1B2 = SE->getMulExpr(X->getA(), Y->getB());
       const SCEV *A2B1 = SE->getMulExpr(Y->getA(), X->getB());
       const SCEVConstant *C1A2_C2A1 =
-        dyn_cast<SCEVConstant>(SE->getMinusSCEV(C1A2, C2A1));
+          dyn_cast<SCEVConstant>(SE->getMinusSCEV(C1A2, C2A1));
       const SCEVConstant *C1B2_C2B1 =
-        dyn_cast<SCEVConstant>(SE->getMinusSCEV(C1B2, C2B1));
+          dyn_cast<SCEVConstant>(SE->getMinusSCEV(C1B2, C2B1));
       const SCEVConstant *A1B2_A2B1 =
-        dyn_cast<SCEVConstant>(SE->getMinusSCEV(A1B2, A2B1));
+          dyn_cast<SCEVConstant>(SE->getMinusSCEV(A1B2, A2B1));
       const SCEVConstant *A2B1_A1B2 =
-        dyn_cast<SCEVConstant>(SE->getMinusSCEV(A2B1, A1B2));
-      if (!C1B2_C2B1 || !C1A2_C2A1 ||
-          !A1B2_A2B1 || !A2B1_A1B2)
+          dyn_cast<SCEVConstant>(SE->getMinusSCEV(A2B1, A1B2));
+      if (!C1B2_C2B1 || !C1A2_C2A1 || !A1B2_A2B1 || !A2B1_A1B2)
         return false;
       APInt Xtop = C1B2_C2B1->getAPInt();
       APInt Xbot = A1B2_A2B1->getAPInt();
@@ -607,8 +584,8 @@ bool DependenceInfo::intersectConstraints(Constraint *X, const Constraint *Y) {
         ++DeltaSuccesses;
         return true;
       }
-      if (const SCEVConstant *CUB =
-          collectConstantUpperBound(X->getAssociatedLoop(), Prod1->getType())) {
+      if (const SCEVConstant *CUB = collectConstantUpperBound(
+              X->getAssociatedLoop(), Prod1->getType())) {
         const APInt &UpperBound = CUB->getAPInt();
         LLVM_DEBUG(dbgs() << "\t\tupper bound = " << UpperBound << "\n");
         if (Xq.sgt(UpperBound) || Yq.sgt(UpperBound)) {
@@ -617,8 +594,7 @@ bool DependenceInfo::intersectConstraints(Constraint *X, const Constraint *Y) {
           return true;
         }
       }
-      X->setPoint(SE->getConstant(Xq),
-                  SE->getConstant(Yq),
+      X->setPoint(SE->getConstant(Xq), SE->getConstant(Yq),
                   X->getAssociatedLoop());
       ++DeltaSuccesses;
       return true;
@@ -648,7 +624,6 @@ bool DependenceInfo::intersectConstraints(Constraint *X, const Constraint *Y) {
   return false;
 }
 
-
 //===----------------------------------------------------------------------===//
 // DependenceInfo methods
 
@@ -712,8 +687,7 @@ void Dependence::dump(raw_ostream &OS) const {
 // tbaa, non-overlapping regions etc.), then it is known there is no dependency.
 // Otherwise the underlying objects are checked to see if they point to
 // different identifiable objects.
-static AliasResult underlyingObjectsAlias(AAResults *AA,
-                                          const DataLayout &DL,
+static AliasResult underlyingObjectsAlias(AAResults *AA, const DataLayout &DL,
                                           const MemoryLocation &LocA,
                                           const MemoryLocation &LocB) {
   // Check the original locations (minus size) for noalias, which can happen for
@@ -743,11 +717,9 @@ static AliasResult underlyingObjectsAlias(AAResults *AA,
   return AliasResult::NoAlias;
 }
 
-
 // Returns true if the load or store can be analyzed. Atomic and volatile
 // operations have properties which this analysis does not understand.
-static
-bool isLoadOrStore(const Instruction *I) {
+static bool isLoadOrStore(const Instruction *I) {
   if (const LoadInst *LI = dyn_cast<LoadInst>(I))
     return LI->isUnordered();
   else if (const StoreInst *SI = dyn_cast<StoreInst>(I))
@@ -755,7 +727,6 @@ bool isLoadOrStore(const Instruction *I) {
   return false;
 }
 
-
 // Examines the loop nesting of the Src and Dst
 // instructions and establishes their shared loops. Sets the variables
 // CommonLevels, SrcLevels, and MaxLevels.
@@ -833,14 +804,12 @@ void DependenceInfo::establishNestingLevels(const Instruction *Src,
   MaxLevels -= CommonLevels;
 }
 
-
 // Given one of the loops containing the source, return
 // its level index in our numbering scheme.
 unsigned DependenceInfo::mapSrcLoop(const Loop *SrcLoop) const {
   return SrcLoop->getLoopDepth();
 }
 
-
 // Given one of the loops containing the destination,
 // return its level index in our numbering scheme.
 unsigned DependenceInfo::mapDstLoop(const Loop *DstLoop) const {
@@ -853,7 +822,6 @@ unsigned DependenceInfo::mapDstLoop(const Loop *DstLoop) const {
     return D;
 }
 
-
 // Returns true if Expression is loop invariant in LoopNest.
 bool DependenceInfo::isLoopInvariant(const SCEV *Expression,
                                      const Loop *LoopNest) const {
@@ -869,8 +837,6 @@ bool DependenceInfo::isLoopInvariant(const SCEV *Expression,
   return SE->isLoopInvariant(Expression, LoopNest->getOutermostLoop());
 }
 
-
-
 // Finds the set of loops from the LoopNest that
 // have a level <= CommonLevels and are referred to by the SCEV Expression.
 void DependenceInfo::collectCommonLoops(const SCEV *Expression,
@@ -898,8 +864,8 @@ void DependenceInfo::unifySubscriptType(ArrayRef<Subscript *> Pairs) {
     IntegerType *DstTy = dyn_cast<IntegerType>(Dst->getType());
     if (SrcTy == nullptr || DstTy == nullptr) {
       assert(SrcTy == DstTy && "This function only unify integer types and "
-             "expect Src and Dst share the same type "
-             "otherwise.");
+                               "expect Src and Dst share the same type "
+                               "otherwise.");
       continue;
     }
     if (SrcTy->getBitWidth() > widestWidthSeen) {
@@ -912,7 +878,6 @@ void DependenceInfo::unifySubscriptType(ArrayRef<Subscript *> Pairs) {
     }
   }
 
-
   assert(widestWidthSeen > 0);
 
   // Now extend each pair to the widest seen.
@@ -923,8 +888,8 @@ void DependenceInfo::unifySubscriptType(ArrayRef<Subscript *> Pairs) {
     IntegerType *DstTy = dyn_cast<IntegerType>(Dst->getType());
     if (SrcTy == nullptr || DstTy == nullptr) {
       assert(SrcTy == DstTy && "This function only unify integer types and "
-             "expect Src and Dst share the same type "
-             "otherwise.");
+                               "expect Src and Dst share the same type "
+                               "otherwise.");
       continue;
     }
     if (SrcTy->getBitWidth() < widestWidthSeen)
@@ -1009,7 +974,6 @@ bool DependenceInfo::checkDstSubscript(const SCEV *Dst, const Loop *LoopNest,
   return checkSubscript(Dst, LoopNest, Loops, false);
 }
 
-
 // Examines the subscript pair (the Src and Dst SCEVs)
 // and classifies it as either ZIV, SIV, RDIV, MIV, or Nonlinear.
 // Collects the associated loops in a set.
@@ -1030,14 +994,12 @@ DependenceInfo::classifyPair(const SCEV *Src, const Loop *SrcLoopNest,
     return Subscript::ZIV;
   if (N == 1)
     return Subscript::SIV;
-  if (N == 2 && (SrcLoops.count() == 0 ||
-                 DstLoops.count() == 0 ||
+  if (N == 2 && (SrcLoops.count() == 0 || DstLoops.count() == 0 ||
                  (SrcLoops.count() == 1 && DstLoops.count() == 1)))
     return Subscript::RDIV;
   return Subscript::MIV;
 }
 
-
 // A wrapper around SCEV::isKnownPredicate.
 // Looks for cases where we're interested in comparing for equality.
 // If both X and Y have been identically sign or zero extended,
@@ -1050,12 +1012,9 @@ DependenceInfo::classifyPair(const SCEV *Src, const Loop *SrcLoopNest,
 // involving symbolics.
 bool DependenceInfo::isKnownPredicate(ICmpInst::Predicate Pred, const SCEV *X,
                                       const SCEV *Y) const {
-  if (Pred == CmpInst::ICMP_EQ ||
-      Pred == CmpInst::ICMP_NE) {
-    if ((isa<SCEVSignExtendExpr>(X) &&
-         isa<SCEVSignExtendExpr>(Y)) ||
-        (isa<SCEVZeroExtendExpr>(X) &&
-         isa<SCEVZeroExtendExpr>(Y))) {
+  if (Pred == CmpInst::ICMP_EQ || Pred == CmpInst::ICMP_NE) {
+    if ((isa<SCEVSignExtendExpr>(X) && isa<SCEVSignExtendExpr>(Y)) ||
+        (isa<SCEVZeroExtendExpr>(X) && isa<SCEVZeroExtendExpr>(Y))) {
       const SCEVIntegralCastExpr *CX = cast<SCEVIntegralCastExpr>(X);
       const SCEVIntegralCastExpr *CY = cast<SCEVIntegralCastExpr>(Y);
       const SCEV *Xop = CX->getOperand();
@@ -1092,9 +1051,9 @@ bool DependenceInfo::isKnownPredicate(ICmpInst::Predicate Pred, const SCEV *X,
   }
 }
 
-/// Compare to see if S is less than Size, using isKnownNegative(S - max(Size, 1))
-/// with some extra checking if S is an AddRec and we can prove less-than using
-/// the loop bounds.
+/// Compare to see if S is less than Size, using isKnownNegative(S - max(Size,
+/// 1)) with some extra checking if S is an AddRec and we can prove less-than
+/// using the loop bounds.
 bool DependenceInfo::isKnownLessThan(const SCEV *S, const SCEV *Size) const {
   // First unify to the same type
   auto *SType = dyn_cast<IntegerType>(S->getType());
@@ -1159,7 +1118,6 @@ const SCEV *DependenceInfo::collectUpperBound(const Loop *L, Type *T) const {
   return nullptr;
 }
 
-
 // Calls collectUpperBound(), then attempts to cast it to SCEVConstant.
 // If the cast fails, returns NULL.
 const SCEVConstant *DependenceInfo::collectConstantUpperBound(const Loop *L,
@@ -1169,7 +1127,6 @@ const SCEVConstant *DependenceInfo::collectConstantUpperBound(const Loop *L,
   return nullptr;
 }
 
-
 // testZIV -
 // When we have a pair of subscripts of the form [c1] and [c2],
 // where c1 and c2 are both loop invariant, we attack it using
@@ -1199,7 +1156,6 @@ bool DependenceInfo::testZIV(const SCEV *Src, const SCEV *Dst,
   return false; // possibly dependent
 }
 
-
 // strongSIVtest -
 // From the paper, Practical Dependence Testing, Section 4.2.1
 //
@@ -1251,9 +1207,9 @@ bool DependenceInfo::strongSIVtest(const SCEV *Coeff, const SCEV *SrcConst,
     LLVM_DEBUG(dbgs() << "\t    UpperBound = " << *UpperBound);
     LLVM_DEBUG(dbgs() << ", " << *UpperBound->getType() << "\n");
     const SCEV *AbsDelta =
-      SE->isKnownNonNegative(Delta) ? Delta : SE->getNegativeSCEV(Delta);
+        SE->isKnownNonNegative(Delta) ? Delta : SE->getNegativeSCEV(Delta);
     const SCEV *AbsCoeff =
-      SE->isKnownNonNegative(Coeff) ? Coeff : SE->getNegativeSCEV(Coeff);
+        SE->isKnownNonNegative(Coeff) ? Coeff : SE->getNegativeSCEV(Coeff);
     const SCEV *Product = SE->getMulExpr(UpperBound, AbsCoeff);
     if (isKnownPredicate(CmpInst::ICMP_SGT, AbsDelta, Product)) {
       // Distance greater than trip count - no dependence
@@ -1267,7 +1223,7 @@ bool DependenceInfo::strongSIVtest(const SCEV *Coeff, const SCEV *SrcConst,
   if (isa<SCEVConstant>(Delta) && isa<SCEVConstant>(Coeff)) {
     APInt ConstDelta = cast<SCEVConstant>(Delta)->getAPInt();
     APInt ConstCoeff = cast<SCEVConstant>(Coeff)->getAPInt();
-    APInt Distance  = ConstDelta; // these need to be initialized
+    APInt Distance = ConstDelta; // these need to be initialized
     APInt Remainder = ConstDelta;
     APInt::sdivrem(ConstDelta, ConstCoeff, Distance, Remainder);
     LLVM_DEBUG(dbgs() << "\t    Distance = " << Distance << "\n");
@@ -1288,29 +1244,25 @@ bool DependenceInfo::strongSIVtest(const SCEV *Coeff, const SCEV *SrcConst,
     else
       Result.DV[Level].Direction &= Dependence::DVEntry::EQ;
     ++StrongSIVsuccesses;
-  }
-  else if (Delta->isZero()) {
+  } else if (Delta->isZero()) {
     // since 0/X == 0
     Result.DV[Level].Distance = Delta;
     NewConstraint.setDistance(Delta, CurLoop);
     Result.DV[Level].Direction &= Dependence::DVEntry::EQ;
     ++StrongSIVsuccesses;
-  }
-  else {
+  } else {
     if (Coeff->isOne()) {
       LLVM_DEBUG(dbgs() << "\t    Distance = " << *Delta << "\n");
       Result.DV[Level].Distance = Delta; // since X/1 == X
       NewConstraint.setDistance(Delta, CurLoop);
-    }
-    else {
+    } else {
       Result.Consistent = false;
-      NewConstraint.setLine(Coeff,
-                            SE->getNegativeSCEV(Coeff),
+      NewConstraint.setLine(Coeff, SE->getNegativeSCEV(Coeff),
                             SE->getNegativeSCEV(Delta), CurLoop);
     }
 
     // maybe we can get a useful direction
-    bool DeltaMaybeZero     = !SE->isKnownNonZero(Delta);
+    bool DeltaMaybeZero = !SE->isKnownNonZero(Delta);
     bool DeltaMaybePositive = !SE->isKnownNonPositive(Delta);
     bool DeltaMaybeNegative = !SE->isKnownNonNegative(Delta);
     bool CoeffMaybePositive = !SE->isKnownNonPositive(Coeff);
@@ -1334,7 +1286,6 @@ bool DependenceInfo::strongSIVtest(const SCEV *Coeff, const SCEV *SrcConst,
   return false;
 }
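The constant case of the strong-SIV decision above can be sketched with plain integers: for subscripts `c1 + a*i` (source) and `c2 + a*i` (destination) in a loop of `ub` iterations, there is a dependence exactly when `a` divides the delta and the resulting distance fits inside the trip count. The names and the sign convention for the delta here are illustrative, not lifted from LLVM's `strongSIVtest`:

```cpp
#include <cstdint>
#include <cstdlib>
#include <optional>

// Returns the dependence distance when the (constant-only) strong-SIV test
// proves a dependence, and nullopt when it proves independence.
std::optional<int64_t> strongSIVDistance(int64_t a, int64_t c1, int64_t c2,
                                         int64_t ub) {
  int64_t delta = c1 - c2;
  if (delta % a != 0)
    return std::nullopt; // nonzero remainder: independent
  int64_t dist = delta / a;
  if (std::llabs(dist) > ub)
    return std::nullopt; // distance greater than trip count: independent
  return dist;
}
```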
 
-
 // weakCrossingSIVtest -
 // From the paper, Practical Dependence Testing, Section 4.2.2
 //
@@ -1428,8 +1379,8 @@ bool DependenceInfo::weakCrossingSIVtest(
   if (const SCEV *UpperBound = collectUpperBound(CurLoop, Delta->getType())) {
     LLVM_DEBUG(dbgs() << "\t    UpperBound = " << *UpperBound << "\n");
     const SCEV *ConstantTwo = SE->getConstant(UpperBound->getType(), 2);
-    const SCEV *ML = SE->getMulExpr(SE->getMulExpr(ConstCoeff, UpperBound),
-                                    ConstantTwo);
+    const SCEV *ML =
+        SE->getMulExpr(SE->getMulExpr(ConstCoeff, UpperBound), ConstantTwo);
     LLVM_DEBUG(dbgs() << "\t    ML = " << *ML << "\n");
     if (isKnownPredicate(CmpInst::ICMP_SGT, Delta, ML)) {
       // Delta too big, no dependence
@@ -1479,7 +1430,6 @@ bool DependenceInfo::weakCrossingSIVtest(
   return false;
 }
 
-
 // Kirch's algorithm, from
 //
 //        Optimizing Supercompilers for Supercomputers
@@ -1500,9 +1450,14 @@ static bool findGCD(unsigned Bits, const APInt &AM, const APInt &BM,
   APInt R = G0;
   APInt::sdivrem(G0, G1, Q, R);
   while (R != 0) {
-    APInt A2 = A0 - Q*A1; A0 = A1; A1 = A2;
-    APInt B2 = B0 - Q*B1; B0 = B1; B1 = B2;
-    G0 = G1; G1 = R;
+    APInt A2 = A0 - Q * A1;
+    A0 = A1;
+    A1 = A2;
+    APInt B2 = B0 - Q * B1;
+    B0 = B1;
+    B1 = B2;
+    G0 = G1;
+    G1 = R;
     APInt::sdivrem(G0, G1, Q, R);
   }
   G = G1;
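The loop reflowed in the hunk above is the extended Euclidean algorithm; a plain-`int64_t` sketch of what `findGCD` computes on APInts (alongside `gcd(a, b)` it tracks coefficients with `x*a + y*b == gcd(a, b)`, which the exact tests use to build a particular solution of the dependence equation):

```cpp
#include <cstdint>

struct ExtGCD {
  int64_t g, x, y; // g = gcd(a, b), with x*a + y*b == g
};

// Extended Euclid on plain integers, mirroring findGCD's update pattern.
ExtGCD extendedGCD(int64_t a, int64_t b) {
  int64_t x0 = 1, x1 = 0, y0 = 0, y1 = 1;
  while (b != 0) {
    int64_t q = a / b, r = a % b;
    a = b;
    b = r;
    int64_t x2 = x0 - q * x1;
    x0 = x1;
    x1 = x2;
    int64_t y2 = y0 - q * y1;
    y0 = y1;
    y1 = y2;
  }
  return {a, x0, y0};
}
```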
@@ -1524,8 +1479,7 @@ static APInt floorOfQuotient(const APInt &A, const APInt &B) {
   APInt::sdivrem(A, B, Q, R);
   if (R == 0)
     return Q;
-  if ((A.sgt(0) && B.sgt(0)) ||
-      (A.slt(0) && B.slt(0)))
+  if ((A.sgt(0) && B.sgt(0)) || (A.slt(0) && B.slt(0)))
     return Q;
   else
     return Q - 1;
@@ -1537,8 +1491,7 @@ static APInt ceilingOfQuotient(const APInt &A, const APInt &B) {
   APInt::sdivrem(A, B, Q, R);
   if (R == 0)
     return Q;
-  if ((A.sgt(0) && B.sgt(0)) ||
-      (A.slt(0) && B.slt(0)))
+  if ((A.sgt(0) && B.sgt(0)) || (A.slt(0) && B.slt(0)))
     return Q + 1;
   else
     return Q;
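The two quotient helpers reflowed above exist because integer division truncates toward zero: the floor must subtract one (and the ceiling add one) only when the operands have opposite signs and there is a remainder. A plain-integer sketch of the same logic (APInt-free, for illustration):

```cpp
#include <cstdint>

// floor(a / b) under truncating division: same signs means truncation
// already floors; opposite signs with a remainder means it rounded up.
int64_t floorOfQuotient(int64_t a, int64_t b) {
  int64_t q = a / b, r = a % b;
  if (r == 0)
    return q;
  return ((a > 0) == (b > 0)) ? q : q - 1;
}

// ceil(a / b) under truncating division: the mirror image of the above.
int64_t ceilingOfQuotient(int64_t a, int64_t b) {
  int64_t q = a / b, r = a % b;
  if (r == 0)
    return q;
  return ((a > 0) == (b > 0)) ? q + 1 : q;
}
```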
@@ -1714,17 +1667,14 @@ bool DependenceInfo::exactSIVtest(const SCEV *SrcCoeff, const SCEV *DstCoeff,
   return Result.DV[Level].Direction == Dependence::DVEntry::NONE;
 }
 
-
 // Return true if the divisor evenly divides the dividend.
-static
-bool isRemainderZero(const SCEVConstant *Dividend,
-                     const SCEVConstant *Divisor) {
+static bool isRemainderZero(const SCEVConstant *Dividend,
+                            const SCEVConstant *Divisor) {
   const APInt &ConstDividend = Dividend->getAPInt();
   const APInt &ConstDivisor = Divisor->getAPInt();
   return ConstDividend.srem(ConstDivisor) == 0;
 }
 
-
 // weakZeroSrcSIVtest -
 // From the paper, Practical Dependence Testing, Section 4.2.2
 //
@@ -1788,11 +1738,11 @@ bool DependenceInfo::weakZeroSrcSIVtest(const SCEV *DstCoeff,
   const SCEVConstant *ConstCoeff = dyn_cast<SCEVConstant>(DstCoeff);
   if (!ConstCoeff)
     return false;
-  const SCEV *AbsCoeff =
-    SE->isKnownNegative(ConstCoeff) ?
-    SE->getNegativeSCEV(ConstCoeff) : ConstCoeff;
+  const SCEV *AbsCoeff = SE->isKnownNegative(ConstCoeff)
+                             ? SE->getNegativeSCEV(ConstCoeff)
+                             : ConstCoeff;
   const SCEV *NewDelta =
-    SE->isKnownNegative(ConstCoeff) ? SE->getNegativeSCEV(Delta) : Delta;
+      SE->isKnownNegative(ConstCoeff) ? SE->getNegativeSCEV(Delta) : Delta;
 
   // check that Delta/SrcCoeff < iteration count
   // really check NewDelta < count*AbsCoeff
@@ -1834,7 +1784,6 @@ bool DependenceInfo::weakZeroSrcSIVtest(const SCEV *DstCoeff,
   return false;
 }
 
-
 // weakZeroDstSIVtest -
 // From the paper, Practical Dependence Testing, Section 4.2.2
 //
@@ -1897,11 +1846,11 @@ bool DependenceInfo::weakZeroDstSIVtest(const SCEV *SrcCoeff,
   const SCEVConstant *ConstCoeff = dyn_cast<SCEVConstant>(SrcCoeff);
   if (!ConstCoeff)
     return false;
-  const SCEV *AbsCoeff =
-    SE->isKnownNegative(ConstCoeff) ?
-    SE->getNegativeSCEV(ConstCoeff) : ConstCoeff;
+  const SCEV *AbsCoeff = SE->isKnownNegative(ConstCoeff)
+                             ? SE->getNegativeSCEV(ConstCoeff)
+                             : ConstCoeff;
   const SCEV *NewDelta =
-    SE->isKnownNegative(ConstCoeff) ? SE->getNegativeSCEV(Delta) : Delta;
+      SE->isKnownNegative(ConstCoeff) ? SE->getNegativeSCEV(Delta) : Delta;
 
   // check that Delta/SrcCoeff < iteration count
   // really check NewDelta < count*AbsCoeff
@@ -1943,7 +1892,6 @@ bool DependenceInfo::weakZeroDstSIVtest(const SCEV *SrcCoeff,
   return false;
 }
 
-
 // exactRDIVtest - Tests the RDIV subscript pair for dependence.
 // Things of the form [c1 + a*i] and [c2 + b*j],
 // where i and j are induction variables, c1 and c2 are loop invariant,
@@ -2065,7 +2013,6 @@ bool DependenceInfo::exactRDIVtest(const SCEV *SrcCoeff, const SCEV *DstCoeff,
   return TL.sgt(TU);
 }
 
-
 // symbolicRDIVtest -
 // In Section 4.5 of the Practical Dependence Testing paper, the authors
 // introduce a special case of Banerjee's Inequalities (also called the
@@ -2148,8 +2095,7 @@ bool DependenceInfo::symbolicRDIVtest(const SCEV *A1, const SCEV *A2,
           return true;
         }
       }
-    }
-    else if (SE->isKnownNonPositive(A2)) {
+    } else if (SE->isKnownNonPositive(A2)) {
       // a1 >= 0 && a2 <= 0
       if (N1 && N2) {
         // make sure that c2 - c1 <= a1*N1 - a2*N2
@@ -2168,8 +2114,7 @@ bool DependenceInfo::symbolicRDIVtest(const SCEV *A1, const SCEV *A2,
         return true;
       }
     }
-  }
-  else if (SE->isKnownNonPositive(A1)) {
+  } else if (SE->isKnownNonPositive(A1)) {
     if (SE->isKnownNonNegative(A2)) {
       // a1 <= 0 && a2 >= 0
       if (N1 && N2) {
@@ -2188,8 +2133,7 @@ bool DependenceInfo::symbolicRDIVtest(const SCEV *A1, const SCEV *A2,
         ++SymbolicRDIVindependence;
         return true;
       }
-    }
-    else if (SE->isKnownNonPositive(A2)) {
+    } else if (SE->isKnownNonPositive(A2)) {
       // a1 <= 0 && a2 <= 0
       if (N1) {
         // make sure that a1*N1 <= c2 - c1
@@ -2214,7 +2158,6 @@ bool DependenceInfo::symbolicRDIVtest(const SCEV *A1, const SCEV *A2,
   return false;
 }
 
-
 // testSIV -
 // When we have a pair of subscripts of the form [c1 + a1*i] and [c2 - a2*i]
 // where i is an induction variable, c1 and c2 are loop invariant, and a1 and
@@ -2241,17 +2184,17 @@ bool DependenceInfo::testSIV(const SCEV *Src, const SCEV *Dst, unsigned &Level,
     Level = mapSrcLoop(CurLoop);
     bool disproven;
     if (SrcCoeff == DstCoeff)
-      disproven = strongSIVtest(SrcCoeff, SrcConst, DstConst, CurLoop,
-                                Level, Result, NewConstraint);
+      disproven = strongSIVtest(SrcCoeff, SrcConst, DstConst, CurLoop, Level,
+                                Result, NewConstraint);
     else if (SrcCoeff == SE->getNegativeSCEV(DstCoeff))
       disproven = weakCrossingSIVtest(SrcCoeff, SrcConst, DstConst, CurLoop,
                                       Level, Result, NewConstraint, SplitIter);
     else
       disproven = exactSIVtest(SrcCoeff, DstCoeff, SrcConst, DstConst, CurLoop,
                                Level, Result, NewConstraint);
-    return disproven ||
-      gcdMIVtest(Src, Dst, Result) ||
-      symbolicRDIVtest(SrcCoeff, DstCoeff, SrcConst, DstConst, CurLoop, CurLoop);
+    return disproven || gcdMIVtest(Src, Dst, Result) ||
+           symbolicRDIVtest(SrcCoeff, DstCoeff, SrcConst, DstConst, CurLoop,
+                            CurLoop);
   }
   if (SrcAddRec) {
     const SCEV *SrcConst = SrcAddRec->getStart();
@@ -2259,9 +2202,9 @@ bool DependenceInfo::testSIV(const SCEV *Src, const SCEV *Dst, unsigned &Level,
     const SCEV *DstConst = Dst;
     const Loop *CurLoop = SrcAddRec->getLoop();
     Level = mapSrcLoop(CurLoop);
-    return weakZeroDstSIVtest(SrcCoeff, SrcConst, DstConst, CurLoop,
-                              Level, Result, NewConstraint) ||
-      gcdMIVtest(Src, Dst, Result);
+    return weakZeroDstSIVtest(SrcCoeff, SrcConst, DstConst, CurLoop, Level,
+                              Result, NewConstraint) ||
+           gcdMIVtest(Src, Dst, Result);
   }
   if (DstAddRec) {
     const SCEV *DstConst = DstAddRec->getStart();
@@ -2269,15 +2212,14 @@ bool DependenceInfo::testSIV(const SCEV *Src, const SCEV *Dst, unsigned &Level,
     const SCEV *SrcConst = Src;
     const Loop *CurLoop = DstAddRec->getLoop();
     Level = mapDstLoop(CurLoop);
-    return weakZeroSrcSIVtest(DstCoeff, SrcConst, DstConst,
-                              CurLoop, Level, Result, NewConstraint) ||
-      gcdMIVtest(Src, Dst, Result);
+    return weakZeroSrcSIVtest(DstCoeff, SrcConst, DstConst, CurLoop, Level,
+                              Result, NewConstraint) ||
+           gcdMIVtest(Src, Dst, Result);
   }
   llvm_unreachable("SIV test expected at least one AddRec");
   return false;
 }
 
-
 // testRDIV -
 // When we have a pair of subscripts of the form [c1 + a1*i] and [c2 + a2*j]
 // where i and j are induction variables, c1 and c2 are loop invariant,
@@ -2314,46 +2256,37 @@ bool DependenceInfo::testRDIV(const SCEV *Src, const SCEV *Dst,
     DstConst = DstAddRec->getStart();
     DstCoeff = DstAddRec->getStepRecurrence(*SE);
     DstLoop = DstAddRec->getLoop();
-  }
-  else if (SrcAddRec) {
+  } else if (SrcAddRec) {
     if (const SCEVAddRecExpr *tmpAddRec =
-        dyn_cast<SCEVAddRecExpr>(SrcAddRec->getStart())) {
+            dyn_cast<SCEVAddRecExpr>(SrcAddRec->getStart())) {
       SrcConst = tmpAddRec->getStart();
       SrcCoeff = tmpAddRec->getStepRecurrence(*SE);
       SrcLoop = tmpAddRec->getLoop();
       DstConst = Dst;
       DstCoeff = SE->getNegativeSCEV(SrcAddRec->getStepRecurrence(*SE));
       DstLoop = SrcAddRec->getLoop();
-    }
-    else
+    } else
       llvm_unreachable("RDIV reached by surprising SCEVs");
-  }
-  else if (DstAddRec) {
+  } else if (DstAddRec) {
     if (const SCEVAddRecExpr *tmpAddRec =
-        dyn_cast<SCEVAddRecExpr>(DstAddRec->getStart())) {
+            dyn_cast<SCEVAddRecExpr>(DstAddRec->getStart())) {
       DstConst = tmpAddRec->getStart();
       DstCoeff = tmpAddRec->getStepRecurrence(*SE);
       DstLoop = tmpAddRec->getLoop();
       SrcConst = Src;
       SrcCoeff = SE->getNegativeSCEV(DstAddRec->getStepRecurrence(*SE));
       SrcLoop = DstAddRec->getLoop();
-    }
-    else
+    } else
       llvm_unreachable("RDIV reached by surprising SCEVs");
-  }
-  else
+  } else
     llvm_unreachable("RDIV expected at least one AddRec");
-  return exactRDIVtest(SrcCoeff, DstCoeff,
-                       SrcConst, DstConst,
-                       SrcLoop, DstLoop,
+  return exactRDIVtest(SrcCoeff, DstCoeff, SrcConst, DstConst, SrcLoop, DstLoop,
                        Result) ||
-    gcdMIVtest(Src, Dst, Result) ||
-    symbolicRDIVtest(SrcCoeff, DstCoeff,
-                     SrcConst, DstConst,
-                     SrcLoop, DstLoop);
+         gcdMIVtest(Src, Dst, Result) ||
+         symbolicRDIVtest(SrcCoeff, DstCoeff, SrcConst, DstConst, SrcLoop,
+                          DstLoop);
 }
 
-
 // Tests the single-subscript MIV pair (Src and Dst) for dependence.
 // Return true if dependence disproved.
 // Can sometimes refine direction vectors.
@@ -2364,14 +2297,12 @@ bool DependenceInfo::testMIV(const SCEV *Src, const SCEV *Dst,
   LLVM_DEBUG(dbgs() << "    dst = " << *Dst << "\n");
   Result.Consistent = false;
   return gcdMIVtest(Src, Dst, Result) ||
-    banerjeeMIVtest(Src, Dst, Loops, Result);
+         banerjeeMIVtest(Src, Dst, Loops, Result);
 }
 
-
 // Given a product, e.g., 10*X*Y, returns the first constant operand,
 // in this case 10. If there is no constant part, returns NULL.
-static
-const SCEVConstant *getConstantPart(const SCEV *Expr) {
+static const SCEVConstant *getConstantPart(const SCEV *Expr) {
   if (const auto *Constant = dyn_cast<SCEVConstant>(Expr))
     return Constant;
   else if (const auto *Product = dyn_cast<SCEVMulExpr>(Expr))
@@ -2380,7 +2311,6 @@ const SCEVConstant *getConstantPart(const SCEV *Expr) {
   return nullptr;
 }
 
-
 //===----------------------------------------------------------------------===//
 // gcdMIVtest -
 // Tests an MIV subscript pair for dependence.
@@ -2412,7 +2342,7 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
   // we can't quit the loop just because the GCD == 1.
   const SCEV *Coefficients = Src;
   while (const SCEVAddRecExpr *AddRec =
-         dyn_cast<SCEVAddRecExpr>(Coefficients)) {
+             dyn_cast<SCEVAddRecExpr>(Coefficients)) {
     const SCEV *Coeff = AddRec->getStepRecurrence(*SE);
     // If the coefficient is the product of a constant and other stuff,
     // we can use the constant in the GCD computation.
@@ -2431,7 +2361,7 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
   // we can't quit the loop just because the GCD == 1.
   Coefficients = Dst;
   while (const SCEVAddRecExpr *AddRec =
-         dyn_cast<SCEVAddRecExpr>(Coefficients)) {
+             dyn_cast<SCEVAddRecExpr>(Coefficients)) {
     const SCEV *Coeff = AddRec->getStepRecurrence(*SE);
     // If the coefficient is the product of a constant and other stuff,
     // we can use the constant in the GCD computation.
@@ -2455,18 +2385,16 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
       if (isa<SCEVConstant>(Operand)) {
         assert(!Constant && "Surprised to find multiple constants");
         Constant = cast<SCEVConstant>(Operand);
-      }
-      else if (const SCEVMulExpr *Product = dyn_cast<SCEVMulExpr>(Operand)) {
+      } else if (const SCEVMulExpr *Product = dyn_cast<SCEVMulExpr>(Operand)) {
         // Search for constant operand to participate in GCD;
         // If none found; return false.
         const SCEVConstant *ConstOp = getConstantPart(Product);
         if (!ConstOp)
           return false;
         APInt ConstOpValue = ConstOp->getAPInt();
-        ExtraGCD = APIntOps::GreatestCommonDivisor(ExtraGCD,
-                                                   ConstOpValue.abs());
-      }
-      else
+        ExtraGCD =
+            APIntOps::GreatestCommonDivisor(ExtraGCD, ConstOpValue.abs());
+      } else
         return false;
     }
   }
@@ -2501,7 +2429,7 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
   bool Improved = false;
   Coefficients = Src;
   while (const SCEVAddRecExpr *AddRec =
-         dyn_cast<SCEVAddRecExpr>(Coefficients)) {
+             dyn_cast<SCEVAddRecExpr>(Coefficients)) {
     Coefficients = AddRec->getStart();
     const Loop *CurLoop = AddRec->getLoop();
     RunningGCD = ExtraGCD;
@@ -2520,7 +2448,8 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
         if (!Constant)
           return false;
         APInt ConstCoeff = Constant->getAPInt();
-        RunningGCD = APIntOps::GreatestCommonDivisor(RunningGCD, ConstCoeff.abs());
+        RunningGCD =
+            APIntOps::GreatestCommonDivisor(RunningGCD, ConstCoeff.abs());
       }
       Inner = AddRec->getStart();
     }
@@ -2537,7 +2466,8 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
         if (!Constant)
           return false;
         APInt ConstCoeff = Constant->getAPInt();
-        RunningGCD = APIntOps::GreatestCommonDivisor(RunningGCD, ConstCoeff.abs());
+        RunningGCD =
+            APIntOps::GreatestCommonDivisor(RunningGCD, ConstCoeff.abs());
       }
       Inner = AddRec->getStart();
     }
@@ -2568,7 +2498,6 @@ bool DependenceInfo::gcdMIVtest(const SCEV *Src, const SCEV *Dst,
   return false;
 }
 
-
 //===----------------------------------------------------------------------===//
 // banerjeeMIVtest -
 // Use Banerjee's Inequalities to test an MIV subscript pair.
@@ -2642,8 +2571,8 @@ bool DependenceInfo::banerjeeMIVtest(const SCEV *Src, const SCEV *Dst,
   if (testBounds(Dependence::DVEntry::ALL, 0, Bound, Delta)) {
     // Explore the direction vector hierarchy.
     unsigned DepthExpanded = 0;
-    unsigned NewDeps = exploreDirections(1, A, B, Bound,
-                                         Loops, DepthExpanded, Delta);
+    unsigned NewDeps =
+        exploreDirections(1, A, B, Bound, Loops, DepthExpanded, Delta);
     if (NewDeps > 0) {
       bool Improved = false;
       for (unsigned K = 1; K <= CommonLevels; ++K) {
@@ -2660,23 +2589,20 @@ bool DependenceInfo::banerjeeMIVtest(const SCEV *Src, const SCEV *Dst,
       }
       if (Improved)
         ++BanerjeeSuccesses;
-    }
-    else {
+    } else {
       ++BanerjeeIndependence;
       Disproved = true;
     }
-  }
-  else {
+  } else {
     ++BanerjeeIndependence;
     Disproved = true;
   }
-  delete [] Bound;
-  delete [] A;
-  delete [] B;
+  delete[] Bound;
+  delete[] A;
+  delete[] B;
   return Disproved;
 }
 
-
 // Hierarchically expands the direction vector
 // search space, combining the directions of discovered dependences
 // in the DirSet field of Bound. Returns the number of distinct
@@ -2778,27 +2704,26 @@ unsigned DependenceInfo::exploreDirections(unsigned Level, CoefficientInfo *A,
 
     // test bounds for <, *, *, ...
     if (testBounds(Dependence::DVEntry::LT, Level, Bound, Delta))
-      NewDeps += exploreDirections(Level + 1, A, B, Bound,
-                                   Loops, DepthExpanded, Delta);
+      NewDeps += exploreDirections(Level + 1, A, B, Bound, Loops, DepthExpanded,
+                                   Delta);
 
     // Test bounds for =, *, *, ...
     if (testBounds(Dependence::DVEntry::EQ, Level, Bound, Delta))
-      NewDeps += exploreDirections(Level + 1, A, B, Bound,
-                                   Loops, DepthExpanded, Delta);
+      NewDeps += exploreDirections(Level + 1, A, B, Bound, Loops, DepthExpanded,
+                                   Delta);
 
     // test bounds for >, *, *, ...
     if (testBounds(Dependence::DVEntry::GT, Level, Bound, Delta))
-      NewDeps += exploreDirections(Level + 1, A, B, Bound,
-                                   Loops, DepthExpanded, Delta);
+      NewDeps += exploreDirections(Level + 1, A, B, Bound, Loops, DepthExpanded,
+                                   Delta);
 
     Bound[Level].Direction = Dependence::DVEntry::ALL;
     return NewDeps;
-  }
-  else
-    return exploreDirections(Level + 1, A, B, Bound, Loops, DepthExpanded, Delta);
+  } else
+    return exploreDirections(Level + 1, A, B, Bound, Loops, DepthExpanded,
+                             Delta);
 }
 
-
 // Returns true iff the current bounds are plausible.
 bool DependenceInfo::testBounds(unsigned char DirKind, unsigned Level,
                                 BoundInfo *Bound, const SCEV *Delta) const {
@@ -2812,7 +2737,6 @@ bool DependenceInfo::testBounds(unsigned char DirKind, unsigned Level,
   return true;
 }
 
-
 // Computes the upper and lower bounds for level K
 // using the * direction. Records them in Bound.
 // Wolfe gives the equations
@@ -2830,17 +2754,16 @@ bool DependenceInfo::testBounds(unsigned char DirKind, unsigned Level,
 // and the upper bound is always >= 0.
 void DependenceInfo::findBoundsALL(CoefficientInfo *A, CoefficientInfo *B,
                                    BoundInfo *Bound, unsigned K) const {
-  Bound[K].Lower[Dependence::DVEntry::ALL] = nullptr; // Default value = -infinity.
-  Bound[K].Upper[Dependence::DVEntry::ALL] = nullptr; // Default value = +infinity.
+  Bound[K].Lower[Dependence::DVEntry::ALL] =
+      nullptr; // Default value = -infinity.
+  Bound[K].Upper[Dependence::DVEntry::ALL] =
+      nullptr; // Default value = +infinity.
   if (Bound[K].Iterations) {
-    Bound[K].Lower[Dependence::DVEntry::ALL] =
-      SE->getMulExpr(SE->getMinusSCEV(A[K].NegPart, B[K].PosPart),
-                     Bound[K].Iterations);
-    Bound[K].Upper[Dependence::DVEntry::ALL] =
-      SE->getMulExpr(SE->getMinusSCEV(A[K].PosPart, B[K].NegPart),
-                     Bound[K].Iterations);
-  }
-  else {
+    Bound[K].Lower[Dependence::DVEntry::ALL] = SE->getMulExpr(
+        SE->getMinusSCEV(A[K].NegPart, B[K].PosPart), Bound[K].Iterations);
+    Bound[K].Upper[Dependence::DVEntry::ALL] = SE->getMulExpr(
+        SE->getMinusSCEV(A[K].PosPart, B[K].NegPart), Bound[K].Iterations);
+  } else {
     // If the difference is 0, we won't need to know the number of iterations.
     if (isKnownPredicate(CmpInst::ICMP_EQ, A[K].NegPart, B[K].PosPart))
       Bound[K].Lower[Dependence::DVEntry::ALL] =
@@ -2851,7 +2774,6 @@ void DependenceInfo::findBoundsALL(CoefficientInfo *A, CoefficientInfo *B,
   }
 }
 
-
 // Computes the upper and lower bounds for level K
 // using the = direction. Records them in Bound.
 // Wolfe gives the equations
@@ -2869,18 +2791,19 @@ void DependenceInfo::findBoundsALL(CoefficientInfo *A, CoefficientInfo *B,
 // and the upper bound is always >= 0.
 void DependenceInfo::findBoundsEQ(CoefficientInfo *A, CoefficientInfo *B,
                                   BoundInfo *Bound, unsigned K) const {
-  Bound[K].Lower[Dependence::DVEntry::EQ] = nullptr; // Default value = -infinity.
-  Bound[K].Upper[Dependence::DVEntry::EQ] = nullptr; // Default value = +infinity.
+  Bound[K].Lower[Dependence::DVEntry::EQ] =
+      nullptr; // Default value = -infinity.
+  Bound[K].Upper[Dependence::DVEntry::EQ] =
+      nullptr; // Default value = +infinity.
   if (Bound[K].Iterations) {
     const SCEV *Delta = SE->getMinusSCEV(A[K].Coeff, B[K].Coeff);
     const SCEV *NegativePart = getNegativePart(Delta);
     Bound[K].Lower[Dependence::DVEntry::EQ] =
-      SE->getMulExpr(NegativePart, Bound[K].Iterations);
+        SE->getMulExpr(NegativePart, Bound[K].Iterations);
     const SCEV *PositivePart = getPositivePart(Delta);
     Bound[K].Upper[Dependence::DVEntry::EQ] =
-      SE->getMulExpr(PositivePart, Bound[K].Iterations);
-  }
-  else {
+        SE->getMulExpr(PositivePart, Bound[K].Iterations);
+  } else {
     // If the positive/negative part of the difference is 0,
     // we won't need to know the number of iterations.
     const SCEV *Delta = SE->getMinusSCEV(A[K].Coeff, B[K].Coeff);
@@ -2893,7 +2816,6 @@ void DependenceInfo::findBoundsEQ(CoefficientInfo *A, CoefficientInfo *B,
   }
 }
 
-
 // Computes the upper and lower bounds for level K
 // using the < direction. Records them in Bound.
 // Wolfe gives the equations
@@ -2909,35 +2831,35 @@ void DependenceInfo::findBoundsEQ(CoefficientInfo *A, CoefficientInfo *B,
 // We must be careful to handle the case where the upper bound is unknown.
 void DependenceInfo::findBoundsLT(CoefficientInfo *A, CoefficientInfo *B,
                                   BoundInfo *Bound, unsigned K) const {
-  Bound[K].Lower[Dependence::DVEntry::LT] = nullptr; // Default value = -infinity.
-  Bound[K].Upper[Dependence::DVEntry::LT] = nullptr; // Default value = +infinity.
+  Bound[K].Lower[Dependence::DVEntry::LT] =
+      nullptr; // Default value = -infinity.
+  Bound[K].Upper[Dependence::DVEntry::LT] =
+      nullptr; // Default value = +infinity.
   if (Bound[K].Iterations) {
     const SCEV *Iter_1 = SE->getMinusSCEV(
         Bound[K].Iterations, SE->getOne(Bound[K].Iterations->getType()));
     const SCEV *NegPart =
-      getNegativePart(SE->getMinusSCEV(A[K].NegPart, B[K].Coeff));
+        getNegativePart(SE->getMinusSCEV(A[K].NegPart, B[K].Coeff));
     Bound[K].Lower[Dependence::DVEntry::LT] =
-      SE->getMinusSCEV(SE->getMulExpr(NegPart, Iter_1), B[K].Coeff);
+        SE->getMinusSCEV(SE->getMulExpr(NegPart, Iter_1), B[K].Coeff);
     const SCEV *PosPart =
-      getPositivePart(SE->getMinusSCEV(A[K].PosPart, B[K].Coeff));
+        getPositivePart(SE->getMinusSCEV(A[K].PosPart, B[K].Coeff));
     Bound[K].Upper[Dependence::DVEntry::LT] =
-      SE->getMinusSCEV(SE->getMulExpr(PosPart, Iter_1), B[K].Coeff);
-  }
-  else {
+        SE->getMinusSCEV(SE->getMulExpr(PosPart, Iter_1), B[K].Coeff);
+  } else {
     // If the positive/negative part of the difference is 0,
     // we won't need to know the number of iterations.
     const SCEV *NegPart =
-      getNegativePart(SE->getMinusSCEV(A[K].NegPart, B[K].Coeff));
+        getNegativePart(SE->getMinusSCEV(A[K].NegPart, B[K].Coeff));
     if (NegPart->isZero())
       Bound[K].Lower[Dependence::DVEntry::LT] = SE->getNegativeSCEV(B[K].Coeff);
     const SCEV *PosPart =
-      getPositivePart(SE->getMinusSCEV(A[K].PosPart, B[K].Coeff));
+        getPositivePart(SE->getMinusSCEV(A[K].PosPart, B[K].Coeff));
     if (PosPart->isZero())
       Bound[K].Upper[Dependence::DVEntry::LT] = SE->getNegativeSCEV(B[K].Coeff);
   }
 }
 
-
 // Computes the upper and lower bounds for level K
 // using the > direction. Records them in Bound.
 // Wolfe gives the equations
@@ -2953,45 +2875,45 @@ void DependenceInfo::findBoundsLT(CoefficientInfo *A, CoefficientInfo *B,
 // We must be careful to handle the case where the upper bound is unknown.
 void DependenceInfo::findBoundsGT(CoefficientInfo *A, CoefficientInfo *B,
                                   BoundInfo *Bound, unsigned K) const {
-  Bound[K].Lower[Dependence::DVEntry::GT] = nullptr; // Default value = -infinity.
-  Bound[K].Upper[Dependence::DVEntry::GT] = nullptr; // Default value = +infinity.
+  Bound[K].Lower[Dependence::DVEntry::GT] =
+      nullptr; // Default value = -infinity.
+  Bound[K].Upper[Dependence::DVEntry::GT] =
+      nullptr; // Default value = +infinity.
   if (Bound[K].Iterations) {
     const SCEV *Iter_1 = SE->getMinusSCEV(
         Bound[K].Iterations, SE->getOne(Bound[K].Iterations->getType()));
     const SCEV *NegPart =
-      getNegativePart(SE->getMinusSCEV(A[K].Coeff, B[K].PosPart));
+        getNegativePart(SE->getMinusSCEV(A[K].Coeff, B[K].PosPart));
     Bound[K].Lower[Dependence::DVEntry::GT] =
-      SE->getAddExpr(SE->getMulExpr(NegPart, Iter_1), A[K].Coeff);
+        SE->getAddExpr(SE->getMulExpr(NegPart, Iter_1), A[K].Coeff);
     const SCEV *PosPart =
-      getPositivePart(SE->getMinusSCEV(A[K].Coeff, B[K].NegPart));
+        getPositivePart(SE->getMinusSCEV(A[K].Coeff, B[K].NegPart));
     Bound[K].Upper[Dependence::DVEntry::GT] =
-      SE->getAddExpr(SE->getMulExpr(PosPart, Iter_1), A[K].Coeff);
-  }
-  else {
+        SE->getAddExpr(SE->getMulExpr(PosPart, Iter_1), A[K].Coeff);
+  } else {
     // If the positive/negative part of the difference is 0,
     // we won't need to know the number of iterations.
-    const SCEV *NegPart = getNegativePart(SE->getMinusSCEV(A[K].Coeff, B[K].PosPart));
+    const SCEV *NegPart =
+        getNegativePart(SE->getMinusSCEV(A[K].Coeff, B[K].PosPart));
     if (NegPart->isZero())
       Bound[K].Lower[Dependence::DVEntry::GT] = A[K].Coeff;
-    const SCEV *PosPart = getPositivePart(SE->getMinusSCEV(A[K].Coeff, B[K].NegPart));
+    const SCEV *PosPart =
+        getPositivePart(SE->getMinusSCEV(A[K].Coeff, B[K].NegPart));
     if (PosPart->isZero())
       Bound[K].Upper[Dependence::DVEntry::GT] = A[K].Coeff;
   }
 }
 
-
 // X^+ = max(X, 0)
 const SCEV *DependenceInfo::getPositivePart(const SCEV *X) const {
   return SE->getSMaxExpr(X, SE->getZero(X->getType()));
 }
 
-
 // X^- = min(X, 0)
 const SCEV *DependenceInfo::getNegativePart(const SCEV *X) const {
   return SE->getSMinExpr(X, SE->getZero(X->getType()));
 }
 
-
 // Walks through the subscript,
 // collecting each coefficient, the associated loop bounds,
 // and recording its positive and negative parts for later use.
@@ -3036,7 +2958,6 @@ DependenceInfo::collectCoeffInfo(const SCEV *Subscript, bool SrcFlag,
   return CI;
 }
 
-
 // Looks through all the bounds info and
 // computes the lower bound given the current direction settings
 // at each level. If the lower bound for any level is -inf,
@@ -3052,7 +2973,6 @@ const SCEV *DependenceInfo::getLowerBound(BoundInfo *Bound) const {
   return Sum;
 }
 
-
 // Looks through all the bounds info and
 // computes the upper bound given the current direction settings
 // at each level. If the upper bound at any level is +inf,
@@ -3068,7 +2988,6 @@ const SCEV *DependenceInfo::getUpperBound(BoundInfo *Bound) const {
   return Sum;
 }
 
-
 //===----------------------------------------------------------------------===//
 // Constraint manipulation for Delta test.
 
@@ -3088,7 +3007,6 @@ const SCEV *DependenceInfo::findCoefficient(const SCEV *Expr,
   return findCoefficient(AddRec->getStart(), TargetLoop);
 }
 
-
 // Given a linear SCEV,
 // return the SCEV given by zeroing out the coefficient
 // corresponding to the specified loop.
@@ -3102,12 +3020,10 @@ const SCEV *DependenceInfo::zeroCoefficient(const SCEV *Expr,
   if (AddRec->getLoop() == TargetLoop)
     return AddRec->getStart();
   return SE->getAddRecExpr(zeroCoefficient(AddRec->getStart(), TargetLoop),
-                           AddRec->getStepRecurrence(*SE),
-                           AddRec->getLoop(),
+                           AddRec->getStepRecurrence(*SE), AddRec->getLoop(),
                            AddRec->getNoWrapFlags());
 }
 
-
 // Given a linear SCEV Expr,
 // return the SCEV given by adding some Value to the
 // coefficient corresponding to the specified TargetLoop.
@@ -3118,17 +3034,13 @@ const SCEV *DependenceInfo::addToCoefficient(const SCEV *Expr,
                                              const SCEV *Value) const {
   const SCEVAddRecExpr *AddRec = dyn_cast<SCEVAddRecExpr>(Expr);
   if (!AddRec) // create a new addRec
-    return SE->getAddRecExpr(Expr,
-                             Value,
-                             TargetLoop,
+    return SE->getAddRecExpr(Expr, Value, TargetLoop,
                              SCEV::FlagAnyWrap); // Worst case, with no info.
   if (AddRec->getLoop() == TargetLoop) {
     const SCEV *Sum = SE->getAddExpr(AddRec->getStepRecurrence(*SE), Value);
     if (Sum->isZero())
       return AddRec->getStart();
-    return SE->getAddRecExpr(AddRec->getStart(),
-                             Sum,
-                             AddRec->getLoop(),
+    return SE->getAddRecExpr(AddRec->getStart(), Sum, AddRec->getLoop(),
                              AddRec->getNoWrapFlags());
   }
   if (SE->isLoopInvariant(AddRec, TargetLoop))
@@ -3139,7 +3051,6 @@ const SCEV *DependenceInfo::addToCoefficient(const SCEV *Expr,
       AddRec->getNoWrapFlags());
 }
 
-
 // Review the constraints, looking for opportunities
 // to simplify a subscript pair (Src and Dst).
 // Return true if some simplification occurs.
@@ -3168,7 +3079,6 @@ bool DependenceInfo::propagate(const SCEV *&Src, const SCEV *&Dst,
   return Result;
 }
 
-
 // Attempt to propagate a distance
 // constraint into a subscript pair (Src and Dst).
 // Return true if some simplification occurs.
@@ -3194,7 +3104,6 @@ bool DependenceInfo::propagateDistance(const SCEV *&Src, const SCEV *&Dst,
   return true;
 }
 
-
 // Attempt to propagate a line
 // constraint into a subscript pair (Src and Dst).
 // Return true if some simplification occurs.
@@ -3214,22 +3123,24 @@ bool DependenceInfo::propagateLine(const SCEV *&Src, const SCEV *&Dst,
   if (A->isZero()) {
     const SCEVConstant *Bconst = dyn_cast<SCEVConstant>(B);
     const SCEVConstant *Cconst = dyn_cast<SCEVConstant>(C);
-    if (!Bconst || !Cconst) return false;
+    if (!Bconst || !Cconst)
+      return false;
     APInt Beta = Bconst->getAPInt();
     APInt Charlie = Cconst->getAPInt();
     APInt CdivB = Charlie.sdiv(Beta);
     assert(Charlie.srem(Beta) == 0 && "C should be evenly divisible by B");
     const SCEV *AP_K = findCoefficient(Dst, CurLoop);
-    //    Src = SE->getAddExpr(Src, SE->getMulExpr(AP_K, SE->getConstant(CdivB)));
+    //    Src = SE->getAddExpr(Src, SE->getMulExpr(AP_K,
+    //    SE->getConstant(CdivB)));
     Src = SE->getMinusSCEV(Src, SE->getMulExpr(AP_K, SE->getConstant(CdivB)));
     Dst = zeroCoefficient(Dst, CurLoop);
     if (!findCoefficient(Src, CurLoop)->isZero())
       Consistent = false;
-  }
-  else if (B->isZero()) {
+  } else if (B->isZero()) {
     const SCEVConstant *Aconst = dyn_cast<SCEVConstant>(A);
     const SCEVConstant *Cconst = dyn_cast<SCEVConstant>(C);
-    if (!Aconst || !Cconst) return false;
+    if (!Aconst || !Cconst)
+      return false;
     APInt Alpha = Aconst->getAPInt();
     APInt Charlie = Cconst->getAPInt();
     APInt CdivA = Charlie.sdiv(Alpha);
@@ -3239,11 +3150,11 @@ bool DependenceInfo::propagateLine(const SCEV *&Src, const SCEV *&Dst,
     Src = zeroCoefficient(Src, CurLoop);
     if (!findCoefficient(Dst, CurLoop)->isZero())
       Consistent = false;
-  }
-  else if (isKnownPredicate(CmpInst::ICMP_EQ, A, B)) {
+  } else if (isKnownPredicate(CmpInst::ICMP_EQ, A, B)) {
     const SCEVConstant *Aconst = dyn_cast<SCEVConstant>(A);
     const SCEVConstant *Cconst = dyn_cast<SCEVConstant>(C);
-    if (!Aconst || !Cconst) return false;
+    if (!Aconst || !Cconst)
+      return false;
     APInt Alpha = Aconst->getAPInt();
     APInt Charlie = Cconst->getAPInt();
     APInt CdivA = Charlie.sdiv(Alpha);
@@ -3254,8 +3165,7 @@ bool DependenceInfo::propagateLine(const SCEV *&Src, const SCEV *&Dst,
     Dst = addToCoefficient(Dst, CurLoop, A_K);
     if (!findCoefficient(Dst, CurLoop)->isZero())
       Consistent = false;
-  }
-  else {
+  } else {
     // paper is incorrect here, or perhaps just misleading
     const SCEV *A_K = findCoefficient(Src, CurLoop);
     Src = SE->getMulExpr(Src, A);
@@ -3271,7 +3181,6 @@ bool DependenceInfo::propagateLine(const SCEV *&Src, const SCEV *&Dst,
   return true;
 }
 
-
 // Attempt to propagate a point
 // constraint into a subscript pair (Src and Dst).
 // Return true if some simplification occurs.
@@ -3292,7 +3201,6 @@ bool DependenceInfo::propagatePoint(const SCEV *&Src, const SCEV *&Dst,
   return true;
 }
 
-
 // Update direction vector entry based on the current constraint.
 void DependenceInfo::updateDirection(Dependence::DVEntry &Level,
                                      const Constraint &CurConstraint) const {
@@ -3312,34 +3220,28 @@ void DependenceInfo::updateDirection(Dependence::DVEntry &Level,
     if (!SE->isKnownNonNegative(Level.Distance)) // if may be negative
       NewDirection |= Dependence::DVEntry::GT;
     Level.Direction &= NewDirection;
-  }
-  else if (CurConstraint.isLine()) {
+  } else if (CurConstraint.isLine()) {
     Level.Scalar = false;
     Level.Distance = nullptr;
     // direction should be accurate
-  }
-  else if (CurConstraint.isPoint()) {
+  } else if (CurConstraint.isPoint()) {
     Level.Scalar = false;
     Level.Distance = nullptr;
     unsigned NewDirection = Dependence::DVEntry::NONE;
-    if (!isKnownPredicate(CmpInst::ICMP_NE,
-                          CurConstraint.getY(),
+    if (!isKnownPredicate(CmpInst::ICMP_NE, CurConstraint.getY(),
                           CurConstraint.getX()))
       // if X may be = Y
       NewDirection |= Dependence::DVEntry::EQ;
-    if (!isKnownPredicate(CmpInst::ICMP_SLE,
-                          CurConstraint.getY(),
+    if (!isKnownPredicate(CmpInst::ICMP_SLE, CurConstraint.getY(),
                           CurConstraint.getX()))
       // if Y may be > X
       NewDirection |= Dependence::DVEntry::LT;
-    if (!isKnownPredicate(CmpInst::ICMP_SGE,
-                          CurConstraint.getY(),
+    if (!isKnownPredicate(CmpInst::ICMP_SGE, CurConstraint.getY(),
                           CurConstraint.getX()))
       // if Y may be < X
       NewDirection |= Dependence::DVEntry::GT;
     Level.Direction &= NewDirection;
-  }
-  else
+  } else
     llvm_unreachable("constraint has unexpected kind");
 }
 
@@ -3411,7 +3313,7 @@ bool DependenceInfo::tryDelinearizeFixedSize(
         dyn_cast<SCEVUnknown>(SE->getPointerBase(DstAccessFn));
     assert(SrcBase && DstBase && SrcBase == DstBase &&
            "expected src and dst scev unknowns to be equal");
-    });
+  });
 
   SmallVector<int, 4> SrcSizes;
   SmallVector<int, 4> DstSizes;
@@ -3663,9 +3565,8 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
     Pair[P].Group.resize(Pairs);
     removeMatchingExtensions(&Pair[P]);
     Pair[P].Classification =
-      classifyPair(Pair[P].Src, LI->getLoopFor(Src->getParent()),
-                   Pair[P].Dst, LI->getLoopFor(Dst->getParent()),
-                   Pair[P].Loops);
+        classifyPair(Pair[P].Src, LI->getLoopFor(Src->getParent()), Pair[P].Dst,
+                     LI->getLoopFor(Dst->getParent()), Pair[P].Loops);
     Pair[P].GroupLoops = Pair[P].Loops;
     Pair[P].Group.set(P);
     LLVM_DEBUG(dbgs() << "    subscript " << P << "\n");
@@ -3740,18 +3641,15 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
     if (Pair[SI].Classification == Subscript::NonLinear) {
       // ignore these, but collect loops for later
       ++NonlinearSubscriptPairs;
-      collectCommonLoops(Pair[SI].Src,
-                         LI->getLoopFor(Src->getParent()),
+      collectCommonLoops(Pair[SI].Src, LI->getLoopFor(Src->getParent()),
                          Pair[SI].Loops);
-      collectCommonLoops(Pair[SI].Dst,
-                         LI->getLoopFor(Dst->getParent()),
+      collectCommonLoops(Pair[SI].Dst, LI->getLoopFor(Dst->getParent()),
                          Pair[SI].Loops);
       Result.Consistent = false;
     } else if (Pair[SI].Classification == Subscript::ZIV) {
       // always separable
       Separable.set(SI);
-    }
-    else {
+    } else {
       // SIV, RDIV, or MIV, so check for coupled group
       bool Done = true;
       for (unsigned SJ = SI + 1; SJ < Pairs; ++SJ) {
@@ -3769,8 +3667,7 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
         if (Pair[SI].Group.count() == 1) {
           Separable.set(SI);
           ++SeparableSubscriptPairs;
-        }
-        else {
+        } else {
           Coupled.set(SI);
           ++CoupledSubscriptPairs;
         }
@@ -3876,10 +3773,9 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
                           Constraints, Result.Consistent)) {
               LLVM_DEBUG(dbgs() << "\t    Changed\n");
               ++DeltaPropagations;
-              Pair[SJ].Classification =
-                classifyPair(Pair[SJ].Src, LI->getLoopFor(Src->getParent()),
-                             Pair[SJ].Dst, LI->getLoopFor(Dst->getParent()),
-                             Pair[SJ].Loops);
+              Pair[SJ].Classification = classifyPair(
+                  Pair[SJ].Src, LI->getLoopFor(Src->getParent()), Pair[SJ].Dst,
+                  LI->getLoopFor(Dst->getParent()), Pair[SJ].Loops);
               switch (Pair[SJ].Classification) {
               case Subscript::ZIV:
                 LLVM_DEBUG(dbgs() << "ZIV\n");
@@ -3921,8 +3817,7 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
           LLVM_DEBUG(dbgs() << "MIV test\n");
           if (testMIV(Pair[SJ].Src, Pair[SJ].Dst, Pair[SJ].Loops, Result))
             return nullptr;
-        }
-        else
+        } else
           llvm_unreachable("expected only MIV subscripts at this point");
       }
 
@@ -3956,8 +3851,7 @@ DependenceInfo::depends(Instruction *Src, Instruction *Dst,
         break;
       }
     }
-  }
-  else {
+  } else {
     // On the other hand, if all directions are equal and there's no
     // loop-independent dependence possible, then no dependence exists.
     bool AllEqual = true;
@@ -4062,9 +3956,8 @@ const SCEV *DependenceInfo::getSplitIteration(const Dependence &Dep,
     Pair[P].Group.resize(Pairs);
     removeMatchingExtensions(&Pair[P]);
     Pair[P].Classification =
-      classifyPair(Pair[P].Src, LI->getLoopFor(Src->getParent()),
-                   Pair[P].Dst, LI->getLoopFor(Dst->getParent()),
-                   Pair[P].Loops);
+        classifyPair(Pair[P].Src, LI->getLoopFor(Src->getParent()), Pair[P].Dst,
+                     LI->getLoopFor(Dst->getParent()), Pair[P].Loops);
     Pair[P].GroupLoops = Pair[P].Loops;
     Pair[P].Group.set(P);
   }
@@ -4076,15 +3969,12 @@ const SCEV *DependenceInfo::getSplitIteration(const Dependence &Dep,
   for (unsigned SI = 0; SI < Pairs; ++SI) {
     if (Pair[SI].Classification == Subscript::NonLinear) {
       // ignore these, but collect loops for later
-      collectCommonLoops(Pair[SI].Src,
-                         LI->getLoopFor(Src->getParent()),
+      collectCommonLoops(Pair[SI].Src, LI->getLoopFor(Src->getParent()),
                          Pair[SI].Loops);
-      collectCommonLoops(Pair[SI].Dst,
-                         LI->getLoopFor(Dst->getParent()),
+      collectCommonLoops(Pair[SI].Dst, LI->getLoopFor(Dst->getParent()),
                          Pair[SI].Loops);
       Result.Consistent = false;
-    }
-    else if (Pair[SI].Classification == Subscript::ZIV)
+    } else if (Pair[SI].Classification == Subscript::ZIV)
       Separable.set(SI);
     else {
       // SIV, RDIV, or MIV, so check for coupled group
@@ -4118,8 +4008,8 @@ const SCEV *DependenceInfo::getSplitIteration(const Dependence &Dep,
     case Subscript::SIV: {
       unsigned Level;
       const SCEV *SplitIter = nullptr;
-      (void) testSIV(Pair[SI].Src, Pair[SI].Dst, Level,
-                     Result, NewConstraint, SplitIter);
+      (void)testSIV(Pair[SI].Src, Pair[SI].Dst, Level, Result, NewConstraint,
+                    SplitIter);
       if (Level == SplitLevel) {
         assert(SplitIter != nullptr);
         return SplitIter;
@@ -4157,8 +4047,8 @@ const SCEV *DependenceInfo::getSplitIteration(const Dependence &Dep,
           // SJ is an SIV subscript that's part of the current coupled group
           unsigned Level;
           const SCEV *SplitIter = nullptr;
-          (void) testSIV(Pair[SJ].Src, Pair[SJ].Dst, Level,
-                         Result, NewConstraint, SplitIter);
+          (void)testSIV(Pair[SJ].Src, Pair[SJ].Dst, Level, Result,
+                        NewConstraint, SplitIter);
           if (Level == SplitLevel && SplitIter)
             return SplitIter;
           ConstrainedLevels.set(Level);
@@ -4170,12 +4060,11 @@ const SCEV *DependenceInfo::getSplitIteration(const Dependence &Dep,
           // propagate, possibly creating new SIVs and ZIVs
           for (unsigned SJ : Mivs.set_bits()) {
             // SJ is an MIV subscript that's part of the current coupled group
-            if (propagate(Pair[SJ].Src, Pair[SJ].Dst,
-                          Pair[SJ].Loops, Constraints, Result.Consistent)) {
-              Pair[SJ].Classification =
-                classifyPair(Pair[SJ].Src, LI->getLoopFor(Src->getParent()),
-                             Pair[SJ].Dst, LI->getLoopFor(Dst->getParent()),
-                             Pair[SJ].Loops);
+            if (propagate(Pair[SJ].Src, Pair[SJ].Dst, Pair[SJ].Loops,
+                          Constraints, Result.Consistent)) {
+              Pair[SJ].Classification = classifyPair(
+                  Pair[SJ].Src, LI->getLoopFor(Src->getParent()), Pair[SJ].Dst,
+                  LI->getLoopFor(Dst->getParent()), Pair[SJ].Loops);
               switch (Pair[SJ].Classification) {
               case Subscript::ZIV:
                 Mivs.reset(SJ);
diff --git a/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp b/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
index 456d58660680d7d..1c722bcaadb8b5f 100644
--- a/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
+++ b/llvm/lib/Analysis/DevelopmentModeInlineAdvisor.cpp
@@ -26,8 +26,8 @@
 #include "llvm/Support/CommandLine.h"
 #include "llvm/Support/ManagedStatic.h"
 
-#include <vector>
 #include <optional>
+#include <vector>
 
 using namespace llvm;
 
diff --git a/llvm/lib/Analysis/DomPrinter.cpp b/llvm/lib/Analysis/DomPrinter.cpp
index e9f5103e1276108..4fad9874b224a43 100644
--- a/llvm/lib/Analysis/DomPrinter.cpp
+++ b/llvm/lib/Analysis/DomPrinter.cpp
@@ -24,13 +24,12 @@
 
 using namespace llvm;
 
-
 void DominatorTree::viewGraph(const Twine &Name, const Twine &Title) {
 #ifndef NDEBUG
   ViewGraph(this, Name, false, Title);
 #else
   errs() << "DomTree dump not available, build with DEBUG\n";
-#endif  // NDEBUG
+#endif // NDEBUG
 }
 
 void DominatorTree::viewGraph() {
@@ -38,7 +37,7 @@ void DominatorTree::viewGraph() {
   this->viewGraph("domtree", "Dominator Tree for function");
 #else
   errs() << "DomTree dump not available, build with DEBUG\n";
-#endif  // NDEBUG
+#endif // NDEBUG
 }
 
 namespace {
diff --git a/llvm/lib/Analysis/DominanceFrontier.cpp b/llvm/lib/Analysis/DominanceFrontier.cpp
index ccba913ccfe5eda..db8a9aa9a7cfaa8 100644
--- a/llvm/lib/Analysis/DominanceFrontier.cpp
+++ b/llvm/lib/Analysis/DominanceFrontier.cpp
@@ -30,19 +30,17 @@ template class ForwardDominanceFrontierBase<BasicBlock>;
 char DominanceFrontierWrapperPass::ID = 0;
 
 INITIALIZE_PASS_BEGIN(DominanceFrontierWrapperPass, "domfrontier",
-                "Dominance Frontier Construction", true, true)
+                      "Dominance Frontier Construction", true, true)
 INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_END(DominanceFrontierWrapperPass, "domfrontier",
-                "Dominance Frontier Construction", true, true)
+                    "Dominance Frontier Construction", true, true)
 
 DominanceFrontierWrapperPass::DominanceFrontierWrapperPass()
     : FunctionPass(ID) {
   initializeDominanceFrontierWrapperPassPass(*PassRegistry::getPassRegistry());
 }
 
-void DominanceFrontierWrapperPass::releaseMemory() {
-  DF.releaseMemory();
-}
+void DominanceFrontierWrapperPass::releaseMemory() { DF.releaseMemory(); }
 
 bool DominanceFrontierWrapperPass::runOnFunction(Function &) {
   releaseMemory();
@@ -55,7 +53,8 @@ void DominanceFrontierWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<DominatorTreeWrapperPass>();
 }
 
-void DominanceFrontierWrapperPass::print(raw_ostream &OS, const Module *) const {
+void DominanceFrontierWrapperPass::print(raw_ostream &OS,
+                                         const Module *) const {
   DF.print(OS);
 }
 
@@ -85,7 +84,7 @@ DominanceFrontier DominanceFrontierAnalysis::run(Function &F,
 }
 
 DominanceFrontierPrinterPass::DominanceFrontierPrinterPass(raw_ostream &OS)
-  : OS(OS) {}
+    : OS(OS) {}
 
 PreservedAnalyses
 DominanceFrontierPrinterPass::run(Function &F, FunctionAnalysisManager &AM) {
diff --git a/llvm/lib/Analysis/GlobalsModRef.cpp b/llvm/lib/Analysis/GlobalsModRef.cpp
index 527f19b194eeb91..16e408c3b92e761 100644
--- a/llvm/lib/Analysis/GlobalsModRef.cpp
+++ b/llvm/lib/Analysis/GlobalsModRef.cpp
@@ -35,7 +35,8 @@ using namespace llvm;
 
 STATISTIC(NumNonAddrTakenGlobalVars,
           "Number of global vars without address taken");
-STATISTIC(NumNonAddrTakenFunctions,"Number of functions without address taken");
+STATISTIC(NumNonAddrTakenFunctions,
+          "Number of functions without address taken");
 STATISTIC(NumNoMemFunctions, "Number of functions that do not access memory");
 STATISTIC(NumReadMemFunctions, "Number of functions that only read memory");
 STATISTIC(NumIndirectGlobalVars, "Number of indirect global objects");
@@ -99,14 +100,11 @@ class GlobalsAAResult::FunctionInfo {
 
 public:
   FunctionInfo() = default;
-  ~FunctionInfo() {
-    delete Info.getPointer();
-  }
+  ~FunctionInfo() { delete Info.getPointer(); }
  // Spell out the copy and move constructors and assignment operators to get
   // deep copy semantics and correct move semantics in the face of the
   // pointer-int pair.
-  FunctionInfo(const FunctionInfo &Arg)
-      : Info(nullptr, Arg.Info.getInt()) {
+  FunctionInfo(const FunctionInfo &Arg) : Info(nullptr, Arg.Info.getInt()) {
     if (const auto *ArgPtr = Arg.Info.getPointer())
       Info.setPointer(new AlignedMap(*ArgPtr));
   }
@@ -619,12 +617,11 @@ void GlobalsAAResult::AnalyzeCallGraph(CallGraph &CG, Module &M) {
   }
 }
 
-// GV is a non-escaping global. V is a pointer address that has been loaded from.
-// If we can prove that V must escape, we can conclude that a load from V cannot
-// alias GV.
+// GV is a non-escaping global. V is a pointer address that has been loaded
+// from. If we can prove that V must escape, we can conclude that a load from V
+// cannot alias GV.
 static bool isNonEscapingGlobalNoAliasWithLoad(const GlobalValue *GV,
-                                               const Value *V,
-                                               int &Depth,
+                                               const Value *V, int &Depth,
                                                const DataLayout &DL) {
   SmallPtrSet<const Value *, 8> Visited;
   SmallVector<const Value *, 8> Inputs;
@@ -633,8 +630,8 @@ static bool isNonEscapingGlobalNoAliasWithLoad(const GlobalValue *GV,
   do {
     const Value *Input = Inputs.pop_back_val();
 
-    if (isa<GlobalValue>(Input) || isa<Argument>(Input) || isa<CallInst>(Input) ||
-        isa<InvokeInst>(Input))
+    if (isa<GlobalValue>(Input) || isa<Argument>(Input) ||
+        isa<CallInst>(Input) || isa<InvokeInst>(Input))
       // Arguments to functions or returns from functions are inherently
       // escaping, so we can immediately classify those as not aliasing any
       // non-addr-taken globals.
@@ -730,9 +727,9 @@ bool GlobalsAAResult::isNonEscapingGlobalNoAlias(const GlobalValue *GV,
       // FIXME: The condition can be refined, but be conservative for now.
       auto *GVar = dyn_cast<GlobalVariable>(GV);
       auto *InputGVar = dyn_cast<GlobalVariable>(InputGV);
-      if (GVar && InputGVar &&
-          !GVar->isDeclaration() && !InputGVar->isDeclaration() &&
-          !GVar->isInterposable() && !InputGVar->isInterposable()) {
+      if (GVar && InputGVar && !GVar->isDeclaration() &&
+          !InputGVar->isDeclaration() && !GVar->isInterposable() &&
+          !InputGVar->isInterposable()) {
         Type *GVType = GVar->getInitializer()->getType();
         Type *InputGVType = InputGVar->getInitializer()->getType();
         if (GVType->isSized() && InputGVType->isSized() &&
@@ -903,7 +900,7 @@ ModRefInfo GlobalsAAResult::getModRefInfoForArgument(const CallBase *Call,
   // Iterate through all the arguments to the called function. If any argument
   // is based on GV, return the conservative result.
   for (const auto &A : Call->args()) {
-    SmallVector<const Value*, 4> Objects;
+    SmallVector<const Value *, 4> Objects;
     getUnderlyingObjects(A, Objects);
 
     // All objects must be identified.
diff --git a/llvm/lib/Analysis/GuardUtils.cpp b/llvm/lib/Analysis/GuardUtils.cpp
index 83734b7eac0c823..878b0a49cfd3f66 100644
--- a/llvm/lib/Analysis/GuardUtils.cpp
+++ b/llvm/lib/Analysis/GuardUtils.cpp
@@ -52,7 +52,7 @@ bool llvm::parseWidenableBranch(const User *U, Value *&Condition,
                                 BasicBlock *&IfTrueBB, BasicBlock *&IfFalseBB) {
 
   Use *C, *WC;
-  if (parseWidenableBranch(const_cast<User*>(U), C, WC, IfTrueBB, IfFalseBB)) {
+  if (parseWidenableBranch(const_cast<User *>(U), C, WC, IfTrueBB, IfFalseBB)) {
     if (C)
       Condition = C->get();
     else
@@ -63,7 +63,7 @@ bool llvm::parseWidenableBranch(const User *U, Value *&Condition,
   return false;
 }
 
-bool llvm::parseWidenableBranch(User *U, Use *&C,Use *&WC,
+bool llvm::parseWidenableBranch(User *U, Use *&C, Use *&WC,
                                 BasicBlock *&IfTrueBB, BasicBlock *&IfFalseBB) {
 
   auto *BI = dyn_cast<BranchInst>(U);
diff --git a/llvm/lib/Analysis/HeatUtils.cpp b/llvm/lib/Analysis/HeatUtils.cpp
index b6a0d5810b04fa3..4cc6cd4550e1443 100644
--- a/llvm/lib/Analysis/HeatUtils.cpp
+++ b/llvm/lib/Analysis/HeatUtils.cpp
@@ -37,13 +37,12 @@ static const char heatPalette[heatSize][8] = {
     "#d24b40", "#d0473d", "#cc403a", "#ca3b37", "#c53334", "#c32e31", "#be242e",
     "#bb1b2c", "#b70d28"};
 
-uint64_t
-getNumOfCalls(Function &callerFunction, Function &calledFunction) {
+uint64_t getNumOfCalls(Function &callerFunction, Function &calledFunction) {
   uint64_t counter = 0;
   for (User *U : calledFunction.users()) {
     if (auto CI = dyn_cast<CallInst>(U)) {
       if (CI->getCaller() == (&callerFunction)) {
-          counter += 1;
+        counter += 1;
       }
     }
   }
diff --git a/llvm/lib/Analysis/IRSimilarityIdentifier.cpp b/llvm/lib/Analysis/IRSimilarityIdentifier.cpp
index f029c8342fdefb5..beea0222122a3e5 100644
--- a/llvm/lib/Analysis/IRSimilarityIdentifier.cpp
+++ b/llvm/lib/Analysis/IRSimilarityIdentifier.cpp
@@ -41,9 +41,9 @@ cl::opt<bool>
                      cl::desc("only allow matching call instructions if the "
                               "name and type signature match."));
 
-cl::opt<bool>
-    DisableIntrinsics("no-ir-sim-intrinsics", cl::init(false), cl::ReallyHidden,
-                      cl::desc("Don't match or outline intrinsics"));
+cl::opt<bool> DisableIntrinsics("no-ir-sim-intrinsics", cl::init(false),
+                                cl::ReallyHidden,
+                                cl::desc("Don't match or outline intrinsics"));
 } // namespace llvm
 
 IRInstructionData::IRInstructionData(Instruction &I, bool Legality,
@@ -111,20 +111,18 @@ void IRInstructionData::setBranchSuccessors(
 }
 
 ArrayRef<Value *> IRInstructionData::getBlockOperVals() {
-  assert((isa<BranchInst>(Inst) ||
-         isa<PHINode>(Inst)) && "Instruction must be branch or PHINode");
-  
+  assert((isa<BranchInst>(Inst) || isa<PHINode>(Inst)) &&
+         "Instruction must be branch or PHINode");
+
   if (BranchInst *BI = dyn_cast<BranchInst>(Inst))
     return ArrayRef<Value *>(
-      std::next(OperVals.begin(), BI->isConditional() ? 1 : 0),
-      OperVals.end()
-    );
+        std::next(OperVals.begin(), BI->isConditional() ? 1 : 0),
+        OperVals.end());
 
   if (PHINode *PN = dyn_cast<PHINode>(Inst))
     return ArrayRef<Value *>(
-      std::next(OperVals.begin(), PN->getNumIncomingValues()),
-      OperVals.end()
-    );
+        std::next(OperVals.begin(), PN->getNumIncomingValues()),
+        OperVals.end());
 
   return ArrayRef<Value *>();
 }
@@ -210,8 +208,7 @@ CmpInst::Predicate IRInstructionData::getPredicate() const {
 }
 
 StringRef IRInstructionData::getCalleeName() const {
-  assert(isa<CallInst>(Inst) &&
-         "Can only get a name from a call instruction");
+  assert(isa<CallInst>(Inst) && "Can only get a name from a call instruction");
 
   assert(CalleeName && "CalleeName has not been set");
 
@@ -375,7 +372,8 @@ unsigned IRInstructionMapper::mapToLegalUnsigned(
 IRInstructionData *
 IRInstructionMapper::allocateIRInstructionData(Instruction &I, bool Legality,
                                                IRInstructionDataList &IDL) {
-  return new (InstDataAllocator->Allocate()) IRInstructionData(I, Legality, IDL);
+  return new (InstDataAllocator->Allocate())
+      IRInstructionData(I, Legality, IDL);
 }
 
 IRInstructionData *
@@ -383,8 +381,7 @@ IRInstructionMapper::allocateIRInstructionData(IRInstructionDataList &IDL) {
   return new (InstDataAllocator->Allocate()) IRInstructionData(IDL);
 }
 
-IRInstructionDataList *
-IRInstructionMapper::allocateIRInstructionDataList() {
+IRInstructionDataList *IRInstructionMapper::allocateIRInstructionDataList() {
   return new (IDLAllocator->Allocate()) IRInstructionDataList();
 }
 
@@ -486,7 +483,7 @@ IRSimilarityCandidate::IRSimilarityCandidate(unsigned StartIdx, unsigned Len,
   for (BasicBlock *BB : BBSet) {
     if (ValueToNumber.contains(BB))
       continue;
-    
+
     ValueToNumber.try_emplace(BB, LocalValNumber);
     NumberToValue.try_emplace(LocalValNumber, BB);
     LocalValNumber++;
@@ -529,10 +526,9 @@ bool IRSimilarityCandidate::isSimilar(const IRSimilarityCandidate &A,
 /// \returns true if there exists a possible mapping between the source
 /// Instruction operands and the target Instruction operands, and false if not.
 static bool checkNumberingAndReplaceCommutative(
-  const DenseMap<Value *, unsigned> &SourceValueToNumberMapping,
-  DenseMap<unsigned, DenseSet<unsigned>> &CurrentSrcTgtNumberMapping,
-  ArrayRef<Value *> &SourceOperands,
-  DenseSet<unsigned> &TargetValueNumbers){
+    const DenseMap<Value *, unsigned> &SourceValueToNumberMapping,
+    DenseMap<unsigned, DenseSet<unsigned>> &CurrentSrcTgtNumberMapping,
+    ArrayRef<Value *> &SourceOperands, DenseSet<unsigned> &TargetValueNumbers) {
 
   DenseMap<unsigned, DenseSet<unsigned>>::iterator ValueMappingIt;
 
@@ -562,7 +558,7 @@ static bool checkNumberingAndReplaceCommutative(
     // If we could not find a Value, return 0.
     if (NewSet.empty())
       return false;
-    
+
     // Otherwise replace the old mapping with the newly constructed one.
     if (NewSet.size() != ValueMappingIt->second.size())
       ValueMappingIt->second.swap(NewSet);
@@ -683,9 +679,9 @@ bool IRSimilarityCandidate::compareNonCommutativeOperandMapping(
   return true;
 }
 
-bool IRSimilarityCandidate::compareCommutativeOperandMapping(
-    OperandMapping A, OperandMapping B) {
-  DenseSet<unsigned> ValueNumbersA;      
+bool IRSimilarityCandidate::compareCommutativeOperandMapping(OperandMapping A,
+                                                             OperandMapping B) {
+  DenseSet<unsigned> ValueNumbersA;
   DenseSet<unsigned> ValueNumbersB;
 
   ArrayRef<Value *>::iterator VItA = A.OperVals.begin();
@@ -693,8 +689,7 @@ bool IRSimilarityCandidate::compareCommutativeOperandMapping(
   unsigned OperandLength = A.OperVals.size();
 
   // Find the value number sets for the operands.
-  for (unsigned Idx = 0; Idx < OperandLength;
-       Idx++, VItA++, VItB++) {
+  for (unsigned Idx = 0; Idx < OperandLength; Idx++, VItA++, VItB++) {
     ValueNumbersA.insert(A.IRSC.ValueToNumber.find(*VItA)->second);
     ValueNumbersB.insert(B.IRSC.ValueToNumber.find(*VItB)->second);
   }
@@ -740,7 +735,7 @@ bool IRSimilarityCandidate::compareAssignmentMapping(
     }
     ValueNumberMappingA.erase(ValueMappingIt);
     std::tie(ValueMappingIt, WasInserted) = ValueNumberMappingA.insert(
-      std::make_pair(InstValA, DenseSet<unsigned>({InstValB})));
+        std::make_pair(InstValA, DenseSet<unsigned>({InstValB})));
   }
 
   return true;
@@ -757,7 +752,7 @@ bool IRSimilarityCandidate::checkRelativeLocations(RelativeLocMapping A,
   DenseSet<BasicBlock *> BasicBlockB;
   A.IRSC.getBasicBlocks(BasicBlockA);
   B.IRSC.getBasicBlocks(BasicBlockB);
-  
+
   // Determine if the block is contained in the region.
   bool AContained = BasicBlockA.contains(ABB);
   bool BContained = BasicBlockB.contains(BBB);
@@ -766,7 +761,7 @@ bool IRSimilarityCandidate::checkRelativeLocations(RelativeLocMapping A,
   // the region.
   if (AContained != BContained)
     return false;
-  
+
   // If both are contained, then we need to make sure that the relative
   // distance to the target blocks are the same.
   if (AContained)
@@ -829,7 +824,7 @@ bool IRSimilarityCandidate::compareStructure(
     if (!compareAssignmentMapping(InstValA, InstValB, ValueNumberMappingA,
                                   ValueNumberMappingB))
       return false;
-    
+
     if (!compareAssignmentMapping(InstValB, InstValA, ValueNumberMappingB,
                                   ValueNumberMappingA))
       return false;
@@ -967,7 +962,8 @@ void IRSimilarityIdentifier::populateMapper(
 /// \param [out] CandsForRepSubstring - The vector to store the generated
 /// IRSimilarityCandidates.
 static void createCandidatesFromSuffixTree(
-    const IRInstructionMapper& Mapper, std::vector<IRInstructionData *> &InstrList,
+    const IRInstructionMapper &Mapper,
+    std::vector<IRInstructionData *> &InstrList,
     std::vector<unsigned> &IntegerMapping, SuffixTree::RepeatedSubstring &RS,
     std::vector<IRSimilarityCandidate> &CandsForRepSubstring) {
 
@@ -1113,7 +1109,7 @@ void IRSimilarityCandidate::createCanonicalRelationFrom(
          "Canonical Relationship is non-empty");
   assert(!SourceCandLarge.NumberToCanonNum.empty() &&
          "Canonical Relationship is non-empty");
-  
+
   assert(!TargetCandLarge.CanonNumToNumber.empty() &&
          "Canonical Relationship is non-empty");
   assert(!TargetCandLarge.NumberToCanonNum.empty() &&
@@ -1130,7 +1126,7 @@ void IRSimilarityCandidate::createCanonicalRelationFrom(
     Value *CurrVal = ValueNumPair.first;
     unsigned TargetCandGVN = ValueNumPair.second;
 
-    // Find the numbering in the large candidate that surrounds the 
+    // Find the numbering in the large candidate that surrounds the
     // current candidate.
     std::optional<unsigned> OLargeTargetGVN = TargetCandLarge.getGVN(CurrVal);
     assert(OLargeTargetGVN.has_value() && "GVN not found for Value");
@@ -1140,13 +1136,13 @@ void IRSimilarityCandidate::createCanonicalRelationFrom(
         TargetCandLarge.getCanonicalNum(OLargeTargetGVN.value());
     assert(OTargetCandCanon.has_value() &&
           "Canonical Number not found for GVN");
-    
+
     // Get the GVN in the large source candidate from the canonical numbering.
     std::optional<unsigned> OLargeSourceGVN =
         SourceCandLarge.fromCanonicalNum(OTargetCandCanon.value());
     assert(OLargeSourceGVN.has_value() &&
            "GVN Number not found for Canonical Number");
-    
+
     // Get the Value from the GVN in the large source candidate.
     std::optional<Value *> OLargeSourceV =
         SourceCandLarge.fromGVN(OLargeSourceGVN.value());
@@ -1255,7 +1251,7 @@ CheckLargerCands(
   // whether or not there is a match.
   if (IncludedGroupsA.empty())
     return Result;
-  
+
   // Create a pair that contains the larger candidates.
   auto ItA = IncludedGroupAndCandA.find(*IncludedGroupsA.begin());
   auto ItB = IncludedGroupAndCandB.find(*IncludedGroupsA.begin());
@@ -1279,9 +1275,8 @@ CheckLargerCands(
 static void findCandidateStructures(
     std::vector<IRSimilarityCandidate> &CandsForRepSubstring,
     DenseMap<unsigned, SimilarityGroup> &StructuralGroups,
-    DenseMap<unsigned,  DenseSet<IRSimilarityCandidate *>> &IndexToIncludedCand,
-    DenseMap<IRSimilarityCandidate *, unsigned> &CandToOverallGroup
-    ) {
+    DenseMap<unsigned, DenseSet<IRSimilarityCandidate *>> &IndexToIncludedCand,
+    DenseMap<IRSimilarityCandidate *, unsigned> &CandToOverallGroup) {
   std::vector<IRSimilarityCandidate>::iterator CandIt, CandEndIt, InnerCandIt,
       InnerCandEndIt;
 
@@ -1389,7 +1384,7 @@ void IRSimilarityIdentifier::findCandidates(
 
   DenseMap<unsigned, SimilarityGroup> StructuralGroups;
   DenseMap<unsigned, DenseSet<IRSimilarityCandidate *>> IndexToIncludedCand;
-  DenseMap<IRSimilarityCandidate *, unsigned> CandToGroup; 
+  DenseMap<IRSimilarityCandidate *, unsigned> CandToGroup;
 
   // Iterate over the subsequences found by the Suffix Tree to create
   // IRSimilarityCandidates for each repeated subsequence and determine which
@@ -1511,9 +1506,9 @@ bool IRSimilarityIdentifierWrapperPass::runOnModule(Module &M) {
 AnalysisKey IRSimilarityAnalysis::Key;
 IRSimilarityIdentifier IRSimilarityAnalysis::run(Module &M,
                                                  ModuleAnalysisManager &) {
-  auto IRSI = IRSimilarityIdentifier(!DisableBranches, !DisableIndirectCalls,
-                                     MatchCallsByName, !DisableIntrinsics,
-                                     false);
+  auto IRSI =
+      IRSimilarityIdentifier(!DisableBranches, !DisableIndirectCalls,
+                             MatchCallsByName, !DisableIntrinsics, false);
   IRSI.findSimilarity(M);
   return IRSI;
 }
diff --git a/llvm/lib/Analysis/IVDescriptors.cpp b/llvm/lib/Analysis/IVDescriptors.cpp
index 46629e381bc3665..da03493b2daa61b 100644
--- a/llvm/lib/Analysis/IVDescriptors.cpp
+++ b/llvm/lib/Analysis/IVDescriptors.cpp
@@ -741,9 +741,8 @@ RecurrenceDescriptor::isConditionalRdxPattern(RecurKind Kind, Instruction *I) {
       (!isa<PHINode>(*TrueVal) && !isa<PHINode>(*FalseVal)))
     return InstDesc(false, I);
 
-  Instruction *I1 =
-      isa<PHINode>(*TrueVal) ? dyn_cast<Instruction>(FalseVal)
-                             : dyn_cast<Instruction>(TrueVal);
+  Instruction *I1 = isa<PHINode>(*TrueVal) ? dyn_cast<Instruction>(FalseVal)
+                                           : dyn_cast<Instruction>(TrueVal);
   if (!I1 || !I1->isBinaryOp())
     return InstDesc(false, I);
 
@@ -805,14 +804,14 @@ RecurrenceDescriptor::isRecurrenceInstr(Loop *L, PHINode *OrigPhi,
     if (isAnyOfRecurrenceKind(Kind))
       return isAnyOfPattern(L, OrigPhi, I, Prev);
     auto HasRequiredFMF = [&]() {
-     if (FuncFMF.noNaNs() && FuncFMF.noSignedZeros())
-       return true;
-     if (isa<FPMathOperator>(I) && I->hasNoNaNs() && I->hasNoSignedZeros())
-       return true;
-     // minimum and maximum intrinsics do not require nsz and nnan flags since
-     // NaN and signed zeroes are propagated in the intrinsic implementation.
-     return match(I, m_Intrinsic<Intrinsic::minimum>(m_Value(), m_Value())) ||
-            match(I, m_Intrinsic<Intrinsic::maximum>(m_Value(), m_Value()));
+      if (FuncFMF.noNaNs() && FuncFMF.noSignedZeros())
+        return true;
+      if (isa<FPMathOperator>(I) && I->hasNoNaNs() && I->hasNoSignedZeros())
+        return true;
+      // minimum and maximum intrinsics do not require nsz and nnan flags since
+      // NaN and signed zeroes are propagated in the intrinsic implementation.
+      return match(I, m_Intrinsic<Intrinsic::minimum>(m_Value(), m_Value())) ||
+             match(I, m_Intrinsic<Intrinsic::maximum>(m_Value(), m_Value()));
     };
     if (isIntMinMaxRecurrenceKind(Kind) ||
         (HasRequiredFMF() && isFPMinMaxRecurrenceKind(Kind)))
@@ -846,8 +845,7 @@ bool RecurrenceDescriptor::isReductionPHI(PHINode *Phi, Loop *TheLoop,
   BasicBlock *Header = TheLoop->getHeader();
   Function &F = *Header->getParent();
   FastMathFlags FMF;
-  FMF.setNoNaNs(
-      F.getFnAttribute("no-nans-fp-math").getValueAsBool());
+  FMF.setNoNaNs(F.getFnAttribute("no-nans-fp-math").getValueAsBool());
   FMF.setNoSignedZeros(
       F.getFnAttribute("no-signed-zeros-fp-math").getValueAsBool());
 
@@ -933,14 +931,16 @@ bool RecurrenceDescriptor::isReductionPHI(PHINode *Phi, Loop *TheLoop,
     LLVM_DEBUG(dbgs() << "Found an FMulAdd reduction PHI." << *Phi << "\n");
     return true;
   }
-  if (AddReductionVar(Phi, RecurKind::FMaximum, TheLoop, FMF, RedDes, DB, AC, DT,
-                      SE)) {
-    LLVM_DEBUG(dbgs() << "Found a float MAXIMUM reduction PHI." << *Phi << "\n");
+  if (AddReductionVar(Phi, RecurKind::FMaximum, TheLoop, FMF, RedDes, DB, AC,
+                      DT, SE)) {
+    LLVM_DEBUG(dbgs() << "Found a float MAXIMUM reduction PHI." << *Phi
+                      << "\n");
     return true;
   }
-  if (AddReductionVar(Phi, RecurKind::FMinimum, TheLoop, FMF, RedDes, DB, AC, DT,
-                      SE)) {
-    LLVM_DEBUG(dbgs() << "Found a float MINIMUM reduction PHI." << *Phi << "\n");
+  if (AddReductionVar(Phi, RecurKind::FMinimum, TheLoop, FMF, RedDes, DB, AC,
+                      DT, SE)) {
+    LLVM_DEBUG(dbgs() << "Found a float MINIMUM reduction PHI." << *Phi
+                      << "\n");
     return true;
   }
   // Not a reduction of known type.
@@ -1058,10 +1058,11 @@ Value *RecurrenceDescriptor::getRecurrenceIdentity(RecurKind K, Type *Tp,
   case RecurKind::FAdd:
     // Adding zero to a number does not change it.
     // FIXME: Ideally we should not need to check FMF for FAdd and should always
-    // use -0.0. However, this will currently result in mixed vectors of 0.0/-0.0.
-    // Instead, we should ensure that 1) the FMF from FAdd are propagated to the PHI
-    // nodes where possible, and 2) PHIs with the nsz flag + -0.0 use 0.0. This would
-    // mean we can then remove the check for noSignedZeros() below (see D98963).
+    // use -0.0. However, this will currently result in mixed vectors of
+    // 0.0/-0.0. Instead, we should ensure that 1) the FMF from FAdd are
+    // propagated to the PHI nodes where possible, and 2) PHIs with the nsz flag
+    // + -0.0 use 0.0. This would mean we can then remove the check for
+    // noSignedZeros() below (see D98963).
     if (FMF.noSignedZeros())
       return ConstantFP::get(Tp, 0.0L);
     return ConstantFP::get(Tp, -0.0L);
@@ -1506,8 +1507,8 @@ bool InductionDescriptor::isInductionPHI(
   // This function assumes that InductionPhi is called only on Phi nodes
   // present inside loop headers. Check for the same, and throw an assert if
   // the current Phi is not present inside the loop header.
-  assert(Phi->getParent() == AR->getLoop()->getHeader()
-    && "Invalid Phi node, not present in loop header");
+  assert(Phi->getParent() == AR->getLoop()->getHeader() &&
+         "Invalid Phi node, not present in loop header");
 
   Value *StartValue =
       Phi->getIncomingValueForBlock(AR->getLoop()->getLoopPreheader());
diff --git a/llvm/lib/Analysis/IVUsers.cpp b/llvm/lib/Analysis/IVUsers.cpp
index 5c7883fb3b37c90..a40e0873964db40 100644
--- a/llvm/lib/Analysis/IVUsers.cpp
+++ b/llvm/lib/Analysis/IVUsers.cpp
@@ -67,7 +67,7 @@ static bool isInteresting(const SCEV *S, const Instruction *I, const Loop *L,
     // the step value is not interesting, since we don't yet know how to
     // do effective SCEV expansions for addrecs with interesting steps.
     return isInteresting(AR->getStart(), I, L, SE, LI) &&
-          !isInteresting(AR->getStepRecurrence(*SE), I, L, SE, LI);
+           !isInteresting(AR->getStepRecurrence(*SE), I, L, SE, LI);
   }
 
   // An add is interesting if exactly one of its operands is interesting.
@@ -139,10 +139,10 @@ bool IVUsers::AddUsersIfInteresting(Instruction *I) {
   // Add this IV user to the Processed set before returning false to ensure that
   // all IV users are members of the set. See IVUsers::isIVUserOrOperand.
   if (!Processed.insert(I).second)
-    return true;    // Instruction already handled.
+    return true; // Instruction already handled.
 
   if (!SE->isSCEVable(I->getType()))
-    return false;   // Void and FP expressions cannot be reduced.
+    return false; // Void and FP expressions cannot be reduced.
 
   // IVUsers is used by LSR which assumes that all SCEV expressions are safe to
   // pass to SCEVExpander. Expressions are not safe to expand if they represent
@@ -364,9 +364,7 @@ const SCEV *IVUsers::getStride(const IVStrideUse &IU, const Loop *L) const {
   return nullptr;
 }
 
-void IVStrideUse::transformToPostInc(const Loop *L) {
-  PostIncLoops.insert(L);
-}
+void IVStrideUse::transformToPostInc(const Loop *L) { PostIncLoops.insert(L); }
 
 void IVStrideUse::deleted() {
   // Remove this user from the list.
diff --git a/llvm/lib/Analysis/InlineSizeEstimatorAnalysis.cpp b/llvm/lib/Analysis/InlineSizeEstimatorAnalysis.cpp
index 37842b198de87a8..c72fda4fbbe9d8e 100644
--- a/llvm/lib/Analysis/InlineSizeEstimatorAnalysis.cpp
+++ b/llvm/lib/Analysis/InlineSizeEstimatorAnalysis.cpp
@@ -130,7 +130,8 @@ size_t getSize(Function &F, TargetTransformInfo &TTI) {
   for (const auto &BB : F)
     for (const auto &I : BB)
       Ret += *(TTI.getInstructionCost(
-          &I, TargetTransformInfo::TargetCostKind::TCK_CodeSize).getValue());
+                      &I, TargetTransformInfo::TargetCostKind::TCK_CodeSize)
+                   .getValue());
   return Ret;
 }
 
diff --git a/llvm/lib/Analysis/InstructionPrecedenceTracking.cpp b/llvm/lib/Analysis/InstructionPrecedenceTracking.cpp
index fba5859b74cef37..d8a1e4bf790f0fe 100644
--- a/llvm/lib/Analysis/InstructionPrecedenceTracking.cpp
+++ b/llvm/lib/Analysis/InstructionPrecedenceTracking.cpp
@@ -18,8 +18,8 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/Analysis/InstructionPrecedenceTracking.h"
-#include "llvm/Analysis/ValueTracking.h"
 #include "llvm/ADT/Statistic.h"
+#include "llvm/Analysis/ValueTracking.h"
 #include "llvm/IR/PatternMatch.h"
 #include "llvm/Support/CommandLine.h"
 
@@ -144,8 +144,7 @@ bool ImplicitControlFlowTracking::isSpecialInstruction(
   return !isGuaranteedToTransferExecutionToSuccessor(Insn);
 }
 
-bool MemoryWriteTracking::isSpecialInstruction(
-    const Instruction *Insn) const {
+bool MemoryWriteTracking::isSpecialInstruction(const Instruction *Insn) const {
   using namespace PatternMatch;
   if (match(Insn, m_Intrinsic<Intrinsic::experimental_widenable_condition>()))
     return false;
diff --git a/llvm/lib/Analysis/InstructionSimplify.cpp b/llvm/lib/Analysis/InstructionSimplify.cpp
index d0cc56ebc2be319..8f270520874f87d 100644
--- a/llvm/lib/Analysis/InstructionSimplify.cpp
+++ b/llvm/lib/Analysis/InstructionSimplify.cpp
@@ -950,7 +950,7 @@ static Value *simplifyMulInst(Value *Op0, Value *Op1, bool IsNSW, bool IsNUW,
        match(Op1, m_Exact(m_IDiv(m_Value(X), m_Specific(Op0)))))) // Y * (X / Y)
     return X;
 
-   if (Op0->getType()->isIntOrIntVectorTy(1)) {
+  if (Op0->getType()->isIntOrIntVectorTy(1)) {
     // mul i1 nsw is a special-case because -1 * -1 is poison (+1 is not
     // representable). All other cases reduce to 0, so just return 0.
     if (IsNSW)
@@ -1124,7 +1124,6 @@ static Value *simplifyDivRem(Instruction::BinaryOps Opcode, Value *Op0,
   if (Op0 == Op1)
     return IsDiv ? ConstantInt::get(Ty, 1) : Constant::getNullValue(Ty);
 
-
   KnownBits Known = computeKnownBits(Op1, Q.DL, 0, Q.AC, Q.CxtI, Q.DT);
   // X / 0 -> poison
   // X % 0 -> poison
@@ -4345,8 +4344,7 @@ static Value *simplifyWithOpReplaced(Value *V, Value *Op, Value *RepOp,
       // (Op == 0) ? 0 : (Op & -Op)            --> Op & -Op
       // (Op == 0) ? 0 : (Op * (binop Op, C))  --> Op * (binop Op, C)
       // (Op == -1) ? -1 : (Op | (binop C, Op) --> Op | (binop C, Op)
-      Constant *Absorber =
-          ConstantExpr::getBinOpAbsorber(Opcode, I->getType());
+      Constant *Absorber = ConstantExpr::getBinOpAbsorber(Opcode, I->getType());
       if ((NewOps[0] == Absorber || NewOps[1] == Absorber) &&
           impliesPoison(BO, Op))
         return Absorber;
@@ -4558,7 +4556,8 @@ static Value *simplifySelectWithICmpCond(Value *CondVal, Value *TrueVal,
   if (!match(CondVal, m_ICmp(Pred, m_Value(CmpLHS), m_Value(CmpRHS))))
     return nullptr;
 
-  if (Value *V = simplifyCmpSelOfMaxMin(CmpLHS, CmpRHS, Pred, TrueVal, FalseVal))
+  if (Value *V =
+          simplifyCmpSelOfMaxMin(CmpLHS, CmpRHS, Pred, TrueVal, FalseVal))
     return V;
 
   // Canonicalize ne to eq predicate.
@@ -5921,7 +5920,7 @@ static Value *simplifyBinOp(unsigned Opcode, Value *LHS, Value *RHS,
     return simplifyAddInst(LHS, RHS, /* IsNSW */ false, /* IsNUW */ false, Q,
                            MaxRecurse);
   case Instruction::Sub:
-    return simplifySubInst(LHS, RHS,  /* IsNSW */ false, /* IsNUW */ false, Q,
+    return simplifySubInst(LHS, RHS, /* IsNSW */ false, /* IsNUW */ false, Q,
                            MaxRecurse);
   case Instruction::Mul:
     return simplifyMulInst(LHS, RHS, /* IsNSW */ false, /* IsNUW */ false, Q,
@@ -6941,7 +6940,8 @@ static Value *simplifyInstructionWithOperands(Instruction *I,
   case Instruction::Call:
     return simplifyCall(
         cast<CallInst>(I), NewOps.back(),
-        NewOps.drop_back(1 + cast<CallInst>(I)->getNumTotalBundleOperands()), Q);
+        NewOps.drop_back(1 + cast<CallInst>(I)->getNumTotalBundleOperands()),
+        Q);
   case Instruction::Freeze:
     return llvm::simplifyFreezeInst(NewOps[0], Q);
 #define HANDLE_CAST_INST(num, opc, clas) case Instruction::opc:
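The `mul i1 nsw` block whose indentation is fixed above encodes a small arithmetic fact worth spelling out: in a 1-bit signed type the only values are 0 and -1, so the one product that overflows is -1 * -1 = +1, which is not representable, making the nsw result poison; every other case is 0. A toy sketch of that case analysis (illustrative Python, not LLVM code — `mul_i1_nsw` and the `"poison"` marker are made up for this example):

```python
def mul_i1_nsw(a, b):
    # i1 interpreted as signed holds only 0 and -1. An nsw multiply
    # yields poison when the true product is not representable.
    assert a in (0, -1) and b in (0, -1)
    product = a * b
    if product not in (0, -1):
        return "poison"  # -1 * -1 == +1 does not fit in i1
    return product

# Enumerate all four cases: only (-1, -1) is poison, the rest are 0,
# which is why the simplifier can fold `mul i1 nsw` to 0.
results = {(a, b): mul_i1_nsw(a, b) for a in (0, -1) for b in (0, -1)}
```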
diff --git a/llvm/lib/Analysis/Interval.cpp b/llvm/lib/Analysis/Interval.cpp
index f7fffcb3d5e62a6..0f0656741563f9f 100644
--- a/llvm/lib/Analysis/Interval.cpp
+++ b/llvm/lib/Analysis/Interval.cpp
@@ -23,7 +23,7 @@ using namespace llvm;
 
 void Interval::print(raw_ostream &OS) const {
   OS << "-------------------------------------------------------------\n"
-       << "Interval Contents:\n";
+     << "Interval Contents:\n";
 
   // Print out all of the basic blocks in the interval...
   for (const BasicBlock *Node : Nodes)
diff --git a/llvm/lib/Analysis/IntervalPartition.cpp b/llvm/lib/Analysis/IntervalPartition.cpp
index d9620fd405bc2e5..e046c9661fbb3f5 100644
--- a/llvm/lib/Analysis/IntervalPartition.cpp
+++ b/llvm/lib/Analysis/IntervalPartition.cpp
@@ -43,7 +43,7 @@ void IntervalPartition::releaseMemory() {
   RootInterval = nullptr;
 }
 
-void IntervalPartition::print(raw_ostream &O, const Module*) const {
+void IntervalPartition::print(raw_ostream &O, const Module *) const {
   for (const Interval *I : Intervals)
     I->print(O);
 }
@@ -79,7 +79,7 @@ bool IntervalPartition::runOnFunction(Function &F) {
 
   addIntervalToPartition(RootInterval = *I);
 
-  ++I;  // After the first one...
+  ++I; // After the first one...
 
   // Add the rest of the intervals to the partition.
   for (function_interval_iterator E = intervals_end(&F); I != E; ++I)
@@ -96,7 +96,7 @@ bool IntervalPartition::runOnFunction(Function &F) {
 // existing interval graph.  This takes an additional boolean parameter to
 // distinguish it from a copy constructor.  Always pass in false for now.
 IntervalPartition::IntervalPartition(IntervalPartition &IP, bool)
-  : FunctionPass(ID) {
+    : FunctionPass(ID) {
   assert(IP.getRootInterval() && "Cannot operate on empty IntervalPartitions!");
 
   // Pass false to intervals_begin because we take ownership of it's memory
@@ -105,7 +105,7 @@ IntervalPartition::IntervalPartition(IntervalPartition &IP, bool)
 
   addIntervalToPartition(RootInterval = *I);
 
-  ++I;  // After the first one...
+  ++I; // After the first one...
 
   // Add the rest of the intervals to the partition.
   for (interval_part_interval_iterator E = intervals_end(IP); I != E; ++I)
diff --git a/llvm/lib/Analysis/LazyCallGraph.cpp b/llvm/lib/Analysis/LazyCallGraph.cpp
index 473ae75b5d251ef..9203b69de572ef9 100644
--- a/llvm/lib/Analysis/LazyCallGraph.cpp
+++ b/llvm/lib/Analysis/LazyCallGraph.cpp
@@ -179,10 +179,9 @@ LazyCallGraph::LazyCallGraph(
   for (auto &A : M.aliases()) {
     if (A.hasLocalLinkage())
       continue;
-    if (Function* F = dyn_cast<Function>(A.getAliasee())) {
-      LLVM_DEBUG(dbgs() << "  Adding '" << F->getName()
-                        << "' with alias '" << A.getName()
-                        << "' to entry set of the graph.\n");
+    if (Function *F = dyn_cast<Function>(A.getAliasee())) {
+      LLVM_DEBUG(dbgs() << "  Adding '" << F->getName() << "' with alias '"
+                        << A.getName() << "' to entry set of the graph.\n");
       addEdge(EntryEdges.Edges, EntryEdges.EdgeIndexMap, get(*F), Edge::Ref);
     }
   }
diff --git a/llvm/lib/Analysis/LazyValueInfo.cpp b/llvm/lib/Analysis/LazyValueInfo.cpp
index 80136a090a8010d..5056c676d9ac977 100644
--- a/llvm/lib/Analysis/LazyValueInfo.cpp
+++ b/llvm/lib/Analysis/LazyValueInfo.cpp
@@ -52,15 +52,17 @@ LazyValueInfoWrapperPass::LazyValueInfoWrapperPass() : FunctionPass(ID) {
   initializeLazyValueInfoWrapperPassPass(*PassRegistry::getPassRegistry());
 }
 INITIALIZE_PASS_BEGIN(LazyValueInfoWrapperPass, "lazy-value-info",
-                "Lazy Value Information Analysis", false, true)
+                      "Lazy Value Information Analysis", false, true)
 INITIALIZE_PASS_DEPENDENCY(AssumptionCacheTracker)
 INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
 INITIALIZE_PASS_END(LazyValueInfoWrapperPass, "lazy-value-info",
-                "Lazy Value Information Analysis", false, true)
+                    "Lazy Value Information Analysis", false, true)
 
 namespace llvm {
-  FunctionPass *createLazyValueInfoPass() { return new LazyValueInfoWrapperPass(); }
+FunctionPass *createLazyValueInfoPass() {
+  return new LazyValueInfoWrapperPass();
 }
+} // namespace llvm
 
 AnalysisKey LazyValueAnalysis::Key;
 
@@ -68,8 +70,7 @@ AnalysisKey LazyValueAnalysis::Key;
 /// This is as precise as any lattice value can get while still representing
 /// reachable code.
 static bool hasSingleValue(const ValueLatticeElement &Val) {
-  if (Val.isConstantRange() &&
-      Val.getConstantRange().isSingleElement())
+  if (Val.isConstantRange() && Val.getConstantRange().isSingleElement())
     // Integer constants are single element ranges
     return true;
   if (Val.isConstant())
@@ -134,130 +135,127 @@ static ValueLatticeElement intersect(const ValueLatticeElement &A,
 //===----------------------------------------------------------------------===//
 
 namespace {
-  /// A callback value handle updates the cache when values are erased.
-  class LazyValueInfoCache;
-  struct LVIValueHandle final : public CallbackVH {
-    LazyValueInfoCache *Parent;
+/// A callback value handle updates the cache when values are erased.
+class LazyValueInfoCache;
+struct LVIValueHandle final : public CallbackVH {
+  LazyValueInfoCache *Parent;
 
-    LVIValueHandle(Value *V, LazyValueInfoCache *P = nullptr)
-      : CallbackVH(V), Parent(P) { }
+  LVIValueHandle(Value *V, LazyValueInfoCache *P = nullptr)
+      : CallbackVH(V), Parent(P) {}
 
-    void deleted() override;
-    void allUsesReplacedWith(Value *V) override {
-      deleted();
-    }
-  };
+  void deleted() override;
+  void allUsesReplacedWith(Value *V) override { deleted(); }
+};
 } // end anonymous namespace
 
 namespace {
-  using NonNullPointerSet = SmallDenseSet<AssertingVH<Value>, 2>;
-
-  /// This is the cache kept by LazyValueInfo which
-  /// maintains information about queries across the clients' queries.
-  class LazyValueInfoCache {
-    /// This is all of the cached information for one basic block. It contains
-    /// the per-value lattice elements, as well as a separate set for
-    /// overdefined values to reduce memory usage. Additionally pointers
-    /// dereferenced in the block are cached for nullability queries.
-    struct BlockCacheEntry {
-      SmallDenseMap<AssertingVH<Value>, ValueLatticeElement, 4> LatticeElements;
-      SmallDenseSet<AssertingVH<Value>, 4> OverDefined;
-      // std::nullopt indicates that the nonnull pointers for this basic block
-      // block have not been computed yet.
-      std::optional<NonNullPointerSet> NonNullPointers;
-    };
-
-    /// Cached information per basic block.
-    DenseMap<PoisoningVH<BasicBlock>, std::unique_ptr<BlockCacheEntry>>
-        BlockCache;
-    /// Set of value handles used to erase values from the cache on deletion.
-    DenseSet<LVIValueHandle, DenseMapInfo<Value *>> ValueHandles;
-
-    const BlockCacheEntry *getBlockEntry(BasicBlock *BB) const {
-      auto It = BlockCache.find_as(BB);
-      if (It == BlockCache.end())
-        return nullptr;
-      return It->second.get();
-    }
+using NonNullPointerSet = SmallDenseSet<AssertingVH<Value>, 2>;
+
+/// This is the cache kept by LazyValueInfo which
+/// maintains information about queries across the clients' queries.
+class LazyValueInfoCache {
+  /// This is all of the cached information for one basic block. It contains
+  /// the per-value lattice elements, as well as a separate set for
+  /// overdefined values to reduce memory usage. Additionally pointers
+  /// dereferenced in the block are cached for nullability queries.
+  struct BlockCacheEntry {
+    SmallDenseMap<AssertingVH<Value>, ValueLatticeElement, 4> LatticeElements;
+    SmallDenseSet<AssertingVH<Value>, 4> OverDefined;
+    // std::nullopt indicates that the nonnull pointers for this basic block
+    // block have not been computed yet.
+    std::optional<NonNullPointerSet> NonNullPointers;
+  };
 
-    BlockCacheEntry *getOrCreateBlockEntry(BasicBlock *BB) {
-      auto It = BlockCache.find_as(BB);
-      if (It == BlockCache.end())
-        It = BlockCache.insert({ BB, std::make_unique<BlockCacheEntry>() })
-                       .first;
+  /// Cached information per basic block.
+  DenseMap<PoisoningVH<BasicBlock>, std::unique_ptr<BlockCacheEntry>>
+      BlockCache;
+  /// Set of value handles used to erase values from the cache on deletion.
+  DenseSet<LVIValueHandle, DenseMapInfo<Value *>> ValueHandles;
+
+  const BlockCacheEntry *getBlockEntry(BasicBlock *BB) const {
+    auto It = BlockCache.find_as(BB);
+    if (It == BlockCache.end())
+      return nullptr;
+    return It->second.get();
+  }
 
-      return It->second.get();
-    }
+  BlockCacheEntry *getOrCreateBlockEntry(BasicBlock *BB) {
+    auto It = BlockCache.find_as(BB);
+    if (It == BlockCache.end())
+      It = BlockCache.insert({BB, std::make_unique<BlockCacheEntry>()}).first;
 
-    void addValueHandle(Value *Val) {
-      auto HandleIt = ValueHandles.find_as(Val);
-      if (HandleIt == ValueHandles.end())
-        ValueHandles.insert({ Val, this });
-    }
+    return It->second.get();
+  }
 
-  public:
-    void insertResult(Value *Val, BasicBlock *BB,
-                      const ValueLatticeElement &Result) {
-      BlockCacheEntry *Entry = getOrCreateBlockEntry(BB);
+  void addValueHandle(Value *Val) {
+    auto HandleIt = ValueHandles.find_as(Val);
+    if (HandleIt == ValueHandles.end())
+      ValueHandles.insert({Val, this});
+  }
 
-      // Insert over-defined values into their own cache to reduce memory
-      // overhead.
-      if (Result.isOverdefined())
-        Entry->OverDefined.insert(Val);
-      else
-        Entry->LatticeElements.insert({ Val, Result });
+public:
+  void insertResult(Value *Val, BasicBlock *BB,
+                    const ValueLatticeElement &Result) {
+    BlockCacheEntry *Entry = getOrCreateBlockEntry(BB);
+
+    // Insert over-defined values into their own cache to reduce memory
+    // overhead.
+    if (Result.isOverdefined())
+      Entry->OverDefined.insert(Val);
+    else
+      Entry->LatticeElements.insert({Val, Result});
+
+    addValueHandle(Val);
+  }
 
-      addValueHandle(Val);
-    }
+  std::optional<ValueLatticeElement> getCachedValueInfo(Value *V,
+                                                        BasicBlock *BB) const {
+    const BlockCacheEntry *Entry = getBlockEntry(BB);
+    if (!Entry)
+      return std::nullopt;
 
-    std::optional<ValueLatticeElement>
-    getCachedValueInfo(Value *V, BasicBlock *BB) const {
-      const BlockCacheEntry *Entry = getBlockEntry(BB);
-      if (!Entry)
-        return std::nullopt;
+    if (Entry->OverDefined.count(V))
+      return ValueLatticeElement::getOverdefined();
 
-      if (Entry->OverDefined.count(V))
-        return ValueLatticeElement::getOverdefined();
+    auto LatticeIt = Entry->LatticeElements.find_as(V);
+    if (LatticeIt == Entry->LatticeElements.end())
+      return std::nullopt;
 
-      auto LatticeIt = Entry->LatticeElements.find_as(V);
-      if (LatticeIt == Entry->LatticeElements.end())
-        return std::nullopt;
+    return LatticeIt->second;
+  }
 
-      return LatticeIt->second;
+  bool
+  isNonNullAtEndOfBlock(Value *V, BasicBlock *BB,
+                        function_ref<NonNullPointerSet(BasicBlock *)> InitFn) {
+    BlockCacheEntry *Entry = getOrCreateBlockEntry(BB);
+    if (!Entry->NonNullPointers) {
+      Entry->NonNullPointers = InitFn(BB);
+      for (Value *V : *Entry->NonNullPointers)
+        addValueHandle(V);
     }
 
-    bool isNonNullAtEndOfBlock(
-        Value *V, BasicBlock *BB,
-        function_ref<NonNullPointerSet(BasicBlock *)> InitFn) {
-      BlockCacheEntry *Entry = getOrCreateBlockEntry(BB);
-      if (!Entry->NonNullPointers) {
-        Entry->NonNullPointers = InitFn(BB);
-        for (Value *V : *Entry->NonNullPointers)
-          addValueHandle(V);
-      }
-
-      return Entry->NonNullPointers->count(V);
-    }
+    return Entry->NonNullPointers->count(V);
+  }
 
-    /// clear - Empty the cache.
-    void clear() {
-      BlockCache.clear();
-      ValueHandles.clear();
-    }
+  /// clear - Empty the cache.
+  void clear() {
+    BlockCache.clear();
+    ValueHandles.clear();
+  }
 
-    /// Inform the cache that a given value has been deleted.
-    void eraseValue(Value *V);
+  /// Inform the cache that a given value has been deleted.
+  void eraseValue(Value *V);
 
-    /// This is part of the update interface to inform the cache
-    /// that a block has been deleted.
-    void eraseBlock(BasicBlock *BB);
+  /// This is part of the update interface to inform the cache
+  /// that a block has been deleted.
+  void eraseBlock(BasicBlock *BB);
 
-    /// Updates the cache to remove any influence an overdefined value in
-    /// OldSucc might have (unless also overdefined in NewSucc).  This just
-    /// flushes elements from the cache and does not add any.
-    void threadEdgeImpl(BasicBlock *OldSucc,BasicBlock *NewSucc);
-  };
-}
+  /// Updates the cache to remove any influence an overdefined value in
+  /// OldSucc might have (unless also overdefined in NewSucc).  This just
+  /// flushes elements from the cache and does not add any.
+  void threadEdgeImpl(BasicBlock *OldSucc, BasicBlock *NewSucc);
+};
+} // namespace
 
 void LazyValueInfoCache::eraseValue(Value *V) {
   for (auto &Pair : BlockCache) {
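The reindented `LazyValueInfoCache` above preserves a deliberate layout: per block, fully-known lattice elements live in a map, while overdefined values go into a plain set, since storing a full `ValueLatticeElement` for "we know nothing" would waste memory. A rough sketch of that split, under the assumption that a sentinel can stand in for the overdefined lattice value (names here are illustrative, not the actual API):

```python
OVERDEFINED = object()  # stand-in for ValueLatticeElement::getOverdefined()

class BlockCacheEntry:
    """Per-basic-block cache mirroring the patch's layout: full lattice
    elements in a dict, overdefined values in a cheaper set."""

    def __init__(self):
        self.lattice = {}        # value -> lattice element
        self.overdefined = set() # values known to be overdefined

    def insert(self, val, result):
        # Route overdefined results to the set; everything else to the map.
        if result is OVERDEFINED:
            self.overdefined.add(val)
        else:
            self.lattice[val] = result

    def lookup(self, val):
        # Check the overdefined set first, then the lattice map;
        # None means the value was never cached for this block.
        if val in self.overdefined:
            return OVERDEFINED
        return self.lattice.get(val)
```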
@@ -278,9 +276,7 @@ void LVIValueHandle::deleted() {
   Parent->eraseValue(*this);
 }
 
-void LazyValueInfoCache::eraseBlock(BasicBlock *BB) {
-  BlockCache.erase(BB);
-}
+void LazyValueInfoCache::eraseBlock(BasicBlock *BB) { BlockCache.erase(BB); }
 
 void LazyValueInfoCache::threadEdgeImpl(BasicBlock *OldSucc,
                                         BasicBlock *NewSucc) {
@@ -294,7 +290,7 @@ void LazyValueInfoCache::threadEdgeImpl(BasicBlock *OldSucc,
   // for all values that were marked overdefined in OldSucc, and for those same
   // values in any successor of OldSucc (except NewSucc) in which they were
   // also marked overdefined.
-  std::vector<BasicBlock*> worklist;
+  std::vector<BasicBlock *> worklist;
   worklist.push_back(OldSucc);
 
   const BlockCacheEntry *Entry = getBlockEntry(OldSucc);
@@ -312,7 +308,8 @@ void LazyValueInfoCache::threadEdgeImpl(BasicBlock *OldSucc,
     worklist.pop_back();
 
     // Skip blocks only accessible through NewSucc.
-    if (ToUpdate == NewSucc) continue;
+    if (ToUpdate == NewSucc)
+      continue;
 
     // If a value was marked overdefined in OldSucc, and is here too...
     auto OI = BlockCache.find_as(ToUpdate);
@@ -330,7 +327,8 @@ void LazyValueInfoCache::threadEdgeImpl(BasicBlock *OldSucc,
       changed = true;
     }
 
-    if (!changed) continue;
+    if (!changed)
+      continue;
 
     llvm::append_range(worklist, successors(ToUpdate));
   }
@@ -368,19 +366,19 @@ class LazyValueInfoImpl {
   /// This stack holds the state of the value solver during a query.
   /// It basically emulates the callstack of the naive
   /// recursive value lookup process.
-  SmallVector<std::pair<BasicBlock*, Value*>, 8> BlockValueStack;
+  SmallVector<std::pair<BasicBlock *, Value *>, 8> BlockValueStack;
 
   /// Keeps track of which block-value pairs are in BlockValueStack.
-  DenseSet<std::pair<BasicBlock*, Value*> > BlockValueSet;
+  DenseSet<std::pair<BasicBlock *, Value *>> BlockValueSet;
 
   /// Push BV onto BlockValueStack unless it's already in there.
   /// Returns true on success.
   bool pushBlockValue(const std::pair<BasicBlock *, Value *> &BV) {
     if (!BlockValueSet.insert(BV).second)
-      return false;  // It's already in the stack.
+      return false; // It's already in the stack.
 
-    LLVM_DEBUG(dbgs() << "PUSH: " << *BV.second << " in "
-                      << BV.first->getName() << "\n");
+    LLVM_DEBUG(dbgs() << "PUSH: " << *BV.second << " in " << BV.first->getName()
+                      << "\n");
     BlockValueStack.push_back(BV);
     return true;
   }
@@ -453,9 +451,7 @@ class LazyValueInfoImpl {
                                      Instruction *CxtI = nullptr);
 
   /// Complete flush all previously computed values
-  void clear() {
-    TheCache.clear();
-  }
+  void clear() { TheCache.clear(); }
 
   /// Printing the LazyValueInfo Analysis.
   void printLVI(Function &F, DominatorTree &DTree, raw_ostream &OS) {
@@ -469,13 +465,11 @@ class LazyValueInfoImpl {
 
   /// This is part of the update interface to inform the cache
   /// that a block has been deleted.
-  void eraseBlock(BasicBlock *BB) {
-    TheCache.eraseBlock(BB);
-  }
+  void eraseBlock(BasicBlock *BB) { TheCache.eraseBlock(BB); }
 
   /// This is the update interface to inform the cache that an edge from
   /// PredBB to OldSucc has been threaded to be from PredBB to NewSucc.
-  void threadEdge(BasicBlock *PredBB,BasicBlock *OldSucc,BasicBlock *NewSucc);
+  void threadEdge(BasicBlock *PredBB, BasicBlock *OldSucc, BasicBlock *NewSucc);
 
   LazyValueInfoImpl(AssumptionCache *AC, const DataLayout &DL,
                     Function *GuardDecl)
@@ -522,9 +516,8 @@ void LazyValueInfoImpl::solve() {
       std::optional<ValueLatticeElement> BBLV =
           TheCache.getCachedValueInfo(e.second, e.first);
       assert(BBLV && "Result should be in cache!");
-      LLVM_DEBUG(
-          dbgs() << "POP " << *e.second << " in " << e.first->getName() << " = "
-                 << *BBLV << "\n");
+      LLVM_DEBUG(dbgs() << "POP " << *e.second << " in " << e.first->getName()
+                        << " = " << *BBLV << "\n");
 #endif
 
       BlockValueStack.pop_back();
@@ -550,7 +543,7 @@ LazyValueInfoImpl::getBlockValue(Value *Val, BasicBlock *BB,
   }
 
   // We have hit a cycle, assume overdefined.
-  if (!pushBlockValue({ BB, Val }))
+  if (!pushBlockValue({BB, Val}))
     return ValueLatticeElement::getOverdefined();
 
   // Yet to be resolved.
@@ -559,7 +552,8 @@ LazyValueInfoImpl::getBlockValue(Value *Val, BasicBlock *BB,
 
 static ValueLatticeElement getFromRangeMetadata(Instruction *BBI) {
   switch (BBI->getOpcode()) {
-  default: break;
+  default:
+    break;
   case Instruction::Load:
   case Instruction::Call:
   case Instruction::Invoke:
@@ -640,18 +634,20 @@ static void AddNonNullPointer(Value *Ptr, NonNullPointerSet &PtrSet) {
     PtrSet.insert(getUnderlyingObject(Ptr));
 }
 
-static void AddNonNullPointersByInstruction(
-    Instruction *I, NonNullPointerSet &PtrSet) {
+static void AddNonNullPointersByInstruction(Instruction *I,
+                                            NonNullPointerSet &PtrSet) {
   if (LoadInst *L = dyn_cast<LoadInst>(I)) {
     AddNonNullPointer(L->getPointerOperand(), PtrSet);
   } else if (StoreInst *S = dyn_cast<StoreInst>(I)) {
     AddNonNullPointer(S->getPointerOperand(), PtrSet);
   } else if (MemIntrinsic *MI = dyn_cast<MemIntrinsic>(I)) {
-    if (MI->isVolatile()) return;
+    if (MI->isVolatile())
+      return;
 
     // FIXME: check whether it has a valuerange that excludes zero?
     ConstantInt *Len = dyn_cast<ConstantInt>(MI->getLength());
-    if (!Len || Len->isZero()) return;
+    if (!Len || Len->isZero())
+      return;
 
     AddNonNullPointer(MI->getRawDest(), PtrSet);
     if (MemTransferInst *MTI = dyn_cast<MemTransferInst>(MI))
@@ -675,7 +671,7 @@ bool LazyValueInfoImpl::isNonNullAtEndOfBlock(Value *Val, BasicBlock *BB) {
 
 std::optional<ValueLatticeElement>
 LazyValueInfoImpl::solveBlockValueNonLocal(Value *Val, BasicBlock *BB) {
-  ValueLatticeElement Result;  // Start Undefined.
+  ValueLatticeElement Result; // Start Undefined.
 
   // If this is the entry block, we must be asking about an argument.  The
   // value is overdefined.
@@ -718,7 +714,7 @@ LazyValueInfoImpl::solveBlockValueNonLocal(Value *Val, BasicBlock *BB) {
 
 std::optional<ValueLatticeElement>
 LazyValueInfoImpl::solveBlockValuePHINode(PHINode *PN, BasicBlock *BB) {
-  ValueLatticeElement Result;  // Start Undefined.
+  ValueLatticeElement Result; // Start Undefined.
 
   // Loop over all of our predecessors, merging what we know from them into
   // result.  See the comment about the chosen traversal order in
@@ -758,7 +754,7 @@ static ValueLatticeElement getValueFromCondition(Value *Val, Value *Cond,
 // If we can determine a constraint on the value given conditions assumed by
 // the program, intersect those constraints with BBLV
 void LazyValueInfoImpl::intersectAssumeOrGuardBlockValueConstantRange(
-        Value *Val, ValueLatticeElement &BBLV, Instruction *BBI) {
+    Value *Val, ValueLatticeElement &BBLV, Instruction *BBI) {
   BBI = BBI ? BBI : dyn_cast<Instruction>(Val);
   if (!BBI)
     return;
@@ -781,8 +777,8 @@ void LazyValueInfoImpl::intersectAssumeOrGuardBlockValueConstantRange(
   // If guards are not used in the module, don't spend time looking for them
   if (GuardDecl && !GuardDecl->use_empty() &&
       BBI->getIterator() != BB->begin()) {
-    for (Instruction &I : make_range(std::next(BBI->getIterator().getReverse()),
-                                     BB->rend())) {
+    for (Instruction &I :
+         make_range(std::next(BBI->getIterator().getReverse()), BB->rend())) {
       Value *Cond = nullptr;
       if (match(&I, m_Intrinsic<Intrinsic::experimental_guard>(m_Value(Cond))))
         BBLV = intersect(BBLV, getValueFromCondition(Val, Cond));
@@ -793,8 +789,7 @@ void LazyValueInfoImpl::intersectAssumeOrGuardBlockValueConstantRange(
     // Check whether we're checking at the terminator, and the pointer has
     // been dereferenced in this block.
     PointerType *PTy = dyn_cast<PointerType>(Val->getType());
-    if (PTy && BB->getTerminator() == BBI &&
-        isNonNullAtEndOfBlock(Val, BB))
+    if (PTy && BB->getTerminator() == BBI && isNonNullAtEndOfBlock(Val, BB))
       BBLV = ValueLatticeElement::getNot(ConstantPointerNull::get(PTy));
   }
 }
@@ -838,13 +833,13 @@ LazyValueInfoImpl::solveBlockValueSelect(SelectInst *SI, BasicBlock *BB) {
         switch (SPR.Flavor) {
         default:
           llvm_unreachable("unexpected minmax type!");
-        case SPF_SMIN:                   /// Signed minimum
+        case SPF_SMIN: /// Signed minimum
           return TrueCR.smin(FalseCR);
-        case SPF_UMIN:                   /// Unsigned minimum
+        case SPF_UMIN: /// Unsigned minimum
           return TrueCR.umin(FalseCR);
-        case SPF_SMAX:                   /// Signed maximum
+        case SPF_SMAX: /// Signed maximum
           return TrueCR.smax(FalseCR);
-        case SPF_UMAX:                   /// Unsigned maximum
+        case SPF_UMAX: /// Unsigned maximum
           return TrueCR.umax(FalseCR);
         };
       }();
@@ -936,8 +931,8 @@ LazyValueInfoImpl::solveBlockValueCast(CastInst *CI, BasicBlock *BB) {
   // NOTE: We're currently limited by the set of operations that ConstantRange
   // can evaluate symbolically.  Enhancing that set will allows us to analyze
   // more definitions.
-  return ValueLatticeElement::getRange(LHSRange.castOp(CI->getOpcode(),
-                                                       ResultBitWidth));
+  return ValueLatticeElement::getRange(
+      LHSRange.castOp(CI->getOpcode(), ResultBitWidth));
 }
 
 std::optional<ValueLatticeElement>
@@ -1024,9 +1019,9 @@ LazyValueInfoImpl::solveBlockValueExtractValue(ExtractValueInst *EVI,
 
   // Handle extractvalue of insertvalue to allow further simplification
   // based on replaced with.overflow intrinsics.
-  if (Value *V = simplifyExtractValueInst(
-          EVI->getAggregateOperand(), EVI->getIndices(),
-          EVI->getModule()->getDataLayout()))
+  if (Value *V = simplifyExtractValueInst(EVI->getAggregateOperand(),
+                                          EVI->getIndices(),
+                                          EVI->getModule()->getDataLayout()))
     return getBlockValue(V, BB, EVI);
 
   LLVM_DEBUG(dbgs() << " compute BB '" << BB->getName()
@@ -1068,8 +1063,9 @@ static bool matchICmpOperand(APInt &Offset, Value *LHS, Value *Val,
 }
 
 /// Get value range for a "(Val + Offset) Pred RHS" condition.
-static ValueLatticeElement getValueFromSimpleICmpCondition(
-    CmpInst::Predicate Pred, Value *RHS, const APInt &Offset) {
+static ValueLatticeElement
+getValueFromSimpleICmpCondition(CmpInst::Predicate Pred, Value *RHS,
+                                const APInt &Offset) {
   ConstantRange RHSRange(RHS->getType()->getIntegerBitWidth(),
                          /*isFullSet=*/true);
   if (ConstantInt *CI = dyn_cast<ConstantInt>(RHS))
@@ -1153,8 +1149,9 @@ static ValueLatticeElement getValueFromICmpCondition(Value *Val, ICmpInst *ICI,
 
 // Handle conditions of the form
 // extractvalue(op.with.overflow(%x, C), 1).
-static ValueLatticeElement getValueFromOverflowCondition(
-    Value *Val, WithOverflowInst *WO, bool IsTrueDest) {
+static ValueLatticeElement getValueFromOverflowCondition(Value *Val,
+                                                         WithOverflowInst *WO,
+                                                         bool IsTrueDest) {
   // TODO: This only works with a constant RHS for now. We could also compute
   // the range of the RHS, but this doesn't fit into the current structure of
   // the edge value calculation.
@@ -1294,20 +1291,18 @@ static ValueLatticeElement constantFoldUser(User *Usr, Value *Op,
                                             const APInt &OpConstVal,
                                             const DataLayout &DL) {
   assert(isOperationFoldable(Usr) && "Precondition");
-  Constant* OpConst = Constant::getIntegerValue(Op->getType(), OpConstVal);
+  Constant *OpConst = Constant::getIntegerValue(Op->getType(), OpConstVal);
   // Check if Usr can be simplified to a constant.
   if (auto *CI = dyn_cast<CastInst>(Usr)) {
     assert(CI->getOperand(0) == Op && "Operand 0 isn't Op");
     if (auto *C = dyn_cast_or_null<ConstantInt>(
-            simplifyCastInst(CI->getOpcode(), OpConst,
-                             CI->getDestTy(), DL))) {
+            simplifyCastInst(CI->getOpcode(), OpConst, CI->getDestTy(), DL))) {
       return ValueLatticeElement::getRange(ConstantRange(C->getValue()));
     }
   } else if (auto *BO = dyn_cast<BinaryOperator>(Usr)) {
     bool Op0Match = BO->getOperand(0) == Op;
     bool Op1Match = BO->getOperand(1) == Op;
-    assert((Op0Match || Op1Match) &&
-           "Operand 0 nor Operand 1 isn't a match");
+    assert((Op0Match || Op1Match) && "Operand 0 nor Operand 1 isn't a match");
     Value *LHS = Op0Match ? OpConst : BO->getOperand(0);
     Value *RHS = Op1Match ? OpConst : BO->getOperand(1);
     if (auto *C = dyn_cast_or_null<ConstantInt>(
@@ -1324,16 +1319,14 @@ static ValueLatticeElement constantFoldUser(User *Usr, Value *Op,
 /// Compute the value of Val on the edge BBFrom -> BBTo. Returns false if
 /// Val is not constrained on the edge.  Result is unspecified if return value
 /// is false.
-static std::optional<ValueLatticeElement> getEdgeValueLocal(Value *Val,
-                                                       BasicBlock *BBFrom,
-                                                       BasicBlock *BBTo) {
+static std::optional<ValueLatticeElement>
+getEdgeValueLocal(Value *Val, BasicBlock *BBFrom, BasicBlock *BBTo) {
   // TODO: Handle more complex conditionals. If (v == 0 || v2 < 1) is false, we
   // know that v != 0.
   if (BranchInst *BI = dyn_cast<BranchInst>(BBFrom->getTerminator())) {
     // If this is a conditional branch and only one successor goes to BBTo, then
     // we may be able to infer something from the condition.
-    if (BI->isConditional() &&
-        BI->getSuccessor(0) != BI->getSuccessor(1)) {
+    if (BI->isConditional() && BI->getSuccessor(0) != BI->getSuccessor(1)) {
       bool isTrueDest = BI->getSuccessor(0) == BBTo;
       assert(BI->getSuccessor(!isTrueDest) == BBTo &&
              "BBTo isn't a successor of BBFrom");
@@ -1342,13 +1335,13 @@ static std::optional<ValueLatticeElement> getEdgeValueLocal(Value *Val,
       // If V is the condition of the branch itself, then we know exactly what
       // it is.
       if (Condition == Val)
-        return ValueLatticeElement::get(ConstantInt::get(
-                              Type::getInt1Ty(Val->getContext()), isTrueDest));
+        return ValueLatticeElement::get(
+            ConstantInt::get(Type::getInt1Ty(Val->getContext()), isTrueDest));
 
       // If the condition of the branch is an equality comparison, we may be
       // able to infer the value.
-      ValueLatticeElement Result = getValueFromCondition(Val, Condition,
-                                                         isTrueDest);
+      ValueLatticeElement Result =
+          getValueFromCondition(Val, Condition, isTrueDest);
       if (!Result.isOverdefined())
         return Result;
 
@@ -1405,8 +1398,8 @@ static std::optional<ValueLatticeElement> getEdgeValueLocal(Value *Val,
     if (Condition != Val) {
       // Check if Val has Condition as an operand.
       if (User *Usr = dyn_cast<User>(Val))
-        ValUsesConditionAndMayBeFoldable = isOperationFoldable(Usr) &&
-            usesOperand(Usr, Condition);
+        ValUsesConditionAndMayBeFoldable =
+            isOperationFoldable(Usr) && usesOperand(Usr, Condition);
       if (!ValUsesConditionAndMayBeFoldable)
         return std::nullopt;
     }
@@ -1415,7 +1408,7 @@ static std::optional<ValueLatticeElement> getEdgeValueLocal(Value *Val,
 
     bool DefaultCase = SI->getDefaultDest() == BBTo;
     unsigned BitWidth = Val->getType()->getIntegerBitWidth();
-    ConstantRange EdgesVals(BitWidth, DefaultCase/*isFullSet*/);
+    ConstantRange EdgesVals(BitWidth, DefaultCase /*isFullSet*/);
 
     for (auto Case : SI->cases()) {
       APInt CaseValue = Case.getCaseValue()->getValue();
@@ -1515,9 +1508,10 @@ ValueLatticeElement LazyValueInfoImpl::getValueAt(Value *V, Instruction *CxtI) {
   return Result;
 }
 
-ValueLatticeElement LazyValueInfoImpl::
-getValueOnEdge(Value *V, BasicBlock *FromBB, BasicBlock *ToBB,
-               Instruction *CxtI) {
+ValueLatticeElement LazyValueInfoImpl::getValueOnEdge(Value *V,
+                                                      BasicBlock *FromBB,
+                                                      BasicBlock *ToBB,
+                                                      Instruction *CxtI) {
   LLVM_DEBUG(dbgs() << "LVI Getting edge value " << *V << " from '"
                     << FromBB->getName() << "' to '" << ToBB->getName()
                     << "'\n");
@@ -1762,7 +1756,8 @@ getPredicateResult(unsigned Pred, Constant *C, const ValueLatticeElement &Val,
 
   if (Val.isConstantRange()) {
     ConstantInt *CI = dyn_cast<ConstantInt>(C);
-    if (!CI) return LazyValueInfo::Unknown;
+    if (!CI)
+      return LazyValueInfo::Unknown;
 
     const ConstantRange &CR = Val.getConstantRange();
     if (Pred == ICmpInst::ICMP_EQ) {
@@ -1795,15 +1790,13 @@ getPredicateResult(unsigned Pred, Constant *C, const ValueLatticeElement &Val,
     if (Pred == ICmpInst::ICMP_EQ) {
       // !C1 == C -> false iff C1 == C.
       Res = ConstantFoldCompareInstOperands(ICmpInst::ICMP_NE,
-                                            Val.getNotConstant(), C, DL,
-                                            TLI);
+                                            Val.getNotConstant(), C, DL, TLI);
       if (Res && Res->isNullValue())
         return LazyValueInfo::False;
     } else if (Pred == ICmpInst::ICMP_NE) {
       // !C1 != C -> true iff C1 == C.
       Res = ConstantFoldCompareInstOperands(ICmpInst::ICMP_NE,
-                                            Val.getNotConstant(), C, DL,
-                                            TLI);
+                                            Val.getNotConstant(), C, DL, TLI);
       if (Res && Res->isNullValue())
         return LazyValueInfo::True;
     }
@@ -1815,10 +1808,11 @@ getPredicateResult(unsigned Pred, Constant *C, const ValueLatticeElement &Val,
 
 /// Determine whether the specified value comparison with a constant is known to
 /// be true or false on the specified CFG edge. Pred is a CmpInst predicate.
-LazyValueInfo::Tristate
-LazyValueInfo::getPredicateOnEdge(unsigned Pred, Value *V, Constant *C,
-                                  BasicBlock *FromBB, BasicBlock *ToBB,
-                                  Instruction *CxtI) {
+LazyValueInfo::Tristate LazyValueInfo::getPredicateOnEdge(unsigned Pred,
+                                                          Value *V, Constant *C,
+                                                          BasicBlock *FromBB,
+                                                          BasicBlock *ToBB,
+                                                          Instruction *CxtI) {
   Module *M = FromBB->getModule();
   ValueLatticeElement Result =
       getOrCreateImpl(M).getValueOnEdge(V, FromBB, ToBB, CxtI);
@@ -1826,9 +1820,10 @@ LazyValueInfo::getPredicateOnEdge(unsigned Pred, Value *V, Constant *C,
   return getPredicateResult(Pred, C, Result, M->getDataLayout(), TLI);
 }
 
-LazyValueInfo::Tristate
-LazyValueInfo::getPredicateAt(unsigned Pred, Value *V, Constant *C,
-                              Instruction *CxtI, bool UseBlockValue) {
+LazyValueInfo::Tristate LazyValueInfo::getPredicateAt(unsigned Pred, Value *V,
+                                                      Constant *C,
+                                                      Instruction *CxtI,
+                                                      bool UseBlockValue) {
   // Is or is not NonNull are common predicates being queried. If
   // isKnownNonZero can tell us the result of the predicate, we can
   // return it quickly. But this is only a fastpath, and falling
@@ -1957,8 +1952,8 @@ LazyValueInfo::Tristate LazyValueInfo::getPredicateAt(unsigned P, Value *LHS,
     ValueLatticeElement R =
         getOrCreateImpl(M).getValueInBlock(RHS, CxtI->getParent(), CxtI);
     Type *Ty = CmpInst::makeCmpResultType(LHS->getType());
-    if (Constant *Res = L.getCompare((CmpInst::Predicate)P, Ty, R,
-                                     M->getDataLayout())) {
+    if (Constant *Res =
+            L.getCompare((CmpInst::Predicate)P, Ty, R, M->getDataLayout())) {
       if (Res->isNullValue())
         return LazyValueInfo::False;
       if (Res->isOneValue())
@@ -1989,7 +1984,8 @@ void LazyValueInfo::clear() {
     getImpl()->clear();
 }
 
-void LazyValueInfo::printLVI(Function &F, DominatorTree &DTree, raw_ostream &OS) {
+void LazyValueInfo::printLVI(Function &F, DominatorTree &DTree,
+                             raw_ostream &OS) {
   if (auto *Impl = getImpl())
     getImpl()->printLVI(F, DTree, OS);
 }
@@ -2016,7 +2012,7 @@ void LazyValueInfoAnnotatedWriter::emitInstructionAnnot(
     const Instruction *I, formatted_raw_ostream &OS) {
 
   auto *ParentBB = I->getParent();
-  SmallPtrSet<const BasicBlock*, 16> BlocksContainingLVI;
+  SmallPtrSet<const BasicBlock *, 16> BlocksContainingLVI;
   // We can generate (solve) LVI values only for blocks that are dominated by
   // the I's parent. However, to avoid generating LVI for all dominating blocks,
   // that contain redundant/uninteresting information, we print LVI for
@@ -2027,9 +2023,9 @@ void LazyValueInfoAnnotatedWriter::emitInstructionAnnot(
       return;
     ValueLatticeElement Result = LVIImpl->getValueInBlock(
         const_cast<Instruction *>(I), const_cast<BasicBlock *>(BB));
-      OS << "; LatticeVal for: '" << *I << "' in BB: '";
-      BB->printAsOperand(OS, false);
-      OS << "' is: " << Result << "\n";
+    OS << "; LatticeVal for: '" << *I << "' in BB: '";
+    BB->printAsOperand(OS, false);
+    OS << "' is: " << Result << "\n";
   };
 
   printResult(ParentBB);
@@ -2044,7 +2040,6 @@ void LazyValueInfoAnnotatedWriter::emitInstructionAnnot(
     if (auto *UseI = dyn_cast<Instruction>(U))
       if (!isa<PHINode>(UseI) || DT.dominates(ParentBB, UseI->getParent()))
         printResult(UseI->getParent());
-
 }
 
 namespace {
@@ -2072,11 +2067,11 @@ class LazyValueInfoPrinter : public FunctionPass {
     return false;
   }
 };
-}
+} // namespace
 
 char LazyValueInfoPrinter::ID = 0;
 INITIALIZE_PASS_BEGIN(LazyValueInfoPrinter, "print-lazy-value-info",
-                "Lazy Value Info Printer Pass", false, false)
+                      "Lazy Value Info Printer Pass", false, false)
 INITIALIZE_PASS_DEPENDENCY(LazyValueInfoWrapperPass)
 INITIALIZE_PASS_END(LazyValueInfoPrinter, "print-lazy-value-info",
-                "Lazy Value Info Printer Pass", false, false)
+                    "Lazy Value Info Printer Pass", false, false)
diff --git a/llvm/lib/Analysis/Lint.cpp b/llvm/lib/Analysis/Lint.cpp
index 1ebc593016bc0d4..b4d58da5b2fcd7b 100644
--- a/llvm/lib/Analysis/Lint.cpp
+++ b/llvm/lib/Analysis/Lint.cpp
@@ -157,7 +157,7 @@ class Lint : public InstVisitor<Lint> {
   /// This calls the Message-only version so that the above is easier to set
   /// a breakpoint on.
   template <typename T1, typename... Ts>
-  void CheckFailed(const Twine &Message, const T1 &V1, const Ts &... Vs) {
+  void CheckFailed(const Twine &Message, const T1 &V1, const Ts &...Vs) {
     CheckFailed(Message);
     WriteValues({V1, Vs...});
   }
@@ -251,8 +251,8 @@ void Lint::visitCallBase(CallBase &I) {
         // Check that an sret argument points to valid memory.
         if (Formal->hasStructRetAttr() && Actual->getType()->isPointerTy()) {
           Type *Ty = Formal->getParamStructRetType();
-          MemoryLocation Loc(
-              Actual, LocationSize::precise(DL->getTypeStoreSize(Ty)));
+          MemoryLocation Loc(Actual,
+                             LocationSize::precise(DL->getTypeStoreSize(Ty)));
           visitMemoryReference(I, Loc, DL->getABITypeAlign(Ty), Ty,
                                MemRef::Read | MemRef::Write);
         }
diff --git a/llvm/lib/Analysis/Loads.cpp b/llvm/lib/Analysis/Loads.cpp
index 97d21db86abf28a..46dc07e3e0517a1 100644
--- a/llvm/lib/Analysis/Loads.cpp
+++ b/llvm/lib/Analysis/Loads.cpp
@@ -78,9 +78,9 @@ static bool isDereferenceableAndAlignedPointer(
   // bitcast instructions are no-ops as far as dereferenceability is concerned.
   if (const BitCastOperator *BC = dyn_cast<BitCastOperator>(V)) {
     if (BC->getSrcTy()->isPointerTy())
-      return isDereferenceableAndAlignedPointer(
-        BC->getOperand(0), Alignment, Size, DL, CtxI, AC, DT, TLI,
-          Visited, MaxDepth);
+      return isDereferenceableAndAlignedPointer(BC->getOperand(0), Alignment,
+                                                Size, DL, CtxI, AC, DT, TLI,
+                                                Visited, MaxDepth);
   }
 
   // Recurse into both hands of select.
@@ -94,9 +94,9 @@ static bool isDereferenceableAndAlignedPointer(
   }
 
   bool CheckForNonNull, CheckForFreed;
-  APInt KnownDerefBytes(Size.getBitWidth(),
-                        V->getPointerDereferenceableBytes(DL, CheckForNonNull,
-                                                          CheckForFreed));
+  APInt KnownDerefBytes(
+      Size.getBitWidth(),
+      V->getPointerDereferenceableBytes(DL, CheckForNonNull, CheckForFreed));
   if (KnownDerefBytes.getBoolValue() && KnownDerefBytes.uge(Size) &&
       !CheckForFreed)
     if (!CheckForNonNull || isKnownNonZero(V, DL, 0, AC, CtxI, DT)) {
@@ -110,7 +110,6 @@ static bool isDereferenceableAndAlignedPointer(
   /// TODO refactor this function to be able to search independently for
   /// Dereferencability and Alignment requirements.
 
-
   if (const auto *Call = dyn_cast<CallBase>(V)) {
     if (auto *RP = getArgumentAliasingToReturnedPointer(Call, true))
       return isDereferenceableAndAlignedPointer(RP, Alignment, Size, DL, CtxI,
@@ -281,7 +280,7 @@ bool llvm::isDereferenceableAndAlignedInLoop(LoadInst *LI, Loop *L,
   auto *AddRec = dyn_cast<SCEVAddRecExpr>(SE.getSCEV(Ptr));
   if (!AddRec || AddRec->getLoop() != L || !AddRec->isAffine())
     return false;
-  auto* Step = dyn_cast<SCEVConstant>(AddRec->getStepRecurrence(SE));
+  auto *Step = dyn_cast<SCEVConstant>(AddRec->getStepRecurrence(SE));
   if (!Step)
     return false;
 
@@ -354,7 +353,7 @@ bool llvm::isSafeToLoadUnconditionally(Value *V, Align Alignment, APInt &Size,
                                        const DominatorTree *DT,
                                        const TargetLibraryInfo *TLI) {
   // If DT is not specified we can't make context-sensitive query
-  const Instruction* CtxI = DT ? ScanFrom : nullptr;
+  const Instruction *CtxI = DT ? ScanFrom : nullptr;
   if (isDereferenceableAndAlignedPointer(V, Alignment, Size, DL, CtxI, AC, DT,
                                          TLI))
     return true;
@@ -413,8 +412,7 @@ bool llvm::isSafeToLoadUnconditionally(Value *V, Align Alignment, APInt &Size,
       continue;
 
     // Handle trivial cases.
-    if (AccessedPtr == V &&
-        LoadSize <= DL.getTypeStoreSize(AccessedTy))
+    if (AccessedPtr == V && LoadSize <= DL.getTypeStoreSize(AccessedTy))
       return true;
 
     if (AreEquivalentAddressValues(AccessedPtr->stripPointerCasts(), V) &&
@@ -444,18 +442,16 @@ bool llvm::isSafeToLoadUnconditionally(Value *V, Type *Ty, Align Alignment,
 /// threading in part by eliminating partially redundant loads.
 /// At that point, the value of MaxInstsToScan was already set to '6'
 /// without documented explanation.
-cl::opt<unsigned>
-llvm::DefMaxInstsToScan("available-load-scan-limit", cl::init(6), cl::Hidden,
-  cl::desc("Use this to specify the default maximum number of instructions "
-           "to scan backward from a given instruction, when searching for "
-           "available loaded value"));
-
-Value *llvm::FindAvailableLoadedValue(LoadInst *Load,
-                                      BasicBlock *ScanBB,
+cl::opt<unsigned> llvm::DefMaxInstsToScan(
+    "available-load-scan-limit", cl::init(6), cl::Hidden,
+    cl::desc("Use this to specify the default maximum number of instructions "
+             "to scan backward from a given instruction, when searching for "
+             "available loaded value"));
+
+Value *llvm::FindAvailableLoadedValue(LoadInst *Load, BasicBlock *ScanBB,
                                       BasicBlock::iterator &ScanFrom,
-                                      unsigned MaxInstsToScan,
-                                      AAResults *AA, bool *IsLoad,
-                                      unsigned *NumScanedInst) {
+                                      unsigned MaxInstsToScan, AAResults *AA,
+                                      bool *IsLoad, unsigned *NumScanedInst) {
   // Don't CSE load that is volatile or anything stronger than unordered.
   if (!Load->isUnordered())
     return nullptr;
@@ -483,10 +479,8 @@ static bool areNonOverlapSameBaseLoadAndStore(const Value *LoadPtr,
     return false;
   auto LoadAccessSize = LocationSize::precise(DL.getTypeStoreSize(LoadTy));
   auto StoreAccessSize = LocationSize::precise(DL.getTypeStoreSize(StoreTy));
-  ConstantRange LoadRange(LoadOffset,
-                          LoadOffset + LoadAccessSize.toRaw());
-  ConstantRange StoreRange(StoreOffset,
-                           StoreOffset + StoreAccessSize.toRaw());
+  ConstantRange LoadRange(LoadOffset, LoadOffset + LoadAccessSize.toRaw());
+  ConstantRange StoreRange(StoreOffset, StoreOffset + StoreAccessSize.toRaw());
   return LoadRange.intersectWith(StoreRange).isEmptySet();
 }
 
@@ -680,8 +674,8 @@ Value *llvm::FindAvailableLoadedValue(LoadInst *Load, AAResults &AA,
   // queries until later.
   Value *Available = nullptr;
   SmallVector<Instruction *> MustNotAliasInsts;
-  for (Instruction &Inst : make_range(++Load->getReverseIterator(),
-                                      ScanBB->rend())) {
+  for (Instruction &Inst :
+       make_range(++Load->getReverseIterator(), ScanBB->rend())) {
     if (Inst.isDebugOrPseudoInst())
       continue;
 
diff --git a/llvm/lib/Analysis/LoopAccessAnalysis.cpp b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
index 4dd150492453f72..e1d6b8064090242 100644
--- a/llvm/lib/Analysis/LoopAccessAnalysis.cpp
+++ b/llvm/lib/Analysis/LoopAccessAnalysis.cpp
@@ -71,17 +71,16 @@ using namespace llvm::PatternMatch;
 #define DEBUG_TYPE "loop-accesses"
 
 static cl::opt<unsigned, true>
-VectorizationFactor("force-vector-width", cl::Hidden,
-                    cl::desc("Sets the SIMD width. Zero is autoselect."),
-                    cl::location(VectorizerParams::VectorizationFactor));
+    VectorizationFactor("force-vector-width", cl::Hidden,
+                        cl::desc("Sets the SIMD width. Zero is autoselect."),
+                        cl::location(VectorizerParams::VectorizationFactor));
 unsigned VectorizerParams::VectorizationFactor;
 
-static cl::opt<unsigned, true>
-VectorizationInterleave("force-vector-interleave", cl::Hidden,
-                        cl::desc("Sets the vectorization interleave count. "
-                                 "Zero is autoselect."),
-                        cl::location(
-                            VectorizerParams::VectorizationInterleave));
+static cl::opt<unsigned, true> VectorizationInterleave(
+    "force-vector-interleave", cl::Hidden,
+    cl::desc("Sets the vectorization interleave count. "
+             "Zero is autoselect."),
+    cl::location(VectorizerParams::VectorizationInterleave));
 unsigned VectorizerParams::VectorizationInterleave;
 
 static cl::opt<unsigned, true> RuntimeMemoryCheckThreshold(
@@ -151,9 +150,9 @@ bool VectorizerParams::isInterleaveForced() {
   return ::VectorizationInterleave.getNumOccurrences() > 0;
 }
 
-const SCEV *llvm::replaceSymbolicStrideSCEV(PredicatedScalarEvolution &PSE,
-                                            const DenseMap<Value *, const SCEV *> &PtrToStride,
-                                            Value *Ptr) {
+const SCEV *llvm::replaceSymbolicStrideSCEV(
+    PredicatedScalarEvolution &PSE,
+    const DenseMap<Value *, const SCEV *> &PtrToStride, Value *Ptr) {
   const SCEV *OrigSCEV = PSE.getSCEV(Ptr);
 
   // If there is an entry in the map return the SCEV of the pointer with the
@@ -175,8 +174,8 @@ const SCEV *llvm::replaceSymbolicStrideSCEV(PredicatedScalarEvolution &PSE,
   PSE.addPredicate(*SE->getEqualPredicate(StrideSCEV, CT));
   auto *Expr = PSE.getSCEV(Ptr);
 
-  LLVM_DEBUG(dbgs() << "LAA: Replacing SCEV: " << *OrigSCEV
-	     << " by: " << *Expr << "\n");
+  LLVM_DEBUG(dbgs() << "LAA: Replacing SCEV: " << *OrigSCEV << " by: " << *Expr
+                    << "\n");
   return Expr;
 }
 
@@ -238,7 +237,7 @@ void RuntimePointerChecking::insert(Loop *Lp, Value *Ptr, const SCEV *PtrExpr,
     }
   }
   assert(SE->isLoopInvariant(ScStart, Lp) && "ScStart needs to be invariant");
-  assert(SE->isLoopInvariant(ScEnd, Lp)&& "ScEnd needs to be invariant");
+  assert(SE->isLoopInvariant(ScEnd, Lp) && "ScEnd needs to be invariant");
 
   // Add the size of the pointed element to ScEnd.
   auto &DL = Lp->getHeader()->getModule()->getDataLayout();
@@ -661,7 +660,7 @@ class AccessAnalysis {
 
   /// Register a load  and whether it is only read from.
   void addLoad(MemoryLocation &Loc, Type *AccessTy, bool IsReadOnly) {
-    Value *Ptr = const_cast<Value*>(Loc.Ptr);
+    Value *Ptr = const_cast<Value *>(Loc.Ptr);
     AST.add(Ptr, LocationSize::beforeOrAfterPointer(), Loc.AATags);
     Accesses[MemAccessInfo(Ptr, false)].insert(AccessTy);
     if (IsReadOnly)
@@ -670,7 +669,7 @@ class AccessAnalysis {
 
   /// Register a store.
   void addStore(MemoryLocation &Loc, Type *AccessTy) {
-    Value *Ptr = const_cast<Value*>(Loc.Ptr);
+    Value *Ptr = const_cast<Value *>(Loc.Ptr);
     AST.add(Ptr, LocationSize::beforeOrAfterPointer(), Loc.AATags);
     Accesses[MemAccessInfo(Ptr, true)].insert(AccessTy);
   }
@@ -695,14 +694,13 @@ class AccessAnalysis {
   /// Returns true if we need no check or if we do and we can generate them
   /// (i.e. the pointers have computable bounds).
   bool canCheckPtrAtRT(RuntimePointerChecking &RtCheck, ScalarEvolution *SE,
-                       Loop *TheLoop, const DenseMap<Value *, const SCEV *> &Strides,
+                       Loop *TheLoop,
+                       const DenseMap<Value *, const SCEV *> &Strides,
                        Value *&UncomputablePtr, bool ShouldCheckWrap = false);
 
   /// Goes over all memory accesses, checks whether a RT check is needed
   /// and builds sets of dependent accesses.
-  void buildDependenceSets() {
-    processMemAccesses();
-  }
+  void buildDependenceSets() { processMemAccesses(); }
 
   /// Initial processing of memory accesses determined that we need to
   /// perform dependency checking.
@@ -737,13 +735,13 @@ class AccessAnalysis {
   MemAccessInfoList CheckDeps;
 
   /// Set of pointers that are read only.
-  SmallPtrSet<Value*, 16> ReadOnlyPtr;
+  SmallPtrSet<Value *, 16> ReadOnlyPtr;
 
   /// Batched alias analysis results.
   BatchAAResults BAA;
 
   /// An alias set tracker to partition the access set by underlying object and
-  //intrinsic property (such as TBAA metadata).
+  // intrinsic property (such as TBAA metadata).
   AliasSetTracker AST;
 
   LoopInfo *LI;
@@ -790,8 +788,8 @@ static bool hasComputableBounds(PredicatedScalarEvolution &PSE, Value *Ptr,
 
 /// Check whether a pointer address cannot wrap.
 static bool isNoWrap(PredicatedScalarEvolution &PSE,
-                     const DenseMap<Value *, const SCEV *> &Strides, Value *Ptr, Type *AccessTy,
-                     Loop *L) {
+                     const DenseMap<Value *, const SCEV *> &Strides, Value *Ptr,
+                     Type *AccessTy, Loop *L) {
   const SCEV *PtrScev = PSE.getSCEV(Ptr);
   if (PSE.getSE()->isLoopInvariant(PtrScev, L))
     return true;
@@ -1006,13 +1004,11 @@ findForkedPointer(PredicatedScalarEvolution &PSE,
   return {{replaceSymbolicStrideSCEV(PSE, StridesMap, Ptr), false}};
 }
 
-bool AccessAnalysis::createCheckForAccess(RuntimePointerChecking &RtCheck,
-                                          MemAccessInfo Access, Type *AccessTy,
-                                          const DenseMap<Value *, const SCEV *> &StridesMap,
-                                          DenseMap<Value *, unsigned> &DepSetId,
-                                          Loop *TheLoop, unsigned &RunningDepId,
-                                          unsigned ASId, bool ShouldCheckWrap,
-                                          bool Assume) {
+bool AccessAnalysis::createCheckForAccess(
+    RuntimePointerChecking &RtCheck, MemAccessInfo Access, Type *AccessTy,
+    const DenseMap<Value *, const SCEV *> &StridesMap,
+    DenseMap<Value *, unsigned> &DepSetId, Loop *TheLoop,
+    unsigned &RunningDepId, unsigned ASId, bool ShouldCheckWrap, bool Assume) {
   Value *Ptr = Access.getPointer();
 
   SmallVector<PointerIntPair<const SCEV *, 1, bool>> TranslatedPtrs =
@@ -1067,16 +1063,17 @@ bool AccessAnalysis::createCheckForAccess(RuntimePointerChecking &RtCheck,
   return true;
 }
 
-bool AccessAnalysis::canCheckPtrAtRT(RuntimePointerChecking &RtCheck,
-                                     ScalarEvolution *SE, Loop *TheLoop,
-                                     const DenseMap<Value *, const SCEV *> &StridesMap,
-                                     Value *&UncomputablePtr, bool ShouldCheckWrap) {
+bool AccessAnalysis::canCheckPtrAtRT(
+    RuntimePointerChecking &RtCheck, ScalarEvolution *SE, Loop *TheLoop,
+    const DenseMap<Value *, const SCEV *> &StridesMap, Value *&UncomputablePtr,
+    bool ShouldCheckWrap) {
   // Find pointers with computable bounds. We are going to use this information
   // to place a runtime bound check.
   bool CanDoRT = true;
 
   bool MayNeedRTCheck = false;
-  if (!IsRTCheckAnalysisNeeded) return true;
+  if (!IsRTCheckAnalysisNeeded)
+    return true;
 
   bool IsDepCheckNeeded = isDependencyCheckNeeded();
 
@@ -1185,7 +1182,7 @@ bool AccessAnalysis::canCheckPtrAtRT(RuntimePointerChecking &RtCheck,
       // Only need to check pointers between two different dependency sets.
       if (RtCheck.Pointers[i].DependencySetId ==
           RtCheck.Pointers[j].DependencySetId)
-       continue;
+        continue;
       // Only need to check pointers in the same alias set.
       if (RtCheck.Pointers[i].AliasSetId != RtCheck.Pointers[j].AliasSetId)
         continue;
@@ -1251,7 +1248,7 @@ void AccessAnalysis::processMemAccesses() {
     bool SetHasWrite = false;
 
     // Map of pointers to last access encountered.
-    typedef DenseMap<const Value*, MemAccessInfo> UnderlyingObjToAccessMap;
+    typedef DenseMap<const Value *, MemAccessInfo> UnderlyingObjToAccessMap;
     UnderlyingObjToAccessMap ObjToLastAccess;
 
     // Set of access to check after all writes have been processed.
@@ -1396,11 +1393,11 @@ static bool isNoWrapAddRec(Value *Ptr, const SCEVAddRecExpr *AR,
 }
 
 /// Check whether the access through \p Ptr has a constant stride.
-std::optional<int64_t> llvm::getPtrStride(PredicatedScalarEvolution &PSE,
-                                          Type *AccessTy, Value *Ptr,
-                                          const Loop *Lp,
-                                          const DenseMap<Value *, const SCEV *> &StridesMap,
-                                          bool Assume, bool ShouldCheckWrap) {
+std::optional<int64_t>
+llvm::getPtrStride(PredicatedScalarEvolution &PSE, Type *AccessTy, Value *Ptr,
+                   const Loop *Lp,
+                   const DenseMap<Value *, const SCEV *> &StridesMap,
+                   bool Assume, bool ShouldCheckWrap) {
   Type *Ty = Ptr->getType();
   assert(Ty->isPointerTy() && "Unexpected non-ptr");
 
@@ -1845,7 +1842,7 @@ MemoryDepChecker::Dependence::DepType
 MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
                               const MemAccessInfo &B, unsigned BIdx,
                               const DenseMap<Value *, const SCEV *> &Strides) {
-  assert (AIdx < BIdx && "Must pass arguments in program order");
+  assert(AIdx < BIdx && "Must pass arguments in program order");
 
   auto [APtr, AIsWrite] = A;
   auto [BPtr, BIsWrite] = B;
@@ -1862,9 +1859,9 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
     return Dependence::Unknown;
 
   int64_t StrideAPtr =
-    getPtrStride(PSE, ATy, APtr, InnermostLoop, Strides, true).value_or(0);
+      getPtrStride(PSE, ATy, APtr, InnermostLoop, Strides, true).value_or(0);
   int64_t StrideBPtr =
-    getPtrStride(PSE, BTy, BPtr, InnermostLoop, Strides, true).value_or(0);
+      getPtrStride(PSE, BTy, BPtr, InnermostLoop, Strides, true).value_or(0);
 
   const SCEV *Src = PSE.getSCEV(APtr);
   const SCEV *Sink = PSE.getSCEV(BPtr);
@@ -1891,7 +1888,7 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
   // Need accesses with constant stride. We don't want to vectorize
   // "A[B[i]] += ..." and similar code or pointer arithmetic that could wrap in
   // the address space.
-  if (!StrideAPtr || !StrideBPtr || StrideAPtr != StrideBPtr){
+  if (!StrideAPtr || !StrideBPtr || StrideAPtr != StrideBPtr) {
     LLVM_DEBUG(dbgs() << "Pointer access with non-constant stride\n");
     return Dependence::Unknown;
   }
@@ -1959,10 +1956,12 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
   }
 
   // Bail out early if passed-in parameters make vectorization not feasible.
-  unsigned ForcedFactor = (VectorizerParams::VectorizationFactor ?
-                           VectorizerParams::VectorizationFactor : 1);
-  unsigned ForcedUnroll = (VectorizerParams::VectorizationInterleave ?
-                           VectorizerParams::VectorizationInterleave : 1);
+  unsigned ForcedFactor = (VectorizerParams::VectorizationFactor
+                               ? VectorizerParams::VectorizationFactor
+                               : 1);
+  unsigned ForcedUnroll = (VectorizerParams::VectorizationInterleave
+                               ? VectorizerParams::VectorizationInterleave
+                               : 1);
   // The minimum number of iterations for a vectorized/unrolled version.
   unsigned MinNumIter = std::max(ForcedFactor * ForcedUnroll, 2U);
 
@@ -2024,8 +2023,7 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
   // is 2. Then we analyze the accesses on array A, the minimum distance needed
   // is 8, which is less than 2 and forbidden vectorization, But actually
   // both A and B could be vectorized by 2 iterations.
-  MinDepDistBytes =
-      std::min(static_cast<uint64_t>(Distance), MinDepDistBytes);
+  MinDepDistBytes = std::min(static_cast<uint64_t>(Distance), MinDepDistBytes);
 
   bool IsTrueDataDependence = (!AIsWrite && BIsWrite);
   uint64_t MinDepDistBytesOld = MinDepDistBytes;
@@ -2050,9 +2048,9 @@ MemoryDepChecker::isDependent(const MemAccessInfo &A, unsigned AIdx,
   return Dependence::BackwardVectorizable;
 }
 
-bool MemoryDepChecker::areDepsSafe(DepCandidates &AccessSets,
-                                   MemAccessInfoList &CheckDeps,
-                                   const DenseMap<Value *, const SCEV *> &Strides) {
+bool MemoryDepChecker::areDepsSafe(
+    DepCandidates &AccessSets, MemAccessInfoList &CheckDeps,
+    const DenseMap<Value *, const SCEV *> &Strides) {
 
   MinDepDistBytes = -1;
   SmallPtrSet<MemAccessInfo, 8> Visited;
@@ -2062,7 +2060,7 @@ bool MemoryDepChecker::areDepsSafe(DepCandidates &AccessSets,
 
     // Get the relevant memory access set.
     EquivalenceClasses<MemAccessInfo>::iterator I =
-      AccessSets.findValue(AccessSets.getLeaderValue(CurAccess));
+        AccessSets.findValue(AccessSets.getLeaderValue(CurAccess));
 
     // Check accesses within this set.
     EquivalenceClasses<MemAccessInfo>::member_iterator AI =
@@ -2081,7 +2079,8 @@ bool MemoryDepChecker::areDepsSafe(DepCandidates &AccessSets,
       while (OI != AE) {
         // Check every accessing instruction pair in program order.
         for (std::vector<unsigned>::iterator I1 = Accesses[*AI].begin(),
-             I1E = Accesses[*AI].end(); I1 != I1E; ++I1)
+                                             I1E = Accesses[*AI].end();
+             I1 != I1E; ++I1)
           // Scan all accesses of another equivalence class, but only the next
           // accesses of the same equivalent class.
           for (std::vector<unsigned>::iterator
@@ -2133,15 +2132,19 @@ MemoryDepChecker::getInstructionsForAccess(Value *Ptr, bool isWrite) const {
   auto &IndexVector = Accesses.find(Access)->second;
 
   SmallVector<Instruction *, 4> Insts;
-  transform(IndexVector,
-                 std::back_inserter(Insts),
-                 [&](unsigned Idx) { return this->InstMap[Idx]; });
+  transform(IndexVector, std::back_inserter(Insts),
+            [&](unsigned Idx) { return this->InstMap[Idx]; });
   return Insts;
 }
 
 const char *MemoryDepChecker::Dependence::DepName[] = {
-    "NoDep", "Unknown", "Forward", "ForwardButPreventsForwarding", "Backward",
-    "BackwardVectorizable", "BackwardVectorizableButPreventsForwarding"};
+    "NoDep",
+    "Unknown",
+    "Forward",
+    "ForwardButPreventsForwarding",
+    "Backward",
+    "BackwardVectorizable",
+    "BackwardVectorizableButPreventsForwarding"};
 
 void MemoryDepChecker::Dependence::print(
     raw_ostream &OS, unsigned Depth,
@@ -2254,7 +2257,7 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
         auto *Ld = dyn_cast<LoadInst>(&I);
         if (!Ld) {
           recordAnalysis("CantVectorizeInstruction", Ld)
-            << "instruction cannot be vectorized";
+              << "instruction cannot be vectorized";
           HasComplexMemInst = true;
           continue;
         }
@@ -2296,7 +2299,7 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
           collectStridedAccess(St);
       }
     } // Next instr.
-  } // Next block.
+  }   // Next block.
 
   if (HasComplexMemInst) {
     CanVecMem = false;
@@ -2380,13 +2383,14 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
     bool IsReadOnlyPtr = false;
     Type *AccessTy = getLoadStoreType(LD);
     if (Seen.insert({Ptr, AccessTy}).second ||
-        !getPtrStride(*PSE, LD->getType(), Ptr, TheLoop, SymbolicStrides).value_or(0)) {
+        !getPtrStride(*PSE, LD->getType(), Ptr, TheLoop, SymbolicStrides)
+             .value_or(0)) {
       ++NumReads;
       IsReadOnlyPtr = true;
     }
 
-    // See if there is an unsafe dependency between a load to a uniform address and
-    // store to the same uniform address.
+    // See if there is an unsafe dependency between a load to a uniform address
+    // and store to the same uniform address.
     if (UniformStores.count(Ptr)) {
       LLVM_DEBUG(dbgs() << "LAA: Found an unsafe dependency between a uniform "
                            "load and uniform store to the same address!\n");
@@ -2427,7 +2431,7 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
                                SymbolicStrides, UncomputablePtr, false);
   if (!CanDoRTIfNeeded) {
     auto *I = dyn_cast_or_null<Instruction>(UncomputablePtr);
-    recordAnalysis("CantIdentifyArrayBounds", I) 
+    recordAnalysis("CantIdentifyArrayBounds", I)
         << "cannot identify array bounds";
     LLVM_DEBUG(dbgs() << "LAA: We can't vectorize because we can't find "
                       << "the array bounds.\n");
@@ -2436,7 +2440,8 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
   }
 
   LLVM_DEBUG(
-    dbgs() << "LAA: May be able to perform a memory runtime check if needed.\n");
+      dbgs()
+      << "LAA: May be able to perform a memory runtime check if needed.\n");
 
   CanVecMem = true;
   if (Accesses.isDependencyCheckNeeded()) {
@@ -2474,7 +2479,7 @@ void LoopAccessInfo::analyzeLoop(AAResults *AA, LoopInfo *LI,
 
   if (HasConvergentOp) {
     recordAnalysis("CantInsertRuntimeCheckWithConvergent")
-      << "cannot add control dependency to convergent operation";
+        << "cannot add control dependency to convergent operation";
     LLVM_DEBUG(dbgs() << "LAA: We can't vectorize because a runtime check "
                          "would be needed with a convergent operation\n");
     CanVecMem = false;
@@ -2544,11 +2549,11 @@ void LoopAccessInfo::emitUnsafeDependenceRemark() {
 }
 
 bool LoopAccessInfo::blockNeedsPredication(BasicBlock *BB, Loop *TheLoop,
-                                           DominatorTree *DT)  {
+                                           DominatorTree *DT) {
   assert(TheLoop->contains(BB) && "Unknown block used");
 
   // Blocks that do not dominate the latch need predication.
-  BasicBlock* Latch = TheLoop->getLoopLatch();
+  BasicBlock *Latch = TheLoop->getLoopLatch();
   return !DT->dominates(BB, Latch);
 }
 
@@ -2567,8 +2572,8 @@ OptimizationRemarkAnalysis &LoopAccessInfo::recordAnalysis(StringRef RemarkName,
       DL = I->getDebugLoc();
   }
 
-  Report = std::make_unique<OptimizationRemarkAnalysis>(DEBUG_TYPE, RemarkName, DL,
-                                                   CodeRegion);
+  Report = std::make_unique<OptimizationRemarkAnalysis>(DEBUG_TYPE, RemarkName,
+                                                        DL, CodeRegion);
   return *Report;
 }
 
@@ -2642,7 +2647,8 @@ static Value *getUniqueCastUse(Value *Ptr, Loop *Lp, Type *Ty) {
 
 /// Get the stride of a pointer access in a loop. Looks for symbolic
 /// strides "a[i*stride]". Returns the symbolic stride, or null otherwise.
-static const SCEV *getStrideFromPointer(Value *Ptr, ScalarEvolution *SE, Loop *Lp) {
+static const SCEV *getStrideFromPointer(Value *Ptr, ScalarEvolution *SE,
+                                        Loop *Lp) {
   auto *PtrTy = dyn_cast<PointerType>(Ptr->getType());
   if (!PtrTy || PtrTy->isAggregateType())
     return nullptr;
@@ -2771,7 +2777,8 @@ void LoopAccessInfo::collectStridedAccess(Value *MemAccess) {
     CastedStride = SE->getNoopOrSignExtend(StrideExpr, BETakenCount->getType());
   else
     CastedBECount = SE->getZeroExtendExpr(BETakenCount, StrideExpr->getType());
-  const SCEV *StrideMinusBETaken = SE->getMinusSCEV(CastedStride, CastedBECount);
+  const SCEV *StrideMinusBETaken =
+      SE->getMinusSCEV(CastedStride, CastedBECount);
   // Since TripCount == BackEdgeTakenCount + 1, checking:
   // "Stride >= TripCount" is equivalent to checking:
   // Stride - BETakenCount > 0
diff --git a/llvm/lib/Analysis/LoopAnalysisManager.cpp b/llvm/lib/Analysis/LoopAnalysisManager.cpp
index 74b1da86eb28d0e..74c318ee5b975b3 100644
--- a/llvm/lib/Analysis/LoopAnalysisManager.cpp
+++ b/llvm/lib/Analysis/LoopAnalysisManager.cpp
@@ -133,7 +133,7 @@ LoopAnalysisManagerFunctionProxy::run(Function &F,
                                       FunctionAnalysisManager &AM) {
   return Result(*InnerAM, AM.getResult<LoopAnalysis>(F));
 }
-}
+} // namespace llvm
 
 PreservedAnalyses llvm::getLoopPassPreservedAnalyses() {
   PreservedAnalyses PA;
diff --git a/llvm/lib/Analysis/LoopCacheAnalysis.cpp b/llvm/lib/Analysis/LoopCacheAnalysis.cpp
index c3a56639b5c8f81..685276cc641fd0c 100644
--- a/llvm/lib/Analysis/LoopCacheAnalysis.cpp
+++ b/llvm/lib/Analysis/LoopCacheAnalysis.cpp
@@ -116,7 +116,7 @@ static const SCEV *computeTripCount(const Loop &L, const SCEV &ElemSize,
 
   if (!TripCount) {
     LLVM_DEBUG(dbgs() << "Trip count of loop " << L.getName()
-               << " could not be computed, using DefaultTripCount\n");
+                      << " could not be computed, using DefaultTripCount\n");
     TripCount = SE.getConstant(ElemSize.getType(), DefaultTripCount);
   }
 
@@ -152,8 +152,8 @@ IndexedReference::IndexedReference(Instruction &StoreOrLoadInst,
 
   IsValid = delinearize(LI);
   if (IsValid)
-    LLVM_DEBUG(dbgs().indent(2) << "Succesfully delinearized: " << *this
-                                << "\n");
+    LLVM_DEBUG(dbgs().indent(2)
+               << "Successfully delinearized: " << *this << "\n");
 }
 
 std::optional<bool>
@@ -322,7 +322,8 @@ CacheCostTy IndexedReference::computeRefCost(const Loop &L,
       assert(AR && AR->getLoop() && "Expecting valid loop");
       const SCEV *TripCount =
           computeTripCount(*AR->getLoop(), *Sizes.back(), SE);
-      Type *WiderType = SE.getWiderType(RefCost->getType(), TripCount->getType());
+      Type *WiderType =
+          SE.getWiderType(RefCost->getType(), TripCount->getType());
       RefCost = SE.getMulExpr(SE.getNoopOrZeroExtend(RefCost, WiderType),
                               SE.getNoopOrZeroExtend(TripCount, WiderType));
     }
@@ -357,8 +358,7 @@ bool IndexedReference::tryDelinearizeFixedSize(
 
   LLVM_DEBUG({
     dbgs() << "Delinearized subscripts of fixed-size array\n"
-           << "GEP:" << *getLoadStorePointerOperand(&StoreOrLoadInst)
-           << "\n";
+           << "GEP:" << *getLoadStorePointerOperand(&StoreOrLoadInst) << "\n";
   });
   return true;
 }
@@ -422,13 +422,13 @@ bool IndexedReference::delinearize(const LoopInfo &LI) {
       // In this case, reconstruct the access function using the absolute value
       // of the step recurrence.
       const SCEVAddRecExpr *AccessFnAR = dyn_cast<SCEVAddRecExpr>(AccessFn);
-      const SCEV *StepRec = AccessFnAR ? AccessFnAR->getStepRecurrence(SE) : nullptr;
+      const SCEV *StepRec =
+          AccessFnAR ? AccessFnAR->getStepRecurrence(SE) : nullptr;
 
       if (StepRec && SE.isKnownNegative(StepRec))
-        AccessFn = SE.getAddRecExpr(AccessFnAR->getStart(),
-                                    SE.getNegativeSCEV(StepRec),
-                                    AccessFnAR->getLoop(),
-                                    AccessFnAR->getNoWrapFlags());
+        AccessFn = SE.getAddRecExpr(
+            AccessFnAR->getStart(), SE.getNegativeSCEV(StepRec),
+            AccessFnAR->getLoop(), AccessFnAR->getNoWrapFlags());
       const SCEV *Div = SE.getUDivExactExpr(AccessFn, ElemSize);
       Subscripts.push_back(Div);
       Sizes.push_back(ElemSize);
@@ -588,7 +588,8 @@ CacheCost::getCacheCost(Loop &Root, LoopStandardAnalysisResults &AR,
     return nullptr;
   }
 
-  return std::make_unique<CacheCost>(Loops, AR.LI, AR.SE, AR.TTI, AR.AA, DI, TRT);
+  return std::make_unique<CacheCost>(Loops, AR.LI, AR.SE, AR.TTI, AR.AA, DI,
+                                     TRT);
 }
 
 void CacheCost::calculateCacheFootprint() {
@@ -636,18 +637,17 @@ bool CacheCost::populateReferenceGroups(ReferenceGroupsTy &RefGroups) const {
           dbgs().indent(2) << Representative << "\n";
         });
 
-
-       // FIXME: Both positive and negative access functions will be placed
-       // into the same reference group, resulting in a bi-directional array
-       // access such as:
-       //   for (i = N; i > 0; i--)
-       //     A[i] = A[N - i];
-       // having the same cost calculation as a single dimention access pattern
-       //   for (i = 0; i < N; i++)
-       //     A[i] = A[i];
-       // when in actuality, depending on the array size, the first example
-       // should have a cost closer to 2x the second due to the two cache
-       // access per iteration from opposite ends of the array
+        // FIXME: Both positive and negative access functions will be placed
+        // into the same reference group, resulting in a bi-directional array
+        // access such as:
+        //   for (i = N; i > 0; i--)
+        //     A[i] = A[N - i];
+        // having the same cost calculation as a single dimension access pattern
+        //   for (i = 0; i < N; i++)
+        //     A[i] = A[i];
+        // when in actuality, depending on the array size, the first example
+        // should have a cost closer to 2x the second due to the two cache
+        // access per iteration from opposite ends of the array
         std::optional<bool> HasTemporalReuse =
             R->hasTemporalReuse(Representative, *TRT, *InnerMostLoop, DI, AA);
         std::optional<bool> HasSpacialReuse =
@@ -710,8 +710,8 @@ CacheCost::computeLoopCacheCost(const Loop &L,
     LoopCost += RefGroupCost * TripCountsProduct;
   }
 
-  LLVM_DEBUG(dbgs().indent(2) << "Loop '" << L.getName()
-                              << "' has cost=" << LoopCost << "\n");
+  LLVM_DEBUG(dbgs().indent(2)
+             << "Loop '" << L.getName() << "' has cost=" << LoopCost << "\n");
 
   return LoopCost;
 }
diff --git a/llvm/lib/Analysis/LoopNestAnalysis.cpp b/llvm/lib/Analysis/LoopNestAnalysis.cpp
index fe6d270b9ac53cd..10d498aff9766ce 100644
--- a/llvm/lib/Analysis/LoopNestAnalysis.cpp
+++ b/llvm/lib/Analysis/LoopNestAnalysis.cpp
@@ -294,9 +294,7 @@ const BasicBlock &LoopNest::skipEmptyBlockUntil(const BasicBlock *From,
   if (From == End || !From->getUniqueSuccessor())
     return *From;
 
-  auto IsEmpty = [](const BasicBlock *BB) {
-    return (BB->size() == 1);
-  };
+  auto IsEmpty = [](const BasicBlock *BB) { return (BB->size() == 1); };
 
   // Visited is used to avoid running into an infinite loop.
   SmallPtrSet<const BasicBlock *, 4> Visited;
diff --git a/llvm/lib/Analysis/LoopPass.cpp b/llvm/lib/Analysis/LoopPass.cpp
index 294dfd9d41c1707..66a802c2df92cdf 100644
--- a/llvm/lib/Analysis/LoopPass.cpp
+++ b/llvm/lib/Analysis/LoopPass.cpp
@@ -59,7 +59,7 @@ class PrintLoopPassWrapper : public LoopPass {
 };
 
 char PrintLoopPassWrapper::ID = 0;
-}
+} // namespace
 
 //===----------------------------------------------------------------------===//
 // LPPassManager
@@ -289,15 +289,14 @@ bool LPPassManager::runOnFunction(Function &F) {
 
 /// Print passes managed by this manager
 void LPPassManager::dumpPassStructure(unsigned Offset) {
-  errs().indent(Offset*2) << "Loop Pass Manager\n";
+  errs().indent(Offset * 2) << "Loop Pass Manager\n";
   for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
     Pass *P = getContainedPass(Index);
     P->dumpPassStructure(Offset + 1);
-    dumpLastUses(P, Offset+1);
+    dumpLastUses(P, Offset + 1);
   }
 }
 
-
 //===----------------------------------------------------------------------===//
 // LoopPass
 
@@ -315,8 +314,7 @@ Pass *LoopPass::createPrinterPass(raw_ostream &O,
 void LoopPass::preparePassManager(PMStack &PMS) {
 
   // Find LPPassManager
-  while (!PMS.empty() &&
-         PMS.top()->getPassManagerType() > PMT_LoopPassManager)
+  while (!PMS.empty() && PMS.top()->getPassManagerType() > PMT_LoopPassManager)
     PMS.pop();
 
   // If this pass is destroying high level information that is used
@@ -328,19 +326,17 @@ void LoopPass::preparePassManager(PMStack &PMS) {
 }
 
 /// Assign pass manager to manage this pass.
-void LoopPass::assignPassManager(PMStack &PMS,
-                                 PassManagerType PreferredType) {
+void LoopPass::assignPassManager(PMStack &PMS, PassManagerType PreferredType) {
   // Find LPPassManager
-  while (!PMS.empty() &&
-         PMS.top()->getPassManagerType() > PMT_LoopPassManager)
+  while (!PMS.empty() && PMS.top()->getPassManagerType() > PMT_LoopPassManager)
     PMS.pop();
 
   LPPassManager *LPPM;
   if (PMS.top()->getPassManagerType() == PMT_LoopPassManager)
-    LPPM = (LPPassManager*)PMS.top();
+    LPPM = (LPPassManager *)PMS.top();
   else {
     // Create new Loop Pass Manager if it does not exist.
-    assert (!PMS.empty() && "Unable to create Loop Pass Manager");
+    assert(!PMS.empty() && "Unable to create Loop Pass Manager");
     PMDataManager *PMD = PMS.top();
 
     // [1] Create new Loop Pass Manager
@@ -363,9 +359,7 @@ void LoopPass::assignPassManager(PMStack &PMS,
   LPPM->add(this);
 }
 
-static std::string getDescription(const Loop &L) {
-  return "loop";
-}
+static std::string getDescription(const Loop &L) { return "loop"; }
 
 bool LoopPass::skipLoop(const Loop *L) const {
   const Function *F = L->getHeader()->getParent();
diff --git a/llvm/lib/Analysis/MemoryBuiltins.cpp b/llvm/lib/Analysis/MemoryBuiltins.cpp
index 53e089ba1feae57..ea4bf75ca6f5691 100644
--- a/llvm/lib/Analysis/MemoryBuiltins.cpp
+++ b/llvm/lib/Analysis/MemoryBuiltins.cpp
@@ -51,12 +51,12 @@ using namespace llvm;
 #define DEBUG_TYPE "memory-builtins"
 
 enum AllocType : uint8_t {
-  OpNewLike          = 1<<0, // allocates; never returns null
-  MallocLike         = 1<<1, // allocates; may return null
-  StrDupLike         = 1<<2,
-  MallocOrOpNewLike  = MallocLike | OpNewLike,
-  AllocLike          = MallocOrOpNewLike | StrDupLike,
-  AnyAlloc           = AllocLike
+  OpNewLike = 1 << 0,  // allocates; never returns null
+  MallocLike = 1 << 1, // allocates; may return null
+  StrDupLike = 1 << 2,
+  MallocOrOpNewLike = MallocLike | OpNewLike,
+  AllocLike = MallocOrOpNewLike | StrDupLike,
+  AnyAlloc = AllocLike
 };
 
 enum class MallocFamily {
@@ -150,8 +150,7 @@ static const std::pair<LibFunc, AllocFnsTy> AllocationFnData[] = {
 };
 // clang-format on
 
-static const Function *getCalledFunction(const Value *V,
-                                         bool &IsNoBuiltin) {
+static const Function *getCalledFunction(const Value *V, bool &IsNoBuiltin) {
   // Don't care about intrinsics in this case.
   if (isa<IntrinsicInst>(V))
     return nullptr;
@@ -182,10 +181,10 @@ getAllocationDataForFunction(const Function *Callee, AllocType AllocTy,
   if (!TLI || !TLI->getLibFunc(*Callee, TLIFn) || !TLI->has(TLIFn))
     return std::nullopt;
 
-  const auto *Iter = find_if(
-      AllocationFnData, [TLIFn](const std::pair<LibFunc, AllocFnsTy> &P) {
-        return P.first == TLIFn;
-      });
+  const auto *Iter = find_if(AllocationFnData,
+                             [TLIFn](const std::pair<LibFunc, AllocFnsTy> &P) {
+                               return P.first == TLIFn;
+                             });
 
   if (Iter == std::end(AllocationFnData))
     return std::nullopt;
@@ -201,11 +200,9 @@ getAllocationDataForFunction(const Function *Callee, AllocType AllocTy,
 
   if (FTy->getReturnType()->isPointerTy() &&
       FTy->getNumParams() == FnData->NumParams &&
-      (FstParam < 0 ||
-       (FTy->getParamType(FstParam)->isIntegerTy(32) ||
-        FTy->getParamType(FstParam)->isIntegerTy(64))) &&
-      (SndParam < 0 ||
-       FTy->getParamType(SndParam)->isIntegerTy(32) ||
+      (FstParam < 0 || (FTy->getParamType(FstParam)->isIntegerTy(32) ||
+                        FTy->getParamType(FstParam)->isIntegerTy(64))) &&
+      (SndParam < 0 || FTy->getParamType(SndParam)->isIntegerTy(32) ||
        FTy->getParamType(SndParam)->isIntegerTy(64)))
     return *FnData;
   return std::nullopt;
@@ -235,8 +232,7 @@ getAllocationData(const Value *V, AllocType AllocTy,
 static std::optional<AllocFnsTy>
 getAllocationSize(const Value *V, const TargetLibraryInfo *TLI) {
   bool IsNoBuiltinCall;
-  const Function *Callee =
-      getCalledFunction(V, IsNoBuiltinCall);
+  const Function *Callee = getCalledFunction(V, IsNoBuiltinCall);
   if (!Callee)
     return std::nullopt;
 
@@ -311,7 +307,8 @@ bool llvm::isNewLikeFn(const Value *V, const TargetLibraryInfo *TLI) {
 
 /// Tests if a value is a call or invoke to a library function that
 /// allocates memory similar to malloc or calloc.
-bool llvm::isMallocOrCallocLikeFn(const Value *V, const TargetLibraryInfo *TLI) {
+bool llvm::isMallocOrCallocLikeFn(const Value *V,
+                                  const TargetLibraryInfo *TLI) {
   // TODO: Function behavior does not match name.
   return getAllocationData(V, MallocOrOpNewLike, TLI).has_value();
 }
@@ -394,7 +391,7 @@ llvm::getAllocSize(const CallBase *CB, const TargetLibraryInfo *TLI,
     // Strndup limits strlen.
     if (FnData->FstParam > 0) {
       const ConstantInt *Arg =
-        dyn_cast<ConstantInt>(Mapper(CB->getArgOperand(FnData->FstParam)));
+          dyn_cast<ConstantInt>(Mapper(CB->getArgOperand(FnData->FstParam)));
       if (!Arg)
         return std::nullopt;
 
@@ -406,7 +403,7 @@ llvm::getAllocSize(const CallBase *CB, const TargetLibraryInfo *TLI,
   }
 
   const ConstantInt *Arg =
-    dyn_cast<ConstantInt>(Mapper(CB->getArgOperand(FnData->FstParam)));
+      dyn_cast<ConstantInt>(Mapper(CB->getArgOperand(FnData->FstParam)));
   if (!Arg)
     return std::nullopt;
 
@@ -586,7 +583,7 @@ static APInt getSizeWithOverflow(const SizeOffsetType &Data) {
 bool llvm::getObjectSize(const Value *Ptr, uint64_t &Size, const DataLayout &DL,
                          const TargetLibraryInfo *TLI, ObjectSizeOpts Opts) {
   ObjectSizeOffsetVisitor Visitor(DL, TLI, Ptr->getContext(), Opts);
-  SizeOffsetType Data = Visitor.compute(const_cast<Value*>(Ptr));
+  SizeOffsetType Data = Visitor.compute(const_cast<Value *>(Ptr));
   if (!Visitor.bothKnown(Data))
     return false;
 
@@ -627,10 +624,11 @@ Value *llvm::lowerObjectSizeCall(
   auto *ResultType = cast<IntegerType>(ObjectSize->getType());
   bool StaticOnly = cast<ConstantInt>(ObjectSize->getArgOperand(3))->isZero();
   if (StaticOnly) {
-    // FIXME: Does it make sense to just return a failure value if the size won't
-    // fit in the output and `!MustSucceed`?
+    // FIXME: Does it make sense to just return a failure value if the size
+    // won't fit in the output and `!MustSucceed`?
     uint64_t Size;
-    if (getObjectSize(ObjectSize->getArgOperand(0), Size, DL, TLI, EvalOptions) &&
+    if (getObjectSize(ObjectSize->getArgOperand(0), Size, DL, TLI,
+                      EvalOptions) &&
         isUIntN(ResultType->getBitWidth(), Size))
       return ConstantInt::get(ResultType, Size);
   } else {
@@ -783,7 +781,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitAllocaInst(AllocaInst &I) {
 SizeOffsetType ObjectSizeOffsetVisitor::visitArgument(Argument &A) {
   Type *MemoryTy = A.getPointeeInMemoryValueType();
   // No interprocedural analysis is done at the moment.
-  if (!MemoryTy|| !MemoryTy->isSized()) {
+  if (!MemoryTy || !MemoryTy->isSized()) {
     ++ObjectVisitorArgument;
     return unknown();
   }
@@ -799,7 +797,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitCallBase(CallBase &CB) {
 }
 
 SizeOffsetType
-ObjectSizeOffsetVisitor::visitConstantPointerNull(ConstantPointerNull& CPN) {
+ObjectSizeOffsetVisitor::visitConstantPointerNull(ConstantPointerNull &CPN) {
   // If null is unknown, there's nothing we can do. Additionally, non-zero
   // address spaces can make use of null, so we don't presume to know anything
   // about that.
@@ -813,12 +811,12 @@ ObjectSizeOffsetVisitor::visitConstantPointerNull(ConstantPointerNull& CPN) {
 }
 
 SizeOffsetType
-ObjectSizeOffsetVisitor::visitExtractElementInst(ExtractElementInst&) {
+ObjectSizeOffsetVisitor::visitExtractElementInst(ExtractElementInst &) {
   return unknown();
 }
 
 SizeOffsetType
-ObjectSizeOffsetVisitor::visitExtractValueInst(ExtractValueInst&) {
+ObjectSizeOffsetVisitor::visitExtractValueInst(ExtractValueInst &) {
   // Easy cases were already folded by previous passes.
   return unknown();
 }
@@ -829,7 +827,8 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitGlobalAlias(GlobalAlias &GA) {
   return compute(GA.getAliasee());
 }
 
-SizeOffsetType ObjectSizeOffsetVisitor::visitGlobalVariable(GlobalVariable &GV){
+SizeOffsetType
+ObjectSizeOffsetVisitor::visitGlobalVariable(GlobalVariable &GV) {
   if (!GV.getValueType()->isSized() || GV.hasExternalWeakLinkage() ||
       ((!GV.hasInitializer() || GV.isInterposable()) &&
        Options.EvalMode != ObjectSizeOpts::Mode::Min))
@@ -839,7 +838,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitGlobalVariable(GlobalVariable &GV){
   return std::make_pair(align(Size, GV.getAlign()), Zero);
 }
 
-SizeOffsetType ObjectSizeOffsetVisitor::visitIntToPtrInst(IntToPtrInst&) {
+SizeOffsetType ObjectSizeOffsetVisitor::visitIntToPtrInst(IntToPtrInst &) {
   // clueless
   return unknown();
 }
@@ -1005,7 +1004,7 @@ SizeOffsetType ObjectSizeOffsetVisitor::visitSelectInst(SelectInst &I) {
                            compute(I.getFalseValue()));
 }
 
-SizeOffsetType ObjectSizeOffsetVisitor::visitUndefValue(UndefValue&) {
+SizeOffsetType ObjectSizeOffsetVisitor::visitUndefValue(UndefValue &) {
   return std::make_pair(Zero, Zero);
 }
 
@@ -1092,8 +1091,7 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::compute_(Value *V) {
   } else if (isa<Argument>(V) ||
              (isa<ConstantExpr>(V) &&
               cast<ConstantExpr>(V)->getOpcode() == Instruction::IntToPtr) ||
-             isa<GlobalAlias>(V) ||
-             isa<GlobalVariable>(V)) {
+             isa<GlobalAlias>(V) || isa<GlobalVariable>(V)) {
     // Ignore values where we cannot do more than ObjectSizeVisitor.
     Result = unknown();
   } else {
@@ -1152,12 +1150,12 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitCallBase(CallBase &CB) {
 }
 
 SizeOffsetEvalType
-ObjectSizeOffsetEvaluator::visitExtractElementInst(ExtractElementInst&) {
+ObjectSizeOffsetEvaluator::visitExtractElementInst(ExtractElementInst &) {
   return unknown();
 }
 
 SizeOffsetEvalType
-ObjectSizeOffsetEvaluator::visitExtractValueInst(ExtractValueInst&) {
+ObjectSizeOffsetEvaluator::visitExtractValueInst(ExtractValueInst &) {
   return unknown();
 }
 
@@ -1172,7 +1170,8 @@ ObjectSizeOffsetEvaluator::visitGEPOperator(GEPOperator &GEP) {
   return std::make_pair(PtrData.first, Offset);
 }
 
-SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitIntToPtrInst(IntToPtrInst&) {
+SizeOffsetEvalType
+ObjectSizeOffsetEvaluator::visitIntToPtrInst(IntToPtrInst &) {
   // clueless
   return unknown();
 }
@@ -1183,7 +1182,7 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitLoadInst(LoadInst &LI) {
 
 SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitPHINode(PHINode &PHI) {
   // Create 2 PHIs: one for size and another for offset.
-  PHINode *SizePHI   = Builder.CreatePHI(IntTy, PHI.getNumIncomingValues());
+  PHINode *SizePHI = Builder.CreatePHI(IntTy, PHI.getNumIncomingValues());
   PHINode *OffsetPHI = Builder.CreatePHI(IntTy, PHI.getNumIncomingValues());
 
   // Insert right away in the cache to handle recursive PHIs.
@@ -1224,7 +1223,7 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitPHINode(PHINode &PHI) {
 }
 
 SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitSelectInst(SelectInst &I) {
-  SizeOffsetEvalType TrueSide  = compute_(I.getTrueValue());
+  SizeOffsetEvalType TrueSide = compute_(I.getTrueValue());
   SizeOffsetEvalType FalseSide = compute_(I.getFalseValue());
 
   if (!bothKnown(TrueSide) || !bothKnown(FalseSide))
@@ -1232,10 +1231,10 @@ SizeOffsetEvalType ObjectSizeOffsetEvaluator::visitSelectInst(SelectInst &I) {
   if (TrueSide == FalseSide)
     return TrueSide;
 
-  Value *Size = Builder.CreateSelect(I.getCondition(), TrueSide.first,
-                                     FalseSide.first);
-  Value *Offset = Builder.CreateSelect(I.getCondition(), TrueSide.second,
-                                       FalseSide.second);
+  Value *Size =
+      Builder.CreateSelect(I.getCondition(), TrueSide.first, FalseSide.first);
+  Value *Offset =
+      Builder.CreateSelect(I.getCondition(), TrueSide.second, FalseSide.second);
   return std::make_pair(Size, Offset);
 }
 
diff --git a/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp b/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
index 071ecdba8a54ace..1514fb74d67211a 100644
--- a/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
+++ b/llvm/lib/Analysis/MemoryDependenceAnalysis.cpp
@@ -411,7 +411,7 @@ MemDepResult MemoryDependenceResults::getSimplePointerDependencyFrom(
   // True for volatile instruction.
   // For Load/Store return true if atomic ordering is stronger than AO,
   // for other instruction just true if it can read or write to memory.
-  auto isComplexForReordering = [](Instruction * I, AtomicOrdering AO)->bool {
+  auto isComplexForReordering = [](Instruction *I, AtomicOrdering AO) -> bool {
     if (I->isVolatile())
       return true;
     if (auto *LI = dyn_cast<LoadInst>(I))
@@ -454,7 +454,7 @@ MemDepResult MemoryDependenceResults::getSimplePointerDependencyFrom(
       case Intrinsic::masked_load:
       case Intrinsic::masked_store: {
         MemoryLocation Loc;
-        /*ModRefInfo MR =*/ GetLocation(II, Loc, TLI);
+        /*ModRefInfo MR =*/GetLocation(II, Loc, TLI);
         AliasResult R = BatchAA.alias(Loc, MemLoc);
         if (R == AliasResult::NoAlias)
           continue;
@@ -885,7 +885,7 @@ void MemoryDependenceResults::getNonLocalPointerDependency(
   // translation.
   DenseMap<BasicBlock *, Value *> Visited;
   if (getNonLocalPointerDepFromBB(QueryInst, Address, Loc, isLoad, FromBB,
-                                   Result, Visited, true))
+                                  Result, Visited, true))
     return;
   Result.clear();
   Result.push_back(NonLocalDepResult(FromBB, MemDepResult::getUnknown(),
@@ -1359,8 +1359,8 @@ bool MemoryDependenceResults::getNonLocalPointerDepFromBB(
       // assume it is unknown, but this also does not block PRE of the load.
       if (!CanTranslate ||
           !getNonLocalPointerDepFromBB(QueryInst, PredPointer,
-                                      Loc.getWithNewPtr(PredPtrVal), isLoad,
-                                      Pred, Result, Visited)) {
+                                       Loc.getWithNewPtr(PredPtrVal), isLoad,
+                                       Pred, Result, Visited)) {
         // Add the entry to the Result list.
         NonLocalDepResult Entry(Pred, MemDepResult::getUnknown(), PredPtrVal);
         Result.push_back(Entry);
@@ -1428,7 +1428,6 @@ bool MemoryDependenceResults::getNonLocalPointerDepFromBB(
 
         I.setResult(MemDepResult::getUnknown());
 
-
         break;
       }
     }
@@ -1747,9 +1746,7 @@ MemoryDependenceWrapperPass::MemoryDependenceWrapperPass() : FunctionPass(ID) {
 
 MemoryDependenceWrapperPass::~MemoryDependenceWrapperPass() = default;
 
-void MemoryDependenceWrapperPass::releaseMemory() {
-  MemDep.reset();
-}
+void MemoryDependenceWrapperPass::releaseMemory() { MemDep.reset(); }
 
 void MemoryDependenceWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
@@ -1759,8 +1756,9 @@ void MemoryDependenceWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequiredTransitive<TargetLibraryInfoWrapperPass>();
 }
 
-bool MemoryDependenceResults::invalidate(Function &F, const PreservedAnalyses &PA,
-                               FunctionAnalysisManager::Invalidator &Inv) {
+bool MemoryDependenceResults::invalidate(
+    Function &F, const PreservedAnalyses &PA,
+    FunctionAnalysisManager::Invalidator &Inv) {
   // Check whether our analysis is preserved.
   auto PAC = PA.getChecker<MemoryDependenceAnalysis>();
   if (!PAC.preserved() && !PAC.preservedSet<AllAnalysesOn<Function>>())
diff --git a/llvm/lib/Analysis/MemoryLocation.cpp b/llvm/lib/Analysis/MemoryLocation.cpp
index 0404b32be848ce6..3aa56f883cf3a76 100644
--- a/llvm/lib/Analysis/MemoryLocation.cpp
+++ b/llvm/lib/Analysis/MemoryLocation.cpp
@@ -52,8 +52,8 @@ MemoryLocation MemoryLocation::get(const StoreInst *SI) {
 }
 
 MemoryLocation MemoryLocation::get(const VAArgInst *VI) {
-  return MemoryLocation(VI->getPointerOperand(),
-                        LocationSize::afterPointer(), VI->getAAMetadata());
+  return MemoryLocation(VI->getPointerOperand(), LocationSize::afterPointer(),
+                        VI->getAAMetadata());
 }
 
 MemoryLocation MemoryLocation::get(const AtomicCmpXchgInst *CXI) {
@@ -197,17 +197,15 @@ MemoryLocation MemoryLocation::getForArgument(const CallBase *Call,
     case Intrinsic::masked_load:
       assert(ArgIdx == 0 && "Invalid argument index");
       return MemoryLocation(
-          Arg,
-          LocationSize::upperBound(DL.getTypeStoreSize(II->getType())),
+          Arg, LocationSize::upperBound(DL.getTypeStoreSize(II->getType())),
           AATags);
 
     case Intrinsic::masked_store:
       assert(ArgIdx == 1 && "Invalid argument index");
-      return MemoryLocation(
-          Arg,
-          LocationSize::upperBound(
-              DL.getTypeStoreSize(II->getArgOperand(0)->getType())),
-          AATags);
+      return MemoryLocation(Arg,
+                            LocationSize::upperBound(DL.getTypeStoreSize(
+                                II->getArgOperand(0)->getType())),
+                            AATags);
 
     case Intrinsic::invariant_end:
       // The first argument to an invariant.end is a "descriptor" type (e.g. a
@@ -252,7 +250,8 @@ MemoryLocation MemoryLocation::getForArgument(const CallBase *Call,
     case LibFunc_strcpy:
     case LibFunc_strcat:
     case LibFunc_strncat:
-      assert((ArgIdx == 0 || ArgIdx == 1) && "Invalid argument index for str function");
+      assert((ArgIdx == 0 || ArgIdx == 1) &&
+             "Invalid argument index for str function");
       return MemoryLocation::getAfter(Arg, AATags);
 
     case LibFunc_memset_chk:
diff --git a/llvm/lib/Analysis/MemorySSA.cpp b/llvm/lib/Analysis/MemorySSA.cpp
index 2cf92ceba01032c..2a343ed7737e355 100644
--- a/llvm/lib/Analysis/MemorySSA.cpp
+++ b/llvm/lib/Analysis/MemorySSA.cpp
@@ -448,8 +448,8 @@ checkClobberSanity(const MemoryAccess *Start, MemoryAccess *ClobberAt,
 
       if (const auto *MU = dyn_cast<MemoryUse>(MA)) {
         (void)MU;
-        assert (MU == Start &&
-                "Can only find use in def chain if Start is a use");
+        assert(MU == Start &&
+               "Can only find use in def chain if Start is a use");
         continue;
       }
 
@@ -1106,7 +1106,7 @@ void MemorySSA::renameSuccessorPhis(BasicBlock *BB, MemoryAccess *IncomingVal,
           Phi->setIncomingValue(I, IncomingVal);
           ReplacementDone = true;
         }
-      (void) ReplacementDone;
+      (void)ReplacementDone;
       assert(ReplacementDone && "Incomplete phi during partial rename");
     } else
       Phi->addIncoming(IncomingVal, BB);
@@ -1566,8 +1566,7 @@ MemorySSAWalker *MemorySSA::getSkipSelfWalker() {
 
   SkipWalker = std::make_unique<SkipSelfWalker>(this, WalkerBase.get());
   return SkipWalker.get();
- }
-
+}
 
 // This is a helper function used by the creation routines. It places NewAccess
 // into the access and defs lists for a given basic block, at the given
@@ -1867,14 +1866,14 @@ void MemorySSA::verifyMemorySSA(VerificationLevel VL) const {
 #endif
   // Previously, the verification used to also verify that the clobberingAccess
   // cached by MemorySSA is the same as the clobberingAccess found at a later
-  // query to AA. This does not hold true in general due to the current fragility
-  // of BasicAA which has arbitrary caps on the things it analyzes before giving
-  // up. As a result, transformations that are correct, will lead to BasicAA
-  // returning different Alias answers before and after that transformation.
-  // Invalidating MemorySSA is not an option, as the results in BasicAA can be so
-  // random, in the worst case we'd need to rebuild MemorySSA from scratch after
-  // every transformation, which defeats the purpose of using it. For such an
-  // example, see test4 added in D51960.
+  // query to AA. This does not hold true in general due to the current
+  // fragility of BasicAA which has arbitrary caps on the things it analyzes
+  // before giving up. As a result, transformations that are correct, will lead
+  // to BasicAA returning different Alias answers before and after that
+  // transformation. Invalidating MemorySSA is not an option, as the results in
+  // BasicAA can be so random, in the worst case we'd need to rebuild MemorySSA
+  // from scratch after every transformation, which defeats the purpose of using
+  // it. For such an example, see test4 added in D51960.
 }
 
 void MemorySSA::verifyPrevDefInPhis(Function &F) const {
@@ -2147,9 +2146,12 @@ void MemorySSA::ensureOptimizedUses() {
 
 void MemoryAccess::print(raw_ostream &OS) const {
   switch (getValueID()) {
-  case MemoryPhiVal: return static_cast<const MemoryPhi *>(this)->print(OS);
-  case MemoryDefVal: return static_cast<const MemoryDef *>(this)->print(OS);
-  case MemoryUseVal: return static_cast<const MemoryUse *>(this)->print(OS);
+  case MemoryPhiVal:
+    return static_cast<const MemoryPhi *>(this)->print(OS);
+  case MemoryDefVal:
+    return static_cast<const MemoryDef *>(this)->print(OS);
+  case MemoryUseVal:
+    return static_cast<const MemoryUse *>(this)->print(OS);
   }
   llvm_unreachable("invalid value id");
 }
diff --git a/llvm/lib/Analysis/MemorySSAUpdater.cpp b/llvm/lib/Analysis/MemorySSAUpdater.cpp
index 9ad60f774e9f86f..f002adb377af951 100644
--- a/llvm/lib/Analysis/MemorySSAUpdater.cpp
+++ b/llvm/lib/Analysis/MemorySSAUpdater.cpp
@@ -104,8 +104,8 @@ MemoryAccess *MemorySSAUpdater::getPreviousDefRecursive(
         Phi = MSSA->createMemoryPhi(BB);
 
       // See if the existing phi operands match what we need.
-      // Unlike normal SSA, we only allow one phi node per block, so we can't just
-      // create a new one.
+      // Unlike normal SSA, we only allow one phi node per block, so we can't
+      // just create a new one.
       if (Phi->getNumOperands() != 0) {
         // FIXME: Figure out whether this is dead code and if so remove it.
         if (!std::equal(Phi->op_begin(), Phi->op_end(), PhiOps.begin())) {
@@ -420,13 +420,15 @@ void MemorySSAUpdater::insertDef(MemoryDef *MD, bool RenameUses) {
     fixupDefs(FixupList);
     FixupList.clear();
     // Put any new phis on the fixup list, and process them
-    FixupList.append(InsertedPHIs.begin() + StartingPHISize, InsertedPHIs.end());
+    FixupList.append(InsertedPHIs.begin() + StartingPHISize,
+                     InsertedPHIs.end());
   }
 
   // Optimize potentially non-minimal phis added in this method.
   unsigned NewPhiSize = NewPhiIndexEnd - NewPhiIndex;
   if (NewPhiSize)
-    tryRemoveTrivialPhis(ArrayRef<WeakVH>(&InsertedPHIs[NewPhiIndex], NewPhiSize));
+    tryRemoveTrivialPhis(
+        ArrayRef<WeakVH>(&InsertedPHIs[NewPhiIndex], NewPhiSize));
 
   // Now that all fixups are done, rename all uses if we are asked. The defs are
   // guaranteed to be in reachable code due to the check at the method entry.
diff --git a/llvm/lib/Analysis/ModuleSummaryAnalysis.cpp b/llvm/lib/Analysis/ModuleSummaryAnalysis.cpp
index a88622efa12db8c..fe59417f8550129 100644
--- a/llvm/lib/Analysis/ModuleSummaryAnalysis.cpp
+++ b/llvm/lib/Analysis/ModuleSummaryAnalysis.cpp
@@ -382,7 +382,8 @@ static void computeFunctionSummary(
       // Check if this is an alias to a function. If so, get the
       // called aliasee for the checks below.
       if (auto *GA = dyn_cast<GlobalAlias>(CalledValue)) {
-        assert(!CalledFunction && "Expected null called function in callsite for alias");
+        assert(!CalledFunction &&
+               "Expected null called function in callsite for alias");
         CalledFunction = dyn_cast<Function>(GA->getAliaseeObject());
       }
       // Check if this is a direct call to a known function or a known
@@ -765,7 +766,7 @@ static void computeVariableSummary(ModuleSummaryIndex &Index,
                                        Constant ? false : CanBeInternalized,
                                        Constant, V.getVCallVisibility());
   auto GVarSummary = std::make_unique<GlobalVarSummary>(Flags, VarFlags,
-                                                         RefEdges.takeVector());
+                                                        RefEdges.takeVector());
   if (NonRenamableLocal)
     CantBePromoted.insert(V.getGUID());
   if (HasBlockAddress)
@@ -860,7 +861,8 @@ ModuleSummaryIndex llvm::buildModuleSummaryIndex(
           GlobalValue *GV = M.getNamedValue(Name);
           if (!GV)
             return;
-          assert(GV->isDeclaration() && "Def in module asm already has definition");
+          assert(GV->isDeclaration() &&
+                 "Def in module asm already has definition");
           GlobalValueSummary::GVFlags GVFlags(
               GlobalValue::InternalLinkage, GlobalValue::DefaultVisibility,
               /* NotEligibleToImport = */ true,
@@ -1016,8 +1018,8 @@ ModuleSummaryIndex llvm::buildModuleSummaryIndex(
 
 AnalysisKey ModuleSummaryIndexAnalysis::Key;
 
-ModuleSummaryIndex
-ModuleSummaryIndexAnalysis::run(Module &M, ModuleAnalysisManager &AM) {
+ModuleSummaryIndex ModuleSummaryIndexAnalysis::run(Module &M,
+                                                   ModuleAnalysisManager &AM) {
   ProfileSummaryInfo &PSI = AM.getResult<ProfileSummaryAnalysis>(M);
   auto &FAM = AM.getResult<FunctionAnalysisManagerModuleProxy>(M).getManager();
   bool NeedSSI = needsParamAccessSummary(M);
diff --git a/llvm/lib/Analysis/MustExecute.cpp b/llvm/lib/Analysis/MustExecute.cpp
index d4b31f2b0018783..c1bcbc808ad7fb5 100644
--- a/llvm/lib/Analysis/MustExecute.cpp
+++ b/llvm/lib/Analysis/MustExecute.cpp
@@ -44,9 +44,7 @@ bool SimpleLoopSafetyInfo::blockMayThrow(const BasicBlock *BB) const {
   return anyBlockMayThrow();
 }
 
-bool SimpleLoopSafetyInfo::anyBlockMayThrow() const {
-  return MayThrow;
-}
+bool SimpleLoopSafetyInfo::anyBlockMayThrow() const { return MayThrow; }
 
 void SimpleLoopSafetyInfo::computeLoopSafetyInfo(const Loop *CurLoop) {
   assert(CurLoop != nullptr && "CurLoop can't be null");
@@ -72,9 +70,7 @@ bool ICFLoopSafetyInfo::blockMayThrow(const BasicBlock *BB) const {
   return ICF.hasICF(BB);
 }
 
-bool ICFLoopSafetyInfo::anyBlockMayThrow() const {
-  return MayThrow;
-}
+bool ICFLoopSafetyInfo::anyBlockMayThrow() const { return MayThrow; }
 
 void ICFLoopSafetyInfo::computeLoopSafetyInfo(const Loop *CurLoop) {
   assert(CurLoop != nullptr && "CurLoop can't be null");
@@ -141,10 +137,9 @@ static bool CanProveNotTakenFirstIteration(const BasicBlock *ExitBlock,
     return false;
   auto DL = ExitBlock->getModule()->getDataLayout();
   auto *IVStart = LHS->getIncomingValueForBlock(CurLoop->getLoopPreheader());
-  auto *SimpleValOrNull = simplifyCmpInst(Cond->getPredicate(),
-                                          IVStart, RHS,
-                                          {DL, /*TLI*/ nullptr,
-                                              DT, /*AC*/ nullptr, BI});
+  auto *SimpleValOrNull =
+      simplifyCmpInst(Cond->getPredicate(), IVStart, RHS,
+                      {DL, /*TLI*/ nullptr, DT, /*AC*/ nullptr, BI});
   auto *SimpleCst = dyn_cast_or_null<Constant>(SimpleValOrNull);
   if (!SimpleCst)
     return false;
@@ -228,8 +223,8 @@ bool LoopSafetyInfo::allLoopPathsLeadToBlock(const Loop *CurLoop,
       continue;
 
     for (const auto *Succ : successors(Pred))
-      if (CheckedSuccessors.insert(Succ).second &&
-          Succ != BB && !Predecessors.count(Succ))
+      if (CheckedSuccessors.insert(Succ).second && Succ != BB &&
+          !Predecessors.count(Succ))
         // By discharging conditions that are not executed on the 1st iteration,
         // we guarantee that *at least* on the first iteration all paths from
         // header that *may* execute will lead us to the block of interest. So
@@ -266,8 +261,7 @@ bool SimpleLoopSafetyInfo::isGuaranteedToExecute(const Instruction &Inst,
     // Inst unless we can prove that Inst comes before the potential implicit
     // exit.  At the moment, we use a (cheap) hack for the common case where
     // the instruction of interest is the first one in the block.
-    return !HeaderMayThrow ||
-           Inst.getParent()->getFirstNonPHIOrDbg() == &Inst;
+    return !HeaderMayThrow || Inst.getParent()->getFirstNonPHIOrDbg() == &Inst;
 
   // If there is a path from header to exit or latch that doesn't lead to our
   // instruction's block, return false.
@@ -316,19 +310,19 @@ static bool isMustExecuteIn(const Instruction &I, Loop *L, DominatorTree *DT) {
   SimpleLoopSafetyInfo LSI;
   LSI.computeLoopSafetyInfo(L);
   return LSI.isGuaranteedToExecute(I, DT, L) ||
-    isGuaranteedToExecuteForEveryIteration(&I, L);
+         isGuaranteedToExecuteForEveryIteration(&I, L);
 }
 
 namespace {
 /// An assembly annotator class to print must execute information in
 /// comments.
 class MustExecuteAnnotatedWriter : public AssemblyAnnotationWriter {
-  DenseMap<const Value*, SmallVector<Loop*, 4> > MustExec;
+  DenseMap<const Value *, SmallVector<Loop *, 4>> MustExec;
 
 public:
-  MustExecuteAnnotatedWriter(const Function &F,
-                             DominatorTree &DT, LoopInfo &LI) {
-    for (const auto &I: instructions(F)) {
+  MustExecuteAnnotatedWriter(const Function &F, DominatorTree &DT,
+                             LoopInfo &LI) {
+    for (const auto &I : instructions(F)) {
       Loop *L = LI.getLoopFor(I.getParent());
       while (L) {
         if (isMustExecuteIn(I, L, &DT)) {
@@ -338,21 +332,19 @@ class MustExecuteAnnotatedWriter : public AssemblyAnnotationWriter {
       };
     }
   }
-  MustExecuteAnnotatedWriter(const Module &M,
-                             DominatorTree &DT, LoopInfo &LI) {
+  MustExecuteAnnotatedWriter(const Module &M, DominatorTree &DT, LoopInfo &LI) {
     for (const auto &F : M)
-    for (const auto &I: instructions(F)) {
-      Loop *L = LI.getLoopFor(I.getParent());
-      while (L) {
-        if (isMustExecuteIn(I, L, &DT)) {
-          MustExec[&I].push_back(L);
-        }
-        L = L->getParentLoop();
-      };
-    }
+      for (const auto &I : instructions(F)) {
+        Loop *L = LI.getLoopFor(I.getParent());
+        while (L) {
+          if (isMustExecuteIn(I, L, &DT)) {
+            MustExec[&I].push_back(L);
+          }
+          L = L->getParentLoop();
+        };
+      }
   }
 
-
   void printInfoComment(const Value &V, formatted_raw_ostream &OS) override {
     if (!MustExec.count(&V))
       return;
@@ -482,7 +474,8 @@ MustBeExecutedContextExplorer::findForwardJoinPoint(const BasicBlock *InitBB) {
   if (!JoinBB)
     return nullptr;
 
-  LLVM_DEBUG(dbgs() << "\t\tJoin block candidate: " << JoinBB->getName() << "\n");
+  LLVM_DEBUG(dbgs() << "\t\tJoin block candidate: " << JoinBB->getName()
+                    << "\n");
 
   // In forward direction we check if control will for sure reach JoinBB from
   // InitBB, thus it can not be "stopped" along the way. Ways to "stop" control
diff --git a/llvm/lib/Analysis/PHITransAddr.cpp b/llvm/lib/Analysis/PHITransAddr.cpp
index 5700fd664a4cd3f..78a5f0c1e574caa 100644
--- a/llvm/lib/Analysis/PHITransAddr.cpp
+++ b/llvm/lib/Analysis/PHITransAddr.cpp
@@ -52,7 +52,8 @@ static bool verifySubExpr(Value *Expr,
                           SmallVectorImpl<Instruction *> &InstInputs) {
   // If this is a non-instruction value, there is nothing to do.
   Instruction *I = dyn_cast<Instruction>(Expr);
-  if (!I) return true;
+  if (!I)
+    return true;
 
   // If it's an instruction, it is either in Tmp or its operands recursively
   // are.
@@ -79,9 +80,10 @@ static bool verifySubExpr(Value *Expr,
 /// structure is valid, it returns true.  If invalid, it prints errors and
 /// returns false.
 bool PHITransAddr::verify() const {
-  if (!Addr) return true;
+  if (!Addr)
+    return true;
 
-  SmallVector<Instruction*, 8> Tmp(InstInputs.begin(), InstInputs.end());
+  SmallVector<Instruction *, 8> Tmp(InstInputs.begin(), InstInputs.end());
 
   if (!verifySubExpr(Addr, Tmp))
     return false;
@@ -108,9 +110,10 @@ bool PHITransAddr::isPotentiallyPHITranslatable() const {
 }
 
 static void RemoveInstInputs(Value *V,
-                             SmallVectorImpl<Instruction*> &InstInputs) {
+                             SmallVectorImpl<Instruction *> &InstInputs) {
   Instruction *I = dyn_cast<Instruction>(V);
-  if (!I) return;
+  if (!I)
+    return;
 
   // If the instruction is in the InstInputs list, remove it.
   if (auto Entry = find(InstInputs, I); Entry != InstInputs.end()) {
@@ -131,7 +134,8 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
                                       const DominatorTree *DT) {
   // If this is a non-instruction value, it can't require PHI translation.
   Instruction *Inst = dyn_cast<Instruction>(V);
-  if (!Inst) return V;
+  if (!Inst)
+    return V;
 
   // Determine whether 'Inst' is an input to our PHI translatable expression.
   bool isInput = is_contained(InstInputs, Inst);
@@ -171,7 +175,8 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
 
   if (CastInst *Cast = dyn_cast<CastInst>(Inst)) {
     Value *PHIIn = translateSubExpr(Cast->getOperand(0), CurBB, PredBB, DT);
-    if (!PHIIn) return nullptr;
+    if (!PHIIn)
+      return nullptr;
     if (PHIIn == Cast->getOperand(0))
       return Cast;
 
@@ -198,11 +203,12 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
 
   // Handle getelementptr with at least one PHI translatable operand.
   if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
-    SmallVector<Value*, 8> GEPOps;
+    SmallVector<Value *, 8> GEPOps;
     bool AnyChanged = false;
     for (Value *Op : GEP->operands()) {
       Value *GEPOp = translateSubExpr(Op, CurBB, PredBB, DT);
-      if (!GEPOp) return nullptr;
+      if (!GEPOp)
+        return nullptr;
 
       AnyChanged |= GEPOp != Op;
       GEPOps.push_back(GEPOp);
@@ -246,7 +252,8 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
     bool isNUW = cast<BinaryOperator>(Inst)->hasNoUnsignedWrap();
 
     Value *LHS = translateSubExpr(Inst->getOperand(0), CurBB, PredBB, DT);
-    if (!LHS) return nullptr;
+    if (!LHS)
+      return nullptr;
 
     // If the PHI translated LHS is an add of a constant, fold the immediates.
     if (BinaryOperator *BOp = dyn_cast<BinaryOperator>(LHS))
@@ -264,7 +271,8 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
         }
 
     // See if the add simplifies away.
-    if (Value *Res = simplifyAddInst(LHS, RHS, isNSW, isNUW, {DL, TLI, DT, AC})) {
+    if (Value *Res =
+            simplifyAddInst(LHS, RHS, isNSW, isNUW, {DL, TLI, DT, AC})) {
       // If we simplified the operands, the LHS is no longer an input, but Res
       // is.
       RemoveInstInputs(LHS, InstInputs);
@@ -278,8 +286,8 @@ Value *PHITransAddr::translateSubExpr(Value *V, BasicBlock *CurBB,
     // Otherwise, see if we have this add available somewhere.
     for (User *U : LHS->users()) {
       if (BinaryOperator *BO = dyn_cast<BinaryOperator>(U))
-        if (BO->getOpcode() == Instruction::Add &&
-            BO->getOperand(0) == LHS && BO->getOperand(1) == RHS &&
+        if (BO->getOpcode() == Instruction::Add && BO->getOperand(0) == LHS &&
+            BO->getOperand(1) == RHS &&
             BO->getParent()->getParent() == CurBB->getParent() &&
             (!DT || DT->dominates(BO->getParent(), PredBB)))
           return BO;
@@ -332,7 +340,8 @@ PHITransAddr::translateWithInsertion(BasicBlock *CurBB, BasicBlock *PredBB,
   Addr = insertTranslatedSubExpr(Addr, CurBB, PredBB, DT, NewInsts);
 
   // If successful, return the new value.
-  if (Addr) return Addr;
+  if (Addr)
+    return Addr;
 
   // If not, destroy any intermediate instructions inserted.
   while (NewInsts.size() != NISize)
@@ -364,7 +373,8 @@ Value *PHITransAddr::insertTranslatedSubExpr(
   if (CastInst *Cast = dyn_cast<CastInst>(Inst)) {
     Value *OpVal = insertTranslatedSubExpr(Cast->getOperand(0), CurBB, PredBB,
                                            DT, NewInsts);
-    if (!OpVal) return nullptr;
+    if (!OpVal)
+      return nullptr;
 
     // Otherwise insert a cast at the end of PredBB.
     CastInst *New = CastInst::Create(Cast->getOpcode(), OpVal, InVal->getType(),
@@ -377,11 +387,12 @@ Value *PHITransAddr::insertTranslatedSubExpr(
 
   // Handle getelementptr with at least one PHI operand.
   if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(Inst)) {
-    SmallVector<Value*, 8> GEPOps;
+    SmallVector<Value *, 8> GEPOps;
     BasicBlock *CurBB = GEP->getParent();
     for (Value *Op : GEP->operands()) {
       Value *OpVal = insertTranslatedSubExpr(Op, CurBB, PredBB, DT, NewInsts);
-      if (!OpVal) return nullptr;
+      if (!OpVal)
+        return nullptr;
       GEPOps.push_back(OpVal);
     }
 
@@ -408,9 +419,9 @@ Value *PHITransAddr::insertTranslatedSubExpr(
     if (OpVal == nullptr)
       return nullptr;
 
-    BinaryOperator *Res = BinaryOperator::CreateAdd(OpVal, Inst->getOperand(1),
-                                           InVal->getName()+".phi.trans.insert",
-                                                    PredBB->getTerminator());
+    BinaryOperator *Res = BinaryOperator::CreateAdd(
+        OpVal, Inst->getOperand(1), InVal->getName() + ".phi.trans.insert",
+        PredBB->getTerminator());
     Res->setHasNoSignedWrap(cast<BinaryOperator>(Inst)->hasNoSignedWrap());
     Res->setHasNoUnsignedWrap(cast<BinaryOperator>(Inst)->hasNoUnsignedWrap());
     NewInsts.push_back(Res);
diff --git a/llvm/lib/Analysis/PhiValues.cpp b/llvm/lib/Analysis/PhiValues.cpp
index 656a17e9dc30ed2..2d23f40245f36cf 100644
--- a/llvm/lib/Analysis/PhiValues.cpp
+++ b/llvm/lib/Analysis/PhiValues.cpp
@@ -212,9 +212,7 @@ bool PhiValuesWrapperPass::runOnFunction(Function &F) {
   return false;
 }
 
-void PhiValuesWrapperPass::releaseMemory() {
-  Result->releaseMemory();
-}
+void PhiValuesWrapperPass::releaseMemory() { Result->releaseMemory(); }
 
 void PhiValuesWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
@@ -222,5 +220,5 @@ void PhiValuesWrapperPass::getAnalysisUsage(AnalysisUsage &AU) const {
 
 char PhiValuesWrapperPass::ID = 0;
 
-INITIALIZE_PASS(PhiValuesWrapperPass, "phi-values", "Phi Values Analysis", false,
-                true)
+INITIALIZE_PASS(PhiValuesWrapperPass, "phi-values", "Phi Values Analysis",
+                false, true)
diff --git a/llvm/lib/Analysis/PostDominators.cpp b/llvm/lib/Analysis/PostDominators.cpp
index f01d51504d7cd4d..4f001dff4b910ec 100644
--- a/llvm/lib/Analysis/PostDominators.cpp
+++ b/llvm/lib/Analysis/PostDominators.cpp
@@ -85,11 +85,12 @@ void PostDominatorTreeWrapperPass::verifyAnalysis() const {
     assert(DT.verify(PostDominatorTree::VerificationLevel::Basic));
 }
 
-void PostDominatorTreeWrapperPass::print(raw_ostream &OS, const Module *) const {
+void PostDominatorTreeWrapperPass::print(raw_ostream &OS,
+                                         const Module *) const {
   DT.print(OS);
 }
 
-FunctionPass* llvm::createPostDomTree() {
+FunctionPass *llvm::createPostDomTree() {
   return new PostDominatorTreeWrapperPass();
 }
 
@@ -102,7 +103,7 @@ PostDominatorTree PostDominatorTreeAnalysis::run(Function &F,
 }
 
 PostDominatorTreePrinterPass::PostDominatorTreePrinterPass(raw_ostream &OS)
-  : OS(OS) {}
+    : OS(OS) {}
 
 PreservedAnalyses
 PostDominatorTreePrinterPass::run(Function &F, FunctionAnalysisManager &AM) {
diff --git a/llvm/lib/Analysis/PtrUseVisitor.cpp b/llvm/lib/Analysis/PtrUseVisitor.cpp
index 49304818d7efed6..1f301e69a66f8f0 100644
--- a/llvm/lib/Analysis/PtrUseVisitor.cpp
+++ b/llvm/lib/Analysis/PtrUseVisitor.cpp
@@ -20,10 +20,8 @@ using namespace llvm;
 void detail::PtrUseVisitorBase::enqueueUsers(Instruction &I) {
   for (Use &U : I.uses()) {
     if (VisitedUses.insert(&U).second) {
-      UseToVisit NewU = {
-        UseToVisit::UseAndIsOffsetKnownPair(&U, IsOffsetKnown),
-        Offset
-      };
+      UseToVisit NewU = {UseToVisit::UseAndIsOffsetKnownPair(&U, IsOffsetKnown),
+                         Offset};
       Worklist.push_back(std::move(NewU));
     }
   }
diff --git a/llvm/lib/Analysis/RegionInfo.cpp b/llvm/lib/Analysis/RegionInfo.cpp
index 9be23a374eca5ae..53539c022e1ecdb 100644
--- a/llvm/lib/Analysis/RegionInfo.cpp
+++ b/llvm/lib/Analysis/RegionInfo.cpp
@@ -33,38 +33,32 @@ template class RegionInfoBase<RegionTraits<Function>>;
 
 } // end namespace llvm
 
-STATISTIC(numRegions,       "The # of regions");
+STATISTIC(numRegions, "The # of regions");
 STATISTIC(numSimpleRegions, "The # of simple regions");
 
 // Always verify if expensive checking is enabled.
 
-static cl::opt<bool,true>
-VerifyRegionInfoX(
-  "verify-region-info",
-  cl::location(RegionInfoBase<RegionTraits<Function>>::VerifyRegionInfo),
-  cl::desc("Verify region info (time consuming)"));
-
-static cl::opt<Region::PrintStyle, true> printStyleX("print-region-style",
-  cl::location(RegionInfo::printStyle),
-  cl::Hidden,
-  cl::desc("style of printing regions"),
-  cl::values(
-    clEnumValN(Region::PrintNone, "none",  "print no details"),
-    clEnumValN(Region::PrintBB, "bb",
-               "print regions in detail with block_iterator"),
-    clEnumValN(Region::PrintRN, "rn",
-               "print regions in detail with element_iterator")));
+static cl::opt<bool, true> VerifyRegionInfoX(
+    "verify-region-info",
+    cl::location(RegionInfoBase<RegionTraits<Function>>::VerifyRegionInfo),
+    cl::desc("Verify region info (time consuming)"));
+
+static cl::opt<Region::PrintStyle, true> printStyleX(
+    "print-region-style", cl::location(RegionInfo::printStyle), cl::Hidden,
+    cl::desc("style of printing regions"),
+    cl::values(clEnumValN(Region::PrintNone, "none", "print no details"),
+               clEnumValN(Region::PrintBB, "bb",
+                          "print regions in detail with block_iterator"),
+               clEnumValN(Region::PrintRN, "rn",
+                          "print regions in detail with element_iterator")));
 
 //===----------------------------------------------------------------------===//
 // Region implementation
 //
 
-Region::Region(BasicBlock *Entry, BasicBlock *Exit,
-               RegionInfo* RI,
-               DominatorTree *DT, Region *Parent) :
-  RegionBase<RegionTraits<Function>>(Entry, Exit, RI, DT, Parent) {
-
-}
+Region::Region(BasicBlock *Entry, BasicBlock *Exit, RegionInfo *RI,
+               DominatorTree *DT, Region *Parent)
+    : RegionBase<RegionTraits<Function>>(Entry, Exit, RI, DT, Parent) {}
 
 Region::~Region() = default;
 
@@ -99,8 +93,7 @@ void RegionInfo::recalculate(Function &F, DominatorTree *DT_,
   PDT = PDT_;
   DF = DF_;
 
-  TopLevelRegion = new Region(&F.getEntryBlock(), nullptr,
-                              this, DT, nullptr);
+  TopLevelRegion = new Region(&F.getEntryBlock(), nullptr, this, DT, nullptr);
   updateStatistics(TopLevelRegion);
   calculate(F);
 }
@@ -132,13 +125,9 @@ bool RegionInfoPass::runOnFunction(Function &F) {
   return false;
 }
 
-void RegionInfoPass::releaseMemory() {
-  RI.releaseMemory();
-}
+void RegionInfoPass::releaseMemory() { RI.releaseMemory(); }
 
-void RegionInfoPass::verifyAnalysis() const {
-    RI.verifyAnalysis();
-}
+void RegionInfoPass::verifyAnalysis() const { RI.verifyAnalysis(); }
 
 void RegionInfoPass::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
@@ -152,20 +141,18 @@ void RegionInfoPass::print(raw_ostream &OS, const Module *) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void RegionInfoPass::dump() const {
-  RI.dump();
-}
+LLVM_DUMP_METHOD void RegionInfoPass::dump() const { RI.dump(); }
 #endif
 
 char RegionInfoPass::ID = 0;
 
 INITIALIZE_PASS_BEGIN(RegionInfoPass, "regions",
-                "Detect single entry single exit regions", true, true)
+                      "Detect single entry single exit regions", true, true)
 INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(PostDominatorTreeWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(DominanceFrontierWrapperPass)
 INITIALIZE_PASS_END(RegionInfoPass, "regions",
-                "Detect single entry single exit regions", true, true)
+                    "Detect single entry single exit regions", true, true)
 
 // Create methods available outside of this file, to use them
 // "include/llvm/LinkAllPasses.h". Otherwise the pass would be deleted by
@@ -173,9 +160,7 @@ INITIALIZE_PASS_END(RegionInfoPass, "regions",
 
 namespace llvm {
 
-  FunctionPass *createRegionInfoPass() {
-    return new RegionInfoPass();
-  }
+FunctionPass *createRegionInfoPass() { return new RegionInfoPass(); }
 
 } // end namespace llvm
 
@@ -195,8 +180,7 @@ RegionInfo RegionInfoAnalysis::run(Function &F, FunctionAnalysisManager &AM) {
   return RI;
 }
 
-RegionInfoPrinterPass::RegionInfoPrinterPass(raw_ostream &OS)
-  : OS(OS) {}
+RegionInfoPrinterPass::RegionInfoPrinterPass(raw_ostream &OS) : OS(OS) {}
 
 PreservedAnalyses RegionInfoPrinterPass::run(Function &F,
                                              FunctionAnalysisManager &AM) {
diff --git a/llvm/lib/Analysis/RegionPass.cpp b/llvm/lib/Analysis/RegionPass.cpp
index 9ea7d711918f5b7..af8b5021ddd03ac 100644
--- a/llvm/lib/Analysis/RegionPass.cpp
+++ b/llvm/lib/Analysis/RegionPass.cpp
@@ -75,11 +75,11 @@ bool RGPassManager::runOnFunction(Function &F) {
   // Walk Regions
   while (!RQ.empty()) {
 
-    CurrentRegion  = RQ.back();
+    CurrentRegion = RQ.back();
 
     // Run all passes on the current Region.
     for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
-      RegionPass *P = (RegionPass*)getContainedPass(Index);
+      RegionPass *P = (RegionPass *)getContainedPass(Index);
 
       if (isPassDebuggingExecutionsOrMore()) {
         dumpPassInfo(P, EXECUTION_MSG, ON_REGION_MSG,
@@ -113,7 +113,7 @@ bool RGPassManager::runOnFunction(Function &F) {
       if (isPassDebuggingExecutionsOrMore()) {
         if (LocalChanged)
           dumpPassInfo(P, MODIFICATION_MSG, ON_REGION_MSG,
-                                      CurrentRegion->getNameStr());
+                       CurrentRegion->getNameStr());
         dumpPreservedSet(P);
       }
 
@@ -149,7 +149,7 @@ bool RGPassManager::runOnFunction(Function &F) {
 
   // Finalization
   for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
-    RegionPass *P = (RegionPass*)getContainedPass(Index);
+    RegionPass *P = (RegionPass *)getContainedPass(Index);
     Changed |= P->doFinalization();
   }
 
@@ -163,11 +163,11 @@ bool RGPassManager::runOnFunction(Function &F) {
 
 /// Print passes managed by this manager
 void RGPassManager::dumpPassStructure(unsigned Offset) {
-  errs().indent(Offset*2) << "Region Pass Manager\n";
+  errs().indent(Offset * 2) << "Region Pass Manager\n";
   for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
     Pass *P = getContainedPass(Index);
     P->dumpPassStructure(Offset + 1);
-    dumpLastUses(P, Offset+1);
+    dumpLastUses(P, Offset + 1);
   }
 }
 
@@ -177,7 +177,7 @@ namespace {
 class PrintRegionPass : public RegionPass {
 private:
   std::string Banner;
-  raw_ostream &Out;       // raw_ostream to print on.
+  raw_ostream &Out; // raw_ostream to print on.
 
 public:
   static char ID;
@@ -206,7 +206,7 @@ class PrintRegionPass : public RegionPass {
 };
 
 char PrintRegionPass::ID = 0;
-}  //end anonymous namespace
+} // end anonymous namespace
 
 //===----------------------------------------------------------------------===//
 // RegionPass
@@ -224,18 +224,17 @@ void RegionPass::preparePassManager(PMStack &PMS) {
          PMS.top()->getPassManagerType() > PMT_RegionPassManager)
     PMS.pop();
 
-
   // If this pass is destroying high level information that is used
   // by other passes that are managed by LPM then do not insert
   // this pass in current LPM. Use new RGPassManager.
   if (PMS.top()->getPassManagerType() == PMT_RegionPassManager &&
-    !PMS.top()->preserveHigherLevelAnalysis(this))
+      !PMS.top()->preserveHigherLevelAnalysis(this))
     PMS.pop();
 }
 
 /// Assign pass manager to manage this pass.
 void RegionPass::assignPassManager(PMStack &PMS,
-                                 PassManagerType PreferredType) {
+                                   PassManagerType PreferredType) {
   // Find RGPassManager
   while (!PMS.empty() &&
          PMS.top()->getPassManagerType() > PMT_RegionPassManager)
@@ -245,10 +244,10 @@ void RegionPass::assignPassManager(PMStack &PMS,
 
   // Create new Region Pass Manager if it does not exist.
   if (PMS.top()->getPassManagerType() == PMT_RegionPassManager)
-    RGPM = (RGPassManager*)PMS.top();
+    RGPM = (RGPassManager *)PMS.top();
   else {
 
-    assert (!PMS.empty() && "Unable to create Region Pass Manager");
+    assert(!PMS.empty() && "Unable to create Region Pass Manager");
     PMDataManager *PMD = PMS.top();
 
     // [1] Create new Region Pass Manager
@@ -272,13 +271,11 @@ void RegionPass::assignPassManager(PMStack &PMS,
 
 /// Get the printer pass
 Pass *RegionPass::createPrinterPass(raw_ostream &O,
-                                  const std::string &Banner) const {
+                                    const std::string &Banner) const {
   return new PrintRegionPass(Banner, O);
 }
 
-static std::string getDescription(const Region &R) {
-  return "region";
-}
+static std::string getDescription(const Region &R) { return "region"; }
 
 bool RegionPass::skipRegion(Region &R) const {
   Function &F = *R.getEntry()->getParent();
diff --git a/llvm/lib/Analysis/RegionPrinter.cpp b/llvm/lib/Analysis/RegionPrinter.cpp
index fbd3d17febff487..66902beb42b2b8f 100644
--- a/llvm/lib/Analysis/RegionPrinter.cpp
+++ b/llvm/lib/Analysis/RegionPrinter.cpp
@@ -24,11 +24,10 @@ using namespace llvm;
 
 //===----------------------------------------------------------------------===//
 /// onlySimpleRegion - Show only the simple regions in the RegionViewer.
-static cl::opt<bool>
-onlySimpleRegions("only-simple-regions",
-                  cl::desc("Show only simple regions in the graphviz viewer"),
-                  cl::Hidden,
-                  cl::init(false));
+static cl::opt<bool> onlySimpleRegions(
+    "only-simple-regions",
+    cl::desc("Show only simple regions in the graphviz viewer"), cl::Hidden,
+    cl::init(false));
 
 namespace llvm {
 
@@ -49,8 +48,8 @@ std::string DOTGraphTraits<RegionNode *>::getNodeLabel(RegionNode *Node,
 template <>
 struct DOTGraphTraits<RegionInfo *> : public DOTGraphTraits<RegionNode *> {
 
-  DOTGraphTraits (bool isSimple = false)
-    : DOTGraphTraits<RegionNode*>(isSimple) {}
+  DOTGraphTraits(bool isSimple = false)
+      : DOTGraphTraits<RegionNode *>(isSimple) {}
 
   static std::string getGraphName(const RegionInfo *) { return "Region Graph"; }
 
@@ -90,31 +89,32 @@ struct DOTGraphTraits<RegionInfo *> : public DOTGraphTraits<RegionNode *> {
   static void printRegionCluster(const Region &R, GraphWriter<RegionInfo *> &GW,
                                  unsigned depth = 0) {
     raw_ostream &O = GW.getOStream();
-    O.indent(2 * depth) << "subgraph cluster_" << static_cast<const void*>(&R)
-      << " {\n";
+    O.indent(2 * depth) << "subgraph cluster_" << static_cast<const void *>(&R)
+                        << " {\n";
     O.indent(2 * (depth + 1)) << "label = \"\";\n";
 
     if (!onlySimpleRegions || R.isSimple()) {
       O.indent(2 * (depth + 1)) << "style = filled;\n";
-      O.indent(2 * (depth + 1)) << "color = "
-        << ((R.getDepth() * 2 % 12) + 1) << "\n";
+      O.indent(2 * (depth + 1))
+          << "color = " << ((R.getDepth() * 2 % 12) + 1) << "\n";
 
     } else {
       O.indent(2 * (depth + 1)) << "style = solid;\n";
-      O.indent(2 * (depth + 1)) << "color = "
-        << ((R.getDepth() * 2 % 12) + 2) << "\n";
+      O.indent(2 * (depth + 1))
+          << "color = " << ((R.getDepth() * 2 % 12) + 2) << "\n";
     }
 
     for (const auto &RI : R)
       printRegionCluster(*RI, GW, depth + 1);
 
-    const RegionInfo &RI = *static_cast<const RegionInfo*>(R.getRegionInfo());
+    const RegionInfo &RI = *static_cast<const RegionInfo *>(R.getRegionInfo());
 
     for (auto *BB : R.blocks())
       if (RI.getRegionFor(BB) == &R)
-        O.indent(2 * (depth + 1)) << "Node"
-          << static_cast<const void*>(RI.getTopLevelRegion()->getBBNode(BB))
-          << ";\n";
+        O.indent(2 * (depth + 1))
+            << "Node"
+            << static_cast<const void *>(RI.getTopLevelRegion()->getBBNode(BB))
+            << ";\n";
 
     O.indent(2 * depth) << "}\n";
   }
@@ -185,7 +185,7 @@ struct RegionOnlyViewer
 };
 char RegionOnlyViewer::ID = 0;
 
-} //end anonymous namespace
+} // end anonymous namespace
 
 INITIALIZE_PASS(RegionPrinter, "dot-regions",
                 "Print regions of function to 'dot' file", true, true)
@@ -195,12 +195,12 @@ INITIALIZE_PASS(
     "Print regions of function to 'dot' file (with no function bodies)", true,
     true)
 
-INITIALIZE_PASS(RegionViewer, "view-regions", "View regions of function",
-                true, true)
+INITIALIZE_PASS(RegionViewer, "view-regions", "View regions of function", true,
+                true)
 
 INITIALIZE_PASS(RegionOnlyViewer, "view-regions-only",
-                "View regions of function (with no function bodies)",
-                true, true)
+                "View regions of function (with no function bodies)", true,
+                true)
 
 FunctionPass *llvm::createRegionPrinterPass() { return new RegionPrinter(); }
 
@@ -208,11 +208,9 @@ FunctionPass *llvm::createRegionOnlyPrinterPass() {
   return new RegionOnlyPrinter();
 }
 
-FunctionPass* llvm::createRegionViewerPass() {
-  return new RegionViewer();
-}
+FunctionPass *llvm::createRegionViewerPass() { return new RegionViewer(); }
 
-FunctionPass* llvm::createRegionOnlyViewerPass() {
+FunctionPass *llvm::createRegionOnlyViewerPass() {
   return new RegionOnlyViewer();
 }
 
diff --git a/llvm/lib/Analysis/ScalarEvolution.cpp b/llvm/lib/Analysis/ScalarEvolution.cpp
index f951068c4c79c09..fad30500c4aeebc 100644
--- a/llvm/lib/Analysis/ScalarEvolution.cpp
+++ b/llvm/lib/Analysis/ScalarEvolution.cpp
@@ -212,20 +212,20 @@ static cl::opt<unsigned>
                   cl::desc("Max coefficients in AddRec during evolving"),
                   cl::init(8));
 
-static cl::opt<unsigned>
-    HugeExprThreshold("scalar-evolution-huge-expr-threshold", cl::Hidden,
-                  cl::desc("Size of the expression which is considered huge"),
-                  cl::init(4096));
+static cl::opt<unsigned> HugeExprThreshold(
+    "scalar-evolution-huge-expr-threshold", cl::Hidden,
+    cl::desc("Size of the expression which is considered huge"),
+    cl::init(4096));
 
 static cl::opt<unsigned> RangeIterThreshold(
     "scev-range-iter-threshold", cl::Hidden,
     cl::desc("Threshold for switching to iteratively computing SCEV ranges"),
     cl::init(32));
 
-static cl::opt<bool>
-ClassifyExpressions("scalar-evolution-classify-expressions",
-    cl::Hidden, cl::init(true),
-    cl::desc("When printing analysis, include information on every instruction"));
+static cl::opt<bool> ClassifyExpressions(
+    "scalar-evolution-classify-expressions", cl::Hidden, cl::init(true),
+    cl::desc(
+        "When printing analysis, include information on every instruction"));
 
 static cl::opt<bool> UseExpensiveRangeSharpening(
     "scalar-evolution-use-expensive-range-sharpening", cl::Hidden,
@@ -289,15 +289,15 @@ void SCEV::print(raw_ostream &OS) const {
   case scZeroExtend: {
     const SCEVZeroExtendExpr *ZExt = cast<SCEVZeroExtendExpr>(this);
     const SCEV *Op = ZExt->getOperand();
-    OS << "(zext " << *Op->getType() << " " << *Op << " to "
-       << *ZExt->getType() << ")";
+    OS << "(zext " << *Op->getType() << " " << *Op << " to " << *ZExt->getType()
+       << ")";
     return;
   }
   case scSignExtend: {
     const SCEVSignExtendExpr *SExt = cast<SCEVSignExtendExpr>(this);
     const SCEV *Op = SExt->getOperand();
-    OS << "(sext " << *Op->getType() << " " << *Op << " to "
-       << *SExt->getType() << ")";
+    OS << "(sext " << *Op->getType() << " " << *Op << " to " << *SExt->getType()
+       << ")";
     return;
   }
   case scAddRecExpr: {
@@ -327,10 +327,18 @@ void SCEV::print(raw_ostream &OS) const {
     const SCEVNAryExpr *NAry = cast<SCEVNAryExpr>(this);
     const char *OpStr = nullptr;
     switch (NAry->getSCEVType()) {
-    case scAddExpr: OpStr = " + "; break;
-    case scMulExpr: OpStr = " * "; break;
-    case scUMaxExpr: OpStr = " umax "; break;
-    case scSMaxExpr: OpStr = " smax "; break;
+    case scAddExpr:
+      OpStr = " + ";
+      break;
+    case scMulExpr:
+      OpStr = " * ";
+      break;
+    case scUMaxExpr:
+      OpStr = " umax ";
+      break;
+    case scSMaxExpr:
+      OpStr = " smax ";
+      break;
     case scUMinExpr:
       OpStr = " umin ";
       break;
@@ -459,18 +467,20 @@ bool SCEV::isAllOnesValue() const {
 
 bool SCEV::isNonConstantNegative() const {
   const SCEVMulExpr *Mul = dyn_cast<SCEVMulExpr>(this);
-  if (!Mul) return false;
+  if (!Mul)
+    return false;
 
   // If there is a constant factor, it will be first.
   const SCEVConstant *SC = dyn_cast<SCEVConstant>(Mul->getOperand(0));
-  if (!SC) return false;
+  if (!SC)
+    return false;
 
   // Return true if the value is negative, this matches things like (-42 * V).
   return SC->getAPInt().isNegative();
 }
 
-SCEVCouldNotCompute::SCEVCouldNotCompute() :
-  SCEV(FoldingSetNodeIDRef(), scCouldNotCompute, 0) {}
+SCEVCouldNotCompute::SCEVCouldNotCompute()
+    : SCEV(FoldingSetNodeIDRef(), scCouldNotCompute, 0) {}
 
 bool SCEVCouldNotCompute::classof(const SCEV *S) {
   return S->getSCEVType() == scCouldNotCompute;
@@ -481,7 +491,8 @@ const SCEV *ScalarEvolution::getConstant(ConstantInt *V) {
   ID.AddInteger(scConstant);
   ID.AddPointer(V);
   void *IP = nullptr;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
   SCEV *S = new (SCEVAllocator) SCEVConstant(ID.Intern(SCEVAllocator), V);
   UniqueSCEVs.InsertNode(S, IP);
   return S;
@@ -491,8 +502,7 @@ const SCEV *ScalarEvolution::getConstant(const APInt &Val) {
   return getConstant(ConstantInt::get(getContext(), Val));
 }
 
-const SCEV *
-ScalarEvolution::getConstant(Type *Ty, uint64_t V, bool isSigned) {
+const SCEV *ScalarEvolution::getConstant(Type *Ty, uint64_t V, bool isSigned) {
   IntegerType *ITy = cast<IntegerType>(getEffectiveSCEVType(Ty));
   return getConstant(ConstantInt::get(ITy, V, isSigned));
 }
@@ -790,9 +800,10 @@ CompareSCEVComplexity(EquivalenceClasses<const SCEV *> &EqCacheSCEV,
 /// results from this routine.  In other words, we don't want the results of
 /// this to depend on where the addresses of various SCEV objects happened to
 /// land in memory.
-static void GroupByComplexity(SmallVectorImpl<const SCEV *> &Ops,
-                              LoopInfo *LI, DominatorTree &DT) {
-  if (Ops.size() < 2) return;  // Noop
+static void GroupByComplexity(SmallVectorImpl<const SCEV *> &Ops, LoopInfo *LI,
+                              DominatorTree &DT) {
+  if (Ops.size() < 2)
+    return; // Noop
 
   EquivalenceClasses<const SCEV *> EqCacheSCEV;
   EquivalenceClasses<const Value *> EqCacheValue;
@@ -821,18 +832,20 @@ static void GroupByComplexity(SmallVectorImpl<const SCEV *> &Ops,
   // complexity.  Note that this is, at worst, N^2, but the vector is likely to
   // be extremely short in practice.  Note that we take this approach because we
   // do not want to depend on the addresses of the objects we are grouping.
-  for (unsigned i = 0, e = Ops.size(); i != e-2; ++i) {
+  for (unsigned i = 0, e = Ops.size(); i != e - 2; ++i) {
     const SCEV *S = Ops[i];
     unsigned Complexity = S->getSCEVType();
 
     // If there are any objects of the same complexity and same value as this
     // one, group them.
-    for (unsigned j = i+1; j != e && Ops[j]->getSCEVType() == Complexity; ++j) {
+    for (unsigned j = i + 1; j != e && Ops[j]->getSCEVType() == Complexity;
+         ++j) {
       if (Ops[j] == S) { // Found a duplicate.
         // Move it to immediately after i'th element.
-        std::swap(Ops[i+1], Ops[j]);
-        ++i;   // no need to rescan it.
-        if (i == e-2) return;  // Done!
+        std::swap(Ops[i + 1], Ops[j]);
+        ++i; // no need to rescan it.
+        if (i == e - 2)
+          return; // Done!
       }
     }
   }
@@ -852,8 +865,7 @@ static bool hasHugeExpression(ArrayRef<const SCEV *> Ops) {
 
 /// Compute BC(It, K).  The result has width W.  Assume, K > 0.
 static const SCEV *BinomialCoefficient(const SCEV *It, unsigned K,
-                                       ScalarEvolution &SE,
-                                       Type *ResultTy) {
+                                       ScalarEvolution &SE, Type *ResultTy) {
   // Handle the simplest case efficiently.
   if (K == 1)
     return SE.getTruncateOrZeroExtend(It, ResultTy);
@@ -937,19 +949,19 @@ static const SCEV *BinomialCoefficient(const SCEV *It, unsigned K,
   // Calculate the multiplicative inverse of K! / 2^T;
   // this multiplication factor will perform the exact division by
   // K! / 2^T.
-  APInt Mod = APInt::getSignedMinValue(W+1);
-  APInt MultiplyFactor = OddFactorial.zext(W+1);
+  APInt Mod = APInt::getSignedMinValue(W + 1);
+  APInt MultiplyFactor = OddFactorial.zext(W + 1);
   MultiplyFactor = MultiplyFactor.multiplicativeInverse(Mod);
   MultiplyFactor = MultiplyFactor.trunc(W);
 
   // Calculate the product, at width T+W
-  IntegerType *CalculationTy = IntegerType::get(SE.getContext(),
-                                                      CalculationBits);
+  IntegerType *CalculationTy =
+      IntegerType::get(SE.getContext(), CalculationBits);
   const SCEV *Dividend = SE.getTruncateOrZeroExtend(It, CalculationTy);
   for (unsigned i = 1; i != K; ++i) {
     const SCEV *S = SE.getMinusSCEV(It, SE.getConstant(It->getType(), i));
-    Dividend = SE.getMulExpr(Dividend,
-                             SE.getTruncateOrZeroExtend(S, CalculationTy));
+    Dividend =
+        SE.getMulExpr(Dividend, SE.getTruncateOrZeroExtend(S, CalculationTy));
   }
 
   // Divide by 2^T
@@ -974,9 +986,9 @@ const SCEV *SCEVAddRecExpr::evaluateAtIteration(const SCEV *It,
   return evaluateAtIteration(operands(), It, SE);
 }
 
-const SCEV *
-SCEVAddRecExpr::evaluateAtIteration(ArrayRef<const SCEV *> Operands,
-                                    const SCEV *It, ScalarEvolution &SE) {
+const SCEV *SCEVAddRecExpr::evaluateAtIteration(ArrayRef<const SCEV *> Operands,
+                                                const SCEV *It,
+                                                ScalarEvolution &SE) {
   assert(Operands.size() > 0);
   const SCEV *Result = Operands[0];
   for (unsigned i = 1, e = Operands.size(); i != e; ++i) {
@@ -1134,8 +1146,7 @@ const SCEV *ScalarEvolution::getTruncateExpr(const SCEV *Op, Type *Ty,
                                              unsigned Depth) {
   assert(getTypeSizeInBits(Op->getType()) > getTypeSizeInBits(Ty) &&
          "This is not a truncating conversion!");
-  assert(isSCEVable(Ty) &&
-         "This is not a conversion to a SCEVable type!");
+  assert(isSCEVable(Ty) && "This is not a conversion to a SCEVable type!");
   assert(!Op->getType()->isPointerTy() && "Can't truncate pointer!");
   Ty = getEffectiveSCEVType(Ty);
 
@@ -1144,12 +1155,13 @@ const SCEV *ScalarEvolution::getTruncateExpr(const SCEV *Op, Type *Ty,
   ID.AddPointer(Op);
   ID.AddPointer(Ty);
   void *IP = nullptr;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
 
   // Fold if the operand is constant.
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Op))
     return getConstant(
-      cast<ConstantInt>(ConstantExpr::getTrunc(SC->getValue(), Ty)));
+        cast<ConstantInt>(ConstantExpr::getTrunc(SC->getValue(), Ty)));
 
   // trunc(trunc(x)) --> trunc(x)
   if (const SCEVTruncateExpr *ST = dyn_cast<SCEVTruncateExpr>(Op))
@@ -1217,8 +1229,8 @@ const SCEV *ScalarEvolution::getTruncateExpr(const SCEV *Op, Type *Ty,
   // The cast wasn't folded; create an explicit cast node. We can reuse
   // the existing insert position since if we get here, we won't have
   // made any changes which would invalidate it.
-  SCEV *S = new (SCEVAllocator) SCEVTruncateExpr(ID.Intern(SCEVAllocator),
-                                                 Op, Ty);
+  SCEV *S =
+      new (SCEVAllocator) SCEVTruncateExpr(ID.Intern(SCEVAllocator), Op, Ty);
   UniqueSCEVs.InsertNode(S, IP);
   registerUser(S, Op);
   return S;
@@ -1290,8 +1302,9 @@ struct ExtendOpTraits<SCEVSignExtendExpr> : public ExtendOpTraitsBase {
   }
 };
 
-const ExtendOpTraitsBase::GetExtendExprTy ExtendOpTraits<
-    SCEVSignExtendExpr>::GetExtendExpr = &ScalarEvolution::getSignExtendExpr;
+const ExtendOpTraitsBase::GetExtendExprTy
+    ExtendOpTraits<SCEVSignExtendExpr>::GetExtendExpr =
+        &ScalarEvolution::getSignExtendExpr;
 
 template <>
 struct ExtendOpTraits<SCEVZeroExtendExpr> : public ExtendOpTraitsBase {
@@ -1306,8 +1319,9 @@ struct ExtendOpTraits<SCEVZeroExtendExpr> : public ExtendOpTraitsBase {
   }
 };
 
-const ExtendOpTraitsBase::GetExtendExprTy ExtendOpTraits<
-    SCEVZeroExtendExpr>::GetExtendExpr = &ScalarEvolution::getZeroExtendExpr;
+const ExtendOpTraitsBase::GetExtendExprTy
+    ExtendOpTraits<SCEVZeroExtendExpr>::GetExtendExpr =
+        &ScalarEvolution::getZeroExtendExpr;
 
 } // end anonymous namespace
 
@@ -1349,7 +1363,7 @@ static const SCEV *getPreStartForExtend(const SCEVAddRecExpr *AR, Type *Ty,
 
   // 1. NSW/NUW flags on the step increment.
   auto PreStartFlags =
-    ScalarEvolution::maskFlags(SA->getNoWrapFlags(), SCEV::FlagNUW);
+      ScalarEvolution::maskFlags(SA->getNoWrapFlags(), SCEV::FlagNUW);
   const SCEV *PreStart = SE->getAddExpr(DiffOps, PreStartFlags);
   const SCEVAddRecExpr *PreAR = dyn_cast<SCEVAddRecExpr>(
       SE->getAddRecExpr(PreStart, Step, L, SCEV::FlagAnyWrap));
@@ -1394,17 +1408,16 @@ static const SCEV *getPreStartForExtend(const SCEVAddRecExpr *AR, Type *Ty,
 // Get the normalized zero or sign extended expression for this AddRec's Start.
 template <typename ExtendOpTy>
 static const SCEV *getExtendAddRecStart(const SCEVAddRecExpr *AR, Type *Ty,
-                                        ScalarEvolution *SE,
-                                        unsigned Depth) {
+                                        ScalarEvolution *SE, unsigned Depth) {
   auto GetExtendExpr = ExtendOpTraits<ExtendOpTy>::GetExtendExpr;
 
   const SCEV *PreStart = getPreStartForExtend<ExtendOpTy>(AR, Ty, SE, Depth);
   if (!PreStart)
     return (SE->*GetExtendExpr)(AR->getStart(), Ty, Depth);
 
-  return SE->getAddExpr((SE->*GetExtendExpr)(AR->getStepRecurrence(*SE), Ty,
-                                             Depth),
-                        (SE->*GetExtendExpr)(PreStart, Ty, Depth));
+  return SE->getAddExpr(
+      (SE->*GetExtendExpr)(AR->getStepRecurrence(*SE), Ty, Depth),
+      (SE->*GetExtendExpr)(PreStart, Ty, Depth));
 }
 
 // Try to prove away overflow by looking at "nearby" add recurrences.  A
@@ -1465,16 +1478,16 @@ bool ScalarEvolution::proveNoWrapByVaryingStart(const SCEV *Start,
     ID.AddPointer(L);
     void *IP = nullptr;
     const auto *PreAR =
-      static_cast<SCEVAddRecExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
+        static_cast<SCEVAddRecExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
 
     // Give up if we don't already have the add recurrence we need because
     // actually constructing an add recurrence is relatively expensive.
-    if (PreAR && PreAR->getNoWrapFlags(WrapType)) {  // proves (2)
+    if (PreAR && PreAR->getNoWrapFlags(WrapType)) { // proves (2)
       const SCEV *DeltaS = getConstant(StartC->getType(), Delta);
       ICmpInst::Predicate Pred = ICmpInst::BAD_ICMP_PREDICATE;
       const SCEV *Limit = ExtendOpTraits<ExtendOpTy>::getOverflowLimitForStep(
           DeltaS, &Pred, this);
-      if (Limit && isKnownPredicate(Pred, PreAR, Limit))  // proves (1)
+      if (Limit && isKnownPredicate(Pred, PreAR, Limit)) // proves (1)
         return true;
     }
   }
@@ -1542,12 +1555,11 @@ static void insertFoldCacheEntry(
   R.first->second.push_back(ID);
 }
 
-const SCEV *
-ScalarEvolution::getZeroExtendExpr(const SCEV *Op, Type *Ty, unsigned Depth) {
+const SCEV *ScalarEvolution::getZeroExtendExpr(const SCEV *Op, Type *Ty,
+                                               unsigned Depth) {
   assert(getTypeSizeInBits(Op->getType()) < getTypeSizeInBits(Ty) &&
          "This is not an extending conversion!");
-  assert(isSCEVable(Ty) &&
-         "This is not a conversion to a SCEVable type!");
+  assert(isSCEVable(Ty) && "This is not a conversion to a SCEVable type!");
   assert(!Op->getType()->isPointerTy() && "Can't extend pointer!");
   Ty = getEffectiveSCEVType(Ty);
 
@@ -1572,7 +1584,7 @@ const SCEV *ScalarEvolution::getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
   // Fold if the operand is constant.
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Op))
     return getConstant(
-      cast<ConstantInt>(ConstantExpr::getZExt(SC->getValue(), Ty)));
+        cast<ConstantInt>(ConstantExpr::getZExt(SC->getValue(), Ty)));
 
   // zext(zext(x)) --> zext(x)
   if (const SCEVZeroExtendExpr *SZ = dyn_cast<SCEVZeroExtendExpr>(Op))
@@ -1585,10 +1597,11 @@ const SCEV *ScalarEvolution::getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
   ID.AddPointer(Op);
   ID.AddPointer(Ty);
   void *IP = nullptr;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
   if (Depth > MaxCastDepth) {
-    SCEV *S = new (SCEVAllocator) SCEVZeroExtendExpr(ID.Intern(SCEVAllocator),
-                                                     Op, Ty);
+    SCEV *S = new (SCEVAllocator)
+        SCEVZeroExtendExpr(ID.Intern(SCEVAllocator), Op, Ty);
     UniqueSCEVs.InsertNode(S, IP);
     registerUser(S, Op);
     return S;
@@ -1648,21 +1661,20 @@ const SCEV *ScalarEvolution::getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
         if (MaxBECount == RecastedMaxBECount) {
           Type *WideTy = IntegerType::get(getContext(), BitWidth * 2);
           // Check whether Start+Step*MaxBECount has no unsigned overflow.
-          const SCEV *ZMul = getMulExpr(CastedMaxBECount, Step,
-                                        SCEV::FlagAnyWrap, Depth + 1);
-          const SCEV *ZAdd = getZeroExtendExpr(getAddExpr(Start, ZMul,
-                                                          SCEV::FlagAnyWrap,
-                                                          Depth + 1),
-                                               WideTy, Depth + 1);
+          const SCEV *ZMul =
+              getMulExpr(CastedMaxBECount, Step, SCEV::FlagAnyWrap, Depth + 1);
+          const SCEV *ZAdd = getZeroExtendExpr(
+              getAddExpr(Start, ZMul, SCEV::FlagAnyWrap, Depth + 1), WideTy,
+              Depth + 1);
           const SCEV *WideStart = getZeroExtendExpr(Start, WideTy, Depth + 1);
           const SCEV *WideMaxBECount =
-            getZeroExtendExpr(CastedMaxBECount, WideTy, Depth + 1);
+              getZeroExtendExpr(CastedMaxBECount, WideTy, Depth + 1);
           const SCEV *OperandExtendedAdd =
-            getAddExpr(WideStart,
-                       getMulExpr(WideMaxBECount,
-                                  getZeroExtendExpr(Step, WideTy, Depth + 1),
-                                  SCEV::FlagAnyWrap, Depth + 1),
-                       SCEV::FlagAnyWrap, Depth + 1);
+              getAddExpr(WideStart,
+                         getMulExpr(WideMaxBECount,
+                                    getZeroExtendExpr(Step, WideTy, Depth + 1),
+                                    SCEV::FlagAnyWrap, Depth + 1),
+                         SCEV::FlagAnyWrap, Depth + 1);
           if (ZAdd == OperandExtendedAdd) {
             // Cache knowledge of AR NUW, which is propagated to this AddRec.
             setNoWrapFlags(const_cast<SCEVAddRecExpr *>(AR), SCEV::FlagNUW);
@@ -1675,11 +1687,11 @@ const SCEV *ScalarEvolution::getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
           // Similar to above, only this time treat the step value as signed.
           // This covers loops that count down.
           OperandExtendedAdd =
-            getAddExpr(WideStart,
-                       getMulExpr(WideMaxBECount,
-                                  getSignExtendExpr(Step, WideTy, Depth + 1),
-                                  SCEV::FlagAnyWrap, Depth + 1),
-                       SCEV::FlagAnyWrap, Depth + 1);
+              getAddExpr(WideStart,
+                         getMulExpr(WideMaxBECount,
+                                    getSignExtendExpr(Step, WideTy, Depth + 1),
+                                    SCEV::FlagAnyWrap, Depth + 1),
+                         SCEV::FlagAnyWrap, Depth + 1);
           if (ZAdd == OperandExtendedAdd) {
             // Cache knowledge of AR NW, which is propagated to this AddRec.
             // Negative step causes unsigned wrap, but it still can't self-wrap.
@@ -1870,20 +1882,20 @@ const SCEV *ScalarEvolution::getZeroExtendExprImpl(const SCEV *Op, Type *Ty,
 
   // The cast wasn't folded; create an explicit cast node.
   // Recompute the insert position, as it may have been invalidated.
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = new (SCEVAllocator) SCEVZeroExtendExpr(ID.Intern(SCEVAllocator),
-                                                   Op, Ty);
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
+  SCEV *S =
+      new (SCEVAllocator) SCEVZeroExtendExpr(ID.Intern(SCEVAllocator), Op, Ty);
   UniqueSCEVs.InsertNode(S, IP);
   registerUser(S, Op);
   return S;
 }
 
-const SCEV *
-ScalarEvolution::getSignExtendExpr(const SCEV *Op, Type *Ty, unsigned Depth) {
+const SCEV *ScalarEvolution::getSignExtendExpr(const SCEV *Op, Type *Ty,
+                                               unsigned Depth) {
   assert(getTypeSizeInBits(Op->getType()) < getTypeSizeInBits(Ty) &&
          "This is not an extending conversion!");
-  assert(isSCEVable(Ty) &&
-         "This is not a conversion to a SCEVable type!");
+  assert(isSCEVable(Ty) && "This is not a conversion to a SCEVable type!");
   assert(!Op->getType()->isPointerTy() && "Can't extend pointer!");
   Ty = getEffectiveSCEVType(Ty);
 
@@ -1909,7 +1921,7 @@ const SCEV *ScalarEvolution::getSignExtendExprImpl(const SCEV *Op, Type *Ty,
   // Fold if the operand is constant.
   if (const SCEVConstant *SC = dyn_cast<SCEVConstant>(Op))
     return getConstant(
-      cast<ConstantInt>(ConstantExpr::getSExt(SC->getValue(), Ty)));
+        cast<ConstantInt>(ConstantExpr::getSExt(SC->getValue(), Ty)));
 
   // sext(sext(x)) --> sext(x)
   if (const SCEVSignExtendExpr *SS = dyn_cast<SCEVSignExtendExpr>(Op))
@@ -1926,11 +1938,12 @@ const SCEV *ScalarEvolution::getSignExtendExprImpl(const SCEV *Op, Type *Ty,
   ID.AddPointer(Op);
   ID.AddPointer(Ty);
   void *IP = nullptr;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
   // Limit recursion depth.
   if (Depth > MaxCastDepth) {
-    SCEV *S = new (SCEVAllocator) SCEVSignExtendExpr(ID.Intern(SCEVAllocator),
-                                                     Op, Ty);
+    SCEV *S = new (SCEVAllocator)
+        SCEVSignExtendExpr(ID.Intern(SCEVAllocator), Op, Ty);
     UniqueSCEVs.InsertNode(S, IP);
     registerUser(S, Op);
     return S;
@@ -2024,21 +2037,20 @@ const SCEV *ScalarEvolution::getSignExtendExprImpl(const SCEV *Op, Type *Ty,
         if (MaxBECount == RecastedMaxBECount) {
           Type *WideTy = IntegerType::get(getContext(), BitWidth * 2);
           // Check whether Start+Step*MaxBECount has no signed overflow.
-          const SCEV *SMul = getMulExpr(CastedMaxBECount, Step,
-                                        SCEV::FlagAnyWrap, Depth + 1);
-          const SCEV *SAdd = getSignExtendExpr(getAddExpr(Start, SMul,
-                                                          SCEV::FlagAnyWrap,
-                                                          Depth + 1),
-                                               WideTy, Depth + 1);
+          const SCEV *SMul =
+              getMulExpr(CastedMaxBECount, Step, SCEV::FlagAnyWrap, Depth + 1);
+          const SCEV *SAdd = getSignExtendExpr(
+              getAddExpr(Start, SMul, SCEV::FlagAnyWrap, Depth + 1), WideTy,
+              Depth + 1);
           const SCEV *WideStart = getSignExtendExpr(Start, WideTy, Depth + 1);
           const SCEV *WideMaxBECount =
-            getZeroExtendExpr(CastedMaxBECount, WideTy, Depth + 1);
+              getZeroExtendExpr(CastedMaxBECount, WideTy, Depth + 1);
           const SCEV *OperandExtendedAdd =
-            getAddExpr(WideStart,
-                       getMulExpr(WideMaxBECount,
-                                  getSignExtendExpr(Step, WideTy, Depth + 1),
-                                  SCEV::FlagAnyWrap, Depth + 1),
-                       SCEV::FlagAnyWrap, Depth + 1);
+              getAddExpr(WideStart,
+                         getMulExpr(WideMaxBECount,
+                                    getSignExtendExpr(Step, WideTy, Depth + 1),
+                                    SCEV::FlagAnyWrap, Depth + 1),
+                         SCEV::FlagAnyWrap, Depth + 1);
           if (SAdd == OperandExtendedAdd) {
             // Cache knowledge of AR NSW, which is propagated to this AddRec.
             setNoWrapFlags(const_cast<SCEVAddRecExpr *>(AR), SCEV::FlagNSW);
@@ -2051,11 +2063,11 @@ const SCEV *ScalarEvolution::getSignExtendExprImpl(const SCEV *Op, Type *Ty,
           // Similar to above, only this time treat the step value as unsigned.
           // This covers loops that count up with an unsigned step.
           OperandExtendedAdd =
-            getAddExpr(WideStart,
-                       getMulExpr(WideMaxBECount,
-                                  getZeroExtendExpr(Step, WideTy, Depth + 1),
-                                  SCEV::FlagAnyWrap, Depth + 1),
-                       SCEV::FlagAnyWrap, Depth + 1);
+              getAddExpr(WideStart,
+                         getMulExpr(WideMaxBECount,
+                                    getZeroExtendExpr(Step, WideTy, Depth + 1),
+                                    SCEV::FlagAnyWrap, Depth + 1),
+                         SCEV::FlagAnyWrap, Depth + 1);
           if (SAdd == OperandExtendedAdd) {
             // If AR wraps around then
             //
@@ -2134,11 +2146,12 @@ const SCEV *ScalarEvolution::getSignExtendExprImpl(const SCEV *Op, Type *Ty,
 
   // The cast wasn't folded; create an explicit cast node.
   // Recompute the insert position, as it may have been invalidated.
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = new (SCEVAllocator) SCEVSignExtendExpr(ID.Intern(SCEVAllocator),
-                                                   Op, Ty);
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
+  SCEV *S =
+      new (SCEVAllocator) SCEVSignExtendExpr(ID.Intern(SCEVAllocator), Op, Ty);
   UniqueSCEVs.InsertNode(S, IP);
-  registerUser(S, { Op });
+  registerUser(S, {Op});
   return S;
 }
 
@@ -2160,12 +2173,10 @@ const SCEV *ScalarEvolution::getCastExpr(SCEVTypes Kind, const SCEV *Op,
 
 /// getAnyExtendExpr - Return a SCEV for the given operand extended with
 /// unspecified bits out to the given type.
-const SCEV *ScalarEvolution::getAnyExtendExpr(const SCEV *Op,
-                                              Type *Ty) {
+const SCEV *ScalarEvolution::getAnyExtendExpr(const SCEV *Op, Type *Ty) {
   assert(getTypeSizeInBits(Op->getType()) < getTypeSizeInBits(Ty) &&
          "This is not an extending conversion!");
-  assert(isSCEVable(Ty) &&
-         "This is not a conversion to a SCEVable type!");
+  assert(isSCEVable(Ty) && "This is not a conversion to a SCEVable type!");
   Ty = getEffectiveSCEVType(Ty);
 
   // Sign-extend negative constants.
@@ -2230,12 +2241,12 @@ const SCEV *ScalarEvolution::getAnyExtendExpr(const SCEV *Op,
 /// may be exposed. This helps getAddRecExpr short-circuit extra work in
 /// the common case where no interesting opportunities are present, and
 /// is also used as a check to avoid infinite recursion.
-static bool
-CollectAddOperandsWithScales(DenseMap<const SCEV *, APInt> &M,
-                             SmallVectorImpl<const SCEV *> &NewOps,
-                             APInt &AccumulatedConstant,
-                             ArrayRef<const SCEV *> Ops, const APInt &Scale,
-                             ScalarEvolution &SE) {
+static bool CollectAddOperandsWithScales(DenseMap<const SCEV *, APInt> &M,
+                                         SmallVectorImpl<const SCEV *> &NewOps,
+                                         APInt &AccumulatedConstant,
+                                         ArrayRef<const SCEV *> Ops,
+                                         const APInt &Scale,
+                                         ScalarEvolution &SE) {
   bool Interesting = false;
 
   // Iterate over the add operands. They are sorted, with constants first.
@@ -2258,9 +2269,8 @@ CollectAddOperandsWithScales(DenseMap<const SCEV *, APInt> &M,
       if (Mul->getNumOperands() == 2 && isa<SCEVAddExpr>(Mul->getOperand(1))) {
         // A multiplication of a constant with another add; recurse.
         const SCEVAddExpr *Add = cast<SCEVAddExpr>(Mul->getOperand(1));
-        Interesting |=
-          CollectAddOperandsWithScales(M, NewOps, AccumulatedConstant,
-                                       Add->operands(), NewScale, SE);
+        Interesting |= CollectAddOperandsWithScales(
+            M, NewOps, AccumulatedConstant, Add->operands(), NewScale, SE);
       } else {
         // A multiplication of a constant with some other value. Update
         // the map.
@@ -2418,10 +2428,10 @@ ScalarEvolution::getStrengthenedNoWrapFlagsFromBinOp(
 // We're trying to construct a SCEV of type `Type' with `Ops' as operands and
 // `OldFlags' as can't-wrap behavior.  Infer a more aggressive set of
 // can't-overflow flags for the operation if possible.
-static SCEV::NoWrapFlags
-StrengthenNoWrapFlags(ScalarEvolution *SE, SCEVTypes Type,
-                      const ArrayRef<const SCEV *> Ops,
-                      SCEV::NoWrapFlags Flags) {
+static SCEV::NoWrapFlags StrengthenNoWrapFlags(ScalarEvolution *SE,
+                                               SCEVTypes Type,
+                                               const ArrayRef<const SCEV *> Ops,
+                                               SCEV::NoWrapFlags Flags) {
   using namespace std::placeholders;
 
   using OBO = OverflowingBinaryOperator;
@@ -2512,7 +2522,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
   assert(!(OrigFlags & ~(SCEV::FlagNUW | SCEV::FlagNSW)) &&
          "only nuw or nsw allowed");
   assert(!Ops.empty() && "Cannot get empty add!");
-  if (Ops.size() == 1) return Ops[0];
+  if (Ops.size() == 1)
+    return Ops[0];
 #ifndef NDEBUG
   Type *ETy = getEffectiveSCEVType(Ops[0]->getType());
   for (unsigned i = 1, e = Ops.size(); i != e; ++i)
@@ -2534,8 +2545,9 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
     while (const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(Ops[Idx])) {
       // We found two constants, fold them together!
       Ops[0] = getConstant(LHSC->getAPInt() + RHSC->getAPInt());
-      if (Ops.size() == 2) return Ops[0];
-      Ops.erase(Ops.begin()+1);  // Erase the folded element
+      if (Ops.size() == 2)
+        return Ops[0];
+      Ops.erase(Ops.begin() + 1); // Erase the folded element
       LHSC = cast<SCEVConstant>(Ops[0]);
     }
 
@@ -2545,7 +2557,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
       --Idx;
     }
 
-    if (Ops.size() == 1) return Ops[0];
+    if (Ops.size() == 1)
+      return Ops[0];
   }
 
   // Delay expensive flag strengthening until necessary.
@@ -2570,11 +2583,11 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
   // sorted the list, these values are required to be adjacent.
   Type *Ty = Ops[0]->getType();
   bool FoundMatch = false;
-  for (unsigned i = 0, e = Ops.size(); i != e-1; ++i)
-    if (Ops[i] == Ops[i+1]) {      //  X + Y + Y  -->  X + Y*2
+  for (unsigned i = 0, e = Ops.size(); i != e - 1; ++i)
+    if (Ops[i] == Ops[i + 1]) { //  X + Y + Y  -->  X + Y*2
       // Scan ahead to count how many equal operands there are.
       unsigned Count = 2;
-      while (i+Count != e && Ops[i+Count] == Ops[i])
+      while (i + Count != e && Ops[i + Count] == Ops[i])
         ++Count;
       // Merge the values into a multiply.
       const SCEV *Scale = getConstant(Ty, Count);
@@ -2582,8 +2595,9 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
       if (Ops.size() == Count)
         return Mul;
       Ops[i] = Mul;
-      Ops.erase(Ops.begin()+i+1, Ops.begin()+i+Count);
-      --i; e -= Count - 1;
+      Ops.erase(Ops.begin() + i + 1, Ops.begin() + i + Count);
+      --i;
+      e -= Count - 1;
       FoundMatch = true;
     }
   if (FoundMatch)
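The hunk above reformats the pass that rewrites `X + Y + Y` as `X + Y*2`: because the operand list is sorted, equal operands are adjacent, so one forward scan finds each run and collapses it. A minimal sketch of that run-merging idea, with plain ints standing in for SCEV operands and an illustrative name (`mergeRuns` is not an LLVM symbol):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Over a sorted operand list, scan forward; for each run of equal values,
// count its length and emit a single (operand, count) term -- the analogue
// of rewriting Y + Y + Y as Y * 3.
static std::vector<std::pair<int, std::size_t>>
mergeRuns(const std::vector<int> &Ops) {
  std::vector<std::pair<int, std::size_t>> Out;
  for (std::size_t i = 0; i < Ops.size();) {
    std::size_t Count = 1;
    while (i + Count < Ops.size() && Ops[i + Count] == Ops[i])
      ++Count;
    Out.push_back({Ops[i], Count}); // Y repeated Count times -> Y * Count
    i += Count;
  }
  return Out;
}
```

The real code additionally erases the merged slots in place and rescans, but the counting logic is the same.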
@@ -2625,7 +2639,7 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
         SmallVector<const SCEV *, 8> LargeMulOps;
         for (unsigned j = 0, f = M->getNumOperands(); j != f && Ok; ++j) {
           if (const SCEVTruncateExpr *T =
-                dyn_cast<SCEVTruncateExpr>(M->getOperand(j))) {
+                  dyn_cast<SCEVTruncateExpr>(M->getOperand(j))) {
             if (T->getOperand()->getType() != SrcType) {
               Ok = false;
               break;
@@ -2639,7 +2653,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
           }
         }
         if (Ok)
-          LargeOps.push_back(getMulExpr(LargeMulOps, SCEV::FlagAnyWrap, Depth + 1));
+          LargeOps.push_back(
+              getMulExpr(LargeMulOps, SCEV::FlagAnyWrap, Depth + 1));
       } else {
         Ok = false;
         break;
@@ -2723,7 +2738,7 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
         break;
       // If we have an add, expand the add operands onto the end of the operands
       // list.
-      Ops.erase(Ops.begin()+Idx);
+      Ops.erase(Ops.begin() + Idx);
       append_range(Ops, Add->operands());
       DeletedAdd = true;
       CommonFlags = maskFlags(CommonFlags, Add->getNoWrapFlags());
@@ -2747,8 +2762,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
     DenseMap<const SCEV *, APInt> M;
     SmallVector<const SCEV *, 8> NewOps;
     APInt AccumulatedConstant(BitWidth, 0);
-    if (CollectAddOperandsWithScales(M, NewOps, AccumulatedConstant,
-                                     Ops, APInt(BitWidth, 1), *this)) {
+    if (CollectAddOperandsWithScales(M, NewOps, AccumulatedConstant, Ops,
+                                     APInt(BitWidth, 1), *this)) {
       struct APIntCompare {
         bool operator()(const APInt &LHS, const APInt &RHS) const {
           return LHS.ult(RHS);
@@ -2769,10 +2784,10 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
         if (MulOp.first == 1) {
           Ops.push_back(getAddExpr(MulOp.second, SCEV::FlagAnyWrap, Depth + 1));
         } else if (MulOp.first != 0) {
-          Ops.push_back(getMulExpr(
-              getConstant(MulOp.first),
-              getAddExpr(MulOp.second, SCEV::FlagAnyWrap, Depth + 1),
-              SCEV::FlagAnyWrap, Depth + 1));
+          Ops.push_back(
+              getMulExpr(getConstant(MulOp.first),
+                         getAddExpr(MulOp.second, SCEV::FlagAnyWrap, Depth + 1),
+                         SCEV::FlagAnyWrap, Depth + 1));
         }
       }
       if (Ops.empty())
@@ -2806,43 +2821,44 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
           }
           SmallVector<const SCEV *, 2> TwoOps = {getOne(Ty), InnerMul};
           const SCEV *AddOne = getAddExpr(TwoOps, SCEV::FlagAnyWrap, Depth + 1);
-          const SCEV *OuterMul = getMulExpr(AddOne, MulOpSCEV,
-                                            SCEV::FlagAnyWrap, Depth + 1);
-          if (Ops.size() == 2) return OuterMul;
+          const SCEV *OuterMul =
+              getMulExpr(AddOne, MulOpSCEV, SCEV::FlagAnyWrap, Depth + 1);
+          if (Ops.size() == 2)
+            return OuterMul;
           if (AddOp < Idx) {
-            Ops.erase(Ops.begin()+AddOp);
-            Ops.erase(Ops.begin()+Idx-1);
+            Ops.erase(Ops.begin() + AddOp);
+            Ops.erase(Ops.begin() + Idx - 1);
           } else {
-            Ops.erase(Ops.begin()+Idx);
-            Ops.erase(Ops.begin()+AddOp-1);
+            Ops.erase(Ops.begin() + Idx);
+            Ops.erase(Ops.begin() + AddOp - 1);
           }
           Ops.push_back(OuterMul);
           return getAddExpr(Ops, SCEV::FlagAnyWrap, Depth + 1);
         }
 
       // Check this multiply against other multiplies being added together.
-      for (unsigned OtherMulIdx = Idx+1;
+      for (unsigned OtherMulIdx = Idx + 1;
            OtherMulIdx < Ops.size() && isa<SCEVMulExpr>(Ops[OtherMulIdx]);
            ++OtherMulIdx) {
         const SCEVMulExpr *OtherMul = cast<SCEVMulExpr>(Ops[OtherMulIdx]);
         // If MulOp occurs in OtherMul, we can fold the two multiplies
         // together.
-        for (unsigned OMulOp = 0, e = OtherMul->getNumOperands();
-             OMulOp != e; ++OMulOp)
+        for (unsigned OMulOp = 0, e = OtherMul->getNumOperands(); OMulOp != e;
+             ++OMulOp)
           if (OtherMul->getOperand(OMulOp) == MulOpSCEV) {
             // Fold X + (A*B*C) + (A*D*E) --> X + (A*(B*C+D*E))
             const SCEV *InnerMul1 = Mul->getOperand(MulOp == 0);
             if (Mul->getNumOperands() != 2) {
               SmallVector<const SCEV *, 4> MulOps(
                   Mul->operands().take_front(MulOp));
-              append_range(MulOps, Mul->operands().drop_front(MulOp+1));
+              append_range(MulOps, Mul->operands().drop_front(MulOp + 1));
               InnerMul1 = getMulExpr(MulOps, SCEV::FlagAnyWrap, Depth + 1);
             }
             const SCEV *InnerMul2 = OtherMul->getOperand(OMulOp == 0);
             if (OtherMul->getNumOperands() != 2) {
               SmallVector<const SCEV *, 4> MulOps(
                   OtherMul->operands().take_front(OMulOp));
-              append_range(MulOps, OtherMul->operands().drop_front(OMulOp+1));
+              append_range(MulOps, OtherMul->operands().drop_front(OMulOp + 1));
               InnerMul2 = getMulExpr(MulOps, SCEV::FlagAnyWrap, Depth + 1);
             }
             SmallVector<const SCEV *, 2> TwoOps = {InnerMul1, InnerMul2};
@@ -2850,9 +2866,10 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
                 getAddExpr(TwoOps, SCEV::FlagAnyWrap, Depth + 1);
             const SCEV *OuterMul = getMulExpr(MulOpSCEV, InnerMulSum,
                                               SCEV::FlagAnyWrap, Depth + 1);
-            if (Ops.size() == 2) return OuterMul;
-            Ops.erase(Ops.begin()+Idx);
-            Ops.erase(Ops.begin()+OtherMulIdx-1);
+            if (Ops.size() == 2)
+              return OuterMul;
+            Ops.erase(Ops.begin() + Idx);
+            Ops.erase(Ops.begin() + OtherMulIdx - 1);
             Ops.push_back(OuterMul);
             return getAddExpr(Ops, SCEV::FlagAnyWrap, Depth + 1);
           }
@@ -2876,8 +2893,9 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
     for (unsigned i = 0, e = Ops.size(); i != e; ++i)
       if (isAvailableAtLoopEntry(Ops[i], AddRecLoop)) {
         LIOps.push_back(Ops[i]);
-        Ops.erase(Ops.begin()+i);
-        --i; --e;
+        Ops.erase(Ops.begin() + i);
+        --i;
+        --e;
       }
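The `--i; --e;` pattern reformatted above is the standard erase-while-scanning idiom: after erasing `Ops[i]`, the next element slides into slot `i`, so both the index and the cached end are decremented to avoid skipping it. A small sketch under illustrative names (`extractIf` is not an LLVM helper):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Move every element satisfying Pred into a separate list, erasing it from
// Ops. After erase, the element that slides into slot i must still be
// examined, hence --i (re-incremented by the loop) and --e (shrunk size).
static std::vector<int> extractIf(std::vector<int> &Ops, bool (*Pred)(int)) {
  std::vector<int> Picked;
  for (std::size_t i = 0, e = Ops.size(); i != e; ++i)
    if (Pred(Ops[i])) {
      Picked.push_back(Ops[i]);
      Ops.erase(Ops.begin() + i);
      --i;
      --e;
    }
  return Picked;
}
```

Note the unsigned wraparound when the first element matches (`i` goes to `SIZE_MAX`, then `++i` brings it back to 0) is intentional and mirrors the unsigned counters in the original.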
 
     // If we found some loop invariants, fold them into the recurrence.
@@ -2920,7 +2938,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
       const SCEV *NewRec = getAddRecExpr(AddRecOps, AddRecLoop, Flags);
 
       // If all of the other operands were loop invariant, we are done.
-      if (Ops.size() == 1) return NewRec;
+      if (Ops.size() == 1)
+        return NewRec;
 
       // Otherwise, add the folded AddRec by the non-invariant parts.
       for (unsigned i = 0;; ++i)
@@ -2934,15 +2953,15 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
     // Okay, if there weren't any loop invariants to be folded, check to see if
     // there are multiple AddRec's with the same loop induction variable being
     // added together.  If so, we can fold them.
-    for (unsigned OtherIdx = Idx+1;
+    for (unsigned OtherIdx = Idx + 1;
          OtherIdx < Ops.size() && isa<SCEVAddRecExpr>(Ops[OtherIdx]);
          ++OtherIdx) {
       // We expect the AddRecExpr's to be sorted in reverse dominance order,
       // so that the 1st found AddRecExpr is dominated by all others.
       assert(DT.dominates(
-           cast<SCEVAddRecExpr>(Ops[OtherIdx])->getLoop()->getHeader(),
-           AddRec->getLoop()->getHeader()) &&
-        "AddRecExprs are not sorted in reverse dominance order?");
+                 cast<SCEVAddRecExpr>(Ops[OtherIdx])->getLoop()->getHeader(),
+                 AddRec->getLoop()->getHeader()) &&
+             "AddRecExprs are not sorted in reverse dominance order?");
       if (AddRecLoop == cast<SCEVAddRecExpr>(Ops[OtherIdx])->getLoop()) {
         // Other + {A,+,B}<L> + {C,+,D}<L>  -->  Other + {A+C,+,B+D}<L>
         SmallVector<const SCEV *, 4> AddRecOps(AddRec->operands());
@@ -2950,8 +2969,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
              ++OtherIdx) {
           const auto *OtherAddRec = cast<SCEVAddRecExpr>(Ops[OtherIdx]);
           if (OtherAddRec->getLoop() == AddRecLoop) {
-            for (unsigned i = 0, e = OtherAddRec->getNumOperands();
-                 i != e; ++i) {
+            for (unsigned i = 0, e = OtherAddRec->getNumOperands(); i != e;
+                 ++i) {
               if (i >= AddRecOps.size()) {
                 append_range(AddRecOps, OtherAddRec->operands().drop_front(i));
                 break;
@@ -2960,7 +2979,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
                   AddRecOps[i], OtherAddRec->getOperand(i)};
               AddRecOps[i] = getAddExpr(TwoOps, SCEV::FlagAnyWrap, Depth + 1);
             }
-            Ops.erase(Ops.begin() + OtherIdx); --OtherIdx;
+            Ops.erase(Ops.begin() + OtherIdx);
+            --OtherIdx;
           }
         }
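The fold reformatted above, `Other + {A,+,B}<L> + {C,+,D}<L> --> Other + {A+C,+,B+D}<L>`, adds two recurrences on the same loop by adding their coefficient vectors elementwise; when one recurrence has more operands, the tail carries over unchanged (as if the shorter vector were zero-padded). A sketch with ints standing in for SCEV operands:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Elementwise addition of two addrec coefficient vectors. Extra trailing
// coefficients of the longer recurrence are appended as-is, matching the
// append_range(..., drop_front(i)) path in the real code.
static std::vector<int> addAddRecs(std::vector<int> A,
                                   const std::vector<int> &B) {
  for (std::size_t i = 0; i < B.size(); ++i) {
    if (i >= A.size())
      A.push_back(B[i]); // tail of the longer recurrence carries over
    else
      A[i] += B[i];
  }
  return A;
}
```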
         // Step size has changed, so we cannot guarantee no self-wraparound.
@@ -2978,9 +2998,8 @@ const SCEV *ScalarEvolution::getAddExpr(SmallVectorImpl<const SCEV *> &Ops,
   return getOrCreateAddExpr(Ops, ComputeFlags(Ops));
 }
 
-const SCEV *
-ScalarEvolution::getOrCreateAddExpr(ArrayRef<const SCEV *> Ops,
-                                    SCEV::NoWrapFlags Flags) {
+const SCEV *ScalarEvolution::getOrCreateAddExpr(ArrayRef<const SCEV *> Ops,
+                                                SCEV::NoWrapFlags Flags) {
   FoldingSetNodeID ID;
   ID.AddInteger(scAddExpr);
   for (const SCEV *Op : Ops)
@@ -3000,9 +3019,9 @@ ScalarEvolution::getOrCreateAddExpr(ArrayRef<const SCEV *> Ops,
   return S;
 }
 
-const SCEV *
-ScalarEvolution::getOrCreateAddRecExpr(ArrayRef<const SCEV *> Ops,
-                                       const Loop *L, SCEV::NoWrapFlags Flags) {
+const SCEV *ScalarEvolution::getOrCreateAddRecExpr(ArrayRef<const SCEV *> Ops,
+                                                   const Loop *L,
+                                                   SCEV::NoWrapFlags Flags) {
   FoldingSetNodeID ID;
   ID.AddInteger(scAddRecExpr);
   for (const SCEV *Op : Ops)
@@ -3024,21 +3043,20 @@ ScalarEvolution::getOrCreateAddRecExpr(ArrayRef<const SCEV *> Ops,
   return S;
 }
 
-const SCEV *
-ScalarEvolution::getOrCreateMulExpr(ArrayRef<const SCEV *> Ops,
-                                    SCEV::NoWrapFlags Flags) {
+const SCEV *ScalarEvolution::getOrCreateMulExpr(ArrayRef<const SCEV *> Ops,
+                                                SCEV::NoWrapFlags Flags) {
   FoldingSetNodeID ID;
   ID.AddInteger(scMulExpr);
   for (const SCEV *Op : Ops)
     ID.AddPointer(Op);
   void *IP = nullptr;
   SCEVMulExpr *S =
-    static_cast<SCEVMulExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
+      static_cast<SCEVMulExpr *>(UniqueSCEVs.FindNodeOrInsertPos(ID, IP));
   if (!S) {
     const SCEV **O = SCEVAllocator.Allocate<const SCEV *>(Ops.size());
     std::uninitialized_copy(Ops.begin(), Ops.end(), O);
-    S = new (SCEVAllocator) SCEVMulExpr(ID.Intern(SCEVAllocator),
-                                        O, Ops.size());
+    S = new (SCEVAllocator)
+        SCEVMulExpr(ID.Intern(SCEVAllocator), O, Ops.size());
     UniqueSCEVs.InsertNode(S, IP);
     registerUser(S, Ops);
   }
@@ -3047,8 +3065,9 @@ ScalarEvolution::getOrCreateMulExpr(ArrayRef<const SCEV *> Ops,
 }
 
 static uint64_t umul_ov(uint64_t i, uint64_t j, bool &Overflow) {
-  uint64_t k = i*j;
-  if (j > 1 && k / j != i) Overflow = true;
+  uint64_t k = i * j;
+  if (j > 1 && k / j != i)
+    Overflow = true;
   return k;
 }
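The reformatted `umul_ov` above uses the divide-back check for overflow: after computing `k = i * j` in wrapping 64-bit arithmetic, `k / j != i` detects that the product wrapped (the `j > 1` guard sidesteps division by zero and the trivially safe cases). A self-contained sketch of the same pattern, under an illustrative name:

```cpp
#include <cassert>
#include <cstdint>

// Multiply i * j in 64 bits; if the true product does not fit, dividing the
// wrapped result back by j cannot recover i, so set Overflow.
static uint64_t mul_ov(uint64_t i, uint64_t j, bool &Overflow) {
  uint64_t k = i * j;
  if (j > 1 && k / j != i)
    Overflow = true;
  return k;
}
```

Note the flag is only ever set, never cleared, so a caller can thread one `Overflow` through a chain of multiplies, as `Choose` below does.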
 
@@ -3064,15 +3083,17 @@ static uint64_t Choose(uint64_t n, uint64_t k, bool &Overflow) {
   // intermediate computations. However, we can still overflow even when the
   // final result would fit.
 
-  if (n == 0 || n == k) return 1;
-  if (k > n) return 0;
+  if (n == 0 || n == k)
+    return 1;
+  if (k > n)
+    return 0;
 
-  if (k > n/2)
-    k = n-k;
+  if (k > n / 2)
+    k = n - k;
 
   uint64_t r = 1;
   for (uint64_t i = 1; i <= k; ++i) {
-    r = umul_ov(r, n-(i-1), Overflow);
+    r = umul_ov(r, n - (i - 1), Overflow);
     r /= i;
   }
   return r;
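`Choose` above computes the binomial coefficient C(n, k) by the multiplicative formula. The integer division is exact at every step because after iteration i the running value r equals C(n, i), and C(n, i-1) * (n - i + 1) = C(n, i) * i; the symmetry C(n, k) = C(n, n-k) keeps the loop short. A sketch without the overflow threading (illustrative name, not the LLVM symbol):

```cpp
#include <cassert>
#include <cstdint>

// C(n, k) via the multiplicative formula; each division by i is exact.
// Intermediate products can still overflow for large n, which is what the
// Overflow parameter guards against in the real code.
static uint64_t choose(uint64_t n, uint64_t k) {
  if (n == 0 || n == k)
    return 1;
  if (k > n)
    return 0;
  if (k > n / 2)
    k = n - k; // C(n, k) == C(n, n - k)
  uint64_t r = 1;
  for (uint64_t i = 1; i <= k; ++i) {
    r = r * (n - (i - 1));
    r /= i;
  }
  return r;
}
```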
@@ -3089,9 +3110,7 @@ static bool containsConstantInAddMulChain(const SCEV *StartExpr) {
       return isa<SCEVAddExpr>(S) || isa<SCEVMulExpr>(S);
     }
 
-    bool isDone() const {
-      return FoundConstant;
-    }
+    bool isDone() const { return FoundConstant; }
   };
 
   FindConstantInAddMulChain F;
@@ -3107,7 +3126,8 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
   assert(OrigFlags == maskFlags(OrigFlags, SCEV::FlagNUW | SCEV::FlagNSW) &&
          "only nuw or nsw allowed");
   assert(!Ops.empty() && "Cannot get empty mul!");
-  if (Ops.size() == 1) return Ops[0];
+  if (Ops.size() == 1)
+    return Ops[0];
 #ifndef NDEBUG
   Type *ETy = Ops[0]->getType();
   assert(!ETy->isPointerTy());
@@ -3127,8 +3147,9 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
     while (const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(Ops[Idx])) {
       // We found two constants, fold them together!
       Ops[0] = getConstant(LHSC->getAPInt() * RHSC->getAPInt());
-      if (Ops.size() == 2) return Ops[0];
-      Ops.erase(Ops.begin()+1);  // Erase the folded element
+      if (Ops.size() == 2)
+        return Ops[0];
+      Ops.erase(Ops.begin() + 1); // Erase the folded element
       LHSC = cast<SCEVConstant>(Ops[0]);
     }
 
@@ -3188,9 +3209,10 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
           SmallVector<const SCEV *, 4> NewOps;
           bool AnyFolded = false;
           for (const SCEV *AddOp : Add->operands()) {
-            const SCEV *Mul = getMulExpr(Ops[0], AddOp, SCEV::FlagAnyWrap,
-                                         Depth + 1);
-            if (!isa<SCEVMulExpr>(Mul)) AnyFolded = true;
+            const SCEV *Mul =
+                getMulExpr(Ops[0], AddOp, SCEV::FlagAnyWrap, Depth + 1);
+            if (!isa<SCEVMulExpr>(Mul))
+              AnyFolded = true;
             NewOps.push_back(Mul);
           }
           if (AnyFolded)
@@ -3199,8 +3221,8 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
           // Negation preserves a recurrence's no self-wrap property.
           SmallVector<const SCEV *, 4> Operands;
           for (const SCEV *AddRecOp : AddRec->operands())
-            Operands.push_back(getMulExpr(Ops[0], AddRecOp, SCEV::FlagAnyWrap,
-                                          Depth + 1));
+            Operands.push_back(
+                getMulExpr(Ops[0], AddRecOp, SCEV::FlagAnyWrap, Depth + 1));
           // Let M be the minimum representable signed value. AddRec with nsw
           // multiplied by -1 can have signed overflow if and only if it takes a
           // value of M: M * (-1) would stay M and (M + 1) * (-1) would be the
@@ -3232,7 +3254,7 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
         break;
       // If we have an mul, expand the mul operands onto the end of the
       // operands list.
-      Ops.erase(Ops.begin()+Idx);
+      Ops.erase(Ops.begin() + Idx);
       append_range(Ops, Mul->operands());
       DeletedMul = true;
     }
@@ -3259,8 +3281,9 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
     for (unsigned i = 0, e = Ops.size(); i != e; ++i)
       if (isAvailableAtLoopEntry(Ops[i], AddRec->getLoop())) {
         LIOps.push_back(Ops[i]);
-        Ops.erase(Ops.begin()+i);
-        --i; --e;
+        Ops.erase(Ops.begin() + i);
+        --i;
+        --e;
       }
 
     // If we found some loop invariants, fold them into the recurrence.
@@ -3279,11 +3302,12 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
       // No self-wrap cannot be guaranteed after changing the step size, but
       // will be inferred if either NUW or NSW is true.
       SCEV::NoWrapFlags Flags = ComputeFlags({Scale, AddRec});
-      const SCEV *NewRec = getAddRecExpr(
-          NewOps, AddRec->getLoop(), AddRec->getNoWrapFlags(Flags));
+      const SCEV *NewRec = getAddRecExpr(NewOps, AddRec->getLoop(),
+                                         AddRec->getNoWrapFlags(Flags));
 
       // If all of the other operands were loop invariant, we are done.
-      if (Ops.size() == 1) return NewRec;
+      if (Ops.size() == 1)
+        return NewRec;
 
       // Otherwise, multiply the folded AddRec by the non-invariant parts.
       for (unsigned i = 0;; ++i)
@@ -3309,40 +3333,42 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
     // addrec's are of different length (mathematically, it's equivalent to
     // an infinite stream of zeros on the right).
     bool OpsModified = false;
-    for (unsigned OtherIdx = Idx+1;
+    for (unsigned OtherIdx = Idx + 1;
          OtherIdx != Ops.size() && isa<SCEVAddRecExpr>(Ops[OtherIdx]);
          ++OtherIdx) {
       const SCEVAddRecExpr *OtherAddRec =
-        dyn_cast<SCEVAddRecExpr>(Ops[OtherIdx]);
+          dyn_cast<SCEVAddRecExpr>(Ops[OtherIdx]);
       if (!OtherAddRec || OtherAddRec->getLoop() != AddRec->getLoop())
         continue;
 
       // Limit max number of arguments to avoid creation of unreasonably big
       // SCEVAddRecs with very complex operands.
       if (AddRec->getNumOperands() + OtherAddRec->getNumOperands() - 1 >
-          MaxAddRecSize || hasHugeExpression({AddRec, OtherAddRec}))
+              MaxAddRecSize ||
+          hasHugeExpression({AddRec, OtherAddRec}))
         continue;
 
       bool Overflow = false;
       Type *Ty = AddRec->getType();
       bool LargerThan64Bits = getTypeSizeInBits(Ty) > 64;
-      SmallVector<const SCEV*, 7> AddRecOps;
+      SmallVector<const SCEV *, 7> AddRecOps;
       for (int x = 0, xe = AddRec->getNumOperands() +
-             OtherAddRec->getNumOperands() - 1; x != xe && !Overflow; ++x) {
-        SmallVector <const SCEV *, 7> SumOps;
-        for (int y = x, ye = 2*x+1; y != ye && !Overflow; ++y) {
-          uint64_t Coeff1 = Choose(x, 2*x - y, Overflow);
-          for (int z = std::max(y-x, y-(int)AddRec->getNumOperands()+1),
-                 ze = std::min(x+1, (int)OtherAddRec->getNumOperands());
+                           OtherAddRec->getNumOperands() - 1;
+           x != xe && !Overflow; ++x) {
+        SmallVector<const SCEV *, 7> SumOps;
+        for (int y = x, ye = 2 * x + 1; y != ye && !Overflow; ++y) {
+          uint64_t Coeff1 = Choose(x, 2 * x - y, Overflow);
+          for (int z = std::max(y - x, y - (int)AddRec->getNumOperands() + 1),
+                   ze = std::min(x + 1, (int)OtherAddRec->getNumOperands());
                z < ze && !Overflow; ++z) {
-            uint64_t Coeff2 = Choose(2*x - y, x-z, Overflow);
+            uint64_t Coeff2 = Choose(2 * x - y, x - z, Overflow);
             uint64_t Coeff;
             if (LargerThan64Bits)
               Coeff = umul_ov(Coeff1, Coeff2, Overflow);
             else
-              Coeff = Coeff1*Coeff2;
+              Coeff = Coeff1 * Coeff2;
             const SCEV *CoeffTerm = getConstant(Ty, Coeff);
-            const SCEV *Term1 = AddRec->getOperand(y-z);
+            const SCEV *Term1 = AddRec->getOperand(y - z);
             const SCEV *Term2 = OtherAddRec->getOperand(z);
             SumOps.push_back(getMulExpr(CoeffTerm, Term1, Term2,
                                         SCEV::FlagAnyWrap, Depth + 1));
@@ -3353,11 +3379,13 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
         AddRecOps.push_back(getAddExpr(SumOps, SCEV::FlagAnyWrap, Depth + 1));
       }
       if (!Overflow) {
-        const SCEV *NewAddRec = getAddRecExpr(AddRecOps, AddRec->getLoop(),
-                                              SCEV::FlagAnyWrap);
-        if (Ops.size() == 2) return NewAddRec;
+        const SCEV *NewAddRec =
+            getAddRecExpr(AddRecOps, AddRec->getLoop(), SCEV::FlagAnyWrap);
+        if (Ops.size() == 2)
+          return NewAddRec;
         Ops[Idx] = NewAddRec;
-        Ops.erase(Ops.begin() + OtherIdx); --OtherIdx;
+        Ops.erase(Ops.begin() + OtherIdx);
+        --OtherIdx;
         OpsModified = true;
         AddRec = dyn_cast<SCEVAddRecExpr>(NewAddRec);
         if (!AddRec)
@@ -3377,10 +3405,9 @@ const SCEV *ScalarEvolution::getMulExpr(SmallVectorImpl<const SCEV *> &Ops,
 }
 
 /// Represents an unsigned remainder expression based on unsigned division.
-const SCEV *ScalarEvolution::getURemExpr(const SCEV *LHS,
-                                         const SCEV *RHS) {
+const SCEV *ScalarEvolution::getURemExpr(const SCEV *LHS, const SCEV *RHS) {
   assert(getEffectiveSCEVType(LHS->getType()) ==
-         getEffectiveSCEVType(RHS->getType()) &&
+             getEffectiveSCEVType(RHS->getType()) &&
          "SCEVURemExpr operand types don't match!");
 
   // Short-circuit easy cases
@@ -3406,8 +3433,7 @@ const SCEV *ScalarEvolution::getURemExpr(const SCEV *LHS,
 
 /// Get a canonical unsigned division expression, or something simpler if
 /// possible.
-const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
-                                         const SCEV *RHS) {
+const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS, const SCEV *RHS) {
   assert(!LHS->getType()->isPointerTy() &&
          "SCEVUDivExpr operand can't be pointer!");
   assert(LHS->getType() == RHS->getType() &&
@@ -3428,7 +3454,7 @@ const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
 
   if (const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(RHS)) {
     if (RHSC->getValue()->isOne())
-      return LHS;                               // X udiv 1 --> x
+      return LHS; // X udiv 1 --> X
     // If the denominator is zero, the result of the udiv is undefined. Don't
     // try to analyze it, because the resolution chosen here may differ from
     // the resolution chosen in other parts of the compiler.
@@ -3444,18 +3470,18 @@ const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
       if (!RHSC->getAPInt().isPowerOf2())
         ++MaxShiftAmt;
       IntegerType *ExtTy =
-        IntegerType::get(getContext(), getTypeSizeInBits(Ty) + MaxShiftAmt);
+          IntegerType::get(getContext(), getTypeSizeInBits(Ty) + MaxShiftAmt);
       if (const SCEVAddRecExpr *AR = dyn_cast<SCEVAddRecExpr>(LHS))
         if (const SCEVConstant *Step =
-            dyn_cast<SCEVConstant>(AR->getStepRecurrence(*this))) {
+                dyn_cast<SCEVConstant>(AR->getStepRecurrence(*this))) {
           // {X,+,N}/C --> {X/C,+,N/C} if safe and N/C can be folded.
           const APInt &StepInt = Step->getAPInt();
           const APInt &DivInt = RHSC->getAPInt();
           if (!StepInt.urem(DivInt) &&
               getZeroExtendExpr(AR, ExtTy) ==
-              getAddRecExpr(getZeroExtendExpr(AR->getStart(), ExtTy),
-                            getZeroExtendExpr(Step, ExtTy),
-                            AR->getLoop(), SCEV::FlagAnyWrap)) {
+                  getAddRecExpr(getZeroExtendExpr(AR->getStart(), ExtTy),
+                                getZeroExtendExpr(Step, ExtTy), AR->getLoop(),
+                                SCEV::FlagAnyWrap)) {
             SmallVector<const SCEV *, 4> Operands;
             for (const SCEV *Op : AR->operands())
               Operands.push_back(getUDivExpr(Op, RHS));
@@ -3467,9 +3493,9 @@ const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
           const SCEVConstant *StartC = dyn_cast<SCEVConstant>(AR->getStart());
           if (StartC && !DivInt.urem(StepInt) &&
               getZeroExtendExpr(AR, ExtTy) ==
-              getAddRecExpr(getZeroExtendExpr(AR->getStart(), ExtTy),
-                            getZeroExtendExpr(Step, ExtTy),
-                            AR->getLoop(), SCEV::FlagAnyWrap)) {
+                  getAddRecExpr(getZeroExtendExpr(AR->getStart(), ExtTy),
+                                getZeroExtendExpr(Step, ExtTy), AR->getLoop(),
+                                SCEV::FlagAnyWrap)) {
             const APInt &StartInt = StartC->getAPInt();
             const APInt &StartRem = StartInt.urem(StepInt);
             if (StartRem != 0) {
@@ -3552,9 +3578,10 @@ const SCEV *ScalarEvolution::getUDivExpr(const SCEV *LHS,
   // The Insertion Point (IP) might be invalid by now (due to UniqueSCEVs
   // changes). Make sure we get a new one.
   IP = nullptr;
-  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP)) return S;
-  SCEV *S = new (SCEVAllocator) SCEVUDivExpr(ID.Intern(SCEVAllocator),
-                                             LHS, RHS);
+  if (const SCEV *S = UniqueSCEVs.FindNodeOrInsertPos(ID, IP))
+    return S;
+  SCEV *S =
+      new (SCEVAllocator) SCEVUDivExpr(ID.Intern(SCEVAllocator), LHS, RHS);
   UniqueSCEVs.InsertNode(S, IP);
   registerUser(S, {LHS, RHS});
   return S;
@@ -3652,7 +3679,8 @@ const SCEV *ScalarEvolution::getAddRecExpr(const SCEV *Start, const SCEV *Step,
 const SCEV *
 ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
                                const Loop *L, SCEV::NoWrapFlags Flags) {
-  if (Operands.size() == 1) return Operands[0];
+  if (Operands.size() == 1)
+    return Operands[0];
 #ifndef NDEBUG
   Type *ETy = getEffectiveSCEVType(Operands[0]->getType());
   for (unsigned i = 1, e = Operands.size(); i != e; ++i) {
@@ -3670,8 +3698,8 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
     return getAddRecExpr(Operands, L, SCEV::FlagAnyWrap); // {X,+,0}  -->  X
   }
 
-  // It's tempting to want to call getConstantMaxBackedgeTakenCount count here and
-  // use that information to infer NUW and NSW flags. However, computing a
+  // It's tempting to want to call getConstantMaxBackedgeTakenCount here and
+  // use that information to infer NUW and NSW flags. However, computing a
   // BE count requires calling getAddRecExpr, so we may not yet have a
   // meaningful BE count at this point (and if we don't, we'd be stuck
   // with a SCEVCouldNotCompute as the cached BE count).
@@ -3699,7 +3727,7 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
         // The outer recurrence keeps its NW flag but only keeps NUW/NSW if the
         // inner recurrence has the same property.
         SCEV::NoWrapFlags OuterFlags =
-          maskFlags(Flags, SCEV::FlagNW | NestedAR->getNoWrapFlags());
+            maskFlags(Flags, SCEV::FlagNW | NestedAR->getNoWrapFlags());
 
         NestedOperands[0] = getAddRecExpr(Operands, L, OuterFlags);
         AllInvariant = all_of(NestedOperands, [&](const SCEV *Op) {
@@ -3712,7 +3740,7 @@ ScalarEvolution::getAddRecExpr(SmallVectorImpl<const SCEV *> &Operands,
           // The inner recurrence keeps its NW flag but only keeps NUW/NSW if
           // the outer recurrence has the same property.
           SCEV::NoWrapFlags InnerFlags =
-            maskFlags(NestedAR->getNoWrapFlags(), SCEV::FlagNW | Flags);
+              maskFlags(NestedAR->getNoWrapFlags(), SCEV::FlagNW | Flags);
           return getAddRecExpr(NestedOperands, NestedLoop, InnerFlags);
         }
       }
@@ -3747,7 +3775,7 @@ ScalarEvolution::getGEPExpr(GEPOperator *GEP,
   }();
 
   SCEV::NoWrapFlags OffsetWrap =
-    AssumeInBoundsFlags ? SCEV::FlagNSW : SCEV::FlagAnyWrap;
+      AssumeInBoundsFlags ? SCEV::FlagNSW : SCEV::FlagAnyWrap;
 
   Type *CurTy = GEP->getType();
   bool FirstIter = true;
@@ -3794,7 +3822,8 @@ ScalarEvolution::getGEPExpr(GEPOperator *GEP,
   // base address is unsigned. However, if we know that the offset is
   // non-negative, we can use nuw.
   SCEV::NoWrapFlags BaseWrap = AssumeInBoundsFlags && isKnownNonNegative(Offset)
-                                   ? SCEV::FlagNUW : SCEV::FlagAnyWrap;
+                                   ? SCEV::FlagNUW
+                                   : SCEV::FlagAnyWrap;
   auto *GEPExpr = getAddExpr(BaseExpr, Offset, BaseWrap);
   assert(BaseExpr->getType() == GEPExpr->getType() &&
          "GEP should not change type mid-flight.");
@@ -3820,7 +3849,8 @@ const SCEV *ScalarEvolution::getMinMaxExpr(SCEVTypes Kind,
                                            SmallVectorImpl<const SCEV *> &Ops) {
   assert(SCEVMinMaxExpr::isMinMaxType(Kind) && "Not a SCEVMinMaxExpr!");
   assert(!Ops.empty() && "Cannot get empty (u|s)(min|max)!");
-  if (Ops.size() == 1) return Ops[0];
+  if (Ops.size() == 1)
+    return Ops[0];
 #ifndef NDEBUG
   Type *ETy = getEffectiveSCEVType(Ops[0]->getType());
   for (unsigned i = 1, e = Ops.size(); i != e; ++i) {
@@ -3868,8 +3898,9 @@ const SCEV *ScalarEvolution::getMinMaxExpr(SCEVTypes Kind,
       ConstantInt *Fold = ConstantInt::get(
           getContext(), FoldOp(LHSC->getAPInt(), RHSC->getAPInt()));
       Ops[0] = getConstant(Fold);
-      Ops.erase(Ops.begin()+1);  // Erase the folded element
-      if (Ops.size() == 1) return Ops[0];
+      Ops.erase(Ops.begin() + 1); // Erase the folded element
+      if (Ops.size() == 1)
+        return Ops[0];
       LHSC = cast<SCEVConstant>(Ops[0]);
     }
 
@@ -3886,7 +3917,8 @@ const SCEV *ScalarEvolution::getMinMaxExpr(SCEVTypes Kind,
       return LHSC;
     }
 
-    if (Ops.size() == 1) return Ops[0];
+    if (Ops.size() == 1)
+      return Ops[0];
   }
 
   // Find the first operation of the same kind
@@ -3899,7 +3931,7 @@ const SCEV *ScalarEvolution::getMinMaxExpr(SCEVTypes Kind,
     bool DeletedAny = false;
     while (Ops[Idx]->getSCEVType() == Kind) {
       const SCEVMinMaxExpr *SMME = cast<SCEVMinMaxExpr>(Ops[Idx]);
-      Ops.erase(Ops.begin()+Idx);
+      Ops.erase(Ops.begin() + Idx);
       append_range(Ops, SMME->operands());
       DeletedAny = true;
     }
@@ -3934,7 +3966,8 @@ const SCEV *ScalarEvolution::getMinMaxExpr(SCEVTypes Kind,
     }
   }
 
-  if (Ops.size() == 1) return Ops[0];
+  if (Ops.size() == 1)
+    return Ops[0];
 
   assert(!Ops.empty() && "Reduced smax down to nothing!");
 
@@ -4302,9 +4335,8 @@ const SCEV *ScalarEvolution::getUMaxExpr(SmallVectorImpl<const SCEV *> &Ops) {
   return getMinMaxExpr(scUMaxExpr, Ops);
 }
 
-const SCEV *ScalarEvolution::getSMinExpr(const SCEV *LHS,
-                                         const SCEV *RHS) {
-  SmallVector<const SCEV *, 2> Ops = { LHS, RHS };
+const SCEV *ScalarEvolution::getSMinExpr(const SCEV *LHS, const SCEV *RHS) {
+  SmallVector<const SCEV *, 2> Ops = {LHS, RHS};
   return getSMinExpr(Ops);
 }
 
@@ -4314,7 +4346,7 @@ const SCEV *ScalarEvolution::getSMinExpr(SmallVectorImpl<const SCEV *> &Ops) {
 
 const SCEV *ScalarEvolution::getUMinExpr(const SCEV *LHS, const SCEV *RHS,
                                          bool Sequential) {
-  SmallVector<const SCEV *, 2> Ops = { LHS, RHS };
+  SmallVector<const SCEV *, 2> Ops = {LHS, RHS};
   return getUMinExpr(Ops, Sequential);
 }
 
@@ -4324,8 +4356,7 @@ const SCEV *ScalarEvolution::getUMinExpr(SmallVectorImpl<const SCEV *> &Ops,
                     : getMinMaxExpr(scUMinExpr, Ops);
 }
 
-const SCEV *
-ScalarEvolution::getSizeOfExpr(Type *IntTy, TypeSize Size) {
+const SCEV *ScalarEvolution::getSizeOfExpr(Type *IntTy, TypeSize Size) {
   const SCEV *Res = getConstant(IntTy, Size.getKnownMinValue());
   if (Size.isScalable())
     Res = getMulExpr(Res, getVScale(IntTy));
@@ -4340,8 +4371,7 @@ const SCEV *ScalarEvolution::getStoreSizeOfExpr(Type *IntTy, Type *StoreTy) {
   return getSizeOfExpr(IntTy, getDataLayout().getTypeStoreSize(StoreTy));
 }
 
-const SCEV *ScalarEvolution::getOffsetOfExpr(Type *IntTy,
-                                             StructType *STy,
+const SCEV *ScalarEvolution::getOffsetOfExpr(Type *IntTy, StructType *STy,
                                              unsigned FieldNo) {
   // We can bypass creating a target-independent constant expression and then
   // folding it back into a ConstantInt. This is just a compile-time
@@ -4367,8 +4397,8 @@ const SCEV *ScalarEvolution::getUnknown(Value *V) {
            "Stale SCEVUnknown in uniquing map!");
     return S;
   }
-  SCEV *S = new (SCEVAllocator) SCEVUnknown(ID.Intern(SCEVAllocator), V, this,
-                                            FirstUnknown);
+  SCEV *S = new (SCEVAllocator)
+      SCEVUnknown(ID.Intern(SCEVAllocator), V, this, FirstUnknown);
   FirstUnknown = cast<SCEVUnknown>(S);
   UniqueSCEVs.InsertNode(S, IP);
   return S;
@@ -4411,7 +4441,7 @@ Type *ScalarEvolution::getEffectiveSCEVType(Type *Ty) const {
 }
 
 Type *ScalarEvolution::getWiderType(Type *T1, Type *T2) const {
-  return  getTypeSizeInBits(T1) >= getTypeSizeInBits(T2) ? T1 : T2;
+  return getTypeSizeInBits(T1) >= getTypeSizeInBits(T2) ? T1 : T2;
 }
 
 bool ScalarEvolution::instructionCouldExistWithOperands(const SCEV *A,
@@ -4425,7 +4455,7 @@ bool ScalarEvolution::instructionCouldExistWithOperands(const SCEV *A,
     // Can't tell.
     return false;
   return (ScopeA == ScopeB) || DT.dominates(ScopeA, ScopeB) ||
-    DT.dominates(ScopeB, ScopeA);
+         DT.dominates(ScopeB, ScopeA);
 }
 
 const SCEV *ScalarEvolution::getCouldNotCompute() {
@@ -4469,7 +4499,7 @@ void ScalarEvolution::eraseValueFromMap(Value *V) {
   if (I != ValueExprMap.end()) {
     auto EVIt = ExprValueMap.find(I->second);
     bool Removed = EVIt->second.remove(V);
-    (void) Removed;
+    (void)Removed;
     assert(Removed && "Value not in ExprValueMap?");
     ValueExprMap.erase(I);
   }
@@ -4529,8 +4559,7 @@ const SCEV *ScalarEvolution::getExistingSCEV(Value *V) {
 const SCEV *ScalarEvolution::getNegativeSCEV(const SCEV *V,
                                              SCEV::NoWrapFlags Flags) {
   if (const SCEVConstant *VC = dyn_cast<SCEVConstant>(V))
-    return getConstant(
-               cast<ConstantInt>(ConstantExpr::getNeg(VC->getValue())));
+    return getConstant(cast<ConstantInt>(ConstantExpr::getNeg(VC->getValue())));
 
   Type *Ty = V->getType();
   Ty = getEffectiveSCEVType(Ty);
@@ -4557,8 +4586,7 @@ const SCEV *ScalarEvolution::getNotSCEV(const SCEV *V) {
   assert(!V->getType()->isPointerTy() && "Can't negate pointer");
 
   if (const SCEVConstant *VC = dyn_cast<SCEVConstant>(V))
-    return getConstant(
-                cast<ConstantInt>(ConstantExpr::getNot(VC->getValue())));
+    return getConstant(cast<ConstantInt>(ConstantExpr::getNot(VC->getValue())));
 
   // Fold ~(u|s)(min|max)(~x, ~y) to (u|s)(max|min)(x, y)
   if (const SCEVMinMaxExpr *MME = dyn_cast<SCEVMinMaxExpr>(V)) {
@@ -4633,8 +4661,7 @@ const SCEV *ScalarEvolution::getMinusSCEV(const SCEV *LHS, const SCEV *RHS,
   // We represent LHS - RHS as LHS + (-1)*RHS. This transformation
   // makes it so that we cannot make much use of NUW.
   auto AddFlags = SCEV::FlagAnyWrap;
-  const bool RHSIsNotMinSigned =
-      !getSignedRangeMin(RHS).isMinSignedValue();
+  const bool RHSIsNotMinSigned = !getSignedRangeMin(RHS).isMinSignedValue();
   if (hasFlags(Flags, SCEV::FlagNSW)) {
     // Let M be the minimum representable signed value. Then (-1)*RHS
     // signed-wraps if and only if RHS is M. That can happen even for
@@ -4668,7 +4695,7 @@ const SCEV *ScalarEvolution::getTruncateOrZeroExtend(const SCEV *V, Type *Ty,
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot truncate or zero extend with non-integer arguments!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   if (getTypeSizeInBits(SrcTy) > getTypeSizeInBits(Ty))
     return getTruncateExpr(V, Ty, Depth);
   return getZeroExtendExpr(V, Ty, Depth);
@@ -4680,57 +4707,53 @@ const SCEV *ScalarEvolution::getTruncateOrSignExtend(const SCEV *V, Type *Ty,
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot truncate or zero extend with non-integer arguments!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   if (getTypeSizeInBits(SrcTy) > getTypeSizeInBits(Ty))
     return getTruncateExpr(V, Ty, Depth);
   return getSignExtendExpr(V, Ty, Depth);
 }
 
-const SCEV *
-ScalarEvolution::getNoopOrZeroExtend(const SCEV *V, Type *Ty) {
+const SCEV *ScalarEvolution::getNoopOrZeroExtend(const SCEV *V, Type *Ty) {
   Type *SrcTy = V->getType();
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot noop or zero extend with non-integer arguments!");
   assert(getTypeSizeInBits(SrcTy) <= getTypeSizeInBits(Ty) &&
          "getNoopOrZeroExtend cannot truncate!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   return getZeroExtendExpr(V, Ty);
 }
 
-const SCEV *
-ScalarEvolution::getNoopOrSignExtend(const SCEV *V, Type *Ty) {
+const SCEV *ScalarEvolution::getNoopOrSignExtend(const SCEV *V, Type *Ty) {
   Type *SrcTy = V->getType();
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot noop or sign extend with non-integer arguments!");
   assert(getTypeSizeInBits(SrcTy) <= getTypeSizeInBits(Ty) &&
          "getNoopOrSignExtend cannot truncate!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   return getSignExtendExpr(V, Ty);
 }
 
-const SCEV *
-ScalarEvolution::getNoopOrAnyExtend(const SCEV *V, Type *Ty) {
+const SCEV *ScalarEvolution::getNoopOrAnyExtend(const SCEV *V, Type *Ty) {
   Type *SrcTy = V->getType();
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot noop or any extend with non-integer arguments!");
   assert(getTypeSizeInBits(SrcTy) <= getTypeSizeInBits(Ty) &&
          "getNoopOrAnyExtend cannot truncate!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   return getAnyExtendExpr(V, Ty);
 }
 
-const SCEV *
-ScalarEvolution::getTruncateOrNoop(const SCEV *V, Type *Ty) {
+const SCEV *ScalarEvolution::getTruncateOrNoop(const SCEV *V, Type *Ty) {
   Type *SrcTy = V->getType();
   assert(SrcTy->isIntOrPtrTy() && Ty->isIntOrPtrTy() &&
          "Cannot truncate or noop with non-integer arguments!");
   assert(getTypeSizeInBits(SrcTy) >= getTypeSizeInBits(Ty) &&
          "getTruncateOrNoop cannot extend!");
   if (getTypeSizeInBits(SrcTy) == getTypeSizeInBits(Ty))
-    return V;  // No conversion
+    return V; // No conversion
   return getTruncateExpr(V, Ty);
 }
 
@@ -4750,7 +4773,7 @@ const SCEV *ScalarEvolution::getUMaxFromMismatchedTypes(const SCEV *LHS,
 const SCEV *ScalarEvolution::getUMinFromMismatchedTypes(const SCEV *LHS,
                                                         const SCEV *RHS,
                                                         bool Sequential) {
-  SmallVector<const SCEV *, 2> Ops = { LHS, RHS };
+  SmallVector<const SCEV *, 2> Ops = {LHS, RHS};
   return getUMinFromMismatchedTypes(Ops, Sequential);
 }
 
@@ -4870,12 +4893,12 @@ class SCEVInitRewriter : public SCEVRewriteVisitor<SCEVInitRewriter> {
 /// If SCEV contains non-invariant unknown SCEV rewrite cannot be done.
 class SCEVPostIncRewriter : public SCEVRewriteVisitor<SCEVPostIncRewriter> {
 public:
-  static const SCEV *rewrite(const SCEV *S, const Loop *L, ScalarEvolution &SE) {
+  static const SCEV *rewrite(const SCEV *S, const Loop *L,
+                             ScalarEvolution &SE) {
     SCEVPostIncRewriter Rewriter(L, SE);
     const SCEV *Result = Rewriter.visit(S);
-    return Rewriter.hasSeenLoopVariantSCEVUnknown()
-        ? SE.getCouldNotCompute()
-        : Result;
+    return Rewriter.hasSeenLoopVariantSCEVUnknown() ? SE.getCouldNotCompute()
+                                                    : Result;
   }
 
   const SCEV *visitUnknown(const SCEVUnknown *Expr) {
@@ -5035,7 +5058,7 @@ ScalarEvolution::proveNoWrapViaConstantRanges(const SCEVAddRecExpr *AR) {
       ConstantRange StepCR = getSignedRange(AR->getStepRecurrence(*this));
       const APInt &BECountAP = BECountMax->getAPInt();
       unsigned NoOverflowBitWidth =
-        BECountAP.getActiveBits() + StepCR.getMinSignedBits();
+          BECountAP.getActiveBits() + StepCR.getMinSignedBits();
       if (NoOverflowBitWidth <= getTypeSizeInBits(AR->getType()))
         Result = ScalarEvolution::setFlags(Result, SCEV::FlagNW);
     }
@@ -5108,8 +5131,7 @@ ScalarEvolution::proveNoSignedWrapViaInduction(const SCEVAddRecExpr *AR) {
   // start value and the backedge is guarded by a comparison with the post-inc
   // value, the addrec is safe.
   ICmpInst::Predicate Pred;
-  const SCEV *OverflowLimit =
-    getSignedOverflowLimitForStep(Step, &Pred, this);
+  const SCEV *OverflowLimit = getSignedOverflowLimitForStep(Step, &Pred, this);
   if (OverflowLimit &&
       (isLoopBackedgeGuardedByCond(L, Pred, AR, OverflowLimit) ||
        isKnownOnEveryIteration(Pred, AR, OverflowLimit))) {
@@ -5162,8 +5184,8 @@ ScalarEvolution::proveNoUnsignedWrapViaInduction(const SCEVAddRecExpr *AR) {
   // start value and the backedge is guarded by a comparison with the post-inc
   // value, the addrec is safe.
   if (isKnownPositive(Step)) {
-    const SCEV *N = getConstant(APInt::getMinValue(BitWidth) -
-                                getUnsignedRangeMax(Step));
+    const SCEV *N =
+        getConstant(APInt::getMinValue(BitWidth) - getUnsignedRangeMax(Step));
     if (isLoopBackedgeGuardedByCond(L, ICmpInst::ICMP_ULT, AR, N) ||
         isKnownOnEveryIteration(ICmpInst::ICMP_ULT, AR, N)) {
       Result = setFlags(Result, SCEV::FlagNUW);
@@ -5243,7 +5265,8 @@ static std::optional<BinaryOp> MatchBinaryOp(Value *V, const DataLayout &DL,
   case Instruction::Xor:
     if (auto *RHSC = dyn_cast<ConstantInt>(Op->getOperand(1)))
       // If the RHS of the xor is a signmask, then this is just an add.
-      // Instcombine turns add of signmask into xor as a strength reduction step.
+      // Instcombine turns add of signmask into xor as a strength reduction
+      // step.
       if (RHSC->getValue().isSignMask())
         return BinaryOp(Instruction::Add, Op->getOperand(0), Op->getOperand(1));
     // Binary `xor` is a bit-wise `add`.
@@ -5383,9 +5406,10 @@ static const Loop *isIntegerLoopHeaderPHI(const PHINode *PN, LoopInfo &LI) {
 //    will return the pair {NewAddRec, SmallPredsVec} where:
 //         NewAddRec = {%Start,+,%Step}
 //         SmallPredsVec = {P1, P2, P3} as follows:
-//           P1(WrapPred): AR: {trunc(%Start),+,(trunc %Step)}<nsw> Flags: <nssw>
-//           P2(EqualPred): %Start == (sext i32 (trunc i64 %Start to i32) to i64)
-//           P3(EqualPred): %Step == (sext i32 (trunc i64 %Step to i32) to i64)
+//           P1(WrapPred): AR: {trunc(%Start),+,(trunc %Step)}<nsw> Flags:
+//           <nssw> P2(EqualPred): %Start == (sext i32 (trunc i64 %Start to i32)
+//           to i64) P3(EqualPred): %Step == (sext i32 (trunc i64 %Step to i32)
+//           to i64)
 //    The returned pair means that SymbolicPHI can be rewritten into NewAddRec
 //    under the predicates {P1,P2,P3}.
 //    This predicated rewrite will be cached in PredicatedSCEVRewrites:
@@ -5414,7 +5438,8 @@ static const Loop *isIntegerLoopHeaderPHI(const PHINode *PN, LoopInfo &LI) {
 //
 // 3) Outline common code with createAddRecFromPHI to avoid duplication.
 std::optional<std::pair<const SCEV *, SmallVector<const SCEVPredicate *, 3>>>
-ScalarEvolution::createAddRecFromPHIWithCastsImpl(const SCEVUnknown *SymbolicPHI) {
+ScalarEvolution::createAddRecFromPHIWithCastsImpl(
+    const SCEVUnknown *SymbolicPHI) {
   SmallVector<const SCEVPredicate *, 3> Predicates;
 
   // *** Part1: Analyze if we have a phi-with-cast pattern for which we can
@@ -5647,7 +5672,7 @@ ScalarEvolution::createAddRecFromPHIWithCasts(const SCEVUnknown *SymbolicPHI) {
   }
 
   std::optional<std::pair<const SCEV *, SmallVector<const SCEVPredicate *, 3>>>
-    Rewrite = createAddRecFromPHIWithCastsImpl(SymbolicPHI);
+      Rewrite = createAddRecFromPHIWithCastsImpl(SymbolicPHI);
 
   // Record in the cache that the analysis failed
   if (!Rewrite) {
@@ -5879,8 +5904,7 @@ const SCEV *ScalarEvolution::createAddRecFromPHI(PHINode *PN) {
     //   PHI(f(0), f({1,+,1})) --> f({0,+,1})
     const SCEV *Shifted = SCEVShiftRewriter::rewrite(BEValue, L, *this);
     const SCEV *Start = SCEVInitRewriter::rewrite(Shifted, L, *this, false);
-    if (Shifted != getCouldNotCompute() &&
-        Start != getCouldNotCompute()) {
+    if (Shifted != getCouldNotCompute() && Start != getCouldNotCompute()) {
       const SCEV *StartVal = getSCEV(StartValueV);
       if (Start == StartVal) {
         // Okay, for the entire analysis of this edge we assumed the PHI
@@ -5936,8 +5960,9 @@ static bool BrPHIToSelect(DominatorTree &DT, BranchInst *BI, PHINode *Merge,
 }
 
 const SCEV *ScalarEvolution::createNodeFromSelectLikePHI(PHINode *PN) {
-  auto IsReachable =
-      [&](BasicBlock *BB) { return DT.isReachableFromEntry(BB); };
+  auto IsReachable = [&](BasicBlock *BB) {
+    return DT.isReachableFromEntry(BB);
+  };
   if (PN->getNumIncomingValues() == 2 && all_of(PN->blocks(), IsReachable)) {
     // Try to match
     //
@@ -6278,7 +6303,7 @@ APInt ScalarEvolution::getConstantMultipleImpl(const SCEV *S) {
   case scAddRecExpr: {
     const SCEVNAryExpr *N = cast<SCEVNAryExpr>(S);
     if (N->hasNoUnsignedWrap())
-        return GetGCDMultiple(N);
+      return GetGCDMultiple(N);
     // Find the trailing bits, which is the minimum of its operands.
     uint32_t TZ = getMinTrailingZeros(N->getOperand(0));
     for (const SCEV *Operand : N->operands().drop_front())
@@ -6345,8 +6370,8 @@ void ScalarEvolution::setNoWrapFlags(SCEVAddRecExpr *AddRec,
   }
 }
 
-ConstantRange ScalarEvolution::
-getRangeForUnknownRecurrence(const SCEVUnknown *U) {
+ConstantRange
+ScalarEvolution::getRangeForUnknownRecurrence(const SCEVUnknown *U) {
   const DataLayout &DL = getDataLayout();
 
   unsigned BitWidth = getTypeSizeInBits(U->getType());
@@ -6413,7 +6438,7 @@ getRangeForUnknownRecurrence(const SCEVUnknown *U) {
 
   // Compute total shift amount, being careful of overflow and bitwidths.
   auto MaxShiftAmt = KnownStep.getMaxValue();
-  APInt TCAP(BitWidth, TC-1);
+  APInt TCAP(BitWidth, TC - 1);
   bool Overflow = false;
   auto TotalShift = MaxShiftAmt.umul_ov(TCAP, Overflow);
   if (Overflow)
@@ -6428,8 +6453,8 @@ getRangeForUnknownRecurrence(const SCEVUnknown *U) {
     //   saturation => 0 or -1
     //   other => a value closer to zero (of the same sign)
     // Thus, the end value is closer to zero than the start.
-    auto KnownEnd = KnownBits::ashr(KnownStart,
-                                    KnownBits::makeConstant(TotalShift));
+    auto KnownEnd =
+        KnownBits::ashr(KnownStart, KnownBits::makeConstant(TotalShift));
     if (KnownStart.isNonNegative())
       // Analogous to lshr (simply not yet canonicalized)
       return ConstantRange::getNonEmpty(KnownEnd.getMinValue(),
@@ -6446,15 +6471,15 @@ getRangeForUnknownRecurrence(const SCEVUnknown *U) {
     //   saturation => 0
     //   other => a smaller positive number
     // Thus, the low end of the unsigned range is the last value produced.
-    auto KnownEnd = KnownBits::lshr(KnownStart,
-                                    KnownBits::makeConstant(TotalShift));
+    auto KnownEnd =
+        KnownBits::lshr(KnownStart, KnownBits::makeConstant(TotalShift));
     return ConstantRange::getNonEmpty(KnownEnd.getMinValue(),
                                       KnownStart.getMaxValue() + 1);
   }
   case Instruction::Shl: {
     // Iff no bits are shifted out, value increases on every shift.
-    auto KnownEnd = KnownBits::shl(KnownStart,
-                                   KnownBits::makeConstant(TotalShift));
+    auto KnownEnd =
+        KnownBits::shl(KnownStart, KnownBits::makeConstant(TotalShift));
     if (TotalShift.ult(KnownStart.countMinLeadingZeros()))
       return ConstantRange(KnownStart.getMinValue(),
                            KnownEnd.getMaxValue() + 1);
@@ -6582,8 +6607,7 @@ const ConstantRange &ScalarEvolution::getRangeRef(
       ConservativeResult =
           ConstantRange(APInt::getMinValue(BitWidth),
                         APInt::getMaxValue(BitWidth) - Remainder + 1);
-  }
-  else {
+  } else {
     uint32_t TZ = getMinTrailingZeros(S);
     if (TZ != 0) {
       ConservativeResult = ConstantRange(
@@ -7034,8 +7058,8 @@ ConstantRange ScalarEvolution::getRangeForAffineNoSelfWrappingAR(
   if (RangeBetween.isFullSet())
     return RangeBetween;
   // Only deal with ranges that do not wrap (i.e. RangeMin < RangeMax).
-  bool IsWrappedSet = IsSigned ? RangeBetween.isSignWrappedSet()
-                               : RangeBetween.isWrappedSet();
+  bool IsWrappedSet =
+      IsSigned ? RangeBetween.isSignWrappedSet() : RangeBetween.isWrappedSet();
   if (IsWrappedSet)
     return ConstantRange::getFull(BitWidth);
 
@@ -7043,7 +7067,7 @@ ConstantRange ScalarEvolution::getRangeForAffineNoSelfWrappingAR(
       isKnownPredicateViaConstantRanges(LEPred, Start, End))
     return RangeBetween;
   if (isKnownNegative(Step) &&
-           isKnownPredicateViaConstantRanges(GEPred, Start, End))
+      isKnownPredicateViaConstantRanges(GEPred, Start, End))
     return RangeBetween;
   return ConstantRange::getFull(BitWidth);
 }
@@ -7069,8 +7093,7 @@ ConstantRange ScalarEvolution::getRangeViaFactoring(const SCEV *Start,
       std::optional<unsigned> CastOp;
       APInt Offset(BitWidth, 0);
 
-      assert(SE.getTypeSizeInBits(S->getType()) == BitWidth &&
-             "Should be!");
+      assert(SE.getTypeSizeInBits(S->getType()) == BitWidth && "Should be!");
 
       // Peel off a constant offset:
       if (auto *SA = dyn_cast<SCEVAddExpr>(S)) {
@@ -7168,7 +7191,8 @@ ConstantRange ScalarEvolution::getRangeViaFactoring(const SCEV *Start,
 }
 
 SCEV::NoWrapFlags ScalarEvolution::getNoWrapFlagsFromUB(const Value *V) {
-  if (isa<ConstantExpr>(V)) return SCEV::FlagAnyWrap;
+  if (isa<ConstantExpr>(V))
+    return SCEV::FlagAnyWrap;
   const BinaryOperator *BinOp = cast<BinaryOperator>(V);
 
   // Return early if there are no flags to propagate to the SCEV.
@@ -7252,7 +7276,6 @@ bool ScalarEvolution::isGuaranteedToTransferExecutionTo(const Instruction *A,
   return false;
 }
 
-
 bool ScalarEvolution::isSCEVExprNeverPoison(const Instruction *I) {
   // Only proceed if we can prove that I does not yield poison.
   if (!programUndefinedIfPoison(I))
@@ -7734,8 +7757,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
         unsigned TZ = A.countr_zero();
         unsigned BitWidth = A.getBitWidth();
         KnownBits Known(BitWidth);
-        computeKnownBits(BO->LHS, Known, getDataLayout(),
-                         0, &AC, nullptr, &DT);
+        computeKnownBits(BO->LHS, Known, getDataLayout(), 0, &AC, nullptr, &DT);
 
         APInt EffectiveMask =
             APInt::getLowBitsSet(BitWidth, BitWidth - LZ - TZ).shl(TZ);
@@ -7749,7 +7771,7 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
               unsigned MulZeros = OpC->getAPInt().countr_zero();
               unsigned GCD = std::min(MulZeros, TZ);
               APInt DivAmt = APInt::getOneBitSet(BitWidth, TZ - GCD);
-              SmallVector<const SCEV*, 4> MulOps;
+              SmallVector<const SCEV *, 4> MulOps;
               MulOps.push_back(getConstant(OpC->getAPInt().lshr(GCD)));
               append_range(MulOps, LHSMul->operands().drop_front());
               auto *NewMul = getMulExpr(MulOps, LHSMul->getNoWrapFlags());
@@ -7760,7 +7782,8 @@ const SCEV *ScalarEvolution::createSCEV(Value *V) {
             ShiftedLHS = getUDivExpr(LHS, MulCount);
           return getMulExpr(
               getZeroExtendExpr(
-                  getTruncateExpr(ShiftedLHS,
+                  getTruncateExpr(
+                      ShiftedLHS,
                       IntegerType::get(getContext(), BitWidth - LZ - TZ)),
                   BO->LHS->getType()),
               MulCount);
@@ -8095,7 +8118,7 @@ const SCEV *ScalarEvolution::getTripCountFromExitCount(const SCEV *ExitCount,
 
   auto CanAddOneWithoutOverflow = [&]() {
     ConstantRange ExitCountRange =
-      getRangeRef(ExitCount, RangeSignHint::HINT_RANGE_UNSIGNED);
+        getRangeRef(ExitCount, RangeSignHint::HINT_RANGE_UNSIGNED);
     if (!ExitCountRange.contains(APInt::getMaxValue(ExitCountSize)))
       return true;
 
@@ -8216,9 +8239,8 @@ const SCEV *ScalarEvolution::getExitCount(const Loop *L,
   llvm_unreachable("Invalid ExitCountKind!");
 }
 
-const SCEV *
-ScalarEvolution::getPredicatedBackedgeTakenCount(const Loop *L,
-                                                 SmallVector<const SCEVPredicate *, 4> &Preds) {
+const SCEV *ScalarEvolution::getPredicatedBackedgeTakenCount(
+    const Loop *L, SmallVector<const SCEVPredicate *, 4> &Preds) {
   return getPredicatedBackedgeTakenInfo(L).getExact(L, this, &Preds);
 }
 
@@ -8385,7 +8407,7 @@ void ScalarEvolution::forgetLoop(const Loop *L) {
     auto LoopUsersItr = LoopUsers.find(CurrL);
     if (LoopUsersItr != LoopUsers.end()) {
       ToForget.insert(ToForget.end(), LoopUsersItr->second.begin(),
-                LoopUsersItr->second.end());
+                      LoopUsersItr->second.end());
     }
 
     // Drop information about expressions based on loop-header PHIs.
@@ -8406,7 +8428,8 @@ void ScalarEvolution::forgetTopmostLoop(const Loop *L) {
 
 void ScalarEvolution::forgetValue(Value *V) {
   Instruction *I = dyn_cast<Instruction>(V);
-  if (!I) return;
+  if (!I)
+    return;
 
   // Drop information about expressions based on loop-header PHIs.
   SmallVector<Instruction *, 16> Worklist;
@@ -8463,9 +8486,9 @@ void ScalarEvolution::forgetBlockAndLoopDispositions(Value *V) {
 /// is never skipped. This is a valid assumption as long as the loop exits via
 /// that test. For precise results, it is the caller's responsibility to specify
 /// the relevant loop exiting block using getExact(ExitingBlock, SE).
-const SCEV *
-ScalarEvolution::BackedgeTakenInfo::getExact(const Loop *L, ScalarEvolution *SE,
-                                             SmallVector<const SCEVPredicate *, 4> *Preds) const {
+const SCEV *ScalarEvolution::BackedgeTakenInfo::getExact(
+    const Loop *L, ScalarEvolution *SE,
+    SmallVector<const SCEVPredicate *, 4> *Preds) const {
   // If any exits were not computable, the loop is not computable.
   if (!isComplete() || ExitNotTaken.empty())
     return SE->getCouldNotCompute();
@@ -8606,7 +8629,7 @@ ScalarEvolution::ExitLimit::ExitLimit(
     const SCEV *SymbolicMaxNotTaken, bool MaxOrZero,
     const SmallPtrSetImpl<const SCEVPredicate *> &PredSet)
     : ExitLimit(E, ConstantMaxNotTaken, SymbolicMaxNotTaken, MaxOrZero,
-                { &PredSet }) {}
+                {&PredSet}) {}
 
 /// Allocate memory for BackedgeTakenInfo and copy the not-taken count of each
 /// computable exit into a persistent ExitNotTakenInfo array.
@@ -8620,12 +8643,12 @@ ScalarEvolution::BackedgeTakenInfo::BackedgeTakenInfo(
   std::transform(ExitCounts.begin(), ExitCounts.end(),
                  std::back_inserter(ExitNotTaken),
                  [&](const EdgeExitInfo &EEI) {
-        BasicBlock *ExitBB = EEI.first;
-        const ExitLimit &EL = EEI.second;
-        return ExitNotTakenInfo(ExitBB, EL.ExactNotTaken,
-                                EL.ConstantMaxNotTaken, EL.SymbolicMaxNotTaken,
-                                EL.Predicates);
-  });
+                   BasicBlock *ExitBB = EEI.first;
+                   const ExitLimit &EL = EEI.second;
+                   return ExitNotTakenInfo(
+                       ExitBB, EL.ExactNotTaken, EL.ConstantMaxNotTaken,
+                       EL.SymbolicMaxNotTaken, EL.Predicates);
+                 });
   assert((isa<SCEVCouldNotCompute>(ConstantMax) ||
           isa<SCEVConstant>(ConstantMax)) &&
          "No point in having a non-constant max backedge taken count!");
@@ -8715,8 +8738,10 @@ ScalarEvolution::computeBackedgeTakenCount(const Loop *L,
       }
     }
   }
-  const SCEV *MaxBECount = MustExitMaxBECount ? MustExitMaxBECount :
-    (MayExitMaxBECount ? MayExitMaxBECount : getCouldNotCompute());
+  const SCEV *MaxBECount =
+      MustExitMaxBECount
+          ? MustExitMaxBECount
+          : (MayExitMaxBECount ? MayExitMaxBECount : getCouldNotCompute());
   // The loop backedge will be taken the maximum or zero times if there's
   // a single exit that must be taken the maximum or zero times.
   bool MaxOrZero = (MustExitMaxOrZero && ExitingBlocks.size() == 1);
@@ -8738,7 +8763,7 @@ ScalarEvolution::computeBackedgeTakenCount(const Loop *L,
 
 ScalarEvolution::ExitLimit
 ScalarEvolution::computeExitLimit(const Loop *L, BasicBlock *ExitingBlock,
-                                      bool AllowPredicates) {
+                                  bool AllowPredicates) {
   assert(L->contains(ExitingBlock) && "Exit count for non-loop block?");
   // If our exiting block does not dominate the latch, then its connection with
   // loop's exit limit may be far from trivial.
@@ -8872,9 +8897,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromCondImpl(
   const APInt *C;
   if (match(ExitCond, m_ExtractValue<1>(m_WithOverflowInst(WO))) &&
       match(WO->getRHS(), m_APInt(C))) {
-    ConstantRange NWR =
-      ConstantRange::makeExactNoWrapRegion(WO->getBinaryOp(), *C,
-                                           WO->getNoWrapKind());
+    ConstantRange NWR = ConstantRange::makeExactNoWrapRegion(
+        WO->getBinaryOp(), *C, WO->getNoWrapKind());
     CmpInst::Predicate Pred;
     APInt NewRHSC, Offset;
     NWR.getEquivalentICmp(Pred, NewRHSC, Offset);
@@ -8971,7 +8995,7 @@ ScalarEvolution::computeExitLimitFromCondFromBinOp(
     SymbolicMaxBECount =
         isa<SCEVCouldNotCompute>(BECount) ? ConstantMaxBECount : BECount;
   return ExitLimit(BECount, ConstantMaxBECount, SymbolicMaxBECount, false,
-                   { &EL0.Predicates, &EL1.Predicates });
+                   {&EL0.Predicates, &EL1.Predicates});
 }
 
 ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
@@ -8993,8 +9017,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
   if (EL.hasAnyInfo())
     return EL;
 
-  auto *ExhaustiveCount =
-      computeExitCountExhaustively(L, ExitCond, ExitIfTrue);
+  auto *ExhaustiveCount = computeExitCountExhaustively(L, ExitCond, ExitIfTrue);
 
   if (!isa<SCEVCouldNotCompute>(ExhaustiveCount))
     return ExhaustiveCount;
@@ -9033,7 +9056,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
             ConstantRange::makeExactICmpRegion(Pred, RHSC->getAPInt());
 
         const SCEV *Ret = AddRec->getNumIterationsInRange(CompRange, *this);
-        if (!isa<SCEVCouldNotCompute>(Ret)) return Ret;
+        if (!isa<SCEVCouldNotCompute>(Ret))
+          return Ret;
       }
 
   // If this loop must exit based on this condition (or execute undefined
@@ -9053,7 +9077,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
           StrideC && StrideC->getAPInt().isPowerOf2()) {
         auto Flags = AR->getNoWrapFlags();
         Flags = setFlags(Flags, SCEV::FlagNW);
-        SmallVector<const SCEV*> Operands{AR->operands()};
+        SmallVector<const SCEV *> Operands{AR->operands()};
         Flags = StrengthenNoWrapFlags(this, scAddRecExpr, Operands, Flags);
         setNoWrapFlags(const_cast<SCEVAddRecExpr *>(AR), Flags);
       }
@@ -9061,7 +9085,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
   }
 
   switch (Pred) {
-  case ICmpInst::ICMP_NE: {                     // while (X != Y)
+  case ICmpInst::ICMP_NE: { // while (X != Y)
     // Convert to: while (X-Y != 0)
     if (LHS->getType()->isPointerTy()) {
       LHS = getLosslessPtrToIntExpr(LHS);
@@ -9079,7 +9103,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
       return EL;
     break;
   }
-  case ICmpInst::ICMP_EQ: {                     // while (X == Y)
+  case ICmpInst::ICMP_EQ: { // while (X == Y)
     // Convert to: while (X-Y == 0)
     if (LHS->getType()->isPointerTy()) {
       LHS = getLosslessPtrToIntExpr(LHS);
@@ -9092,7 +9116,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeExitLimitFromICmp(
         return RHS;
     }
     ExitLimit EL = howFarToNonZero(getMinusSCEV(LHS, RHS), L);
-    if (EL.hasAnyInfo()) return EL;
+    if (EL.hasAnyInfo())
+      return EL;
     break;
   }
   case ICmpInst::ICMP_SLE:
@@ -9188,9 +9213,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeShiftCompareExitLimit(
 
   // Return true if V is of the form "LHS `shift_op` <positive constant>".
   // Return LHS in OutLHS and shift_opt in OutOpCode.
-  auto MatchPositiveShift =
-      [](Value *V, Value *&OutLHS, Instruction::BinaryOps &OutOpCode) {
-
+  auto MatchPositiveShift = [](Value *V, Value *&OutLHS,
+                               Instruction::BinaryOps &OutOpCode) {
     using namespace PatternMatch;
 
     ConstantInt *ShiftAmt;
@@ -9214,8 +9238,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeShiftCompareExitLimit(
   //
   // Return true on a successful match.  Return the corresponding PHI node (%iv
   // above) in PNOut and the opcode of the shift operation in OpCodeOut.
-  auto MatchShiftRecurrence =
-      [&](Value *V, PHINode *&PNOut, Instruction::BinaryOps &OpCodeOut) {
+  auto MatchShiftRecurrence = [&](Value *V, PHINode *&PNOut,
+                                  Instruction::BinaryOps &OpCodeOut) {
     std::optional<Instruction::BinaryOps> PostShiftOpCode;
 
     {
@@ -9318,9 +9342,9 @@ ScalarEvolution::ExitLimit ScalarEvolution::computeShiftCompareExitLimit(
 /// Return true if we can constant fold an instruction of the specified type,
 /// assuming that all operands were constants.
 static bool CanConstantFold(const Instruction *I) {
-  if (isa<BinaryOperator>(I) || isa<CmpInst>(I) ||
-      isa<SelectInst>(I) || isa<CastInst>(I) || isa<GetElementPtrInst>(I) ||
-      isa<LoadInst>(I) || isa<ExtractValueInst>(I))
+  if (isa<BinaryOperator>(I) || isa<CmpInst>(I) || isa<SelectInst>(I) ||
+      isa<CastInst>(I) || isa<GetElementPtrInst>(I) || isa<LoadInst>(I) ||
+      isa<ExtractValueInst>(I))
     return true;
 
   if (const CallInst *CI = dyn_cast<CallInst>(I))
@@ -9333,7 +9357,8 @@ static bool CanConstantFold(const Instruction *I) {
 /// assuming its operands can all constant evolve.
 static bool canConstantEvolve(Instruction *I, const Loop *L) {
   // An instruction outside of the loop can't be derived from a loop PHI.
-  if (!L->contains(I)) return false;
+  if (!L->contains(I))
+    return false;
 
   if (isa<PHINode>(I)) {
     // We don't currently keep track of the control flow needed to evaluate
@@ -9359,10 +9384,12 @@ getConstantEvolvingPHIOperands(Instruction *UseInst, const Loop *L,
   // constant or derived from a PHI node themselves.
   PHINode *PHI = nullptr;
   for (Value *Op : UseInst->operands()) {
-    if (isa<Constant>(Op)) continue;
+    if (isa<Constant>(Op))
+      continue;
 
     Instruction *OpInst = dyn_cast<Instruction>(Op);
-    if (!OpInst || !canConstantEvolve(OpInst, L)) return nullptr;
+    if (!OpInst || !canConstantEvolve(OpInst, L))
+      return nullptr;
 
     PHINode *P = dyn_cast<PHINode>(OpInst);
     if (!P)
@@ -9377,9 +9404,9 @@ getConstantEvolvingPHIOperands(Instruction *UseInst, const Loop *L,
       PHIMap[OpInst] = P;
     }
     if (!P)
-      return nullptr;  // Not evolving from PHI
+      return nullptr; // Not evolving from PHI
     if (PHI && PHI != P)
-      return nullptr;  // Evolving from multiple different PHIs.
+      return nullptr; // Evolving from multiple different PHIs.
     PHI = P;
   }
   // This is a expression evolving from a constant PHI!
@@ -9393,7 +9420,8 @@ getConstantEvolvingPHIOperands(Instruction *UseInst, const Loop *L,
 /// constraints, return null.
 static PHINode *getConstantEvolvingPHI(Value *V, const Loop *L) {
   Instruction *I = dyn_cast<Instruction>(V);
-  if (!I || !canConstantEvolve(I, L)) return nullptr;
+  if (!I || !canConstantEvolve(I, L))
+    return nullptr;
 
   if (PHINode *PN = dyn_cast<PHINode>(I))
     return PN;
@@ -9412,40 +9440,46 @@ static Constant *EvaluateExpression(Value *V, const Loop *L,
                                     const DataLayout &DL,
                                     const TargetLibraryInfo *TLI) {
   // Convenient constant check, but redundant for recursive calls.
-  if (Constant *C = dyn_cast<Constant>(V)) return C;
+  if (Constant *C = dyn_cast<Constant>(V))
+    return C;
   Instruction *I = dyn_cast<Instruction>(V);
-  if (!I) return nullptr;
+  if (!I)
+    return nullptr;
 
-  if (Constant *C = Vals.lookup(I)) return C;
+  if (Constant *C = Vals.lookup(I))
+    return C;
 
   // An instruction inside the loop depends on a value outside the loop that we
   // weren't given a mapping for, or a value such as a call inside the loop.
-  if (!canConstantEvolve(I, L)) return nullptr;
+  if (!canConstantEvolve(I, L))
+    return nullptr;
 
   // An unmapped PHI can be due to a branch or another loop inside this loop,
   // or due to this not being the initial iteration through a loop where we
   // couldn't compute the evolution of this particular PHI last time.
-  if (isa<PHINode>(I)) return nullptr;
+  if (isa<PHINode>(I))
+    return nullptr;
 
-  std::vector<Constant*> Operands(I->getNumOperands());
+  std::vector<Constant *> Operands(I->getNumOperands());
 
   for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i) {
     Instruction *Operand = dyn_cast<Instruction>(I->getOperand(i));
     if (!Operand) {
       Operands[i] = dyn_cast<Constant>(I->getOperand(i));
-      if (!Operands[i]) return nullptr;
+      if (!Operands[i])
+        return nullptr;
       continue;
     }
     Constant *C = EvaluateExpression(Operand, L, Vals, DL, TLI);
     Vals[Operand] = C;
-    if (!C) return nullptr;
+    if (!C)
+      return nullptr;
     Operands[i] = C;
   }
 
   return ConstantFoldInstOperands(I, Operands, DL, TLI);
 }
 
-
 // If every incoming value to PN except the one for BB is a specific Constant,
 // return that, else return nullptr.
 static Constant *getOtherIncomingValue(PHINode *PN, BasicBlock *BB) {
@@ -9473,16 +9507,16 @@ static Constant *getOtherIncomingValue(PHINode *PN, BasicBlock *BB) {
 /// in the header of its containing loop, we know the loop executes a
 /// constant number of times, and the PHI node is just a recurrence
 /// involving constants, fold it.
-Constant *
-ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
-                                                   const APInt &BEs,
-                                                   const Loop *L) {
+Constant *ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
+                                                             const APInt &BEs,
+                                                             const Loop *L) {
   auto I = ConstantEvolutionLoopExitValue.find(PN);
   if (I != ConstantEvolutionLoopExitValue.end())
     return I->second;
 
   if (BEs.ugt(MaxBruteForceIterations))
-    return ConstantEvolutionLoopExitValue[PN] = nullptr;  // Not going to evaluate it.
+    return ConstantEvolutionLoopExitValue[PN] =
+               nullptr; // Not going to evaluate it.
 
   Constant *&RetVal = ConstantEvolutionLoopExitValue[PN];
 
@@ -9510,9 +9544,9 @@ ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
   unsigned NumIterations = BEs.getZExtValue(); // must be in range
   unsigned IterationNum = 0;
   const DataLayout &DL = getDataLayout();
-  for (; ; ++IterationNum) {
+  for (;; ++IterationNum) {
     if (IterationNum == NumIterations)
-      return RetVal = CurrentIterVals[PN];  // Got exit value!
+      return RetVal = CurrentIterVals[PN]; // Got exit value!
 
     // Compute the value of the PHIs for the next iteration.
     // EvaluateExpression adds non-phi values to the CurrentIterVals map.
@@ -9520,7 +9554,7 @@ ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
     Constant *NextPHI =
         EvaluateExpression(BEValue, L, CurrentIterVals, DL, &TLI);
     if (!NextPHI)
-      return nullptr;        // Couldn't evaluate!
+      return nullptr; // Couldn't evaluate!
     NextIterVals[PN] = NextPHI;
 
     bool StoppedEvolving = NextPHI == CurrentIterVals[PN];
@@ -9531,7 +9565,8 @@ ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
     SmallVector<std::pair<PHINode *, Constant *>, 8> PHIsToCompute;
     for (const auto &I : CurrentIterVals) {
       PHINode *PHI = dyn_cast<PHINode>(I.first);
-      if (!PHI || PHI == PN || PHI->getParent() != Header) continue;
+      if (!PHI || PHI == PN || PHI->getParent() != Header)
+        continue;
       PHIsToCompute.emplace_back(PHI, I.second);
     }
     // We use two distinct loops because EvaluateExpression may invalidate any
@@ -9539,7 +9574,7 @@ ScalarEvolution::getConstantEvolutionLoopExitValue(PHINode *PN,
     for (const auto &I : PHIsToCompute) {
       PHINode *PHI = I.first;
       Constant *&NextPHI = NextIterVals[PHI];
-      if (!NextPHI) {   // Not already computed.
+      if (!NextPHI) { // Not already computed.
         Value *BEValue = PHI->getIncomingValueForBlock(Latch);
         NextPHI = EvaluateExpression(BEValue, L, CurrentIterVals, DL, &TLI);
       }
@@ -9560,11 +9595,13 @@ const SCEV *ScalarEvolution::computeExitCountExhaustively(const Loop *L,
                                                           Value *Cond,
                                                           bool ExitWhen) {
   PHINode *PN = getConstantEvolvingPHI(Cond, L);
-  if (!PN) return getCouldNotCompute();
+  if (!PN)
+    return getCouldNotCompute();
 
   // If the loop is canonicalized, the PHI will have exactly two entries.
   // That's the only form we support here.
-  if (PN->getNumIncomingValues() != 2) return getCouldNotCompute();
+  if (PN->getNumIncomingValues() != 2)
+    return getCouldNotCompute();
 
   DenseMap<Instruction *, Constant *> CurrentIterVals;
   BasicBlock *Header = L->getHeader();
@@ -9583,14 +9620,16 @@ const SCEV *ScalarEvolution::computeExitCountExhaustively(const Loop *L,
   // Okay, we find a PHI node that defines the trip count of this loop.  Execute
   // the loop symbolically to determine when the condition gets a value of
   // "ExitWhen".
-  unsigned MaxIterations = MaxBruteForceIterations;   // Limit analysis.
+  unsigned MaxIterations = MaxBruteForceIterations; // Limit analysis.
   const DataLayout &DL = getDataLayout();
-  for (unsigned IterationNum = 0; IterationNum != MaxIterations;++IterationNum){
+  for (unsigned IterationNum = 0; IterationNum != MaxIterations;
+       ++IterationNum) {
     auto *CondVal = dyn_cast_or_null<ConstantInt>(
         EvaluateExpression(Cond, L, CurrentIterVals, DL, &TLI));
 
     // Couldn't symbolically evaluate.
-    if (!CondVal) return getCouldNotCompute();
+    if (!CondVal)
+      return getCouldNotCompute();
 
     if (CondVal->getValue() == uint64_t(ExitWhen)) {
       ++NumBruteForceTripCountsComputed;
@@ -9606,12 +9645,14 @@ const SCEV *ScalarEvolution::computeExitCountExhaustively(const Loop *L,
     SmallVector<PHINode *, 8> PHIsToCompute;
     for (const auto &I : CurrentIterVals) {
       PHINode *PHI = dyn_cast<PHINode>(I.first);
-      if (!PHI || PHI->getParent() != Header) continue;
+      if (!PHI || PHI->getParent() != Header)
+        continue;
       PHIsToCompute.push_back(PHI);
     }
     for (PHINode *PHI : PHIsToCompute) {
       Constant *&NextPHI = NextIterVals[PHI];
-      if (NextPHI) continue;    // Already computed!
+      if (NextPHI)
+        continue; // Already computed!
 
       Value *BEValue = PHI->getIncomingValueForBlock(Latch);
       NextPHI = EvaluateExpression(BEValue, L, CurrentIterVals, DL, &TLI);
@@ -9989,7 +10030,7 @@ const SCEV *ScalarEvolution::stripInjectiveFunctions(const SCEV *S) const {
 ///
 /// If the equation does not have a solution, SCEVCouldNotCompute is returned.
 static const SCEV *SolveLinEquationWithOverflow(const APInt &A, const SCEV *B,
-                                               ScalarEvolution &SE) {
+                                                ScalarEvolution &SE) {
   uint32_t BW = A.getBitWidth();
   assert(BW == SE.getTypeSizeInBits(B->getType()));
   assert(A != 0 && "A must be non-zero.");
@@ -10014,9 +10055,9 @@ static const SCEV *SolveLinEquationWithOverflow(const APInt &A, const SCEV *B,
   // If D == 1, (N / D) == N == 2^BW, so we need one extra bit to represent
   // (N / D) in general. The inverse itself always fits into BW bits, though,
   // so we immediately truncate it.
-  APInt AD = A.lshr(Mult2).zext(BW + 1);  // AD = A / D
+  APInt AD = A.lshr(Mult2).zext(BW + 1); // AD = A / D
   APInt Mod(BW + 1, 0);
-  Mod.setBit(BW - Mult2);  // Mod = N / D
+  Mod.setBit(BW - Mult2); // Mod = N / D
   APInt I = AD.multiplicativeInverse(Mod).trunc(BW);
 
   // 4. Compute the minimum unsigned root of the equation:
@@ -10042,8 +10083,8 @@ GetQuadraticEquation(const SCEVAddRecExpr *AddRec) {
   const SCEVConstant *LC = dyn_cast<SCEVConstant>(AddRec->getOperand(0));
   const SCEVConstant *MC = dyn_cast<SCEVConstant>(AddRec->getOperand(1));
   const SCEVConstant *NC = dyn_cast<SCEVConstant>(AddRec->getOperand(2));
-  LLVM_DEBUG(dbgs() << __func__ << ": analyzing quadratic addrec: "
-                    << *AddRec << '\n');
+  LLVM_DEBUG(dbgs() << __func__ << ": analyzing quadratic addrec: " << *AddRec
+                    << '\n');
 
   // We currently can only solve this if the coefficients are constants.
   if (!LC || !MC || !NC) {
@@ -10058,8 +10099,7 @@ GetQuadraticEquation(const SCEVAddRecExpr *AddRec) {
 
   unsigned BitWidth = LC->getAPInt().getBitWidth();
   unsigned NewWidth = BitWidth + 1;
-  LLVM_DEBUG(dbgs() << __func__ << ": addrec coeff bw: "
-                    << BitWidth << '\n');
+  LLVM_DEBUG(dbgs() << __func__ << ": addrec coeff bw: " << BitWidth << '\n');
   // The sign-extension (as opposed to a zero-extension) here matches the
   // extension used in SolveQuadraticEquationWrap (with the same motivation).
   N = N.sext(NewWidth);
@@ -10080,9 +10120,9 @@ GetQuadraticEquation(const SCEVAddRecExpr *AddRec) {
   APInt B = 2 * M - A;
   APInt C = 2 * L;
   APInt T = APInt(NewWidth, 2);
-  LLVM_DEBUG(dbgs() << __func__ << ": equation " << A << "x^2 + " << B
-                    << "x + " << C << ", coeff bw: " << NewWidth
-                    << ", multiplied by " << T << '\n');
+  LLVM_DEBUG(dbgs() << __func__ << ": equation " << A << "x^2 + " << B << "x + "
+                    << C << ", coeff bw: " << NewWidth << ", multiplied by "
+                    << T << '\n');
   return std::make_tuple(A, B, C, T, BitWidth);
 }
 
@@ -10216,13 +10256,13 @@ SolveQuadraticAddRecRange(const SCEVAddRecExpr *AddRec,
     std::optional<APInt> UO =
         APIntOps::SolveQuadraticEquationWrap(A, B, -Bound, BitWidth + 1);
 
-    auto LeavesRange = [&] (const APInt &X) {
+    auto LeavesRange = [&](const APInt &X) {
       ConstantInt *C0 = ConstantInt::get(SE.getContext(), X);
       ConstantInt *V0 = EvaluateConstantChrecAtConstant(AddRec, C0, SE);
       if (Range.contains(V0->getValue()))
         return false;
       // X should be at least 1, so X-1 is non-negative.
-      ConstantInt *C1 = ConstantInt::get(SE.getContext(), X-1);
+      ConstantInt *C1 = ConstantInt::get(SE.getContext(), X - 1);
       ConstantInt *V1 = EvaluateConstantChrecAtConstant(AddRec, C1, SE);
       if (Range.contains(V1->getValue()))
         return true;
@@ -10239,10 +10279,10 @@ SolveQuadraticAddRecRange(const SCEVAddRecExpr *AddRec,
     // At this point, both SO and UO must have values.
     std::optional<APInt> Min = MinOptional(SO, UO);
     if (LeavesRange(*Min))
-      return { Min, true };
+      return {Min, true};
     std::optional<APInt> Max = Min == SO ? UO : SO;
     if (LeavesRange(*Max))
-      return { Max, true };
+      return {Max, true};
 
     // Solutions were found, but were eliminated, hence the "true".
     return {std::nullopt, true};
@@ -10314,8 +10354,9 @@ ScalarEvolution::ExitLimit ScalarEvolution::howFarToZero(const SCEV *V,
   // If the value is a constant
   if (const SCEVConstant *C = dyn_cast<SCEVConstant>(V)) {
     // If the value is already zero, the branch will execute zero times.
-    if (C->getValue()->isZero()) return C;
-    return getCouldNotCompute();  // Otherwise it will loop infinitely.
+    if (C->getValue()->isZero())
+      return C;
+    return getCouldNotCompute(); // Otherwise it will loop infinitely.
   }
 
   const SCEVAddRecExpr *AddRec =
@@ -10387,9 +10428,9 @@ ScalarEvolution::ExitLimit ScalarEvolution::howFarToZero(const SCEV *V,
     APInt MaxBECount = getUnsignedRangeMax(applyLoopGuards(Distance, L));
     MaxBECount = APIntOps::umin(MaxBECount, getUnsignedRangeMax(Distance));
 
-    // When a loop like "for (int i = 0; i != n; ++i) { /* body */ }" is rotated,
-    // we end up with a loop whose backedge-taken count is n - 1.  Detect this
-    // case, and see if we can improve the bound.
+    // When a loop like "for (int i = 0; i != n; ++i) { /* body */ }" is
+    // rotated, we end up with a loop whose backedge-taken count is n - 1.
+    // Detect this case, and see if we can improve the bound.
     //
     // Explicitly handling this here is necessary because getUnsignedRange
     // isn't context-sensitive; it doesn't know that we only care about the
@@ -10440,8 +10481,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::howFarToZero(const SCEV *V,
   return ExitLimit(E, M, S, false, Predicates);
 }
 
-ScalarEvolution::ExitLimit
-ScalarEvolution::howFarToNonZero(const SCEV *V, const Loop *L) {
+ScalarEvolution::ExitLimit ScalarEvolution::howFarToNonZero(const SCEV *V,
+                                                            const Loop *L) {
   // Loops that look like: while (X == 0) are very strange indeed.  We don't
   // handle them yet except for the trivial case.  This could be expanded in the
   // future as needed.
@@ -10451,7 +10492,7 @@ ScalarEvolution::howFarToNonZero(const SCEV *V, const Loop *L) {
   if (const SCEVConstant *C = dyn_cast<SCEVConstant>(V)) {
     if (!C->getValue()->isZero())
       return getZero(C->getType());
-    return getCouldNotCompute();  // Otherwise it will loop infinitely.
+    return getCouldNotCompute(); // Otherwise it will loop infinitely.
   }
 
   // We could implement others, but I really doubt anyone writes loops like
@@ -10460,8 +10501,8 @@ ScalarEvolution::howFarToNonZero(const SCEV *V, const Loop *L) {
 }
 
 std::pair<const BasicBlock *, const BasicBlock *>
-ScalarEvolution::getPredecessorWithUniqueSuccessorForBB(const BasicBlock *BB)
-    const {
+ScalarEvolution::getPredecessorWithUniqueSuccessorForBB(
+    const BasicBlock *BB) const {
   // If the block has a unique predecessor, then there is no path from the
   // predecessor to the block that does not go through the direct edge
   // from the predecessor to the block.
@@ -10483,13 +10524,15 @@ ScalarEvolution::getPredecessorWithUniqueSuccessorForBB(const BasicBlock *BB)
 /// front-end may have replicated the controlling expression.
 static bool HasSameValue(const SCEV *A, const SCEV *B) {
   // Quick check to see if they are the same SCEV.
-  if (A == B) return true;
+  if (A == B)
+    return true;
 
   auto ComputesEqualValues = [](const Instruction *A, const Instruction *B) {
     // Not all instructions that are "identical" compute the same value.  For
     // instance, two distinct alloca instructions allocating the same type are
     // identical and do not read memory; but compute distinct values.
-    return A->isIdenticalTo(B) && (isa<BinaryOperator>(A) || isa<GetElementPtrInst>(A));
+    return A->isIdenticalTo(B) &&
+           (isa<BinaryOperator>(A) || isa<GetElementPtrInst>(A));
   };
 
   // Otherwise, if they're both SCEVUnknown, it's possible that they hold
@@ -10524,9 +10567,8 @@ bool ScalarEvolution::SimplifyICmpOperands(ICmpInst::Predicate &Pred,
   if (const SCEVConstant *LHSC = dyn_cast<SCEVConstant>(LHS)) {
     // Check for both operands constant.
     if (const SCEVConstant *RHSC = dyn_cast<SCEVConstant>(RHS)) {
-      if (ConstantExpr::getICmp(Pred,
-                                LHSC->getValue(),
-                                RHSC->getValue())->isNullValue())
+      if (ConstantExpr::getICmp(Pred, LHSC->getValue(), RHSC->getValue())
+              ->isNullValue())
         return TrivialCase(false);
       return TrivialCase(true);
     }
@@ -10592,7 +10634,6 @@ bool ScalarEvolution::SimplifyICmpOperands(ICmpInst::Predicate &Pred,
               }
         break;
 
-
         // The "Should have been caught earlier!" messages refer to the fact
         // that the ExactCR.isFullSet() or ExactCR.isEmptySet() check above
         // should have fired on the corresponding cases, and canonicalized the
@@ -10639,8 +10680,8 @@ bool ScalarEvolution::SimplifyICmpOperands(ICmpInst::Predicate &Pred,
   switch (Pred) {
   case ICmpInst::ICMP_SLE:
     if (!getSignedRangeMax(RHS).isMaxSignedValue()) {
-      RHS = getAddExpr(getConstant(RHS->getType(), 1, true), RHS,
-                       SCEV::FlagNSW);
+      RHS =
+          getAddExpr(getConstant(RHS->getType(), 1, true), RHS, SCEV::FlagNSW);
       Pred = ICmpInst::ICMP_SLT;
       Changed = true;
     } else if (!getSignedRangeMin(LHS).isMinSignedValue()) {
@@ -10657,16 +10698,16 @@ bool ScalarEvolution::SimplifyICmpOperands(ICmpInst::Predicate &Pred,
       Pred = ICmpInst::ICMP_SGT;
       Changed = true;
     } else if (!getSignedRangeMax(LHS).isMaxSignedValue()) {
-      LHS = getAddExpr(getConstant(RHS->getType(), 1, true), LHS,
-                       SCEV::FlagNSW);
+      LHS =
+          getAddExpr(getConstant(RHS->getType(), 1, true), LHS, SCEV::FlagNSW);
       Pred = ICmpInst::ICMP_SGT;
       Changed = true;
     }
     break;
   case ICmpInst::ICMP_ULE:
     if (!getUnsignedRangeMax(RHS).isMaxValue()) {
-      RHS = getAddExpr(getConstant(RHS->getType(), 1, true), RHS,
-                       SCEV::FlagNUW);
+      RHS =
+          getAddExpr(getConstant(RHS->getType(), 1, true), RHS, SCEV::FlagNUW);
       Pred = ICmpInst::ICMP_ULT;
       Changed = true;
     } else if (!getUnsignedRangeMin(LHS).isMinValue()) {
@@ -10681,8 +10722,8 @@ bool ScalarEvolution::SimplifyICmpOperands(ICmpInst::Predicate &Pred,
       Pred = ICmpInst::ICMP_UGT;
       Changed = true;
     } else if (!getUnsignedRangeMax(LHS).isMaxValue()) {
-      LHS = getAddExpr(getConstant(RHS->getType(), 1, true), LHS,
-                       SCEV::FlagNUW);
+      LHS =
+          getAddExpr(getConstant(RHS->getType(), 1, true), LHS, SCEV::FlagNUW);
       Pred = ICmpInst::ICMP_UGT;
       Changed = true;
     }
@@ -10726,11 +10767,11 @@ ScalarEvolution::SplitIntoInitAndPostInc(const Loop *L, const SCEV *S) {
   // Compute SCEV on entry of loop L.
   const SCEV *Start = SCEVInitRewriter::rewrite(S, L, *this);
   if (Start == getCouldNotCompute())
-    return { Start, Start };
+    return {Start, Start};
   // Compute post increment SCEV for loop L.
   const SCEV *PostInc = SCEVPostIncRewriter::rewrite(S, L, *this);
   assert(PostInc != getCouldNotCompute() && "Unexpected could not compute");
-  return { Start, PostInc };
+  return {Start, PostInc};
 }
 
 bool ScalarEvolution::isKnownViaInduction(ICmpInst::Predicate Pred,
@@ -10743,7 +10784,7 @@ bool ScalarEvolution::isKnownViaInduction(ICmpInst::Predicate Pred,
   if (LoopsUsed.empty())
     return false;
 
-  // Domination relationship must be a linear order on collected loops.
+    // Domination relationship must be a linear order on collected loops.
 #ifndef NDEBUG
   for (const auto *L1 : LoopsUsed)
     for (const auto *L2 : LoopsUsed)
@@ -10752,24 +10793,23 @@ bool ScalarEvolution::isKnownViaInduction(ICmpInst::Predicate Pred,
              "Domination relationship is not a linear order");
 #endif
 
-  const Loop *MDL =
-      *std::max_element(LoopsUsed.begin(), LoopsUsed.end(),
-                        [&](const Loop *L1, const Loop *L2) {
-         return DT.properlyDominates(L1->getHeader(), L2->getHeader());
-       });
+  const Loop *MDL = *std::max_element(
+      LoopsUsed.begin(), LoopsUsed.end(), [&](const Loop *L1, const Loop *L2) {
+        return DT.properlyDominates(L1->getHeader(), L2->getHeader());
+      });
 
   // Get init and post increment value for LHS.
   auto SplitLHS = SplitIntoInitAndPostInc(MDL, LHS);
   // if LHS contains unknown non-invariant SCEV then bail out.
   if (SplitLHS.first == getCouldNotCompute())
     return false;
-  assert (SplitLHS.second != getCouldNotCompute() && "Unexpected CNC");
+  assert(SplitLHS.second != getCouldNotCompute() && "Unexpected CNC");
   // Get init and post increment value for RHS.
   auto SplitRHS = SplitIntoInitAndPostInc(MDL, RHS);
   // if RHS contains unknown non-invariant SCEV then bail out.
   if (SplitRHS.first == getCouldNotCompute())
     return false;
-  assert (SplitRHS.second != getCouldNotCompute() && "Unexpected CNC");
+  assert(SplitRHS.second != getCouldNotCompute() && "Unexpected CNC");
   // It is possible that init SCEV contains an invariant load but it does
   // not dominate MDL and is not available at MDL loop entry, so we should
   // check it here.
@@ -10826,9 +10866,8 @@ ScalarEvolution::evaluatePredicateAt(ICmpInst::Predicate Pred, const SCEV *LHS,
 
   if (isBasicBlockEntryGuardedByCond(CtxI->getParent(), Pred, LHS, RHS))
     return true;
-  if (isBasicBlockEntryGuardedByCond(CtxI->getParent(),
-                                          ICmpInst::getInversePredicate(Pred),
-                                          LHS, RHS))
+  if (isBasicBlockEntryGuardedByCond(
+          CtxI->getParent(), ICmpInst::getInversePredicate(Pred), LHS, RHS))
     return false;
   return std::nullopt;
 }
@@ -11251,10 +11290,10 @@ bool ScalarEvolution::isImpliedViaGuard(const BasicBlock *BB,
 /// isLoopBackedgeGuardedByCond - Test whether the backedge of the loop is
 /// protected by a conditional between LHS and RHS.  This is used to
 /// to eliminate casts.
-bool
-ScalarEvolution::isLoopBackedgeGuardedByCond(const Loop *L,
-                                             ICmpInst::Predicate Pred,
-                                             const SCEV *LHS, const SCEV *RHS) {
+bool ScalarEvolution::isLoopBackedgeGuardedByCond(const Loop *L,
+                                                  ICmpInst::Predicate Pred,
+                                                  const SCEV *LHS,
+                                                  const SCEV *RHS) {
   // Interpret a null as meaning no loop, where there is obviously no guard
   // (interprocedural conditions notwithstanding). Do not bother about
   // unreachable loops.
@@ -11265,7 +11304,6 @@ ScalarEvolution::isLoopBackedgeGuardedByCond(const Loop *L,
     assert(!verifyFunction(*L->getHeader()->getParent(), &dbgs()) &&
            "This cannot be done on broken IR!");
 
-
   if (isKnownViaNonRecursiveReasoning(Pred, LHS, RHS))
     return true;
 
@@ -11274,10 +11312,9 @@ ScalarEvolution::isLoopBackedgeGuardedByCond(const Loop *L,
     return false;
 
   BranchInst *LoopContinuePredicate =
-    dyn_cast<BranchInst>(Latch->getTerminator());
+      dyn_cast<BranchInst>(Latch->getTerminator());
   if (LoopContinuePredicate && LoopContinuePredicate->isConditional() &&
-      isImpliedCond(Pred, LHS, RHS,
-                    LoopContinuePredicate->getCondition(),
+      isImpliedCond(Pred, LHS, RHS, LoopContinuePredicate->getCondition(),
                     LoopContinuePredicate->getSuccessor(0) != L->getHeader()))
     return true;
 
@@ -11298,7 +11335,7 @@ ScalarEvolution::isLoopBackedgeGuardedByCond(const Loop *L,
     Type *Ty = LatchBECount->getType();
     auto NoWrapFlags = SCEV::NoWrapFlags(SCEV::FlagNUW | SCEV::FlagNW);
     const SCEV *LoopCounter =
-      getAddRecExpr(getZero(Ty), getOne(Ty), L, NoWrapFlags);
+        getAddRecExpr(getZero(Ty), getOne(Ty), L, NoWrapFlags);
     if (isImpliedCond(Pred, LHS, RHS, ICmpInst::ICMP_ULT, LoopCounter,
                       LatchBECount))
       return true;
@@ -11379,7 +11416,7 @@ bool ScalarEvolution::isBasicBlockEntryGuardedByCond(const BasicBlock *BB,
   bool ProvedNonEquality = false;
 
   auto SplitAndProve =
-    [&](std::function<bool(ICmpInst::Predicate)> Fn) -> bool {
+      [&](std::function<bool(ICmpInst::Predicate)> Fn) -> bool {
     if (!ProvedNonStrictComparison)
       ProvedNonStrictComparison = Fn(NonStrictPredicate);
     if (!ProvedNonEquality)
@@ -11412,9 +11449,9 @@ bool ScalarEvolution::isBasicBlockEntryGuardedByCond(const BasicBlock *BB,
     return false;
   };
 
-  // Starting at the block's predecessor, climb up the predecessor chain, as long
-  // as there are predecessors that can be found that have unique successors
-  // leading to the original block.
+  // Starting at the block's predecessor, climb up the predecessor chain, as
+  // long as there are predecessors that can be found that have unique
+  // successors leading to the original block.
   const Loop *ContainingLoop = LI.getLoopFor(BB);
   const BasicBlock *PredBB;
   if (ContainingLoop && ContainingLoop->getHeader() == BB)
@@ -11506,7 +11543,8 @@ bool ScalarEvolution::isImpliedCond(ICmpInst::Predicate Pred, const SCEV *LHS,
   }
 
   const ICmpInst *ICI = dyn_cast<ICmpInst>(FoundCondValue);
-  if (!ICI) return false;
+  if (!ICI)
+    return false;
 
   // Now that we found a conditional branch that dominates the loop or controls
   // the loop latch. Check to see if it is the comparison we are looking for.
@@ -11562,8 +11600,9 @@ bool ScalarEvolution::isImpliedCond(ICmpInst::Predicate Pred, const SCEV *LHS,
       RHS = getZeroExtendExpr(RHS, FoundLHS->getType());
     }
   } else if (getTypeSizeInBits(LHS->getType()) >
-      getTypeSizeInBits(FoundLHS->getType())) {
-    if (FoundLHS->getType()->isPointerTy() || FoundRHS->getType()->isPointerTy())
+             getTypeSizeInBits(FoundLHS->getType())) {
+    if (FoundLHS->getType()->isPointerTy() ||
+        FoundRHS->getType()->isPointerTy())
       return false;
     if (CmpInst::isSigned(FoundPred)) {
       FoundLHS = getSignExtendExpr(FoundLHS, LHS->getType());
@@ -11707,8 +11746,8 @@ bool ScalarEvolution::isImpliedCondBalancedTypes(
     // range we consider has to correspond to same signedness as the
     // predicate we're interested in folding.
 
-    APInt Min = ICmpInst::isSigned(Pred) ?
-        getSignedRangeMin(V) : getUnsignedRangeMin(V);
+    APInt Min = ICmpInst::isSigned(Pred) ? getSignedRangeMin(V)
+                                         : getUnsignedRangeMin(V);
 
     if (Min == C->getAPInt()) {
       // Given (V >= Min && V != Min) we conclude V >= (Min + 1).
@@ -11718,48 +11757,49 @@ bool ScalarEvolution::isImpliedCondBalancedTypes(
       APInt SharperMin = Min + 1;
 
       switch (Pred) {
-        case ICmpInst::ICMP_SGE:
-        case ICmpInst::ICMP_UGE:
-          // We know V `Pred` SharperMin.  If this implies LHS `Pred`
-          // RHS, we're done.
-          if (isImpliedCondOperands(Pred, LHS, RHS, V, getConstant(SharperMin),
-                                    CtxI))
-            return true;
-          [[fallthrough]];
+      case ICmpInst::ICMP_SGE:
+      case ICmpInst::ICMP_UGE:
+        // We know V `Pred` SharperMin.  If this implies LHS `Pred`
+        // RHS, we're done.
+        if (isImpliedCondOperands(Pred, LHS, RHS, V, getConstant(SharperMin),
+                                  CtxI))
+          return true;
+        [[fallthrough]];
 
-        case ICmpInst::ICMP_SGT:
-        case ICmpInst::ICMP_UGT:
-          // We know from the range information that (V `Pred` Min ||
-          // V == Min).  We know from the guarding condition that !(V
-          // == Min).  This gives us
-          //
-          //       V `Pred` Min || V == Min && !(V == Min)
-          //   =>  V `Pred` Min
-          //
-          // If V `Pred` Min implies LHS `Pred` RHS, we're done.
+      case ICmpInst::ICMP_SGT:
+      case ICmpInst::ICMP_UGT:
+        // We know from the range information that (V `Pred` Min ||
+        // V == Min).  We know from the guarding condition that !(V
+        // == Min).  This gives us
+        //
+        //       V `Pred` Min || V == Min && !(V == Min)
+        //   =>  V `Pred` Min
+        //
+        // If V `Pred` Min implies LHS `Pred` RHS, we're done.
 
-          if (isImpliedCondOperands(Pred, LHS, RHS, V, getConstant(Min), CtxI))
-            return true;
-          break;
+        if (isImpliedCondOperands(Pred, LHS, RHS, V, getConstant(Min), CtxI))
+          return true;
+        break;
 
-        // `LHS < RHS` and `LHS <= RHS` are handled in the same way as `RHS > LHS` and `RHS >= LHS` respectively.
-        case ICmpInst::ICMP_SLE:
-        case ICmpInst::ICMP_ULE:
-          if (isImpliedCondOperands(CmpInst::getSwappedPredicate(Pred), RHS,
-                                    LHS, V, getConstant(SharperMin), CtxI))
-            return true;
-          [[fallthrough]];
+      // `LHS < RHS` and `LHS <= RHS` are handled in the same way as `RHS > LHS`
+      // and `RHS >= LHS` respectively.
+      case ICmpInst::ICMP_SLE:
+      case ICmpInst::ICMP_ULE:
+        if (isImpliedCondOperands(CmpInst::getSwappedPredicate(Pred), RHS, LHS,
+                                  V, getConstant(SharperMin), CtxI))
+          return true;
+        [[fallthrough]];
 
-        case ICmpInst::ICMP_SLT:
-        case ICmpInst::ICMP_ULT:
-          if (isImpliedCondOperands(CmpInst::getSwappedPredicate(Pred), RHS,
-                                    LHS, V, getConstant(Min), CtxI))
-            return true;
-          break;
+      case ICmpInst::ICMP_SLT:
+      case ICmpInst::ICMP_ULT:
+        if (isImpliedCondOperands(CmpInst::getSwappedPredicate(Pred), RHS, LHS,
+                                  V, getConstant(Min), CtxI))
+          return true;
+        break;
 
-        default:
-          // No change
-          break;
+      default:
+        // No change
+        break;
       }
     }
   }
@@ -11778,9 +11818,8 @@ bool ScalarEvolution::isImpliedCondBalancedTypes(
   return false;
 }
 
-bool ScalarEvolution::splitBinaryAdd(const SCEV *Expr,
-                                     const SCEV *&L, const SCEV *&R,
-                                     SCEV::NoWrapFlags &Flags) {
+bool ScalarEvolution::splitBinaryAdd(const SCEV *Expr, const SCEV *&L,
+                                     const SCEV *&R, SCEV::NoWrapFlags &Flags) {
   const auto *AE = dyn_cast<SCEVAddExpr>(Expr);
   if (!AE || AE->getNumOperands() != 2)
     return false;
@@ -11963,7 +12002,8 @@ bool ScalarEvolution::isImpliedCondOperandsViaNoOverflow(
     FoundRHSLimit = -(*RDiff);
   } else {
     assert(Pred == CmpInst::ICMP_SLT && "Checked above!");
-    FoundRHSLimit = APInt::getSignedMinValue(getTypeSizeInBits(RHS->getType())) - *RDiff;
+    FoundRHSLimit =
+        APInt::getSignedMinValue(getTypeSizeInBits(RHS->getType())) - *RDiff;
   }
 
   // Try to prove (1) or (2), as needed.
@@ -12052,7 +12092,8 @@ bool ScalarEvolution::isImpliedViaMerge(ICmpInst::Predicate Pred,
     // PHIs, for it we can compare incoming values of AddRec from above the loop
     // and latch with their respective incoming values of LPhi.
     // TODO: Generalize to handle loops with many inputs in a header.
-    if (LPhi->getNumIncomingValues() != 2) return false;
+    if (LPhi->getNumIncomingValues() != 2)
+      return false;
 
     auto *RLoop = RAR->getLoop();
     auto *Predecessor = RLoop->getLoopPredecessor();
@@ -12146,8 +12187,7 @@ bool ScalarEvolution::isImpliedCondOperands(ICmpInst::Predicate Pred,
                                           CtxI))
     return true;
 
-  return isImpliedCondOperandsHelper(Pred, LHS, RHS,
-                                     FoundLHS, FoundRHS);
+  return isImpliedCondOperandsHelper(Pred, LHS, RHS, FoundLHS, FoundRHS);
 }
 
 /// Is MaybeMinMaxExpr an (U|S)(Min|Max) of Candidate and some other values?
@@ -12185,8 +12225,8 @@ static bool IsKnownPredicateViaAddRecStart(ScalarEvolution &SE,
   if (LAR->getStepRecurrence(SE) != RAR->getStepRecurrence(SE))
     return false;
 
-  SCEV::NoWrapFlags NW = ICmpInst::isSigned(Pred) ?
-                         SCEV::FlagNSW : SCEV::FlagNUW;
+  SCEV::NoWrapFlags NW =
+      ICmpInst::isSigned(Pred) ? SCEV::FlagNSW : SCEV::FlagNUW;
   if (!LAR->getNoWrapFlags(NW) || !RAR->getNoWrapFlags(NW))
     return false;
 
@@ -12427,9 +12467,9 @@ static bool isKnownPredicateExtendIdiom(ICmpInst::Predicate Pred,
   return false;
 }
 
-bool
-ScalarEvolution::isKnownViaNonRecursiveReasoning(ICmpInst::Predicate Pred,
-                                           const SCEV *LHS, const SCEV *RHS) {
+bool ScalarEvolution::isKnownViaNonRecursiveReasoning(ICmpInst::Predicate Pred,
+                                                      const SCEV *LHS,
+                                                      const SCEV *RHS) {
   return isKnownPredicateExtendIdiom(Pred, LHS, RHS) ||
          isKnownPredicateViaConstantRanges(Pred, LHS, RHS) ||
          IsKnownPredicateViaMinOrMax(*this, Pred, LHS, RHS) ||
@@ -12437,13 +12477,14 @@ ScalarEvolution::isKnownViaNonRecursiveReasoning(ICmpInst::Predicate Pred,
          isKnownPredicateViaNoOverflow(Pred, LHS, RHS);
 }
 
-bool
-ScalarEvolution::isImpliedCondOperandsHelper(ICmpInst::Predicate Pred,
-                                             const SCEV *LHS, const SCEV *RHS,
-                                             const SCEV *FoundLHS,
-                                             const SCEV *FoundRHS) {
+bool ScalarEvolution::isImpliedCondOperandsHelper(ICmpInst::Predicate Pred,
+                                                  const SCEV *LHS,
+                                                  const SCEV *RHS,
+                                                  const SCEV *FoundLHS,
+                                                  const SCEV *FoundRHS) {
   switch (Pred) {
-  default: llvm_unreachable("Unexpected ICmpInst::Predicate value!");
+  default:
+    llvm_unreachable("Unexpected ICmpInst::Predicate value!");
   case ICmpInst::ICMP_EQ:
   case ICmpInst::ICMP_NE:
     if (HasSameValue(LHS, FoundLHS) && HasSameValue(RHS, FoundRHS))
@@ -12693,15 +12734,14 @@ ScalarEvolution::howManyLessThans(const SCEV *LHS, const SCEV *RHS,
           const SCEV *Step = AR->getStepRecurrence(*this);
           Type *Ty = ZExt->getType();
           auto *S = getAddRecExpr(
-            getExtendAddRecStart<SCEVZeroExtendExpr>(AR, Ty, this, 0),
-            getZeroExtendExpr(Step, Ty, 0), L, AR->getNoWrapFlags());
+              getExtendAddRecStart<SCEVZeroExtendExpr>(AR, Ty, this, 0),
+              getZeroExtendExpr(Step, Ty, 0), L, AR->getNoWrapFlags());
           IV = dyn_cast<SCEVAddRecExpr>(S);
         }
       }
     }
   }
 
-
   if (!IV && AllowPredicates) {
     // Try to make this an AddRec using runtime tests, in the first X
     // iterations of this loop, where X is the SCEV expression found by the
@@ -12912,8 +12952,8 @@ ScalarEvolution::howManyLessThans(const SCEV *LHS, const SCEV *RHS,
       //
       // FIXME: Should isLoopEntryGuardedByCond do this for us?
       auto CondGT = IsSigned ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT;
-      auto *StartMinusOne = getAddExpr(OrigStart,
-                                       getMinusOne(OrigStart->getType()));
+      auto *StartMinusOne =
+          getAddExpr(OrigStart, getMinusOne(OrigStart->getType()));
       return isLoopEntryGuardedByCond(L, CondGT, OrigRHS, StartMinusOne);
     };
 
@@ -12935,7 +12975,8 @@ ScalarEvolution::howManyLessThans(const SCEV *LHS, const SCEV *RHS,
 
       // See what would happen if we assume the backedge is taken. This is
       // used to compute MaxBECount.
-      BECountIfBackedgeTaken = getUDivCeilSCEV(getMinusSCEV(RHS, Start), Stride);
+      BECountIfBackedgeTaken =
+          getUDivCeilSCEV(getMinusSCEV(RHS, Start), Stride);
     }
 
     // At this point, we know:
@@ -13112,11 +13153,11 @@ ScalarEvolution::ExitLimit ScalarEvolution::howManyGreaterThans(
   const SCEV *BECount = getUDivExpr(
       getAddExpr(getMinusSCEV(Start, End), getMinusSCEV(Stride, One)), Stride);
 
-  APInt MaxStart = IsSigned ? getSignedRangeMax(Start)
-                            : getUnsignedRangeMax(Start);
+  APInt MaxStart =
+      IsSigned ? getSignedRangeMax(Start) : getUnsignedRangeMax(Start);
 
-  APInt MinStride = IsSigned ? getSignedRangeMin(Stride)
-                             : getUnsignedRangeMin(Stride);
+  APInt MinStride =
+      IsSigned ? getSignedRangeMin(Stride) : getUnsignedRangeMin(Stride);
 
   unsigned BitWidth = getTypeSizeInBits(LHS->getType());
   APInt Limit = IsSigned ? APInt::getSignedMinValue(BitWidth) + (MinStride - 1)
@@ -13125,9 +13166,8 @@ ScalarEvolution::ExitLimit ScalarEvolution::howManyGreaterThans(
   // Although End can be a MIN expression we estimate MinEnd considering only
   // the case End = RHS. This is safe because in the other case (Start - End)
   // is zero, leading to a zero maximum backedge taken count.
-  APInt MinEnd =
-    IsSigned ? APIntOps::smax(getSignedRangeMin(RHS), Limit)
-             : APIntOps::umax(getUnsignedRangeMin(RHS), Limit);
+  APInt MinEnd = IsSigned ? APIntOps::smax(getSignedRangeMin(RHS), Limit)
+                          : APIntOps::umax(getUnsignedRangeMin(RHS), Limit);
 
   const SCEV *ConstantMaxBECount =
       isa<SCEVConstant>(BECount)
@@ -13146,7 +13186,7 @@ ScalarEvolution::ExitLimit ScalarEvolution::howManyGreaterThans(
 
 const SCEV *SCEVAddRecExpr::getNumIterationsInRange(const ConstantRange &Range,
                                                     ScalarEvolution &SE) const {
-  if (Range.isFullSet())  // Infinite loop.
+  if (Range.isFullSet()) // Infinite loop.
     return SE.getCouldNotCompute();
 
   // If the start is a non-zero constant, shift the range to simplify things.
@@ -13154,8 +13194,8 @@ const SCEV *SCEVAddRecExpr::getNumIterationsInRange(const ConstantRange &Range,
     if (!SC->getValue()->isZero()) {
       SmallVector<const SCEV *, 4> Operands(operands());
       Operands[0] = SE.getZero(SC->getType());
-      const SCEV *Shifted = SE.getAddRecExpr(Operands, getLoop(),
-                                             getNoWrapFlags(FlagNW));
+      const SCEV *Shifted =
+          SE.getAddRecExpr(Operands, getLoop(), getNoWrapFlags(FlagNW));
       if (const auto *ShiftedAddRec = dyn_cast<SCEVAddRecExpr>(Shifted))
         return ShiftedAddRec->getNumIterationsInRange(
             Range.subtract(SC->getAPInt()), SE);
@@ -13197,12 +13237,13 @@ const SCEV *SCEVAddRecExpr::getNumIterationsInRange(const ConstantRange &Range,
     // things must have happened.
     ConstantInt *Val = EvaluateConstantChrecAtConstant(this, ExitValue, SE);
     if (Range.contains(Val->getValue()))
-      return SE.getCouldNotCompute();  // Something strange happened
+      return SE.getCouldNotCompute(); // Something strange happened
 
     // Ensure that the previous value is in the range.
     assert(Range.contains(
-           EvaluateConstantChrecAtConstant(this,
-           ConstantInt::get(SE.getContext(), ExitVal - 1), SE)->getValue()) &&
+               EvaluateConstantChrecAtConstant(
+                   this, ConstantInt::get(SE.getContext(), ExitVal - 1), SE)
+                   ->getValue()) &&
            "Linear scev computation is off in a bad way!");
     return SE.getConstant(ExitValue);
   }
@@ -13236,8 +13277,8 @@ SCEVAddRecExpr::getPostIncExpr(ScalarEvolution &SE) const {
   const SCEV *Last = getOperand(getNumOperands() - 1);
   assert(!Last->isZero() && "Recurrency with zero step?");
   Ops.push_back(Last);
-  return cast<SCEVAddRecExpr>(SE.getAddRecExpr(Ops, getLoop(),
-                                               SCEV::FlagAnyWrap));
+  return cast<SCEVAddRecExpr>(
+      SE.getAddRecExpr(Ops, getLoop(), SCEV::FlagAnyWrap));
 }
 
 // Return true when S contains at least an undef value.
@@ -13314,7 +13355,7 @@ void ScalarEvolution::SCEVCallbackVH::allUsesReplacedWith(Value *V) {
 }
 
 ScalarEvolution::SCEVCallbackVH::SCEVCallbackVH(Value *V, ScalarEvolution *se)
-  : CallbackVH(V), SE(se) {}
+    : CallbackVH(V), SE(se) {}
 
 //===----------------------------------------------------------------------===//
 //                   ScalarEvolution Class Implementation
@@ -13399,8 +13440,7 @@ bool ScalarEvolution::hasLoopInvariantBackedgeTakenCount(const Loop *L) {
   return !isa<SCEVCouldNotCompute>(getBackedgeTakenCount(L));
 }
 
-static void PrintLoopInfo(raw_ostream &OS, ScalarEvolution *SE,
-                          const Loop *L) {
+static void PrintLoopInfo(raw_ostream &OS, ScalarEvolution *SE, const Loop *L) {
   // Print all inner loops first
   for (Loop *I : *L)
     PrintLoopInfo(OS, SE, I);
@@ -13514,7 +13554,7 @@ raw_ostream &operator<<(raw_ostream &OS, ScalarEvolution::BlockDisposition BD) {
   }
   return OS;
 }
-}
+} // namespace llvm
 
 void ScalarEvolution::print(raw_ostream &OS) const {
   // ScalarEvolution's implementation of the print method is to print
@@ -13557,7 +13597,8 @@ void ScalarEvolution::print(raw_ostream &OS) const {
         }
 
         if (L) {
-          OS << "\t\t" "Exits: ";
+          OS << "\t\t"
+                "Exits: ";
           const SCEV *ExitValue = SE.getSCEVAtScope(SV, L->getParentLoop());
           if (!SE.isLoopInvariant(ExitValue, L)) {
             OS << "<<Unknown>>";
@@ -13568,7 +13609,8 @@ void ScalarEvolution::print(raw_ostream &OS) const {
           bool First = true;
           for (const auto *Iter = L; Iter; Iter = Iter->getParentLoop()) {
             if (First) {
-              OS << "\t\t" "LoopDispositions: { ";
+              OS << "\t\t"
+                    "LoopDispositions: { ";
               First = false;
             } else {
               OS << ", ";
@@ -13582,7 +13624,8 @@ void ScalarEvolution::print(raw_ostream &OS) const {
             if (InnerL == L)
               continue;
             if (First) {
-              OS << "\t\t" "LoopDispositions: { ";
+              OS << "\t\t"
+                    "LoopDispositions: { ";
               First = false;
             } else {
               OS << ", ";
@@ -13645,7 +13688,8 @@ ScalarEvolution::computeLoopDisposition(const SCEV *S, const Loop *L) {
     // Everything that is not defined at loop entry is variant.
     if (DT.dominates(L->getHeader(), AR->getLoop()->getHeader()))
       return LoopVariant;
-    assert(!L->contains(AR->getLoop()) && "Containing loop's header does not"
+    assert(!L->contains(AR->getLoop()) &&
+           "Containing loop's header does not"
            " dominate the contained loop's header?");
 
     // This recurrence is invariant w.r.t. L if AR's loop contains L.
@@ -13766,7 +13810,7 @@ ScalarEvolution::computeBlockDisposition(const SCEV *S, const BasicBlock *BB) {
   }
   case scUnknown:
     if (Instruction *I =
-          dyn_cast<Instruction>(cast<SCEVUnknown>(S)->getValue())) {
+            dyn_cast<Instruction>(cast<SCEVUnknown>(S)->getValue())) {
       if (I->getParent() == BB)
         return DominatesBlock;
       if (DT.properlyDominates(I->getParent(), BB))
@@ -13892,9 +13936,8 @@ void ScalarEvolution::forgetMemoizedResultsImpl(const SCEV *S) {
   FoldCacheUser.erase(S);
 }
 
-void
-ScalarEvolution::getUsedLoops(const SCEV *S,
-                              SmallPtrSetImpl<const Loop *> &LoopsUsed) {
+void ScalarEvolution::getUsedLoops(const SCEV *S,
+                                   SmallPtrSetImpl<const Loop *> &LoopsUsed) {
   struct FindUsedLoops {
     FindUsedLoops(SmallPtrSetImpl<const Loop *> &LoopsUsed)
         : LoopsUsed(LoopsUsed) {}
@@ -14123,8 +14166,9 @@ void ScalarEvolution::verify() const {
         if (It != ValuesAtScopesUsers.end() &&
             is_contained(It->second, std::make_pair(L, Value)))
           continue;
-        dbgs() << "Value: " << *Value << ", Loop: " << *L << ", ValueAtScope: "
-               << *ValueAtScope << " missing in ValuesAtScopesUsers\n";
+        dbgs() << "Value: " << *Value << ", Loop: " << *L
+               << ", ValueAtScope: " << *ValueAtScope
+               << " missing in ValuesAtScopesUsers\n";
         std::abort();
       }
     }
@@ -14140,8 +14184,9 @@ void ScalarEvolution::verify() const {
       if (It != ValuesAtScopes.end() &&
           is_contained(It->second, std::make_pair(L, ValueAtScope)))
         continue;
-      dbgs() << "Value: " << *Value << ", Loop: " << *L << ", ValueAtScope: "
-             << *ValueAtScope << " missing in ValuesAtScopes\n";
+      dbgs() << "Value: " << *Value << ", Loop: " << *L
+             << ", ValueAtScope: " << *ValueAtScope
+             << " missing in ValuesAtScopes\n";
       std::abort();
     }
   }
@@ -14156,7 +14201,7 @@ void ScalarEvolution::verify() const {
           if (!isa<SCEVConstant>(S)) {
             auto UserIt = BECountUsers.find(S);
             if (UserIt != BECountUsers.end() &&
-                UserIt->second.contains({ LoopAndBEInfo.first, Predicated }))
+                UserIt->second.contains({LoopAndBEInfo.first, Predicated}))
               continue;
             dbgs() << "Value " << *S << " for loop " << *LoopAndBEInfo.first
                    << " missing from BECountUsers\n";
@@ -14242,9 +14287,8 @@ void ScalarEvolution::verify() const {
   }
 }
 
-bool ScalarEvolution::invalidate(
-    Function &F, const PreservedAnalyses &PA,
-    FunctionAnalysisManager::Invalidator &Inv) {
+bool ScalarEvolution::invalidate(Function &F, const PreservedAnalyses &PA,
+                                 FunctionAnalysisManager::Invalidator &Inv) {
   // Invalidate the ScalarEvolution object whenever it isn't preserved or one
   // of its dependencies is invalidated.
   auto PAC = PA.getChecker<ScalarEvolutionAnalysis>();
@@ -14271,8 +14315,8 @@ ScalarEvolutionVerifierPass::run(Function &F, FunctionAnalysisManager &AM) {
   return PreservedAnalyses::all();
 }
 
-PreservedAnalyses
-ScalarEvolutionPrinterPass::run(Function &F, FunctionAnalysisManager &AM) {
+PreservedAnalyses ScalarEvolutionPrinterPass::run(Function &F,
+                                                  FunctionAnalysisManager &AM) {
   // For compatibility with opt's -analyze feature under legacy pass manager
   // which was not ported to NPM. This keeps tests using
   // update_analyze_test_checks.py working.
@@ -14347,7 +14391,7 @@ ScalarEvolution::getComparePredicate(const ICmpInst::Predicate Pred,
   if (const auto *S = UniquePreds.FindNodeOrInsertPos(ID, IP))
     return S;
   SCEVComparePredicate *Eq = new (SCEVAllocator)
-    SCEVComparePredicate(ID.Intern(SCEVAllocator), Pred, LHS, RHS);
+      SCEVComparePredicate(ID.Intern(SCEVAllocator), Pred, LHS, RHS);
   UniquePreds.InsertNode(Eq, IP);
   return Eq;
 }
@@ -14373,7 +14417,6 @@ namespace {
 
 class SCEVPredicateRewriter : public SCEVRewriteVisitor<SCEVPredicateRewriter> {
 public:
-
   /// Rewrites \p S in the context of a loop L and the SCEV predication
   /// infrastructure.
   ///
@@ -14439,9 +14482,10 @@ class SCEVPredicateRewriter : public SCEVRewriteVisitor<SCEVPredicateRewriter> {
   }
 
 private:
-  explicit SCEVPredicateRewriter(const Loop *L, ScalarEvolution &SE,
-                        SmallPtrSetImpl<const SCEVPredicate *> *NewPreds,
-                        const SCEVPredicate *Pred)
+  explicit SCEVPredicateRewriter(
+      const Loop *L, ScalarEvolution &SE,
+      SmallPtrSetImpl<const SCEVPredicate *> *NewPreds,
+      const SCEVPredicate *Pred)
       : SCEVRewriteVisitor(SE), NewPreds(NewPreds), Pred(Pred), L(L) {}
 
   bool addOverflowAssumption(const SCEVPredicate *P) {
@@ -14473,7 +14517,7 @@ class SCEVPredicateRewriter : public SCEVRewriteVisitor<SCEVPredicateRewriter> {
         PredicatedRewrite = SE.createAddRecFromPHIWithCasts(Expr);
     if (!PredicatedRewrite)
       return Expr;
-    for (const auto *P : PredicatedRewrite->second){
+    for (const auto *P : PredicatedRewrite->second) {
       // Wrap predicates from outer loops are not supported.
       if (auto *WP = dyn_cast<const SCEVWrapPredicate>(P)) {
         if (L != WP->getExpr()->getLoop())
@@ -14492,9 +14536,8 @@ class SCEVPredicateRewriter : public SCEVRewriteVisitor<SCEVPredicateRewriter> {
 
 } // end anonymous namespace
 
-const SCEV *
-ScalarEvolution::rewriteUsingPredicate(const SCEV *S, const Loop *L,
-                                       const SCEVPredicate &Preds) {
+const SCEV *ScalarEvolution::rewriteUsingPredicate(const SCEV *S, const Loop *L,
+                                                   const SCEVPredicate &Preds) {
   return SCEVPredicateRewriter::rewrite(S, L, *this, nullptr, &Preds);
 }
 
@@ -14522,9 +14565,9 @@ SCEVPredicate::SCEVPredicate(const FoldingSetNodeIDRef ID,
     : FastID(ID), Kind(Kind) {}
 
 SCEVComparePredicate::SCEVComparePredicate(const FoldingSetNodeIDRef ID,
-                                   const ICmpInst::Predicate Pred,
-                                   const SCEV *LHS, const SCEV *RHS)
-  : SCEVPredicate(ID, P_Compare), Pred(Pred), LHS(LHS), RHS(RHS) {
+                                           const ICmpInst::Predicate Pred,
+                                           const SCEV *LHS, const SCEV *RHS)
+    : SCEVPredicate(ID, P_Compare), Pred(Pred), LHS(LHS), RHS(RHS) {
   assert(LHS->getType() == RHS->getType() && "LHS and RHS types don't match");
   assert(LHS != RHS && "LHS and RHS are the same SCEV");
 }
@@ -14549,7 +14592,6 @@ void SCEVComparePredicate::print(raw_ostream &OS, unsigned Depth) const {
   else
     OS.indent(Depth) << "Compare predicate: " << *LHS << " " << Pred << ") "
                      << *RHS << "\n";
-
 }
 
 SCEVWrapPredicate::SCEVWrapPredicate(const FoldingSetNodeIDRef ID,
@@ -14607,7 +14649,7 @@ SCEVWrapPredicate::getImpliedFlags(const SCEVAddRecExpr *AR,
 
 /// Union predicates don't get cached so create a dummy set ID for it.
 SCEVUnionPredicate::SCEVUnionPredicate(ArrayRef<const SCEVPredicate *> Preds)
-  : SCEVPredicate(FoldingSetNodeIDRef(nullptr, 0), P_Union) {
+    : SCEVPredicate(FoldingSetNodeIDRef(nullptr, 0), P_Union) {
   for (const auto *P : Preds)
     add(P);
 }
@@ -14622,8 +14664,7 @@ bool SCEVUnionPredicate::implies(const SCEVPredicate *N) const {
     return all_of(Set->Preds,
                   [this](const SCEVPredicate *I) { return this->implies(I); });
 
-  return any_of(Preds,
-                [N](const SCEVPredicate *I) { return I->implies(N); });
+  return any_of(Preds, [N](const SCEVPredicate *I) { return I->implies(N); });
 }
 
 void SCEVUnionPredicate::print(raw_ostream &OS, unsigned Depth) const {
@@ -14644,7 +14685,7 @@ void SCEVUnionPredicate::add(const SCEVPredicate *N) {
 PredicatedScalarEvolution::PredicatedScalarEvolution(ScalarEvolution &SE,
                                                      Loop &L)
     : SE(SE), L(L) {
-  SmallVector<const SCEVPredicate*, 4> Empty;
+  SmallVector<const SCEVPredicate *, 4> Empty;
   Preds = std::make_unique<SCEVUnionPredicate>(Empty);
 }
 
@@ -14692,7 +14733,8 @@ void PredicatedScalarEvolution::addPredicate(const SCEVPredicate &Pred) {
     return;
 
   auto &OldPreds = Preds->getPredicates();
-  SmallVector<const SCEVPredicate*, 4> NewPreds(OldPreds.begin(), OldPreds.end());
+  SmallVector<const SCEVPredicate *, 4> NewPreds(OldPreds.begin(),
+                                                 OldPreds.end());
   NewPreds.push_back(&Pred);
   Preds = std::make_unique<SCEVUnionPredicate>(NewPreds);
   updateGeneration();
@@ -14761,9 +14803,9 @@ const SCEVAddRecExpr *PredicatedScalarEvolution::getAsAddRec(Value *V) {
 
 PredicatedScalarEvolution::PredicatedScalarEvolution(
     const PredicatedScalarEvolution &Init)
-  : RewriteMap(Init.RewriteMap), SE(Init.SE), L(Init.L),
-    Preds(std::make_unique<SCEVUnionPredicate>(Init.Preds->getPredicates())),
-    Generation(Init.Generation), BackedgeCount(Init.BackedgeCount) {
+    : RewriteMap(Init.RewriteMap), SE(Init.SE), L(Init.L),
+      Preds(std::make_unique<SCEVUnionPredicate>(Init.Preds->getPredicates())),
+      Generation(Init.Generation), BackedgeCount(Init.BackedgeCount) {
   for (auto I : Init.FlagsMap)
     FlagsMap.insert(I);
 }
@@ -14851,13 +14893,13 @@ bool ScalarEvolution::matchURem(const SCEV *Expr, const SCEV *&LHS,
 
 const SCEV *
 ScalarEvolution::computeSymbolicMaxBackedgeTakenCount(const Loop *L) {
-  SmallVector<BasicBlock*, 16> ExitingBlocks;
+  SmallVector<BasicBlock *, 16> ExitingBlocks;
   L->getExitingBlocks(ExitingBlocks);
 
   // Form an expression for the maximum exit count possible for this loop. We
   // merge the max and exact information to approximate a version of
   // getConstantMaxBackedgeTakenCount which isn't restricted to just constants.
-  SmallVector<const SCEV*, 4> ExitCounts;
+  SmallVector<const SCEV *, 4> ExitCounts;
   for (BasicBlock *ExitingBB : ExitingBlocks) {
     const SCEV *ExitCount =
         getExitCount(L, ExitingBB, ScalarEvolution::SymbolicMaximum);
@@ -15189,31 +15231,31 @@ const SCEV *ScalarEvolution::applyLoopGuards(const SCEV *Expr, const Loop *L) {
     // predicate.
     const SCEV *One = getOne(RHS->getType());
     switch (Predicate) {
-      case CmpInst::ICMP_ULT:
-        if (RHS->getType()->isPointerTy())
-          return;
-        RHS = getUMaxExpr(RHS, One);
-        [[fallthrough]];
-      case CmpInst::ICMP_SLT: {
-        RHS = getMinusSCEV(RHS, One);
-        RHS = DividesBy ? GetPreviousSCEVDividesByDivisor(RHS, DividesBy) : RHS;
-        break;
-      }
-      case CmpInst::ICMP_UGT:
-      case CmpInst::ICMP_SGT:
-        RHS = getAddExpr(RHS, One);
-        RHS = DividesBy ? GetNextSCEVDividesByDivisor(RHS, DividesBy) : RHS;
-        break;
-      case CmpInst::ICMP_ULE:
-      case CmpInst::ICMP_SLE:
-        RHS = DividesBy ? GetPreviousSCEVDividesByDivisor(RHS, DividesBy) : RHS;
-        break;
-      case CmpInst::ICMP_UGE:
-      case CmpInst::ICMP_SGE:
-        RHS = DividesBy ? GetNextSCEVDividesByDivisor(RHS, DividesBy) : RHS;
-        break;
-      default:
-        break;
+    case CmpInst::ICMP_ULT:
+      if (RHS->getType()->isPointerTy())
+        return;
+      RHS = getUMaxExpr(RHS, One);
+      [[fallthrough]];
+    case CmpInst::ICMP_SLT: {
+      RHS = getMinusSCEV(RHS, One);
+      RHS = DividesBy ? GetPreviousSCEVDividesByDivisor(RHS, DividesBy) : RHS;
+      break;
+    }
+    case CmpInst::ICMP_UGT:
+    case CmpInst::ICMP_SGT:
+      RHS = getAddExpr(RHS, One);
+      RHS = DividesBy ? GetNextSCEVDividesByDivisor(RHS, DividesBy) : RHS;
+      break;
+    case CmpInst::ICMP_ULE:
+    case CmpInst::ICMP_SLE:
+      RHS = DividesBy ? GetPreviousSCEVDividesByDivisor(RHS, DividesBy) : RHS;
+      break;
+    case CmpInst::ICMP_UGE:
+    case CmpInst::ICMP_SGE:
+      RHS = DividesBy ? GetNextSCEVDividesByDivisor(RHS, DividesBy) : RHS;
+      break;
+    default:
+      break;
     }
 
     SmallVector<const SCEV *, 16> Worklist(1, LHS);
@@ -15290,13 +15332,15 @@ const SCEV *ScalarEvolution::applyLoopGuards(const SCEV *Expr, const Loop *L) {
     Terms.emplace_back(AssumeI->getOperand(0), true);
   }
 
-  // Second, collect information from llvm.experimental.guards dominating the loop.
+  // Second, collect information from llvm.experimental.guards dominating the
+  // loop.
   auto *GuardDecl = F.getParent()->getFunction(
       Intrinsic::getName(Intrinsic::experimental_guard));
   if (GuardDecl)
     for (const auto *GU : GuardDecl->users())
       if (const auto *Guard = dyn_cast<IntrinsicInst>(GU))
-        if (Guard->getFunction() == Header->getParent() && DT.dominates(Guard, Header))
+        if (Guard->getFunction() == Header->getParent() &&
+            DT.dominates(Guard, Header))
           Terms.emplace_back(Guard->getArgOperand(0), true);
 
   // Third, collect conditions from dominating branches. Starting at the loop
diff --git a/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp b/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
index dc1af1bbb1d1613..fae2b23ee5dd271 100644
--- a/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
+++ b/llvm/lib/Analysis/ScalarEvolutionAliasAnalysis.cpp
@@ -24,8 +24,8 @@
 #include "llvm/InitializePasses.h"
 using namespace llvm;
 
-static bool canComputePointerDiff(ScalarEvolution &SE,
-                                  const SCEV *A, const SCEV *B) {
+static bool canComputePointerDiff(ScalarEvolution &SE, const SCEV *A,
+                                  const SCEV *B) {
   if (SE.getEffectiveSCEVType(A->getType()) !=
       SE.getEffectiveSCEVType(B->getType()))
     return false;
diff --git a/llvm/lib/Analysis/StackLifetime.cpp b/llvm/lib/Analysis/StackLifetime.cpp
index 3e1b5dea6f6c25b..e0dee9b938da346 100644
--- a/llvm/lib/Analysis/StackLifetime.cpp
+++ b/llvm/lib/Analysis/StackLifetime.cpp
@@ -301,9 +301,9 @@ LLVM_DUMP_METHOD void StackLifetime::dumpBlockLiveness() const {
     const BasicBlock *BB = IT.getFirst();
     const BlockLifetimeInfo &BlockInfo = BlockLiveness.find(BB)->getSecond();
     auto BlockRange = BlockInstRange.find(BB)->getSecond();
-    dbgs() << "  BB (" << BB->getName() << ") [" << BlockRange.first << ", " << BlockRange.second
-           << "): begin " << BlockInfo.Begin << ", end " << BlockInfo.End
-           << ", livein " << BlockInfo.LiveIn << ", liveout "
+    dbgs() << "  BB (" << BB->getName() << ") [" << BlockRange.first << ", "
+           << BlockRange.second << "): begin " << BlockInfo.Begin << ", end "
+           << BlockInfo.End << ", livein " << BlockInfo.LiveIn << ", liveout "
            << BlockInfo.LiveOut << "\n";
   }
 }
diff --git a/llvm/lib/Analysis/StackSafetyAnalysis.cpp b/llvm/lib/Analysis/StackSafetyAnalysis.cpp
index d7cfec7b19b17b0..08d3d7509fa72b6 100644
--- a/llvm/lib/Analysis/StackSafetyAnalysis.cpp
+++ b/llvm/lib/Analysis/StackSafetyAnalysis.cpp
@@ -54,10 +54,12 @@ STATISTIC(NumCombinedParamAccessesAfter,
           "Number of total param accesses after generateParamAccessSummary.");
 STATISTIC(NumCombinedDataFlowNodes,
           "Number of total nodes in combined index for dataflow processing.");
-STATISTIC(NumIndexCalleeUnhandled, "Number of index callee which are unhandled.");
-STATISTIC(NumIndexCalleeMultipleWeak, "Number of index callee non-unique weak.");
-STATISTIC(NumIndexCalleeMultipleExternal, "Number of index callee non-unique external.");
-
+STATISTIC(NumIndexCalleeUnhandled,
+          "Number of index callee which are unhandled.");
+STATISTIC(NumIndexCalleeMultipleWeak,
+          "Number of index callee non-unique weak.");
+STATISTIC(NumIndexCalleeMultipleExternal,
+          "Number of index callee non-unique external.");
 
 static cl::opt<int> StackSafetyMaxIterations("stack-safety-max-iterations",
                                              cl::init(20), cl::Hidden);
@@ -253,7 +255,6 @@ class StackSafetyLocalAnalysis {
   void analyzeAllUses(Value *Ptr, UseInfo<GlobalValue> &AS,
                       const StackLifetime &SL);
 
-
   bool isSafeAccess(const Use &U, AllocaInst *AI, const SCEV *AccessSize);
   bool isSafeAccess(const Use &U, AllocaInst *AI, Value *V);
   bool isSafeAccess(const Use &U, AllocaInst *AI, TypeSize AccessSize);
@@ -408,7 +409,7 @@ void StackSafetyLocalAnalysis::analyzeAllUses(Value *Ptr,
 
       assert(V == UI.get());
 
-      auto RecordStore = [&](const Value* StoredVal) {
+      auto RecordStore = [&](const Value *StoredVal) {
         if (V == StoredVal) {
           // Stored the pointer - conservatively assume it may be unsafe.
           US.addRange(I, UnknownRange, /*IsSafe=*/false);
@@ -703,8 +704,8 @@ FunctionSummary *findCalleeFunctionSummary(ValueInfo VI, StringRef ModuleId) {
   if (!VI)
     return nullptr;
   auto SummaryList = VI.getSummaryList();
-  GlobalValueSummary* S = nullptr;
-  for (const auto& GVS : SummaryList) {
+  GlobalValueSummary *S = nullptr;
+  for (const auto &GVS : SummaryList) {
     if (!GVS->isLive())
       continue;
     if (const AliasSummary *AS = dyn_cast<AliasSummary>(GVS.get()))
@@ -795,9 +796,9 @@ void resolveAllCalls(UseInfo<GlobalValue> &Use,
 
     if (!Index)
       return Use.updateRange(FullSet);
-    FunctionSummary *FS =
-        findCalleeFunctionSummary(Index->getValueInfo(C.first.Callee->getGUID()),
-                                  C.first.Callee->getParent()->getModuleIdentifier());
+    FunctionSummary *FS = findCalleeFunctionSummary(
+        Index->getValueInfo(C.first.Callee->getGUID()),
+        C.first.Callee->getParent()->getModuleIdentifier());
     ++NumModuleCalleeLookupTotal;
     if (!FS) {
       ++NumModuleCalleeLookupFailed;
diff --git a/llvm/lib/Analysis/TFLiteUtils.cpp b/llvm/lib/Analysis/TFLiteUtils.cpp
index b2862033e9cfbff..b854ac1b86a26fe 100644
--- a/llvm/lib/Analysis/TFLiteUtils.cpp
+++ b/llvm/lib/Analysis/TFLiteUtils.cpp
@@ -24,10 +24,10 @@
 
 #include "tensorflow/lite/interpreter.h"
 #include "tensorflow/lite/kernels/register.h"
+#include "tensorflow/lite/logger.h"
 #include "tensorflow/lite/model.h"
 #include "tensorflow/lite/model_builder.h"
 #include "tensorflow/lite/op_resolver.h"
-#include "tensorflow/lite/logger.h"
 
 #include <cassert>
 #include <numeric>
diff --git a/llvm/lib/Analysis/TargetLibraryInfo.cpp b/llvm/lib/Analysis/TargetLibraryInfo.cpp
index 15ba6468a307085..aaa4733899cfc41 100644
--- a/llvm/lib/Analysis/TargetLibraryInfo.cpp
+++ b/llvm/lib/Analysis/TargetLibraryInfo.cpp
@@ -175,8 +175,9 @@ static void initialize(TargetLibraryInfoImpl &TLI, const Triple &T,
 
   bool ShouldExtI32Param, ShouldExtI32Return;
   bool ShouldSignExtI32Param, ShouldSignExtI32Return;
-  TargetLibraryInfo::initExtensionsForTriple(ShouldExtI32Param,
-       ShouldExtI32Return, ShouldSignExtI32Param, ShouldSignExtI32Return, T);
+  TargetLibraryInfo::initExtensionsForTriple(
+      ShouldExtI32Param, ShouldExtI32Return, ShouldSignExtI32Param,
+      ShouldSignExtI32Return, T);
   TLI.setShouldExtI32Param(ShouldExtI32Param);
   TLI.setShouldExtI32Return(ShouldExtI32Return);
   TLI.setShouldSignExtI32Param(ShouldSignExtI32Param);
@@ -262,7 +263,8 @@ static void initialize(TargetLibraryInfoImpl &TLI, const Triple &T,
   }
 
   if (T.isOSWindows() && !T.isOSCygMing()) {
-    // XXX: The earliest documentation available at the moment is for VS2015/VC19:
+    // XXX: The earliest documentation available at the moment is for
+    // VS2015/VC19:
     // https://docs.microsoft.com/en-us/cpp/c-runtime-library/floating-point-support?view=vs-2015
     // XXX: In order to use an MSVCRT older than VC19,
     // the specific library version must be explicit in the target triple,
@@ -274,10 +276,8 @@ static void initialize(TargetLibraryInfoImpl &TLI, const Triple &T,
     }
 
     // Latest targets support C89 math functions, in part.
-    bool isARM = (T.getArch() == Triple::aarch64 ||
-                  T.getArch() == Triple::arm);
-    bool hasPartialFloat = (isARM ||
-                            T.getArch() == Triple::x86_64);
+    bool isARM = (T.getArch() == Triple::aarch64 || T.getArch() == Triple::arm);
+    bool hasPartialFloat = (isARM || T.getArch() == Triple::x86_64);
 
     // Win32 does not support float C89 math functions, in general.
     if (!hasPartialFloat) {
@@ -907,7 +907,8 @@ TargetLibraryInfoImpl::TargetLibraryInfoImpl(TargetLibraryInfoImpl &&TLI)
   ScalarDescs = TLI.ScalarDescs;
 }
 
-TargetLibraryInfoImpl &TargetLibraryInfoImpl::operator=(const TargetLibraryInfoImpl &TLI) {
+TargetLibraryInfoImpl &
+TargetLibraryInfoImpl::operator=(const TargetLibraryInfoImpl &TLI) {
   CustomNames = TLI.CustomNames;
   ShouldExtI32Param = TLI.ShouldExtI32Param;
   ShouldExtI32Return = TLI.ShouldExtI32Return;
@@ -918,7 +919,8 @@ TargetLibraryInfoImpl &TargetLibraryInfoImpl::operator=(const TargetLibraryInfoI
   return *this;
 }
 
-TargetLibraryInfoImpl &TargetLibraryInfoImpl::operator=(TargetLibraryInfoImpl &&TLI) {
+TargetLibraryInfoImpl &
+TargetLibraryInfoImpl::operator=(TargetLibraryInfoImpl &&TLI) {
   CustomNames = std::move(TLI.CustomNames);
   ShouldExtI32Param = TLI.ShouldExtI32Param;
   ShouldExtI32Return = TLI.ShouldExtI32Return;
@@ -1124,7 +1126,8 @@ bool TargetLibraryInfoImpl::getLibFunc(const Function &FDecl,
   // Intrinsics don't overlap w/libcalls; if our module has a large number of
   // intrinsics, this ends up being an interesting compile time win since we
   // avoid string normalization and comparison.
-  if (FDecl.isIntrinsic()) return false;
+  if (FDecl.isIntrinsic())
+    return false;
 
   const Module *M = FDecl.getParent();
   assert(M && "Expecting FDecl to be connected to a Module.");
@@ -1162,40 +1165,40 @@ void TargetLibraryInfoImpl::addVectorizableFunctionsFromVecLib(
   switch (VecLib) {
   case Accelerate: {
     const VecDesc VecFuncs[] = {
-    #define TLI_DEFINE_ACCELERATE_VECFUNCS
-    #include "llvm/Analysis/VecFuncs.def"
+#define TLI_DEFINE_ACCELERATE_VECFUNCS
+#include "llvm/Analysis/VecFuncs.def"
     };
     addVectorizableFunctions(VecFuncs);
     break;
   }
   case DarwinLibSystemM: {
     const VecDesc VecFuncs[] = {
-    #define TLI_DEFINE_DARWIN_LIBSYSTEM_M_VECFUNCS
-    #include "llvm/Analysis/VecFuncs.def"
+#define TLI_DEFINE_DARWIN_LIBSYSTEM_M_VECFUNCS
+#include "llvm/Analysis/VecFuncs.def"
     };
     addVectorizableFunctions(VecFuncs);
     break;
   }
   case LIBMVEC_X86: {
     const VecDesc VecFuncs[] = {
-    #define TLI_DEFINE_LIBMVEC_X86_VECFUNCS
-    #include "llvm/Analysis/VecFuncs.def"
+#define TLI_DEFINE_LIBMVEC_X86_VECFUNCS
+#include "llvm/Analysis/VecFuncs.def"
     };
     addVectorizableFunctions(VecFuncs);
     break;
   }
   case MASSV: {
     const VecDesc VecFuncs[] = {
-    #define TLI_DEFINE_MASSV_VECFUNCS
-    #include "llvm/Analysis/VecFuncs.def"
+#define TLI_DEFINE_MASSV_VECFUNCS
+#include "llvm/Analysis/VecFuncs.def"
     };
     addVectorizableFunctions(VecFuncs);
     break;
   }
   case SVML: {
     const VecDesc VecFuncs[] = {
-    #define TLI_DEFINE_SVML_VECFUNCS
-    #include "llvm/Analysis/VecFuncs.def"
+#define TLI_DEFINE_SVML_VECFUNCS
+#include "llvm/Analysis/VecFuncs.def"
     };
     addVectorizableFunctions(VecFuncs);
     break;
@@ -1286,8 +1289,8 @@ TargetLibraryInfo TargetLibraryAnalysis::run(const Function &F,
 }
 
 unsigned TargetLibraryInfoImpl::getWCharSize(const Module &M) const {
-  if (auto *ShortWChar = cast_or_null<ConstantAsMetadata>(
-      M.getModuleFlag("wchar_size")))
+  if (auto *ShortWChar =
+          cast_or_null<ConstantAsMetadata>(M.getModuleFlag("wchar_size")))
     return cast<ConstantInt>(ShortWChar->getValue())->getZExtValue();
   return 0;
 }
diff --git a/llvm/lib/Analysis/TargetTransformInfo.cpp b/llvm/lib/Analysis/TargetTransformInfo.cpp
index c751d174a48ab1f..c2ca9b5f447587d 100644
--- a/llvm/lib/Analysis/TargetTransformInfo.cpp
+++ b/llvm/lib/Analysis/TargetTransformInfo.cpp
@@ -597,12 +597,11 @@ bool TargetTransformInfo::isFPVectorizationPotentiallyUnsafe() const {
   return TTIImpl->isFPVectorizationPotentiallyUnsafe();
 }
 
-bool
-TargetTransformInfo::allowsMisalignedMemoryAccesses(LLVMContext &Context,
-                                                    unsigned BitWidth,
-                                                    unsigned AddressSpace,
-                                                    Align Alignment,
-                                                    unsigned *Fast) const {
+bool TargetTransformInfo::allowsMisalignedMemoryAccesses(LLVMContext &Context,
+                                                         unsigned BitWidth,
+                                                         unsigned AddressSpace,
+                                                         Align Alignment,
+                                                         unsigned *Fast) const {
   return TTIImpl->allowsMisalignedMemoryAccesses(Context, BitWidth,
                                                  AddressSpace, Alignment, Fast);
 }
@@ -834,10 +833,8 @@ InstructionCost TargetTransformInfo::getArithmeticInstrCost(
     unsigned Opcode, Type *Ty, TTI::TargetCostKind CostKind,
     OperandValueInfo Op1Info, OperandValueInfo Op2Info,
     ArrayRef<const Value *> Args, const Instruction *CxtI) const {
-  InstructionCost Cost =
-      TTIImpl->getArithmeticInstrCost(Opcode, Ty, CostKind,
-                                      Op1Info, Op2Info,
-                                      Args, CxtI);
+  InstructionCost Cost = TTIImpl->getArithmeticInstrCost(
+      Opcode, Ty, CostKind, Op1Info, Op2Info, Args, CxtI);
   assert(Cost >= 0 && "TTI should not produce negative costs!");
   return Cost;
 }
diff --git a/llvm/lib/Analysis/Trace.cpp b/llvm/lib/Analysis/Trace.cpp
index 879c7172d038935..06e2dbae4b7db41 100644
--- a/llvm/lib/Analysis/Trace.cpp
+++ b/llvm/lib/Analysis/Trace.cpp
@@ -28,9 +28,7 @@ Function *Trace::getFunction() const {
   return getEntryBasicBlock()->getParent();
 }
 
-Module *Trace::getModule() const {
-  return getFunction()->getParent();
-}
+Module *Trace::getModule() const { return getFunction()->getParent(); }
 
 /// print - Write trace to output stream.
 void Trace::print(raw_ostream &O) const {
@@ -47,7 +45,5 @@ void Trace::print(raw_ostream &O) const {
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 /// dump - Debugger convenience method; writes trace to standard error
 /// output stream.
-LLVM_DUMP_METHOD void Trace::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void Trace::dump() const { print(dbgs()); }
 #endif
diff --git a/llvm/lib/Analysis/TypeBasedAliasAnalysis.cpp b/llvm/lib/Analysis/TypeBasedAliasAnalysis.cpp
index e4dc1a867f6f0c2..74aba13c951e6da 100644
--- a/llvm/lib/Analysis/TypeBasedAliasAnalysis.cpp
+++ b/llvm/lib/Analysis/TypeBasedAliasAnalysis.cpp
@@ -145,8 +145,7 @@ static bool isNewFormatTypeNode(const MDNode *N) {
 /// This is a simple wrapper around an MDNode which provides a higher-level
 /// interface by hiding the details of how alias analysis information is encoded
 /// in its operands.
-template<typename MDNodeTy>
-class TBAANodeImpl {
+template <typename MDNodeTy> class TBAANodeImpl {
   MDNodeTy *Node = nullptr;
 
 public:
@@ -197,8 +196,7 @@ using MutableTBAANode = TBAANodeImpl<MDNode>;
 /// This is a simple wrapper around an MDNode which provides a
 /// higher-level interface by hiding the details of how alias analysis
 /// information is encoded in its operands.
-template<typename MDNodeTy>
-class TBAAStructTagNodeImpl {
+template <typename MDNodeTy> class TBAAStructTagNodeImpl {
   /// This node should be created with createTBAAAccessTag().
   MDNodeTy *Node;
 
@@ -281,9 +279,7 @@ class TBAAStructTypeNode {
   }
 
   /// getId - Return type identifier.
-  Metadata *getId() const {
-    return Node->getOperand(isNewFormat() ? 2 : 0);
-  }
+  Metadata *getId() const { return Node->getOperand(isNewFormat() ? 2 : 0); }
 
   unsigned getNumFields() const {
     unsigned FirstFieldOpNo = isNewFormat() ? 3 : 1;
@@ -464,7 +460,7 @@ bool MDNode::isTBAAVtableAccess() const {
   // For struct-path aware TBAA, we use the access type of the tag.
   TBAAStructTagNode Tag(this);
   TBAAStructTypeNode AccessType(Tag.getAccessType());
-  if(auto *Id = dyn_cast<MDString>(AccessType.getId()))
+  if (auto *Id = dyn_cast<MDString>(AccessType.getId()))
     if (Id->getString() == "vtable pointer")
       return true;
   return false;
@@ -476,7 +472,7 @@ static bool matchAccessTags(const MDNode *A, const MDNode *B,
 MDNode *MDNode::getMostGenericTBAA(MDNode *A, MDNode *B) {
   const MDNode *GenericTag;
   matchAccessTags(A, B, &GenericTag);
-  return const_cast<MDNode*>(GenericTag);
+  return const_cast<MDNode *>(GenericTag);
 }
 
 static const MDNode *getLeastCommonType(const MDNode *A, const MDNode *B) {
@@ -550,15 +546,13 @@ static const MDNode *createAccessTag(const MDNode *AccessType) {
     uint64_t AccessSize = UINT64_MAX;
     auto *SizeNode =
         ConstantAsMetadata::get(ConstantInt::get(Int64, AccessSize));
-    Metadata *Ops[] = {const_cast<MDNode*>(AccessType),
-                       const_cast<MDNode*>(AccessType),
-                       OffsetNode, SizeNode};
+    Metadata *Ops[] = {const_cast<MDNode *>(AccessType),
+                       const_cast<MDNode *>(AccessType), OffsetNode, SizeNode};
     return MDNode::get(AccessType->getContext(), Ops);
   }
 
-  Metadata *Ops[] = {const_cast<MDNode*>(AccessType),
-                     const_cast<MDNode*>(AccessType),
-                     OffsetNode};
+  Metadata *Ops[] = {const_cast<MDNode *>(AccessType),
+                     const_cast<MDNode *>(AccessType), OffsetNode};
   return MDNode::get(AccessType->getContext(), Ops);
 }
 
@@ -614,8 +608,8 @@ static bool mayBeAccessToSubobjectOf(TBAAStructTagNode BaseTag,
     if (BaseType.getNode() == SubobjectTag.getBaseType()) {
       bool SameMemberAccess = OffsetInBase == SubobjectTag.getOffset();
       if (GenericTag) {
-        *GenericTag = SameMemberAccess ? SubobjectTag.getNode() :
-                                         createAccessTag(CommonType);
+        *GenericTag = SameMemberAccess ? SubobjectTag.getNode()
+                                       : createAccessTag(CommonType);
       }
       MayAlias = SameMemberAccess;
       return true;
@@ -671,8 +665,8 @@ static bool matchAccessTags(const MDNode *A, const MDNode *B,
   assert(isStructPathTBAA(B) && "Access B is not struct-path aware!");
 
   TBAAStructTagNode TagA(A), TagB(B);
-  const MDNode *CommonType = getLeastCommonType(TagA.getAccessType(),
-                                                TagB.getAccessType());
+  const MDNode *CommonType =
+      getLeastCommonType(TagA.getAccessType(), TagB.getAccessType());
 
   // If the final access types have different roots, they're part of different
   // potentially unrelated type systems, so we must be conservative.
diff --git a/llvm/lib/Analysis/ValueLattice.cpp b/llvm/lib/Analysis/ValueLattice.cpp
index 1d2177a92eb4654..0cc8a8dac67b1c1 100644
--- a/llvm/lib/Analysis/ValueLattice.cpp
+++ b/llvm/lib/Analysis/ValueLattice.cpp
@@ -10,10 +10,9 @@
 #include "llvm/Analysis/ConstantFolding.h"
 
 namespace llvm {
-Constant *
-ValueLatticeElement::getCompare(CmpInst::Predicate Pred, Type *Ty,
-                                const ValueLatticeElement &Other,
-                                const DataLayout &DL) const {
+Constant *ValueLatticeElement::getCompare(CmpInst::Predicate Pred, Type *Ty,
+                                          const ValueLatticeElement &Other,
+                                          const DataLayout &DL) const {
   // Not yet resolved.
   if (isUnknown() || Other.isUnknown())
     return nullptr;
diff --git a/llvm/lib/Analysis/ValueTracking.cpp b/llvm/lib/Analysis/ValueTracking.cpp
index 79198b09d310b80..ae5a08e0da2f99a 100644
--- a/llvm/lib/Analysis/ValueTracking.cpp
+++ b/llvm/lib/Analysis/ValueTracking.cpp
@@ -85,7 +85,6 @@ using namespace llvm::PatternMatch;
 static cl::opt<unsigned> DomConditionsMaxUses("dom-conditions-max-uses",
                                               cl::Hidden, cl::init(20));
 
-
 /// Returns the bitwidth of the given scalar or pointer type. For vector types,
 /// returns the element type's bitwidth.
 static unsigned getBitWidth(Type *Ty, const DataLayout &DL) {
@@ -111,7 +110,8 @@ static const Instruction *safeCxtI(const Value *V, const Instruction *CxtI) {
   return nullptr;
 }
 
-static const Instruction *safeCxtI(const Value *V1, const Value *V2, const Instruction *CxtI) {
+static const Instruction *safeCxtI(const Value *V1, const Value *V2,
+                                   const Instruction *CxtI) {
   // If we've been provided with a context instruction, then use that (provided
   // it has been inserted).
   if (CxtI && CxtI->getParent())
@@ -133,7 +133,7 @@ static bool getShuffleDemandedElts(const ShuffleVectorInst *Shuf,
                                    const APInt &DemandedElts,
                                    APInt &DemandedLHS, APInt &DemandedRHS) {
   if (isa<ScalableVectorType>(Shuf->getType())) {
-    assert(DemandedElts == APInt(1,1));
+    assert(DemandedElts == APInt(1, 1));
     DemandedLHS = DemandedRHS = DemandedElts;
     return true;
   }
@@ -482,18 +482,17 @@ static bool isEphemeralValueOf(const Instruction *I, const Value *E) {
       continue;
 
     // If all uses of this value are ephemeral, then so is this value.
-    if (llvm::all_of(V->users(), [&](const User *U) {
-                                   return EphValues.count(U);
-                                 })) {
+    if (llvm::all_of(V->users(),
+                     [&](const User *U) { return EphValues.count(U); })) {
       if (V == E)
         return true;
 
-      if (V == I || (isa<Instruction>(V) &&
-                     !cast<Instruction>(V)->mayHaveSideEffects() &&
-                     !cast<Instruction>(V)->isTerminator())) {
-       EphValues.insert(V);
-       if (const User *U = dyn_cast<User>(V))
-         append_range(WorkSet, U->operands());
+      if (V == I ||
+          (isa<Instruction>(V) && !cast<Instruction>(V)->mayHaveSideEffects() &&
+           !cast<Instruction>(V)->isTerminator())) {
+        EphValues.insert(V);
+        if (const User *U = dyn_cast<User>(V))
+          append_range(WorkSet, U->operands());
       }
     }
   }
@@ -794,7 +793,7 @@ void llvm::computeKnownBitsFromAssume(const Value *V, KnownBits &Known,
   // Refine Known set if the pointer alignment is set by assume bundles.
   if (V->getType()->isPointerTy()) {
     if (RetainedKnowledge RK = getKnowledgeValidInContext(
-            V, { Attribute::Alignment }, Q.CxtI, Q.DT, Q.AC)) {
+            V, {Attribute::Alignment}, Q.CxtI, Q.DT, Q.AC)) {
       if (isPowerOf2_64(RK.ArgValue))
         Known.Zero.setLowBits(Log2_64(RK.ArgValue));
     }
@@ -989,7 +988,8 @@ static void computeKnownBitsFromOperator(const Operator *I,
 
   KnownBits Known2(BitWidth);
   switch (I->getOpcode()) {
-  default: break;
+  default:
+    break;
   case Instruction::Load:
     if (MDNode *MD =
             Q.IIQ.getMetadata(cast<LoadInst>(I), LLVMContext::MD_range))
@@ -1094,9 +1094,9 @@ static void computeKnownBitsFromOperator(const Operator *I,
     // Note that we handle pointer operands here because of inttoptr/ptrtoint
     // which fall through here.
     Type *ScalarTy = SrcTy->getScalarType();
-    SrcBitWidth = ScalarTy->isPointerTy() ?
-      Q.DL.getPointerTypeSizeInBits(ScalarTy) :
-      Q.DL.getTypeSizeInBits(ScalarTy);
+    SrcBitWidth = ScalarTy->isPointerTy()
+                      ? Q.DL.getPointerTypeSizeInBits(ScalarTy)
+                      : Q.DL.getTypeSizeInBits(ScalarTy);
 
     assert(SrcBitWidth && "SrcBitWidth can't be zero");
     Known = Known.anyextOrTrunc(SrcBitWidth);
@@ -1365,10 +1365,8 @@ static void computeKnownBitsFromOperator(const Operator *I,
       // Check for operations that have the property that if
       // both their operands have low zero bits, the result
       // will have low zero bits.
-      if (Opcode == Instruction::Add ||
-          Opcode == Instruction::Sub ||
-          Opcode == Instruction::And ||
-          Opcode == Instruction::Or ||
+      if (Opcode == Instruction::Add || Opcode == Instruction::Sub ||
+          Opcode == Instruction::And || Opcode == Instruction::Or ||
           Opcode == Instruction::Mul) {
         // Change the context instruction to the "edge" that flows into the
         // phi. This is important because that is where the value is actually
@@ -1378,7 +1376,7 @@ static void computeKnownBitsFromOperator(const Operator *I,
 
         unsigned OpNum = P->getOperand(0) == R ? 0 : 1;
         Instruction *RInst = P->getIncomingBlock(OpNum)->getTerminator();
-        Instruction *LInst = P->getIncomingBlock(1-OpNum)->getTerminator();
+        Instruction *LInst = P->getIncomingBlock(1 - OpNum)->getTerminator();
 
         // Ok, we have a PHI of the form L op= R. Check for low
         // zero bits.
@@ -1446,7 +1444,8 @@ static void computeKnownBitsFromOperator(const Operator *I,
       for (unsigned u = 0, e = P->getNumIncomingValues(); u < e; ++u) {
         Value *IncValue = P->getIncomingValue(u);
         // Skip direct self references.
-        if (IncValue == P) continue;
+        if (IncValue == P)
+          continue;
 
         // Change the context instruction to the "edge" that flows into the
         // phi. This is important because that is where the value is actually
@@ -1518,7 +1517,8 @@ static void computeKnownBitsFromOperator(const Operator *I,
     }
     if (const IntrinsicInst *II = dyn_cast<IntrinsicInst>(I)) {
       switch (II->getIntrinsicID()) {
-      default: break;
+      default:
+        break;
       case Intrinsic::abs: {
         computeKnownBits(I->getOperand(0), Known2, Depth + 1, Q);
         bool IntMinIsPoison = match(II->getArgOperand(1), m_One());
@@ -1743,10 +1743,12 @@ static void computeKnownBitsFromOperator(const Operator *I,
   case Instruction::ExtractValue:
     if (IntrinsicInst *II = dyn_cast<IntrinsicInst>(I->getOperand(0))) {
       const ExtractValueInst *EVI = cast<ExtractValueInst>(I);
-      if (EVI->getNumIndices() != 1) break;
+      if (EVI->getNumIndices() != 1)
+        break;
       if (EVI->getIndices()[0] == 0) {
         switch (II->getIntrinsicID()) {
-        default: break;
+        default:
+          break;
         case Intrinsic::uadd_with_overflow:
         case Intrinsic::sadd_with_overflow:
           computeKnownBitsAddSub(true, II->getArgOperand(0),
@@ -1864,7 +1866,8 @@ void computeKnownBits(const Value *V, const APInt &DemandedElts,
     assert(!isa<ScalableVectorType>(V->getType()));
     // We know that CDV must be a vector of integers. Take the intersection of
     // each element.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     for (unsigned i = 0, e = CDV->getNumElements(); i != e; ++i) {
       if (!DemandedElts[i])
         continue;
@@ -1879,7 +1882,8 @@ void computeKnownBits(const Value *V, const APInt &DemandedElts,
     assert(!isa<ScalableVectorType>(V->getType()));
     // We know that CV must be a vector of integers. Take the intersection of
     // each element.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     for (unsigned i = 0, e = CV->getNumOperands(); i != e; ++i) {
       if (!DemandedElts[i])
         continue;
@@ -2321,7 +2325,8 @@ static bool isKnownNonNullFromDominatingCondition(const Value *V,
 /// Does the 'Range' metadata (which must be a valid MD_range operand list)
 /// ensure that the value it's attached to is never Value?  'RangeType' is
 /// is the type of the value described by the range.
-static bool rangeMetadataExcludesValue(const MDNode* Ranges, const APInt& Value) {
+static bool rangeMetadataExcludesValue(const MDNode *Ranges,
+                                       const APInt &Value) {
   const unsigned NumRanges = Ranges->getNumOperands() / 2;
   assert(NumRanges >= 1);
   for (unsigned i = 0; i < NumRanges; ++i) {
@@ -2944,9 +2949,8 @@ bool isKnownNonZero(const Value *V, unsigned Depth, const SimplifyQuery &Q) {
 /// every input value to exactly one output value.  This is equivalent to
 /// saying that Op1 and Op2 are equal exactly when the specified pair of
 /// operands are equal, (except that Op1 and Op2 may be poison more often.)
-static std::optional<std::pair<Value*, Value*>>
-getInvertibleOperands(const Operator *Op1,
-                      const Operator *Op2) {
+static std::optional<std::pair<Value *, Value *>>
+getInvertibleOperands(const Operator *Op1, const Operator *Op2) {
   if (Op1->getOpcode() != Op2->getOpcode())
     return std::nullopt;
 
@@ -3026,10 +3030,10 @@ getInvertibleOperands(const Operator *Op1,
         !matchSimpleRecurrence(PN2, BO2, Start2, Step2))
       break;
 
-    auto Values = getInvertibleOperands(cast<Operator>(BO1),
-                                        cast<Operator>(BO2));
+    auto Values =
+        getInvertibleOperands(cast<Operator>(BO1), cast<Operator>(BO2));
     if (!Values)
-       break;
+      break;
 
     // We have to be careful of mutually defined recurrences here.  Ex:
     // * X_i = X_(i-1) OP Y_(i-1), and Y_i = X_(i-1) OP V
@@ -3220,7 +3224,8 @@ static bool isSignedMinMaxIntrinsicClamp(const IntrinsicInst *II,
                                          const APInt *&CLow,
                                          const APInt *&CHigh) {
   assert((II->getIntrinsicID() == Intrinsic::smin ||
-          II->getIntrinsicID() == Intrinsic::smax) && "Must be smin/smax");
+          II->getIntrinsicID() == Intrinsic::smax) &&
+         "Must be smin/smax");
 
   Intrinsic::ID InverseID = getInverseMinMaxIntrinsic(II->getIntrinsicID());
   auto *InnerII = dyn_cast<IntrinsicInst>(II->getArgOperand(0));
@@ -3301,9 +3306,9 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
   // same behavior for poison though -- that's a FIXME today.
 
   Type *ScalarTy = Ty->getScalarType();
-  unsigned TyBits = ScalarTy->isPointerTy() ?
-    Q.DL.getPointerTypeSizeInBits(ScalarTy) :
-    Q.DL.getTypeSizeInBits(ScalarTy);
+  unsigned TyBits = ScalarTy->isPointerTy()
+                        ? Q.DL.getPointerTypeSizeInBits(ScalarTy)
+                        : Q.DL.getTypeSizeInBits(ScalarTy);
 
   unsigned Tmp, Tmp2;
   unsigned FirstAnswer = 1;
@@ -3316,7 +3321,8 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
 
   if (auto *U = dyn_cast<Operator>(V)) {
     switch (Operator::getOpcode(V)) {
-    default: break;
+    default:
+      break;
     case Instruction::SExt:
       Tmp = TyBits - U->getOperand(0)->getType()->getScalarSizeInBits();
       return ComputeNumSignBits(U->getOperand(0), Depth + 1, Q) + Tmp;
@@ -3379,7 +3385,8 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
           break; // Bad shift.
         unsigned ShAmtLimited = ShAmt->getZExtValue();
         Tmp += ShAmtLimited;
-        if (Tmp > TyBits) Tmp = TyBits;
+        if (Tmp > TyBits)
+          Tmp = TyBits;
       }
       return Tmp;
     }
@@ -3388,8 +3395,9 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
       if (match(U->getOperand(1), m_APInt(ShAmt))) {
         // shl destroys sign bits.
         Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-        if (ShAmt->uge(TyBits) ||   // Bad shift.
-            ShAmt->uge(Tmp)) break; // Shifted all sign bits out.
+        if (ShAmt->uge(TyBits) || // Bad shift.
+            ShAmt->uge(Tmp))
+          break; // Shifted all sign bits out.
         Tmp2 = ShAmt->getZExtValue();
         return Tmp - Tmp2;
       }
@@ -3418,7 +3426,8 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
         return std::min(CLow->getNumSignBits(), CHigh->getNumSignBits());
 
       Tmp = ComputeNumSignBits(U->getOperand(1), Depth + 1, Q);
-      if (Tmp == 1) break;
+      if (Tmp == 1)
+        break;
       Tmp2 = ComputeNumSignBits(U->getOperand(2), Depth + 1, Q);
       return std::min(Tmp, Tmp2);
     }
@@ -3427,7 +3436,8 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
       // Add can have at most one carry bit.  Thus we know that the output
       // is, at worst, one more bit than the inputs.
       Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-      if (Tmp == 1) break;
+      if (Tmp == 1)
+        break;
 
       // Special case decrementing a value (ADD X, -1):
       if (const auto *CRHS = dyn_cast<Constant>(U->getOperand(1)))
@@ -3447,12 +3457,14 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
         }
 
       Tmp2 = ComputeNumSignBits(U->getOperand(1), Depth + 1, Q);
-      if (Tmp2 == 1) break;
+      if (Tmp2 == 1)
+        break;
       return std::min(Tmp, Tmp2) - 1;
 
     case Instruction::Sub:
       Tmp2 = ComputeNumSignBits(U->getOperand(1), Depth + 1, Q);
-      if (Tmp2 == 1) break;
+      if (Tmp2 == 1)
+        break;
 
       // Handle NEG.
       if (const auto *CLHS = dyn_cast<Constant>(U->getOperand(0)))
@@ -3476,16 +3488,19 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
       // Sub can have at most one carry bit.  Thus we know that the output
       // is, at worst, one more bit than the inputs.
       Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-      if (Tmp == 1) break;
+      if (Tmp == 1)
+        break;
       return std::min(Tmp, Tmp2) - 1;
 
     case Instruction::Mul: {
       // The output of the Mul can be at most twice the valid bits in the
       // inputs.
       unsigned SignBitsOp0 = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-      if (SignBitsOp0 == 1) break;
+      if (SignBitsOp0 == 1)
+        break;
       unsigned SignBitsOp1 = ComputeNumSignBits(U->getOperand(1), Depth + 1, Q);
-      if (SignBitsOp1 == 1) break;
+      if (SignBitsOp1 == 1)
+        break;
       unsigned OutValidBits =
           (TyBits - SignBitsOp0 + 1) + (TyBits - SignBitsOp1 + 1);
       return OutValidBits > TyBits ? 1 : TyBits - OutValidBits + 1;
@@ -3495,16 +3510,19 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
       const PHINode *PN = cast<PHINode>(U);
       unsigned NumIncomingValues = PN->getNumIncomingValues();
       // Don't analyze large in-degree PHIs.
-      if (NumIncomingValues > 4) break;
+      if (NumIncomingValues > 4)
+        break;
       // Unreachable blocks may have zero-operand PHI nodes.
-      if (NumIncomingValues == 0) break;
+      if (NumIncomingValues == 0)
+        break;
 
       // Take the minimum of all incoming values.  This can't infinitely loop
       // because of our depth threshold.
       SimplifyQuery RecQ = Q;
       Tmp = TyBits;
       for (unsigned i = 0, e = NumIncomingValues; i != e; ++i) {
-        if (Tmp == 1) return Tmp;
+        if (Tmp == 1)
+          return Tmp;
         RecQ.CxtI = PN->getIncomingBlock(i)->getTerminator();
         Tmp = std::min(
             Tmp, ComputeNumSignBits(PN->getIncomingValue(i), Depth + 1, RecQ));
@@ -3517,7 +3535,8 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
       // truncation, then we can make use of that. Otherwise we don't know
       // anything.
       Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-      unsigned OperandTyBits = U->getOperand(0)->getType()->getScalarSizeInBits();
+      unsigned OperandTyBits =
+          U->getOperand(0)->getType()->getScalarSizeInBits();
       if (Tmp > (OperandTyBits - TyBits))
         return Tmp - (OperandTyBits - TyBits);
 
@@ -3568,10 +3587,12 @@ static unsigned ComputeNumSignBitsImpl(const Value *V,
     case Instruction::Call: {
       if (const auto *II = dyn_cast<IntrinsicInst>(U)) {
         switch (II->getIntrinsicID()) {
-        default: break;
+        default:
+          break;
         case Intrinsic::abs:
           Tmp = ComputeNumSignBits(U->getOperand(0), Depth + 1, Q);
-          if (Tmp == 1) break;
+          if (Tmp == 1)
+            break;
 
           // Absolute value reduces number of sign bits by at most 1.
           return Tmp - 1;
@@ -4282,8 +4303,8 @@ static void computeKnownFPClassForFPTrunc(const Operator *Op,
                                           FPClassTest InterestedClasses,
                                           KnownFPClass &Known, unsigned Depth,
                                           const SimplifyQuery &Q) {
-  if ((InterestedClasses &
-       (KnownFPClass::OrderedLessThanZeroMask | fcNan)) == fcNone)
+  if ((InterestedClasses & (KnownFPClass::OrderedLessThanZeroMask | fcNan)) ==
+      fcNone)
     return;
 
   KnownFPClass KnownSrc;
@@ -4372,9 +4393,8 @@ void computeKnownFPClass(const Value *V, const APInt &DemandedElts,
   // assume this from flags/attributes.
   InterestedClasses &= ~KnownNotFromFlags;
 
-  auto ClearClassesFromFlags = make_scope_exit([=, &Known] {
-    Known.knownNot(KnownNotFromFlags);
-  });
+  auto ClearClassesFromFlags =
+      make_scope_exit([=, &Known] { Known.knownNot(KnownNotFromFlags); });
 
   if (!Op)
     return;
@@ -5020,7 +5040,8 @@ void computeKnownFPClass(const Value *V, const APInt &DemandedElts,
 
       // X / -0.0 is -Inf (or NaN).
       // +X / +X is +X
-      if (KnownLHS.isKnownNever(fcNegative) && KnownRHS.isKnownNever(fcNegative))
+      if (KnownLHS.isKnownNever(fcNegative) &&
+          KnownRHS.isKnownNever(fcNegative))
         Known.knownNot(fcNegative);
     } else {
       // Inf REM x and x REM 0 produce NaN.
@@ -5157,13 +5178,14 @@ void computeKnownFPClass(const Value *V, const APInt &DemandedElts,
     // the shuffle result.
     APInt DemandedLHS, DemandedRHS;
     auto *Shuf = dyn_cast<ShuffleVectorInst>(Op);
-    if (!Shuf || !getShuffleDemandedElts(Shuf, DemandedElts, DemandedLHS, DemandedRHS))
+    if (!Shuf ||
+        !getShuffleDemandedElts(Shuf, DemandedElts, DemandedLHS, DemandedRHS))
       return;
 
     if (!!DemandedLHS) {
       const Value *LHS = Shuf->getOperand(0);
-      computeKnownFPClass(LHS, DemandedLHS, InterestedClasses, Known,
-                          Depth + 1, Q);
+      computeKnownFPClass(LHS, DemandedLHS, InterestedClasses, Known, Depth + 1,
+                          Q);
 
       // If we don't know any bits, early out.
       if (Known.isUnknown())
@@ -5278,11 +5300,12 @@ void computeKnownFPClass(const Value *V, const APInt &DemandedElts,
   }
 }
 
-KnownFPClass llvm::computeKnownFPClass(
-    const Value *V, const APInt &DemandedElts, const DataLayout &DL,
-    FPClassTest InterestedClasses, unsigned Depth, const TargetLibraryInfo *TLI,
-    AssumptionCache *AC, const Instruction *CxtI, const DominatorTree *DT,
-    bool UseInstrInfo) {
+KnownFPClass
+llvm::computeKnownFPClass(const Value *V, const APInt &DemandedElts,
+                          const DataLayout &DL, FPClassTest InterestedClasses,
+                          unsigned Depth, const TargetLibraryInfo *TLI,
+                          AssumptionCache *AC, const Instruction *CxtI,
+                          const DominatorTree *DT, bool UseInstrInfo) {
   KnownFPClass KnownClasses;
   ::computeKnownFPClass(
       V, DemandedElts, InterestedClasses, KnownClasses, Depth,
@@ -5408,10 +5431,9 @@ Value *llvm::isBytewiseValue(Value *V, const DataLayout &DL) {
 // indices from Idxs that should be left out when inserting into the resulting
 // struct. To is the result struct built so far, new insertvalue instructions
 // build on that.
-static Value *BuildSubAggregate(Value *From, Value* To, Type *IndexedType,
+static Value *BuildSubAggregate(Value *From, Value *To, Type *IndexedType,
                                 SmallVectorImpl<unsigned> &Idxs,
-                                unsigned IdxSkip,
-                                Instruction *InsertBefore) {
+                                unsigned IdxSkip, Instruction *InsertBefore) {
   StructType *STy = dyn_cast<StructType>(IndexedType);
   if (STy) {
     // Save the original To argument so we can modify it
@@ -5427,7 +5449,7 @@ static Value *BuildSubAggregate(Value *From, Value* To, Type *IndexedType,
       if (!To) {
         // Couldn't find any inserted value for this index? Cleanup
         while (PrevTo != OrigTo) {
-          InsertValueInst* Del = cast<InsertValueInst>(PrevTo);
+          InsertValueInst *Del = cast<InsertValueInst>(PrevTo);
           PrevTo = Del->getAggregateOperand();
           Del->eraseFromParent();
         }
@@ -5470,8 +5492,8 @@ static Value *BuildSubAggregate(Value *From, Value* To, Type *IndexedType,
 static Value *BuildSubAggregate(Value *From, ArrayRef<unsigned> idx_range,
                                 Instruction *InsertBefore) {
   assert(InsertBefore && "Must have someplace to insert!");
-  Type *IndexedType = ExtractValueInst::getIndexedType(From->getType(),
-                                                             idx_range);
+  Type *IndexedType =
+      ExtractValueInst::getIndexedType(From->getType(), idx_range);
   Value *To = PoisonValue::get(IndexedType);
   SmallVector<unsigned, 10> Idxs(idx_range.begin(), idx_range.end());
   unsigned IdxSkip = Idxs.size();
@@ -5499,7 +5521,8 @@ Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
 
   if (Constant *C = dyn_cast<Constant>(V)) {
     C = C->getAggregateElement(idx_range[0]);
-    if (!C) return nullptr;
+    if (!C)
+      return nullptr;
     return FindInsertedValue(C, idx_range.slice(1), InsertBefore);
   }
 
@@ -5507,8 +5530,8 @@ Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
     // Loop the indices for the insertvalue instruction in parallel with the
     // requested indices
     const unsigned *req_idx = idx_range.begin();
-    for (const unsigned *i = I->idx_begin(), *e = I->idx_end();
-         i != e; ++i, ++req_idx) {
+    for (const unsigned *i = I->idx_begin(), *e = I->idx_end(); i != e;
+         ++i, ++req_idx) {
       if (req_idx == idx_range.end()) {
         // We can't handle this without inserting insertvalues
         if (!InsertBefore)
@@ -5558,8 +5581,7 @@ Value *llvm::FindInsertedValue(Value *V, ArrayRef<unsigned> idx_range,
     // Add requested indices
     Idxs.append(idx_range.begin(), idx_range.end());
 
-    assert(Idxs.size() == size
-           && "Number of indices added not correct?");
+    assert(Idxs.size() == size && "Number of indices added not correct?");
 
     return FindInsertedValue(I->getAggregateOperand(), Idxs, InsertBefore);
   }
@@ -5605,8 +5627,7 @@ bool llvm::getConstantDataArrayInfo(const Value *V,
   // Drill down into the pointer expression V, ignoring any intervening
   // casts, and determine the identity of the object it references along
   // with the cumulative byte offset into it.
-  const GlobalVariable *GV =
-    dyn_cast<GlobalVariable>(getUnderlyingObject(V));
+  const GlobalVariable *GV = dyn_cast<GlobalVariable>(getUnderlyingObject(V));
   if (!GV || !GV->isConstant() || !GV->hasDefinitiveInitializer())
     // Fail if V is not based on constant global object.
     return false;
@@ -5736,7 +5757,7 @@ bool llvm::getConstantStringInfo(const Value *V, StringRef &Str,
 /// If we can compute the length of the string pointed to by
 /// the specified pointer, return 'len+1'.  If we can't, return 0.
 static uint64_t GetStringLengthH(const Value *V,
-                                 SmallPtrSetImpl<const PHINode*> &PHIs,
+                                 SmallPtrSetImpl<const PHINode *> &PHIs,
                                  unsigned CharSize) {
   // Look through noop bitcast instructions.
   V = V->stripPointerCasts();
@@ -5745,18 +5766,20 @@ static uint64_t GetStringLengthH(const Value *V,
   // or we haven't.
   if (const PHINode *PN = dyn_cast<PHINode>(V)) {
     if (!PHIs.insert(PN).second)
-      return ~0ULL;  // already in the set.
+      return ~0ULL; // already in the set.
 
     // If it was new, see if all the input strings are the same length.
     uint64_t LenSoFar = ~0ULL;
     for (Value *IncValue : PN->incoming_values()) {
       uint64_t Len = GetStringLengthH(IncValue, PHIs, CharSize);
-      if (Len == 0) return 0; // Unknown length -> unknown.
+      if (Len == 0)
+        return 0; // Unknown length -> unknown.
 
-      if (Len == ~0ULL) continue;
+      if (Len == ~0ULL)
+        continue;
 
       if (Len != LenSoFar && LenSoFar != ~0ULL)
-        return 0;    // Disagree -> unknown.
+        return 0; // Disagree -> unknown.
       LenSoFar = Len;
     }
 
@@ -5767,12 +5790,17 @@ static uint64_t GetStringLengthH(const Value *V,
   // strlen(select(c,x,y)) -> strlen(x) ^ strlen(y)
   if (const SelectInst *SI = dyn_cast<SelectInst>(V)) {
     uint64_t Len1 = GetStringLengthH(SI->getTrueValue(), PHIs, CharSize);
-    if (Len1 == 0) return 0;
+    if (Len1 == 0)
+      return 0;
     uint64_t Len2 = GetStringLengthH(SI->getFalseValue(), PHIs, CharSize);
-    if (Len2 == 0) return 0;
-    if (Len1 == ~0ULL) return Len2;
-    if (Len2 == ~0ULL) return Len1;
-    if (Len1 != Len2) return 0;
+    if (Len2 == 0)
+      return 0;
+    if (Len1 == ~0ULL)
+      return Len2;
+    if (Len2 == ~0ULL)
+      return Len1;
+    if (Len1 != Len2)
+      return 0;
     return Len1;
   }
 
@@ -5804,7 +5832,7 @@ uint64_t llvm::GetStringLength(const Value *V, unsigned CharSize) {
   if (!V->getType()->isPointerTy())
     return 0;
 
-  SmallPtrSet<const PHINode*, 32> PHIs;
+  SmallPtrSet<const PHINode *, 32> PHIs;
   uint64_t Len = GetStringLengthH(V, PHIs, CharSize);
   // If Len is ~0ULL, we had an infinite phi cycle: this is dead code, so return
   // an empty string as a length.
@@ -6008,7 +6036,7 @@ bool llvm::getUnderlyingObjectsForCodeGen(const Value *V,
         continue;
       if (Operator::getOpcode(V) == Instruction::IntToPtr) {
         const Value *O =
-          getUnderlyingObjectFromInt(cast<User>(V)->getOperand(0));
+            getUnderlyingObjectFromInt(cast<User>(V)->getOperand(0));
         if (O->getType()->isPointerTy()) {
           Working.push_back(O);
           continue;
@@ -6104,9 +6132,9 @@ bool llvm::mustSuppressSpeculation(const LoadInst &LI) {
   const Function &F = *LI.getFunction();
   // Speculative load may create a race that did not exist in the source.
   return F.hasFnAttribute(Attribute::SanitizeThread) ||
-    // Speculative load may load data from dirty regions.
-    F.hasFnAttribute(Attribute::SanitizeAddress) ||
-    F.hasFnAttribute(Attribute::SanitizeHWAddress);
+         // Speculative load may load data from dirty regions.
+         F.hasFnAttribute(Attribute::SanitizeAddress) ||
+         F.hasFnAttribute(Attribute::SanitizeHWAddress);
 }
 
 bool llvm::isSafeToSpeculativelyExecute(const Instruction *Inst,
@@ -6237,14 +6265,14 @@ bool llvm::mayHaveNonDefUseDependency(const Instruction &I) {
 /// Convert ConstantRange OverflowResult into ValueTracking OverflowResult.
 static OverflowResult mapOverflowResult(ConstantRange::OverflowResult OR) {
   switch (OR) {
-    case ConstantRange::OverflowResult::MayOverflow:
-      return OverflowResult::MayOverflow;
-    case ConstantRange::OverflowResult::AlwaysOverflowsLow:
-      return OverflowResult::AlwaysOverflowsLow;
-    case ConstantRange::OverflowResult::AlwaysOverflowsHigh:
-      return OverflowResult::AlwaysOverflowsHigh;
-    case ConstantRange::OverflowResult::NeverOverflows:
-      return OverflowResult::NeverOverflows;
+  case ConstantRange::OverflowResult::MayOverflow:
+    return OverflowResult::MayOverflow;
+  case ConstantRange::OverflowResult::AlwaysOverflowsLow:
+    return OverflowResult::AlwaysOverflowsLow;
+  case ConstantRange::OverflowResult::AlwaysOverflowsHigh:
+    return OverflowResult::AlwaysOverflowsHigh;
+  case ConstantRange::OverflowResult::NeverOverflows:
+    return OverflowResult::NeverOverflows;
   }
   llvm_unreachable("Unknown OverflowResult");
 }
@@ -6266,10 +6294,10 @@ OverflowResult llvm::computeOverflowForUnsignedMul(
     const Value *LHS, const Value *RHS, const DataLayout &DL,
     AssumptionCache *AC, const Instruction *CxtI, const DominatorTree *DT,
     bool UseInstrInfo) {
-  KnownBits LHSKnown = computeKnownBits(LHS, DL, /*Depth=*/0, AC, CxtI, DT,
-                                        UseInstrInfo);
-  KnownBits RHSKnown = computeKnownBits(RHS, DL, /*Depth=*/0, AC, CxtI, DT,
-                                        UseInstrInfo);
+  KnownBits LHSKnown =
+      computeKnownBits(LHS, DL, /*Depth=*/0, AC, CxtI, DT, UseInstrInfo);
+  KnownBits RHSKnown =
+      computeKnownBits(RHS, DL, /*Depth=*/0, AC, CxtI, DT, UseInstrInfo);
   ConstantRange LHSRange = ConstantRange::fromKnownBits(LHSKnown, false);
   ConstantRange RHSRange = ConstantRange::fromKnownBits(RHSKnown, false);
   return mapOverflowResult(LHSRange.unsignedMulMayOverflow(RHSRange));
@@ -6308,10 +6336,10 @@ llvm::computeOverflowForSignedMul(const Value *LHS, const Value *RHS,
     // product is exactly the minimum negative number.
     // E.g. mul i16 with 17 sign bits: 0xff00 * 0xff80 = 0x8000
     // For simplicity we just check if at least one side is not negative.
-    KnownBits LHSKnown = computeKnownBits(LHS, DL, /*Depth=*/0, AC, CxtI, DT,
-                                          UseInstrInfo);
-    KnownBits RHSKnown = computeKnownBits(RHS, DL, /*Depth=*/0, AC, CxtI, DT,
-                                          UseInstrInfo);
+    KnownBits LHSKnown =
+        computeKnownBits(LHS, DL, /*Depth=*/0, AC, CxtI, DT, UseInstrInfo);
+    KnownBits RHSKnown =
+        computeKnownBits(RHS, DL, /*Depth=*/0, AC, CxtI, DT, UseInstrInfo);
     if (LHSKnown.isNonNegative() || RHSKnown.isNonNegative())
       return OverflowResult::NeverOverflows;
   }
@@ -6329,13 +6357,11 @@ OverflowResult llvm::computeOverflowForUnsignedAdd(
   return mapOverflowResult(LHSRange.unsignedAddMayOverflow(RHSRange));
 }
 
-static OverflowResult computeOverflowForSignedAdd(const Value *LHS,
-                                                  const Value *RHS,
-                                                  const AddOperator *Add,
-                                                  const DataLayout &DL,
-                                                  AssumptionCache *AC,
-                                                  const Instruction *CxtI,
-                                                  const DominatorTree *DT) {
+static OverflowResult
+computeOverflowForSignedAdd(const Value *LHS, const Value *RHS,
+                            const AddOperator *Add, const DataLayout &DL,
+                            AssumptionCache *AC, const Instruction *CxtI,
+                            const DominatorTree *DT) {
   if (Add && Add->hasNoSignedWrap()) {
     return OverflowResult::NeverOverflows;
   }
@@ -6393,12 +6419,9 @@ static OverflowResult computeOverflowForSignedAdd(const Value *LHS,
   return OverflowResult::MayOverflow;
 }
 
-OverflowResult llvm::computeOverflowForUnsignedSub(const Value *LHS,
-                                                   const Value *RHS,
-                                                   const DataLayout &DL,
-                                                   AssumptionCache *AC,
-                                                   const Instruction *CxtI,
-                                                   const DominatorTree *DT) {
+OverflowResult llvm::computeOverflowForUnsignedSub(
+    const Value *LHS, const Value *RHS, const DataLayout &DL,
+    AssumptionCache *AC, const Instruction *CxtI, const DominatorTree *DT) {
   // X - (X % ?)
   // The remainder of a value can't have greater magnitude than itself,
   // so the subtraction can't overflow.
@@ -6432,12 +6455,9 @@ OverflowResult llvm::computeOverflowForUnsignedSub(const Value *LHS,
   return mapOverflowResult(LHSRange.unsignedSubMayOverflow(RHSRange));
 }
 
-OverflowResult llvm::computeOverflowForSignedSub(const Value *LHS,
-                                                 const Value *RHS,
-                                                 const DataLayout &DL,
-                                                 AssumptionCache *AC,
-                                                 const Instruction *CxtI,
-                                                 const DominatorTree *DT) {
+OverflowResult llvm::computeOverflowForSignedSub(
+    const Value *LHS, const Value *RHS, const DataLayout &DL,
+    AssumptionCache *AC, const Instruction *CxtI, const DominatorTree *DT) {
   // X - (X % ?)
   // The remainder of a value can't have greater magnitude than itself,
   // so the subtraction can't overflow.
@@ -6704,8 +6724,8 @@ bool llvm::canCreatePoison(const Operator *Op, bool ConsiderFlagsAndMetadata) {
                                   ConsiderFlagsAndMetadata);
 }
 
-static bool directlyImpliesPoison(const Value *ValAssumedPoison,
-                                  const Value *V, unsigned Depth) {
+static bool directlyImpliesPoison(const Value *ValAssumedPoison, const Value *V,
+                                  unsigned Depth) {
   if (ValAssumedPoison == V)
     return true;
 
@@ -6757,8 +6777,7 @@ bool llvm::impliesPoison(const Value *ValAssumedPoison, const Value *V) {
   return ::impliesPoison(ValAssumedPoison, V, /* Depth */ 0);
 }
 
-static bool programUndefinedIfUndefOrPoison(const Value *V,
-                                            bool PoisonOnly);
+static bool programUndefinedIfUndefOrPoison(const Value *V, bool PoisonOnly);
 
 static bool isGuaranteedNotToBeUndefOrPoison(const Value *V,
                                              AssumptionCache *AC,
@@ -6927,7 +6946,7 @@ bool llvm::mustExecuteUBIfPoisonOnPathTo(Instruction *Root,
   // The set of all recursive users we've visited (which are assumed to all be
   // poison because of said visit)
   SmallSet<const Value *, 16> KnownPoison;
-  SmallVector<const Instruction*, 16> Worklist;
+  SmallVector<const Instruction *, 16> Worklist;
   Worklist.push_back(Root);
   while (!Worklist.empty()) {
     const Instruction *I = Worklist.pop_back_val();
@@ -6962,19 +6981,16 @@ OverflowResult llvm::computeOverflowForSignedAdd(const AddOperator *Add,
                                        Add, DL, AC, CxtI, DT);
 }
 
-OverflowResult llvm::computeOverflowForSignedAdd(const Value *LHS,
-                                                 const Value *RHS,
-                                                 const DataLayout &DL,
-                                                 AssumptionCache *AC,
-                                                 const Instruction *CxtI,
-                                                 const DominatorTree *DT) {
+OverflowResult llvm::computeOverflowForSignedAdd(
+    const Value *LHS, const Value *RHS, const DataLayout &DL,
+    AssumptionCache *AC, const Instruction *CxtI, const DominatorTree *DT) {
   return ::computeOverflowForSignedAdd(LHS, RHS, nullptr, DL, AC, CxtI, DT);
 }
 
 bool llvm::isGuaranteedToTransferExecutionToSuccessor(const Instruction *I) {
   // Note: An atomic operation isn't guaranteed to return in a reasonable amount
-  // of time because it's possible for another thread to interfere with it for an
-  // arbitrary length of time, but programs aren't allowed to rely on that.
+  // of time because it's possible for another thread to interfere with it for
+  // an arbitrary length of time, but programs aren't allowed to rely on that.
 
   // If there is no successor, then execution can't transfer to it.
   if (isa<ReturnInst>(I))
@@ -7013,18 +7029,18 @@ bool llvm::isGuaranteedToTransferExecutionToSuccessor(const BasicBlock *BB) {
 }
 
 bool llvm::isGuaranteedToTransferExecutionToSuccessor(
-   BasicBlock::const_iterator Begin, BasicBlock::const_iterator End,
-   unsigned ScanLimit) {
+    BasicBlock::const_iterator Begin, BasicBlock::const_iterator End,
+    unsigned ScanLimit) {
   return isGuaranteedToTransferExecutionToSuccessor(make_range(Begin, End),
                                                     ScanLimit);
 }
 
 bool llvm::isGuaranteedToTransferExecutionToSuccessor(
-   iterator_range<BasicBlock::const_iterator> Range, unsigned ScanLimit) {
+    iterator_range<BasicBlock::const_iterator> Range, unsigned ScanLimit) {
   assert(ScanLimit && "scan limit must be non-zero");
   for (const Instruction &I : Range) {
     if (isa<DbgInfoIntrinsic>(I))
-        continue;
+      continue;
     if (--ScanLimit == 0)
       return false;
     if (!isGuaranteedToTransferExecutionToSuccessor(&I))
@@ -7039,11 +7055,14 @@ bool llvm::isGuaranteedToExecuteForEveryIteration(const Instruction *I,
   //
   // FIXME: Relax this constraint to cover all basic blocks that are
   // guaranteed to be executed at every iteration.
-  if (I->getParent() != L->getHeader()) return false;
+  if (I->getParent() != L->getHeader())
+    return false;
 
   for (const Instruction &LI : *L->getHeader()) {
-    if (&LI == I) return true;
-    if (!isGuaranteedToTransferExecutionToSuccessor(&LI)) return false;
+    if (&LI == I)
+      return true;
+    if (!isGuaranteedToTransferExecutionToSuccessor(&LI))
+      return false;
   }
   llvm_unreachable("Instruction not contained in its own parent basic block.");
 }
@@ -7092,52 +7111,52 @@ bool llvm::propagatesPoison(const Use &PoisonOp) {
 void llvm::getGuaranteedWellDefinedOps(
     const Instruction *I, SmallVectorImpl<const Value *> &Operands) {
   switch (I->getOpcode()) {
-    case Instruction::Store:
-      Operands.push_back(cast<StoreInst>(I)->getPointerOperand());
-      break;
+  case Instruction::Store:
+    Operands.push_back(cast<StoreInst>(I)->getPointerOperand());
+    break;
 
-    case Instruction::Load:
-      Operands.push_back(cast<LoadInst>(I)->getPointerOperand());
-      break;
+  case Instruction::Load:
+    Operands.push_back(cast<LoadInst>(I)->getPointerOperand());
+    break;
 
-    // Since dereferenceable attribute imply noundef, atomic operations
-    // also implicitly have noundef pointers too
-    case Instruction::AtomicCmpXchg:
-      Operands.push_back(cast<AtomicCmpXchgInst>(I)->getPointerOperand());
-      break;
+  // Since the dereferenceable attribute implies noundef, atomic operations
+  // also implicitly have noundef pointers.
+  case Instruction::AtomicCmpXchg:
+    Operands.push_back(cast<AtomicCmpXchgInst>(I)->getPointerOperand());
+    break;
 
-    case Instruction::AtomicRMW:
-      Operands.push_back(cast<AtomicRMWInst>(I)->getPointerOperand());
-      break;
+  case Instruction::AtomicRMW:
+    Operands.push_back(cast<AtomicRMWInst>(I)->getPointerOperand());
+    break;
 
-    case Instruction::Call:
-    case Instruction::Invoke: {
-      const CallBase *CB = cast<CallBase>(I);
-      if (CB->isIndirectCall())
-        Operands.push_back(CB->getCalledOperand());
-      for (unsigned i = 0; i < CB->arg_size(); ++i) {
-        if (CB->paramHasAttr(i, Attribute::NoUndef) ||
-            CB->paramHasAttr(i, Attribute::Dereferenceable) ||
-            CB->paramHasAttr(i, Attribute::DereferenceableOrNull))
-          Operands.push_back(CB->getArgOperand(i));
-      }
-      break;
-    }
-    case Instruction::Ret:
-      if (I->getFunction()->hasRetAttribute(Attribute::NoUndef))
-        Operands.push_back(I->getOperand(0));
-      break;
-    case Instruction::Switch:
-      Operands.push_back(cast<SwitchInst>(I)->getCondition());
-      break;
-    case Instruction::Br: {
-      auto *BR = cast<BranchInst>(I);
-      if (BR->isConditional())
-        Operands.push_back(BR->getCondition());
-      break;
+  case Instruction::Call:
+  case Instruction::Invoke: {
+    const CallBase *CB = cast<CallBase>(I);
+    if (CB->isIndirectCall())
+      Operands.push_back(CB->getCalledOperand());
+    for (unsigned i = 0; i < CB->arg_size(); ++i) {
+      if (CB->paramHasAttr(i, Attribute::NoUndef) ||
+          CB->paramHasAttr(i, Attribute::Dereferenceable) ||
+          CB->paramHasAttr(i, Attribute::DereferenceableOrNull))
+        Operands.push_back(CB->getArgOperand(i));
     }
-    default:
-      break;
+    break;
+  }
+  case Instruction::Ret:
+    if (I->getFunction()->hasRetAttribute(Attribute::NoUndef))
+      Operands.push_back(I->getOperand(0));
+    break;
+  case Instruction::Switch:
+    Operands.push_back(cast<SwitchInst>(I)->getCondition());
+    break;
+  case Instruction::Br: {
+    auto *BR = cast<BranchInst>(I);
+    if (BR->isConditional())
+      Operands.push_back(BR->getCondition());
+    break;
+  }
+  default:
+    break;
   }
 }
 
@@ -7169,8 +7188,7 @@ bool llvm::mustTriggerUB(const Instruction *I,
   return false;
 }
 
-static bool programUndefinedIfUndefOrPoison(const Value *V,
-                                            bool PoisonOnly) {
+static bool programUndefinedIfUndefOrPoison(const Value *V, bool PoisonOnly) {
   // We currently only look for uses of values within the same basic
   // block, as that makes it easier to guarantee that the uses will be
   // executed given that Inst is executed.
@@ -7334,7 +7352,8 @@ static SelectPatternResult matchFastFloatClamp(CmpInst::Predicate Pred,
     Pred = CmpInst::getInversePredicate(Pred);
   }
 
-  // Assume success now. If there's no match, callers should not use these anyway.
+  // Assume success now. If there's no match, callers should not use these
+  // anyway.
   LHS = TrueVal;
   RHS = FalseVal;
 
@@ -7373,9 +7392,9 @@ static SelectPatternResult matchFastFloatClamp(CmpInst::Predicate Pred,
 
 /// Recognize variations of:
 ///   CLAMP(v,l,h) ==> ((v) < (l) ? (l) : ((v) > (h) ? (h) : (v)))
-static SelectPatternResult matchClamp(CmpInst::Predicate Pred,
-                                      Value *CmpLHS, Value *CmpRHS,
-                                      Value *TrueVal, Value *FalseVal) {
+static SelectPatternResult matchClamp(CmpInst::Predicate Pred, Value *CmpLHS,
+                                      Value *CmpRHS, Value *TrueVal,
+                                      Value *FalseVal) {
   // Swap the select operands and predicate to match the patterns below.
   if (CmpRHS != TrueVal) {
     Pred = ICmpInst::getSwappedPredicate(Pred);
@@ -7518,11 +7537,10 @@ static Value *getNotValue(Value *V) {
 }
 
 /// Match non-obvious integer minimum and maximum sequences.
-static SelectPatternResult matchMinMax(CmpInst::Predicate Pred,
-                                       Value *CmpLHS, Value *CmpRHS,
-                                       Value *TrueVal, Value *FalseVal,
-                                       Value *&LHS, Value *&RHS,
-                                       unsigned Depth) {
+static SelectPatternResult matchMinMax(CmpInst::Predicate Pred, Value *CmpLHS,
+                                       Value *CmpRHS, Value *TrueVal,
+                                       Value *FalseVal, Value *&LHS,
+                                       Value *&RHS, unsigned Depth) {
   // Assume success. If there's no match, callers should not use these anyway.
   LHS = TrueVal;
   RHS = FalseVal;
@@ -7540,11 +7558,16 @@ static SelectPatternResult matchMinMax(CmpInst::Predicate Pred,
   // (X < Y) ? ~X : ~Y ==> (~X > ~Y) ? ~X : ~Y ==> MAX(~X, ~Y)
   if (CmpLHS == getNotValue(TrueVal) && CmpRHS == getNotValue(FalseVal)) {
     switch (Pred) {
-    case CmpInst::ICMP_SGT: return {SPF_SMIN, SPNB_NA, false};
-    case CmpInst::ICMP_SLT: return {SPF_SMAX, SPNB_NA, false};
-    case CmpInst::ICMP_UGT: return {SPF_UMIN, SPNB_NA, false};
-    case CmpInst::ICMP_ULT: return {SPF_UMAX, SPNB_NA, false};
-    default: break;
+    case CmpInst::ICMP_SGT:
+      return {SPF_SMIN, SPNB_NA, false};
+    case CmpInst::ICMP_SLT:
+      return {SPF_SMAX, SPNB_NA, false};
+    case CmpInst::ICMP_UGT:
+      return {SPF_UMIN, SPNB_NA, false};
+    case CmpInst::ICMP_ULT:
+      return {SPF_UMAX, SPNB_NA, false};
+    default:
+      break;
     }
   }
 
@@ -7552,11 +7575,16 @@ static SelectPatternResult matchMinMax(CmpInst::Predicate Pred,
   // (X < Y) ? ~Y : ~X ==> (~X > ~Y) ? ~Y : ~X ==> MIN(~Y, ~X)
   if (CmpLHS == getNotValue(FalseVal) && CmpRHS == getNotValue(TrueVal)) {
     switch (Pred) {
-    case CmpInst::ICMP_SGT: return {SPF_SMAX, SPNB_NA, false};
-    case CmpInst::ICMP_SLT: return {SPF_SMIN, SPNB_NA, false};
-    case CmpInst::ICMP_UGT: return {SPF_UMAX, SPNB_NA, false};
-    case CmpInst::ICMP_ULT: return {SPF_UMIN, SPNB_NA, false};
-    default: break;
+    case CmpInst::ICMP_SGT:
+      return {SPF_SMAX, SPNB_NA, false};
+    case CmpInst::ICMP_SLT:
+      return {SPF_SMIN, SPNB_NA, false};
+    case CmpInst::ICMP_UGT:
+      return {SPF_UMAX, SPNB_NA, false};
+    case CmpInst::ICMP_ULT:
+      return {SPF_UMIN, SPNB_NA, false};
+    default:
+      break;
     }
   }
 
@@ -7603,17 +7631,16 @@ bool llvm::isKnownNegation(const Value *X, const Value *Y, bool NeedNSW) {
   // X = sub (A, B), Y = sub (B, A) || X = sub nsw (A, B), Y = sub nsw (B, A)
   Value *A, *B;
   return (!NeedNSW && (match(X, m_Sub(m_Value(A), m_Value(B))) &&
-                        match(Y, m_Sub(m_Specific(B), m_Specific(A))))) ||
+                       match(Y, m_Sub(m_Specific(B), m_Specific(A))))) ||
          (NeedNSW && (match(X, m_NSWSub(m_Value(A), m_Value(B))) &&
-                       match(Y, m_NSWSub(m_Specific(B), m_Specific(A)))));
+                      match(Y, m_NSWSub(m_Specific(B), m_Specific(A)))));
 }
 
 static SelectPatternResult matchSelectPattern(CmpInst::Predicate Pred,
-                                              FastMathFlags FMF,
-                                              Value *CmpLHS, Value *CmpRHS,
-                                              Value *TrueVal, Value *FalseVal,
-                                              Value *&LHS, Value *&RHS,
-                                              unsigned Depth) {
+                                              FastMathFlags FMF, Value *CmpLHS,
+                                              Value *CmpRHS, Value *TrueVal,
+                                              Value *FalseVal, Value *&LHS,
+                                              Value *&RHS, unsigned Depth) {
   bool HasMismatchedZeros = false;
   if (CmpInst::isFPPredicate(Pred)) {
     // IEEE-754 ignores the sign of 0.0 in comparisons. So if the select has one
@@ -7649,14 +7676,19 @@ static SelectPatternResult matchSelectPattern(CmpInst::Predicate Pred,
   // Therefore, we behave conservatively and only proceed if at least one of the
   // operands is known to not be zero or if we don't care about signed zero.
   switch (Pred) {
-  default: break;
-  case CmpInst::FCMP_OGT: case CmpInst::FCMP_OLT:
-  case CmpInst::FCMP_UGT: case CmpInst::FCMP_ULT:
+  default:
+    break;
+  case CmpInst::FCMP_OGT:
+  case CmpInst::FCMP_OLT:
+  case CmpInst::FCMP_UGT:
+  case CmpInst::FCMP_ULT:
     if (!HasMismatchedZeros)
       break;
     [[fallthrough]];
-  case CmpInst::FCMP_OGE: case CmpInst::FCMP_OLE:
-  case CmpInst::FCMP_UGE: case CmpInst::FCMP_ULE:
+  case CmpInst::FCMP_OGE:
+  case CmpInst::FCMP_OLE:
+  case CmpInst::FCMP_UGE:
+  case CmpInst::FCMP_ULE:
     if (!FMF.noSignedZeros() && !isKnownNonZero(CmpLHS) &&
         !isKnownNonZero(CmpRHS))
       return {SPF_UNKNOWN, SPNB_NA, false};
@@ -7717,23 +7749,30 @@ static SelectPatternResult matchSelectPattern(CmpInst::Predicate Pred,
   // ([if]cmp X, Y) ? X : Y
   if (TrueVal == CmpLHS && FalseVal == CmpRHS) {
     switch (Pred) {
-    default: return {SPF_UNKNOWN, SPNB_NA, false}; // Equality.
+    default:
+      return {SPF_UNKNOWN, SPNB_NA, false}; // Equality.
     case ICmpInst::ICMP_UGT:
-    case ICmpInst::ICMP_UGE: return {SPF_UMAX, SPNB_NA, false};
+    case ICmpInst::ICMP_UGE:
+      return {SPF_UMAX, SPNB_NA, false};
     case ICmpInst::ICMP_SGT:
-    case ICmpInst::ICMP_SGE: return {SPF_SMAX, SPNB_NA, false};
+    case ICmpInst::ICMP_SGE:
+      return {SPF_SMAX, SPNB_NA, false};
     case ICmpInst::ICMP_ULT:
-    case ICmpInst::ICMP_ULE: return {SPF_UMIN, SPNB_NA, false};
+    case ICmpInst::ICMP_ULE:
+      return {SPF_UMIN, SPNB_NA, false};
     case ICmpInst::ICMP_SLT:
-    case ICmpInst::ICMP_SLE: return {SPF_SMIN, SPNB_NA, false};
+    case ICmpInst::ICMP_SLE:
+      return {SPF_SMIN, SPNB_NA, false};
     case FCmpInst::FCMP_UGT:
     case FCmpInst::FCMP_UGE:
     case FCmpInst::FCMP_OGT:
-    case FCmpInst::FCMP_OGE: return {SPF_FMAXNUM, NaNBehavior, Ordered};
+    case FCmpInst::FCMP_OGE:
+      return {SPF_FMAXNUM, NaNBehavior, Ordered};
     case FCmpInst::FCMP_ULT:
     case FCmpInst::FCMP_ULE:
     case FCmpInst::FCMP_OLT:
-    case FCmpInst::FCMP_OLE: return {SPF_FMINNUM, NaNBehavior, Ordered};
+    case FCmpInst::FCMP_OLE:
+      return {SPF_FMINNUM, NaNBehavior, Ordered};
     }
   }
 
@@ -7765,8 +7804,7 @@ static SelectPatternResult matchSelectPattern(CmpInst::Predicate Pred,
       // (-X <s 0) ? -X : X or (-X <s 1) ? -X : X --> NABS(X)
       if (Pred == ICmpInst::ICMP_SLT && match(CmpRHS, ZeroOrOne))
         return {SPF_NABS, SPNB_NA, false};
-    }
-    else if (match(FalseVal, MaybeSExtCmpLHS)) {
+    } else if (match(FalseVal, MaybeSExtCmpLHS)) {
       // Set the return values. If the compare uses the negated value (-X >s 0),
       // swap the return values because the negated value is always 'RHS'.
       LHS = FalseVal;
@@ -7787,7 +7825,8 @@ static SelectPatternResult matchSelectPattern(CmpInst::Predicate Pred,
   }
 
   if (CmpInst::isIntPredicate(Pred))
-    return matchMinMax(Pred, CmpLHS, CmpRHS, TrueVal, FalseVal, LHS, RHS, Depth);
+    return matchMinMax(Pred, CmpLHS, CmpRHS, TrueVal, FalseVal, LHS, RHS,
+                       Depth);
 
   // According to (IEEE 754-2008 5.3.1), minNum(0.0, -0.0) and similar
   // may return either -0.0 or 0.0, so fcmp/select pair has stricter
@@ -7914,10 +7953,12 @@ SelectPatternResult llvm::matchSelectPattern(Value *V, Value *&LHS, Value *&RHS,
     return {SPF_UNKNOWN, SPNB_NA, false};
 
   SelectInst *SI = dyn_cast<SelectInst>(V);
-  if (!SI) return {SPF_UNKNOWN, SPNB_NA, false};
+  if (!SI)
+    return {SPF_UNKNOWN, SPNB_NA, false};
 
   CmpInst *CmpI = dyn_cast<CmpInst>(SI->getCondition());
-  if (!CmpI) return {SPF_UNKNOWN, SPNB_NA, false};
+  if (!CmpI)
+    return {SPF_UNKNOWN, SPNB_NA, false};
 
   Value *TrueVal = SI->getTrueValue();
   Value *FalseVal = SI->getFalseValue();
@@ -7956,20 +7997,24 @@ SelectPatternResult llvm::matchDecomposedSelectPattern(
       // -0.0 because there is no corresponding integer value.
       if (*CastOp == Instruction::FPToSI || *CastOp == Instruction::FPToUI)
         FMF.setNoSignedZeros();
-      return ::matchSelectPattern(Pred, FMF, CmpLHS, CmpRHS,
-                                  C, cast<CastInst>(FalseVal)->getOperand(0),
-                                  LHS, RHS, Depth);
+      return ::matchSelectPattern(Pred, FMF, CmpLHS, CmpRHS, C,
+                                  cast<CastInst>(FalseVal)->getOperand(0), LHS,
+                                  RHS, Depth);
     }
   }
-  return ::matchSelectPattern(Pred, FMF, CmpLHS, CmpRHS, TrueVal, FalseVal,
-                              LHS, RHS, Depth);
+  return ::matchSelectPattern(Pred, FMF, CmpLHS, CmpRHS, TrueVal, FalseVal, LHS,
+                              RHS, Depth);
 }
 
 CmpInst::Predicate llvm::getMinMaxPred(SelectPatternFlavor SPF, bool Ordered) {
-  if (SPF == SPF_SMIN) return ICmpInst::ICMP_SLT;
-  if (SPF == SPF_UMIN) return ICmpInst::ICMP_ULT;
-  if (SPF == SPF_SMAX) return ICmpInst::ICMP_SGT;
-  if (SPF == SPF_UMAX) return ICmpInst::ICMP_UGT;
+  if (SPF == SPF_SMIN)
+    return ICmpInst::ICMP_SLT;
+  if (SPF == SPF_UMIN)
+    return ICmpInst::ICMP_ULT;
+  if (SPF == SPF_SMAX)
+    return ICmpInst::ICMP_SGT;
+  if (SPF == SPF_UMAX)
+    return ICmpInst::ICMP_UGT;
   if (SPF == SPF_FMINNUM)
     return Ordered ? FCmpInst::FCMP_OLT : FCmpInst::FCMP_ULT;
   if (SPF == SPF_FMAXNUM)
@@ -7978,36 +8023,54 @@ CmpInst::Predicate llvm::getMinMaxPred(SelectPatternFlavor SPF, bool Ordered) {
 }
 
 SelectPatternFlavor llvm::getInverseMinMaxFlavor(SelectPatternFlavor SPF) {
-  if (SPF == SPF_SMIN) return SPF_SMAX;
-  if (SPF == SPF_UMIN) return SPF_UMAX;
-  if (SPF == SPF_SMAX) return SPF_SMIN;
-  if (SPF == SPF_UMAX) return SPF_UMIN;
+  if (SPF == SPF_SMIN)
+    return SPF_SMAX;
+  if (SPF == SPF_UMIN)
+    return SPF_UMAX;
+  if (SPF == SPF_SMAX)
+    return SPF_SMIN;
+  if (SPF == SPF_UMAX)
+    return SPF_UMIN;
   llvm_unreachable("unhandled!");
 }
 
 Intrinsic::ID llvm::getInverseMinMaxIntrinsic(Intrinsic::ID MinMaxID) {
   switch (MinMaxID) {
-  case Intrinsic::smax: return Intrinsic::smin;
-  case Intrinsic::smin: return Intrinsic::smax;
-  case Intrinsic::umax: return Intrinsic::umin;
-  case Intrinsic::umin: return Intrinsic::umax;
+  case Intrinsic::smax:
+    return Intrinsic::smin;
+  case Intrinsic::smin:
+    return Intrinsic::smax;
+  case Intrinsic::umax:
+    return Intrinsic::umin;
+  case Intrinsic::umin:
+    return Intrinsic::umax;
  // Note that the next four intrinsics may produce the same result for the
  // original and inverted cases even if X != Y, because NaN is handled specially.
-  case Intrinsic::maximum: return Intrinsic::minimum;
-  case Intrinsic::minimum: return Intrinsic::maximum;
-  case Intrinsic::maxnum: return Intrinsic::minnum;
-  case Intrinsic::minnum: return Intrinsic::maxnum;
-  default: llvm_unreachable("Unexpected intrinsic");
+  case Intrinsic::maximum:
+    return Intrinsic::minimum;
+  case Intrinsic::minimum:
+    return Intrinsic::maximum;
+  case Intrinsic::maxnum:
+    return Intrinsic::minnum;
+  case Intrinsic::minnum:
+    return Intrinsic::maxnum;
+  default:
+    llvm_unreachable("Unexpected intrinsic");
   }
 }
 
 APInt llvm::getMinMaxLimit(SelectPatternFlavor SPF, unsigned BitWidth) {
   switch (SPF) {
-  case SPF_SMAX: return APInt::getSignedMaxValue(BitWidth);
-  case SPF_SMIN: return APInt::getSignedMinValue(BitWidth);
-  case SPF_UMAX: return APInt::getMaxValue(BitWidth);
-  case SPF_UMIN: return APInt::getMinValue(BitWidth);
-  default: llvm_unreachable("Unexpected flavor");
+  case SPF_SMAX:
+    return APInt::getSignedMaxValue(BitWidth);
+  case SPF_SMIN:
+    return APInt::getSignedMinValue(BitWidth);
+  case SPF_UMAX:
+    return APInt::getMaxValue(BitWidth);
+  case SPF_UMIN:
+    return APInt::getMinValue(BitWidth);
+  default:
+    llvm_unreachable("Unexpected flavor");
   }
 }
 
@@ -8150,8 +8213,8 @@ static bool isTruePredicate(CmpInst::Predicate Pred, const Value *LHS,
 
     // Match A to (X +_{nuw} CA) and B to (X +_{nuw} CB)
     auto MatchNUWAddsToSameValue = [&](const Value *A, const Value *B,
-                                       const Value *&X,
-                                       const APInt *&CA, const APInt *&CB) {
+                                       const Value *&X, const APInt *&CA,
+                                       const APInt *&CB) {
       if (match(A, m_NUWAdd(m_Value(X), m_APInt(CA))) &&
           match(B, m_NUWAdd(m_Specific(X), m_APInt(CB))))
         return true;
@@ -8222,8 +8285,9 @@ isImpliedCondOperands(CmpInst::Predicate Pred, const Value *ALHS,
 /// Return true if the operands of two compares (expanded as "L0 pred L1" and
 /// "R0 pred R1") match. AreSwappedOps is true when the operands match, but are
 /// swapped.
-static bool areMatchingOperands(const Value *L0, const Value *L1, const Value *R0,
-                           const Value *R1, bool &AreSwappedOps) {
+static bool areMatchingOperands(const Value *L0, const Value *L1,
+                                const Value *R0, const Value *R1,
+                                bool &AreSwappedOps) {
   bool AreMatchingOps = (L0 == R0 && L1 == R1);
   AreSwappedOps = (L0 == R1 && L1 == R0);
   return AreMatchingOps || AreSwappedOps;
@@ -8766,22 +8830,22 @@ static void setLimitsForSelectPattern(const SelectInst &SI, APInt &Lower,
     return;
 
   switch (R.Flavor) {
-    case SPF_UMIN:
-      Upper = *C + 1;
-      break;
-    case SPF_UMAX:
-      Lower = *C;
-      break;
-    case SPF_SMIN:
-      Lower = APInt::getSignedMinValue(BitWidth);
-      Upper = *C + 1;
-      break;
-    case SPF_SMAX:
-      Lower = *C;
-      Upper = APInt::getSignedMaxValue(BitWidth) + 1;
-      break;
-    default:
-      break;
+  case SPF_UMIN:
+    Upper = *C + 1;
+    break;
+  case SPF_UMAX:
+    Lower = *C;
+    break;
+  case SPF_SMIN:
+    Lower = APInt::getSignedMinValue(BitWidth);
+    Upper = *C + 1;
+    break;
+  case SPF_SMAX:
+    Lower = *C;
+    Upper = APInt::getSignedMaxValue(BitWidth) + 1;
+    break;
+  default:
+    break;
   }
 }
 
diff --git a/llvm/lib/Analysis/VectorUtils.cpp b/llvm/lib/Analysis/VectorUtils.cpp
index 13bb4e83a5b94d6..df1921ce6c7672d 100644
--- a/llvm/lib/Analysis/VectorUtils.cpp
+++ b/llvm/lib/Analysis/VectorUtils.cpp
@@ -43,7 +43,7 @@ static cl::opt<unsigned> MaxInterleaveGroupFactor(
 /// isVectorIntrinsicWithScalarOpAtArg).
 bool llvm::isTriviallyVectorizable(Intrinsic::ID ID) {
   switch (ID) {
-  case Intrinsic::abs:   // Begin integer bit-manipulation.
+  case Intrinsic::abs: // Begin integer bit-manipulation.
   case Intrinsic::bswap:
   case Intrinsic::bitreverse:
   case Intrinsic::ctpop:
@@ -201,7 +201,8 @@ Value *llvm::findScalarElement(Value *V, unsigned EltNo) {
 
   // Extract a value from a vector add operation with a constant zero.
   // TODO: Use getBinOpIdentity() to generalize this.
-  Value *Val; Constant *C;
+  Value *Val;
+  Constant *C;
   if (match(V, m_Add(m_Value(Val), m_Constant(C))))
     if (Constant *Elt = C->getAggregateElement(EltNo))
       if (Elt->isNullValue())
@@ -246,9 +247,8 @@ Value *llvm::getSplatValue(const Value *V) {
 
   // shuf (inselt ?, Splat, 0), ?, <0, undef, 0, ...>
   Value *Splat;
-  if (match(V,
-            m_Shuffle(m_InsertElt(m_Value(), m_Value(Splat), m_ZeroInt()),
-                      m_Value(), m_ZeroMask())))
+  if (match(V, m_Shuffle(m_InsertElt(m_Value(), m_Value(Splat), m_ZeroInt()),
+                         m_Value(), m_ZeroMask())))
     return Splat;
 
   return nullptr;
@@ -648,7 +648,8 @@ llvm::computeMinimumValueSizes(ArrayRef<BasicBlock *> Blocks, DemandedBits &DB,
     // We don't modify the types of PHIs. Reductions will already have been
     // truncated if possible, and inductions' sizes will have been chosen by
     // indvars.
-    // If we are required to shrink a PHI, abandon this entire equivalence class.
+    // If we are required to shrink a PHI, abandon this entire equivalence
+    // class.
     bool Abort = false;
     for (Value *M : llvm::make_range(ECs.member_begin(I), ECs.member_end()))
       if (isa<PHINode>(M) && MinBW < M->getType()->getScalarSizeInBits()) {
@@ -1031,7 +1032,7 @@ bool InterleavedAccessInfo::isStrided(int Stride) {
 
 void InterleavedAccessInfo::collectConstStrideAccesses(
     MapVector<Instruction *, StrideDescriptor> &AccessStrideInfo,
-    const DenseMap<Value*, const SCEV*> &Strides) {
+    const DenseMap<Value *, const SCEV *> &Strides) {
   auto &DL = TheLoop->getHeader()->getModule()->getDataLayout();
 
   // Since it's desired that the load/store instructions be maintained in
@@ -1062,13 +1063,13 @@ void InterleavedAccessInfo::collectConstStrideAccesses(
       // wrap around the address space we would do a memory access at nullptr
       // even without the transformation. The wrapping checks are therefore
       // deferred until after we've formed the interleaved groups.
-      int64_t Stride =
-        getPtrStride(PSE, ElementTy, Ptr, TheLoop, Strides,
-                     /*Assume=*/true, /*ShouldCheckWrap=*/false).value_or(0);
+      int64_t Stride = getPtrStride(PSE, ElementTy, Ptr, TheLoop, Strides,
+                                    /*Assume=*/true, /*ShouldCheckWrap=*/false)
+                           .value_or(0);
 
       const SCEV *Scev = replaceSymbolicStrideSCEV(PSE, Strides, Ptr);
-      AccessStrideInfo[&I] = StrideDescriptor(Stride, Scev, Size,
-                                              getLoadStoreAlignment(&I));
+      AccessStrideInfo[&I] =
+          StrideDescriptor(Stride, Scev, Size, getLoadStoreAlignment(&I));
     }
 }
 
@@ -1109,7 +1110,7 @@ void InterleavedAccessInfo::collectConstStrideAccesses(
 // with other accesses that may precede it in program order. Note that a
 // bottom-up order does not imply that WAW dependences should not be checked.
 void InterleavedAccessInfo::analyzeInterleaving(
-                                 bool EnablePredicatedInterleavedMemAccesses) {
+    bool EnablePredicatedInterleavedMemAccesses) {
   LLVM_DEBUG(dbgs() << "LV: Analyzing interleaved accesses...\n");
   const auto &Strides = LAI->getSymbolicStrides();
 
@@ -1151,8 +1152,8 @@ void InterleavedAccessInfo::analyzeInterleaving(
     // create a group for B, we continue with the bottom-up algorithm to ensure
     // we don't break any of B's dependences.
     InterleaveGroup<Instruction> *GroupB = nullptr;
-    if (isStrided(DesB.Stride) &&
-        (!isPredicated(B->getParent()) || EnablePredicatedInterleavedMemAccesses)) {
+    if (isStrided(DesB.Stride) && (!isPredicated(B->getParent()) ||
+                                   EnablePredicatedInterleavedMemAccesses)) {
       GroupB = getInterleaveGroup(B);
       if (!GroupB) {
         LLVM_DEBUG(dbgs() << "LV: Creating an interleave group with:" << *B
@@ -1284,8 +1285,8 @@ void InterleavedAccessInfo::analyzeInterleaving(
       if (DistanceToB % static_cast<int64_t>(DesB.Size))
         continue;
 
-      // All members of a predicated interleave-group must have the same predicate,
-      // and currently must reside in the same BB.
+      // All members of a predicated interleave-group must have the same
+      // predicate, and currently must reside in the same BB.
       BasicBlock *BlockA = A->getParent();
       BasicBlock *BlockB = B->getParent();
       if ((isPredicated(BlockA) || isPredicated(BlockB)) &&
@@ -1319,7 +1320,8 @@ void InterleavedAccessInfo::analyzeInterleaving(
     Value *MemberPtr = getLoadStorePointerOperand(Member);
     Type *AccessTy = getLoadStoreType(Member);
     if (getPtrStride(PSE, AccessTy, MemberPtr, TheLoop, Strides,
-                     /*Assume=*/false, /*ShouldCheckWrap=*/true).value_or(0))
+                     /*Assume=*/false, /*ShouldCheckWrap=*/true)
+            .value_or(0))
       return false;
     LLVM_DEBUG(dbgs() << "LV: Invalidate candidate interleaved group due to "
                       << FirstOrLast
@@ -1451,7 +1453,7 @@ void InterleaveGroup<Instruction>::addMetadata(Instruction *NewInst) const {
                  [](std::pair<int, Instruction *> p) { return p.second; });
   propagateMetadata(NewInst, VL);
 }
-}
+} // namespace llvm
 
 std::string VFABI::mangleTLIVectorName(StringRef VectorName,
                                        StringRef ScalarName, unsigned numArgs,
diff --git a/llvm/lib/Analysis/models/gen-inline-oz-test-model.py b/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
index 4898509ea544f56..6a0498a02a4b931 100644
--- a/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
+++ b/llvm/lib/Analysis/models/gen-inline-oz-test-model.py
@@ -28,11 +28,10 @@
 ]
 """
 
-
-# pylint: disable=g-complex-comprehension
+# pylint: disable=g-complex-comprehension
 def get_input_signature():
     """Returns the list of features for LLVM inlining."""
-    # int64 features
+    # int64 features
     inputs = [
         tf.TensorSpec(dtype=tf.int64, shape=(), name=key)
         for key in [
@@ -75,7 +74,7 @@ def get_input_signature():
         ]
     ]
 
-    # float32 features
+    # float32 features
     inputs.extend(
         [
             tf.TensorSpec(dtype=tf.float32, shape=(), name=key)
@@ -83,7 +82,7 @@ def get_input_signature():
         ]
     )
 
-    # int32 features
+    # int32 features
     inputs.extend(
         [tf.TensorSpec(dtype=tf.int32, shape=(), name=key) for key in ["step_type"]]
     )
diff --git a/llvm/lib/Analysis/models/gen-regalloc-eviction-test-model.py b/llvm/lib/Analysis/models/gen-regalloc-eviction-test-model.py
index 5af2fb2878b5ba3..6723a003c6d2f87 100644
--- a/llvm/lib/Analysis/models/gen-regalloc-eviction-test-model.py
+++ b/llvm/lib/Analysis/models/gen-regalloc-eviction-test-model.py
@@ -43,8 +43,8 @@ def get_output_spec_path(path):
 def build_mock_model(path):
     """Build and save the mock model with the given signature."""
     module = tf.Module()
-    # We have to set this useless variable in order for the TF C API to correctly
-    # intake it
+    # We have to set this useless variable in order for the TF C API to correctly
+    # intake it
     module.var = tf.Variable(0, dtype=tf.int64)
 
     def action(*inputs):
diff --git a/llvm/lib/Analysis/models/gen-regalloc-priority-test-model.py b/llvm/lib/Analysis/models/gen-regalloc-priority-test-model.py
index 889ddae48b1ffcd..0bb84f78bdee473 100644
--- a/llvm/lib/Analysis/models/gen-regalloc-priority-test-model.py
+++ b/llvm/lib/Analysis/models/gen-regalloc-priority-test-model.py
@@ -66,8 +66,8 @@ def get_output_spec_path(path):
 def build_mock_model(path):
     """Build and save the mock model with the given signature."""
     module = tf.Module()
-    # We have to set this useless variable in order for the TF C API to correctly
-    # intake it
+    # We have to set this useless variable in order for the TF C API to correctly
+    # intake it
     module.var = tf.Variable(0, dtype=tf.float32)
 
     def action(*inputs):
@@ -81,7 +81,7 @@ def action(*inputs):
         s2 = tf.reduce_sum(
             [tf.cast(inputs[0][key], tf.float32) for key in CONTEXT_FEATURE_LIST]
         )
-        # Add a large number so s won't be 0.
+        # Add a large number so s won't be 0.
         s = s1 + s2
         result = s + module.var
         return {POLICY_DECISION_LABEL: result}
diff --git a/llvm/lib/Analysis/models/interactive_host.py b/llvm/lib/Analysis/models/interactive_host.py
index f2c2b640e7f4f2a..9bbfb85937a8516 100644
--- a/llvm/lib/Analysis/models/interactive_host.py
+++ b/llvm/lib/Analysis/models/interactive_host.py
@@ -23,7 +23,7 @@
 def send(f: io.BufferedWriter, value: Union[int, float], spec: log_reader.TensorSpec):
     """Send the `value` - currently just a scalar - formatted as per `spec`."""
 
-    # just int64 for now
+    # just int64 for now
     assert spec.element_type == ctypes.c_int64
     to_send = ctypes.c_int64(int(value))
     assert f.write(bytes(to_send)) == ctypes.sizeof(spec.element_type) * math.prod(
diff --git a/llvm/lib/Analysis/models/saved-model-to-tflite.py b/llvm/lib/Analysis/models/saved-model-to-tflite.py
index 9c83718732945d3..97076a3965e954f 100644
--- a/llvm/lib/Analysis/models/saved-model-to-tflite.py
+++ b/llvm/lib/Analysis/models/saved-model-to-tflite.py
@@ -1,37 +1,39 @@
-"""Convert a saved model to tflite model.
-
-Usage: python3 saved-model-to-tflite.py <mlgo saved_model_dir> <tflite dest_dir>
-
-The <tflite dest_dir> will contain:
-  model.tflite: this is the converted saved model
-  output_spec.json: the output spec, copied from the saved_model dir.
-"""
-
-import tensorflow as tf
-import os
-import sys
-from tf_agents.policies import greedy_policy
-
-
-def main(argv):
-    assert len(argv) == 3
-    sm_dir = argv[1]
-    tfl_dir = argv[2]
-    tf.io.gfile.makedirs(tfl_dir)
-    tfl_path = os.path.join(tfl_dir, "model.tflite")
-    converter = tf.lite.TFLiteConverter.from_saved_model(sm_dir)
-    converter.target_spec.supported_ops = [
-        tf.lite.OpsSet.TFLITE_BUILTINS,
-    ]
-    tfl_model = converter.convert()
-    with tf.io.gfile.GFile(tfl_path, "wb") as f:
-        f.write(tfl_model)
-
-    json_file = "output_spec.json"
-    src_json = os.path.join(sm_dir, json_file)
-    if tf.io.gfile.exists(src_json):
-        tf.io.gfile.copy(src_json, os.path.join(tfl_dir, json_file))
-
-
-if __name__ == "__main__":
-    main(sys.argv)
+"""Convert a saved model to tflite model.
+
+Usage: python3 saved-model-to-tflite.py <mlgo saved_model_dir> <tflite dest_dir>
+
+The <tflite dest_dir> will contain:
+  model.tflite: this is the converted saved model
+  output_spec.json: the output spec, copied from the saved_model dir.
+"""
+
+import tensorflow as tf
+import os
+import sys
+from tf_agents.policies import greedy_policy
+
+
+def main(argv):
+    assert len(argv) == 3
+    sm_dir = argv[1]
+    tfl_dir = argv[2]
+    tf.io.gfile.makedirs(tfl_dir)
+    tfl_path = os.path.join(tfl_dir, "model.tflite")
+    converter = tf.lite.TFLiteConverter.from_saved_model(sm_dir)
+    converter.target_spec.supported_ops = [
+        tf.lite.OpsSet.TFLITE_BUILTINS,
+    ]
+    tfl_model = converter.convert()
+    with tf.io.gfile.GFile(tfl_path, "wb") as f:
+        f.write(tfl_model)
+
+    json_file = "output_spec.json"
+    src_json = os.path.join(sm_dir, json_file)
+    if tf.io.gfile.exists(src_json):
+        tf.io.gfile.copy(src_json, os.path.join(tfl_dir, json_file))
+
+
+if __name__ == "__main__":
+    main(sys.argv)
diff --git a/llvm/lib/AsmParser/CMakeLists.txt b/llvm/lib/AsmParser/CMakeLists.txt
index b0ed82729c7af12..1d9e3ee30886ea1 100644
--- a/llvm/lib/AsmParser/CMakeLists.txt
+++ b/llvm/lib/AsmParser/CMakeLists.txt
@@ -1,17 +1,10 @@
-# AsmParser
-add_llvm_component_library(LLVMAsmParser
-  LLLexer.cpp
-  LLParser.cpp
-  Parser.cpp
+# AsmParser
+add_llvm_component_library(LLVMAsmParser
+  LLLexer.cpp
+  LLParser.cpp
+  Parser.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ASMParser
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ASMParser
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Core
-  Support
-  )
+  LINK_COMPONENTS
+  BinaryFormat
+  Core
+  Support
+  )
diff --git a/llvm/lib/AsmParser/LLLexer.cpp b/llvm/lib/AsmParser/LLLexer.cpp
index 466bdebc001f589..0633a86f0885177 100644
--- a/llvm/lib/AsmParser/LLLexer.cpp
+++ b/llvm/lib/AsmParser/LLLexer.cpp
@@ -47,8 +47,8 @@ uint64_t LLLexer::atoull(const char *Buffer, const char *End) {
   for (; Buffer != End; Buffer++) {
     uint64_t OldRes = Result;
     Result *= 10;
-    Result += *Buffer-'0';
-    if (Result < OldRes) {  // Uh, oh, overflow detected!!!
+    Result += *Buffer - '0';
+    if (Result < OldRes) { // Uh, oh, overflow detected!!!
       Error("constant bigger than 64 bits detected!");
       return 0;
     }
@@ -63,7 +63,7 @@ uint64_t LLLexer::HexIntToVal(const char *Buffer, const char *End) {
     Result *= 16;
     Result += hexDigitValue(*Buffer);
 
-    if (Result < OldRes) {   // Uh, oh, overflow detected!!!
+    if (Result < OldRes) { // Uh, oh, overflow detected!!!
       Error("constant bigger than 64 bits detected!");
       return 0;
     }
@@ -93,9 +93,9 @@ void LLLexer::HexToIntPair(const char *Buffer, const char *End,
 /// FP80HexToIntPair - translate an 80 bit FP80 number (20 hexits) into
 /// { low64, high16 } as usual for an APInt.
 void LLLexer::FP80HexToIntPair(const char *Buffer, const char *End,
-                           uint64_t Pair[2]) {
+                               uint64_t Pair[2]) {
   Pair[1] = 0;
-  for (int i=0; i<4 && Buffer != End; i++, Buffer++) {
+  for (int i = 0; i < 4 && Buffer != End; i++, Buffer++) {
     assert(Buffer != End);
     Pair[1] *= 16;
     Pair[1] += hexDigitValue(*Buffer);
@@ -112,20 +112,21 @@ void LLLexer::FP80HexToIntPair(const char *Buffer, const char *End,
 // UnEscapeLexed - Run through the specified buffer and change \xx codes to the
 // appropriate character.
 static void UnEscapeLexed(std::string &Str) {
-  if (Str.empty()) return;
+  if (Str.empty())
+    return;
 
-  char *Buffer = &Str[0], *EndBuffer = Buffer+Str.size();
+  char *Buffer = &Str[0], *EndBuffer = Buffer + Str.size();
   char *BOut = Buffer;
-  for (char *BIn = Buffer; BIn != EndBuffer; ) {
+  for (char *BIn = Buffer; BIn != EndBuffer;) {
     if (BIn[0] == '\\') {
-      if (BIn < EndBuffer-1 && BIn[1] == '\\') {
+      if (BIn < EndBuffer - 1 && BIn[1] == '\\') {
         *BOut++ = '\\'; // Two \ becomes one
         BIn += 2;
-      } else if (BIn < EndBuffer-2 &&
+      } else if (BIn < EndBuffer - 2 &&
                  isxdigit(static_cast<unsigned char>(BIn[1])) &&
                  isxdigit(static_cast<unsigned char>(BIn[2]))) {
         *BOut = hexDigitValue(BIn[1]) * 16 + hexDigitValue(BIn[2]);
-        BIn += 3;                           // Skip over handled chars
+        BIn += 3; // Skip over handled chars
         ++BOut;
       } else {
         *BOut++ = *BIn++;
@@ -134,7 +135,7 @@ static void UnEscapeLexed(std::string &Str) {
       *BOut++ = *BIn++;
     }
   }
-  Str.resize(BOut-Buffer);
+  Str.resize(BOut - Buffer);
 }
 
 /// isLabelChar - Return true for [-a-zA-Z$._0-9].
@@ -146,8 +147,10 @@ static bool isLabelChar(char C) {
 /// isLabelTail - Return true if this pointer points to a valid end of a label.
 static const char *isLabelTail(const char *CurPtr) {
   while (true) {
-    if (CurPtr[0] == ':') return CurPtr+1;
-    if (!isLabelChar(CurPtr[0])) return nullptr;
+    if (CurPtr[0] == ':')
+      return CurPtr + 1;
+    if (!isLabelChar(CurPtr[0]))
+      return nullptr;
     ++CurPtr;
   }
 }
@@ -165,15 +168,16 @@ LLLexer::LLLexer(StringRef StartBuf, SourceMgr &SM, SMDiagnostic &Err,
 int LLLexer::getNextChar() {
   char CurChar = *CurPtr++;
   switch (CurChar) {
-  default: return (unsigned char)CurChar;
+  default:
+    return (unsigned char)CurChar;
   case 0:
     // A nul character in the stream is either the end of the current buffer or
     // a random nul in the file.  Disambiguate that here.
-    if (CurPtr-1 != CurBuf.end())
-      return 0;  // Just whitespace.
+    if (CurPtr - 1 != CurBuf.end())
+      return 0; // Just whitespace.
 
     // Otherwise, return end of file.
-    --CurPtr;  // Another call to lex will return EOF again.
+    --CurPtr; // Another call to lex will return EOF again.
     return EOF;
   }
 }
@@ -190,7 +194,8 @@ lltok::Kind LLLexer::LexToken() {
         return LexIdentifier();
 
       return lltok::Error;
-    case EOF: return lltok::Eof;
+    case EOF:
+      return lltok::Eof;
     case 0:
     case ' ':
     case '\t':
@@ -198,15 +203,20 @@ lltok::Kind LLLexer::LexToken() {
     case '\r':
       // Ignore whitespace.
       continue;
-    case '+': return LexPositive();
-    case '@': return LexAt();
-    case '$': return LexDollar();
-    case '%': return LexPercent();
-    case '"': return LexQuote();
+    case '+':
+      return LexPositive();
+    case '@':
+      return LexAt();
+    case '$':
+      return LexDollar();
+    case '%':
+      return LexPercent();
+    case '"':
+      return LexQuote();
     case '.':
       if (const char *Ptr = isLabelTail(CurPtr)) {
         CurPtr = Ptr;
-        StrVal.assign(TokStart, CurPtr-1);
+        StrVal.assign(TokStart, CurPtr - 1);
         return lltok::LabelStr;
       }
       if (CurPtr[0] == '.' && CurPtr[1] == '.') {
@@ -217,28 +227,50 @@ lltok::Kind LLLexer::LexToken() {
     case ';':
       SkipLineComment();
       continue;
-    case '!': return LexExclaim();
+    case '!':
+      return LexExclaim();
     case '^':
       return LexCaret();
     case ':':
       return lltok::colon;
-    case '#': return LexHash();
-    case '0': case '1': case '2': case '3': case '4':
-    case '5': case '6': case '7': case '8': case '9':
+    case '#':
+      return LexHash();
+    case '0':
+    case '1':
+    case '2':
+    case '3':
+    case '4':
+    case '5':
+    case '6':
+    case '7':
+    case '8':
+    case '9':
     case '-':
       return LexDigitOrNegative();
-    case '=': return lltok::equal;
-    case '[': return lltok::lsquare;
-    case ']': return lltok::rsquare;
-    case '{': return lltok::lbrace;
-    case '}': return lltok::rbrace;
-    case '<': return lltok::less;
-    case '>': return lltok::greater;
-    case '(': return lltok::lparen;
-    case ')': return lltok::rparen;
-    case ',': return lltok::comma;
-    case '*': return lltok::star;
-    case '|': return lltok::bar;
+    case '=':
+      return lltok::equal;
+    case '[':
+      return lltok::lsquare;
+    case ']':
+      return lltok::rsquare;
+    case '{':
+      return lltok::lbrace;
+    case '}':
+      return lltok::rbrace;
+    case '<':
+      return lltok::less;
+    case '>':
+      return lltok::greater;
+    case '(':
+      return lltok::lparen;
+    case ')':
+      return lltok::rparen;
+    case ',':
+      return lltok::comma;
+    case '*':
+      return lltok::star;
+    case '|':
+      return lltok::bar;
     }
   }
 }
@@ -306,7 +338,7 @@ lltok::Kind LLLexer::ReadString(lltok::Kind kind) {
       return lltok::Error;
     }
     if (CurChar == '"') {
-      StrVal.assign(Start, CurPtr-1);
+      StrVal.assign(Start, CurPtr - 1);
       UnEscapeLexed(StrVal);
       return kind;
     }
@@ -316,13 +348,11 @@ lltok::Kind LLLexer::ReadString(lltok::Kind kind) {
 /// ReadVarName - Read the rest of a token containing a variable name.
 bool LLLexer::ReadVarName() {
   const char *NameStart = CurPtr;
-  if (isalpha(static_cast<unsigned char>(CurPtr[0])) ||
-      CurPtr[0] == '-' || CurPtr[0] == '$' ||
-      CurPtr[0] == '.' || CurPtr[0] == '_') {
+  if (isalpha(static_cast<unsigned char>(CurPtr[0])) || CurPtr[0] == '-' ||
+      CurPtr[0] == '$' || CurPtr[0] == '.' || CurPtr[0] == '_') {
     ++CurPtr;
-    while (isalnum(static_cast<unsigned char>(CurPtr[0])) ||
-           CurPtr[0] == '-' || CurPtr[0] == '$' ||
-           CurPtr[0] == '.' || CurPtr[0] == '_')
+    while (isalnum(static_cast<unsigned char>(CurPtr[0])) || CurPtr[0] == '-' ||
+           CurPtr[0] == '$' || CurPtr[0] == '.' || CurPtr[0] == '_')
       ++CurPtr;
 
     StrVal.assign(NameStart, CurPtr);
@@ -360,7 +390,7 @@ lltok::Kind LLLexer::LexVar(lltok::Kind Var, lltok::Kind VarID) {
         return lltok::Error;
       }
       if (CurChar == '"') {
-        StrVal.assign(TokStart+2, CurPtr-1);
+        StrVal.assign(TokStart + 2, CurPtr - 1);
         UnEscapeLexed(StrVal);
         if (StringRef(StrVal).find_first_of(0) != StringRef::npos) {
           Error("Null bytes are not allowed in names");
@@ -413,16 +443,16 @@ lltok::Kind LLLexer::LexQuote() {
 ///    !
 lltok::Kind LLLexer::LexExclaim() {
   // Lex a metadata name as a MetadataVar.
-  if (isalpha(static_cast<unsigned char>(CurPtr[0])) ||
-      CurPtr[0] == '-' || CurPtr[0] == '$' ||
-      CurPtr[0] == '.' || CurPtr[0] == '_' || CurPtr[0] == '\\') {
+  if (isalpha(static_cast<unsigned char>(CurPtr[0])) || CurPtr[0] == '-' ||
+      CurPtr[0] == '$' || CurPtr[0] == '.' || CurPtr[0] == '_' ||
+      CurPtr[0] == '\\') {
     ++CurPtr;
-    while (isalnum(static_cast<unsigned char>(CurPtr[0])) ||
-           CurPtr[0] == '-' || CurPtr[0] == '$' ||
-           CurPtr[0] == '.' || CurPtr[0] == '_' || CurPtr[0] == '\\')
+    while (isalnum(static_cast<unsigned char>(CurPtr[0])) || CurPtr[0] == '-' ||
+           CurPtr[0] == '$' || CurPtr[0] == '.' || CurPtr[0] == '_' ||
+           CurPtr[0] == '\\')
       ++CurPtr;
 
-    StrVal.assign(TokStart+1, CurPtr);   // Skip !
+    StrVal.assign(TokStart + 1, CurPtr); // Skip !
     UnEscapeLexed(StrVal);
     return lltok::MetadataVar;
   }
@@ -465,13 +495,14 @@ lltok::Kind LLLexer::LexIdentifier() {
   // If we stopped due to a colon, unless we were directed to ignore it,
   // this really is a label.
   if (!IgnoreColonInIdentifiers && *CurPtr == ':') {
-    StrVal.assign(StartChar-1, CurPtr++);
+    StrVal.assign(StartChar - 1, CurPtr++);
     return lltok::LabelStr;
   }
 
   // Otherwise, this wasn't a label.  If this was valid as an integer type,
   // return it.
-  if (!IntEnd) IntEnd = CurPtr;
+  if (!IntEnd)
+    IntEnd = CurPtr;
   if (IntEnd != StartChar) {
     CurPtr = IntEnd;
     uint64_t NumBits = atoull(StartChar, CurPtr);
@@ -485,7 +516,8 @@ lltok::Kind LLLexer::LexIdentifier() {
   }
 
   // Otherwise, this was a letter sequence.  See which keyword this is.
-  if (!KeywordEnd) KeywordEnd = CurPtr;
+  if (!KeywordEnd)
+    KeywordEnd = CurPtr;
   CurPtr = KeywordEnd;
   --StartChar;
   StringRef Keyword(StartChar, CurPtr - StartChar);
@@ -496,9 +528,12 @@ lltok::Kind LLLexer::LexIdentifier() {
       return lltok::kw_##STR;                                                  \
   } while (false)
 
-  KEYWORD(true);    KEYWORD(false);
-  KEYWORD(declare); KEYWORD(define);
-  KEYWORD(global);  KEYWORD(constant);
+  KEYWORD(true);
+  KEYWORD(false);
+  KEYWORD(declare);
+  KEYWORD(define);
+  KEYWORD(global);
+  KEYWORD(constant);
 
   KEYWORD(dso_local);
   KEYWORD(dso_preemptable);
@@ -641,8 +676,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(async);
 
 #define GET_ATTR_NAMES
-#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME) \
-  KEYWORD(DISPLAY_NAME);
+#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME) KEYWORD(DISPLAY_NAME);
 #include "llvm/IR/Attributes.inc"
 
   KEYWORD(read);
@@ -684,13 +718,35 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(nodeduplicate);
   KEYWORD(samesize);
 
-  KEYWORD(eq); KEYWORD(ne); KEYWORD(slt); KEYWORD(sgt); KEYWORD(sle);
-  KEYWORD(sge); KEYWORD(ult); KEYWORD(ugt); KEYWORD(ule); KEYWORD(uge);
-  KEYWORD(oeq); KEYWORD(one); KEYWORD(olt); KEYWORD(ogt); KEYWORD(ole);
-  KEYWORD(oge); KEYWORD(ord); KEYWORD(uno); KEYWORD(ueq); KEYWORD(une);
-
-  KEYWORD(xchg); KEYWORD(nand); KEYWORD(max); KEYWORD(min); KEYWORD(umax);
-  KEYWORD(umin); KEYWORD(fmax); KEYWORD(fmin);
+  KEYWORD(eq);
+  KEYWORD(ne);
+  KEYWORD(slt);
+  KEYWORD(sgt);
+  KEYWORD(sle);
+  KEYWORD(sge);
+  KEYWORD(ult);
+  KEYWORD(ugt);
+  KEYWORD(ule);
+  KEYWORD(uge);
+  KEYWORD(oeq);
+  KEYWORD(one);
+  KEYWORD(olt);
+  KEYWORD(ogt);
+  KEYWORD(ole);
+  KEYWORD(oge);
+  KEYWORD(ord);
+  KEYWORD(uno);
+  KEYWORD(ueq);
+  KEYWORD(une);
+
+  KEYWORD(xchg);
+  KEYWORD(nand);
+  KEYWORD(max);
+  KEYWORD(min);
+  KEYWORD(umax);
+  KEYWORD(umin);
+  KEYWORD(fmax);
+  KEYWORD(fmin);
   KEYWORD(uinc_wrap);
   KEYWORD(udec_wrap);
 
@@ -812,20 +868,20 @@ lltok::Kind LLLexer::LexIdentifier() {
     }                                                                          \
   } while (false)
 
-  TYPEKEYWORD("void",      Type::getVoidTy(Context));
-  TYPEKEYWORD("half",      Type::getHalfTy(Context));
-  TYPEKEYWORD("bfloat",    Type::getBFloatTy(Context));
-  TYPEKEYWORD("float",     Type::getFloatTy(Context));
-  TYPEKEYWORD("double",    Type::getDoubleTy(Context));
-  TYPEKEYWORD("x86_fp80",  Type::getX86_FP80Ty(Context));
-  TYPEKEYWORD("fp128",     Type::getFP128Ty(Context));
+  TYPEKEYWORD("void", Type::getVoidTy(Context));
+  TYPEKEYWORD("half", Type::getHalfTy(Context));
+  TYPEKEYWORD("bfloat", Type::getBFloatTy(Context));
+  TYPEKEYWORD("float", Type::getFloatTy(Context));
+  TYPEKEYWORD("double", Type::getDoubleTy(Context));
+  TYPEKEYWORD("x86_fp80", Type::getX86_FP80Ty(Context));
+  TYPEKEYWORD("fp128", Type::getFP128Ty(Context));
   TYPEKEYWORD("ppc_fp128", Type::getPPC_FP128Ty(Context));
-  TYPEKEYWORD("label",     Type::getLabelTy(Context));
-  TYPEKEYWORD("metadata",  Type::getMetadataTy(Context));
-  TYPEKEYWORD("x86_mmx",   Type::getX86_MMXTy(Context));
-  TYPEKEYWORD("x86_amx",   Type::getX86_AMXTy(Context));
-  TYPEKEYWORD("token",     Type::getTokenTy(Context));
-  TYPEKEYWORD("ptr",       PointerType::getUnqual(Context));
+  TYPEKEYWORD("label", Type::getLabelTy(Context));
+  TYPEKEYWORD("metadata", Type::getMetadataTy(Context));
+  TYPEKEYWORD("x86_mmx", Type::getX86_MMXTy(Context));
+  TYPEKEYWORD("x86_amx", Type::getX86_AMXTy(Context));
+  TYPEKEYWORD("token", Type::getTokenTy(Context));
+  TYPEKEYWORD("ptr", PointerType::getUnqual(Context));
 
 #undef TYPEKEYWORD
 
@@ -838,64 +894,76 @@ lltok::Kind LLLexer::LexIdentifier() {
     }                                                                          \
   } while (false)
 
-  INSTKEYWORD(fneg,  FNeg);
-
-  INSTKEYWORD(add,   Add);  INSTKEYWORD(fadd,   FAdd);
-  INSTKEYWORD(sub,   Sub);  INSTKEYWORD(fsub,   FSub);
-  INSTKEYWORD(mul,   Mul);  INSTKEYWORD(fmul,   FMul);
-  INSTKEYWORD(udiv,  UDiv); INSTKEYWORD(sdiv,  SDiv); INSTKEYWORD(fdiv,  FDiv);
-  INSTKEYWORD(urem,  URem); INSTKEYWORD(srem,  SRem); INSTKEYWORD(frem,  FRem);
-  INSTKEYWORD(shl,   Shl);  INSTKEYWORD(lshr,  LShr); INSTKEYWORD(ashr,  AShr);
-  INSTKEYWORD(and,   And);  INSTKEYWORD(or,    Or);   INSTKEYWORD(xor,   Xor);
-  INSTKEYWORD(icmp,  ICmp); INSTKEYWORD(fcmp,  FCmp);
-
-  INSTKEYWORD(phi,         PHI);
-  INSTKEYWORD(call,        Call);
-  INSTKEYWORD(trunc,       Trunc);
-  INSTKEYWORD(zext,        ZExt);
-  INSTKEYWORD(sext,        SExt);
-  INSTKEYWORD(fptrunc,     FPTrunc);
-  INSTKEYWORD(fpext,       FPExt);
-  INSTKEYWORD(uitofp,      UIToFP);
-  INSTKEYWORD(sitofp,      SIToFP);
-  INSTKEYWORD(fptoui,      FPToUI);
-  INSTKEYWORD(fptosi,      FPToSI);
-  INSTKEYWORD(inttoptr,    IntToPtr);
-  INSTKEYWORD(ptrtoint,    PtrToInt);
-  INSTKEYWORD(bitcast,     BitCast);
+  INSTKEYWORD(fneg, FNeg);
+
+  INSTKEYWORD(add, Add);
+  INSTKEYWORD(fadd, FAdd);
+  INSTKEYWORD(sub, Sub);
+  INSTKEYWORD(fsub, FSub);
+  INSTKEYWORD(mul, Mul);
+  INSTKEYWORD(fmul, FMul);
+  INSTKEYWORD(udiv, UDiv);
+  INSTKEYWORD(sdiv, SDiv);
+  INSTKEYWORD(fdiv, FDiv);
+  INSTKEYWORD(urem, URem);
+  INSTKEYWORD(srem, SRem);
+  INSTKEYWORD(frem, FRem);
+  INSTKEYWORD(shl, Shl);
+  INSTKEYWORD(lshr, LShr);
+  INSTKEYWORD(ashr, AShr);
+  INSTKEYWORD(and, And);
+  INSTKEYWORD(or, Or);
+  INSTKEYWORD(xor, Xor);
+  INSTKEYWORD(icmp, ICmp);
+  INSTKEYWORD(fcmp, FCmp);
+
+  INSTKEYWORD(phi, PHI);
+  INSTKEYWORD(call, Call);
+  INSTKEYWORD(trunc, Trunc);
+  INSTKEYWORD(zext, ZExt);
+  INSTKEYWORD(sext, SExt);
+  INSTKEYWORD(fptrunc, FPTrunc);
+  INSTKEYWORD(fpext, FPExt);
+  INSTKEYWORD(uitofp, UIToFP);
+  INSTKEYWORD(sitofp, SIToFP);
+  INSTKEYWORD(fptoui, FPToUI);
+  INSTKEYWORD(fptosi, FPToSI);
+  INSTKEYWORD(inttoptr, IntToPtr);
+  INSTKEYWORD(ptrtoint, PtrToInt);
+  INSTKEYWORD(bitcast, BitCast);
   INSTKEYWORD(addrspacecast, AddrSpaceCast);
-  INSTKEYWORD(select,      Select);
-  INSTKEYWORD(va_arg,      VAArg);
-  INSTKEYWORD(ret,         Ret);
-  INSTKEYWORD(br,          Br);
-  INSTKEYWORD(switch,      Switch);
-  INSTKEYWORD(indirectbr,  IndirectBr);
-  INSTKEYWORD(invoke,      Invoke);
-  INSTKEYWORD(resume,      Resume);
+  INSTKEYWORD(select, Select);
+  INSTKEYWORD(va_arg, VAArg);
+  INSTKEYWORD(ret, Ret);
+  INSTKEYWORD(br, Br);
+  INSTKEYWORD(switch, Switch);
+  INSTKEYWORD(indirectbr, IndirectBr);
+  INSTKEYWORD(invoke, Invoke);
+  INSTKEYWORD(resume, Resume);
   INSTKEYWORD(unreachable, Unreachable);
-  INSTKEYWORD(callbr,      CallBr);
-
-  INSTKEYWORD(alloca,      Alloca);
-  INSTKEYWORD(load,        Load);
-  INSTKEYWORD(store,       Store);
-  INSTKEYWORD(cmpxchg,     AtomicCmpXchg);
-  INSTKEYWORD(atomicrmw,   AtomicRMW);
-  INSTKEYWORD(fence,       Fence);
+  INSTKEYWORD(callbr, CallBr);
+
+  INSTKEYWORD(alloca, Alloca);
+  INSTKEYWORD(load, Load);
+  INSTKEYWORD(store, Store);
+  INSTKEYWORD(cmpxchg, AtomicCmpXchg);
+  INSTKEYWORD(atomicrmw, AtomicRMW);
+  INSTKEYWORD(fence, Fence);
   INSTKEYWORD(getelementptr, GetElementPtr);
 
   INSTKEYWORD(extractelement, ExtractElement);
-  INSTKEYWORD(insertelement,  InsertElement);
-  INSTKEYWORD(shufflevector,  ShuffleVector);
-  INSTKEYWORD(extractvalue,   ExtractValue);
-  INSTKEYWORD(insertvalue,    InsertValue);
-  INSTKEYWORD(landingpad,     LandingPad);
-  INSTKEYWORD(cleanupret,     CleanupRet);
-  INSTKEYWORD(catchret,       CatchRet);
-  INSTKEYWORD(catchswitch,  CatchSwitch);
-  INSTKEYWORD(catchpad,     CatchPad);
-  INSTKEYWORD(cleanuppad,   CleanupPad);
-
-  INSTKEYWORD(freeze,       Freeze);
+  INSTKEYWORD(insertelement, InsertElement);
+  INSTKEYWORD(shufflevector, ShuffleVector);
+  INSTKEYWORD(extractvalue, ExtractValue);
+  INSTKEYWORD(insertvalue, InsertValue);
+  INSTKEYWORD(landingpad, LandingPad);
+  INSTKEYWORD(cleanupret, CleanupRet);
+  INSTKEYWORD(catchret, CatchRet);
+  INSTKEYWORD(catchswitch, CatchSwitch);
+  INSTKEYWORD(catchpad, CatchPad);
+  INSTKEYWORD(cleanuppad, CleanupPad);
+
+  INSTKEYWORD(freeze, Freeze);
 
 #undef INSTKEYWORD
 
@@ -946,15 +1014,14 @@ lltok::Kind LLLexer::LexIdentifier() {
 
   // Check for [us]0x[0-9A-Fa-f]+ which are Hexadecimal constant generated by
   // the CFE to avoid forcing it to deal with 64-bit numbers.
-  if ((TokStart[0] == 'u' || TokStart[0] == 's') &&
-      TokStart[1] == '0' && TokStart[2] == 'x' &&
-      isxdigit(static_cast<unsigned char>(TokStart[3]))) {
-    int len = CurPtr-TokStart-3;
+  if ((TokStart[0] == 'u' || TokStart[0] == 's') && TokStart[1] == '0' &&
+      TokStart[2] == 'x' && isxdigit(static_cast<unsigned char>(TokStart[3]))) {
+    int len = CurPtr - TokStart - 3;
     uint32_t bits = len * 4;
     StringRef HexStr(TokStart + 3, len);
     if (!all_of(HexStr, isxdigit)) {
       // Bad token, return it as an error.
-      CurPtr = TokStart+3;
+      CurPtr = TokStart + 3;
       return lltok::Error;
     }
     APInt Tmp(bits, HexStr, 16);
@@ -967,12 +1034,12 @@ lltok::Kind LLLexer::LexIdentifier() {
 
   // If this is "cc1234", return this as just "cc".
   if (TokStart[0] == 'c' && TokStart[1] == 'c') {
-    CurPtr = TokStart+2;
+    CurPtr = TokStart + 2;
     return lltok::kw_cc;
   }
 
   // Finally, if this isn't known, return an error.
-  CurPtr = TokStart+1;
+  CurPtr = TokStart + 1;
   return lltok::Error;
 }
 
@@ -997,7 +1064,7 @@ lltok::Kind LLLexer::Lex0x() {
 
   if (!isxdigit(static_cast<unsigned char>(CurPtr[0]))) {
     // Bad token, return it as an error.
-    CurPtr = TokStart+1;
+    CurPtr = TokStart + 1;
     return lltok::Error;
   }
 
@@ -1015,25 +1082,26 @@ lltok::Kind LLLexer::Lex0x() {
 
   uint64_t Pair[2];
   switch (Kind) {
-  default: llvm_unreachable("Unknown kind!");
+  default:
+    llvm_unreachable("Unknown kind!");
   case 'K':
     // F80HexFPConstant - x87 long double in hexadecimal format (10 bytes)
-    FP80HexToIntPair(TokStart+3, CurPtr, Pair);
+    FP80HexToIntPair(TokStart + 3, CurPtr, Pair);
     APFloatVal = APFloat(APFloat::x87DoubleExtended(), APInt(80, Pair));
     return lltok::APFloat;
   case 'L':
     // F128HexFPConstant - IEEE 128-bit in hexadecimal format (16 bytes)
-    HexToIntPair(TokStart+3, CurPtr, Pair);
+    HexToIntPair(TokStart + 3, CurPtr, Pair);
     APFloatVal = APFloat(APFloat::IEEEquad(), APInt(128, Pair));
     return lltok::APFloat;
   case 'M':
     // PPC128HexFPConstant - PowerPC 128-bit in hexadecimal format (16 bytes)
-    HexToIntPair(TokStart+3, CurPtr, Pair);
+    HexToIntPair(TokStart + 3, CurPtr, Pair);
     APFloatVal = APFloat(APFloat::PPCDoubleDouble(), APInt(128, Pair));
     return lltok::APFloat;
   case 'H':
     APFloatVal = APFloat(APFloat::IEEEhalf(),
-                         APInt(16,HexIntToVal(TokStart+3, CurPtr)));
+                         APInt(16, HexIntToVal(TokStart + 3, CurPtr)));
     return lltok::APFloat;
   case 'R':
     // Brain floating point
@@ -1058,7 +1126,7 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
       !isdigit(static_cast<unsigned char>(CurPtr[0]))) {
     // Okay, this is not a number after the -, it's probably a label.
     if (const char *End = isLabelTail(CurPtr)) {
-      StrVal.assign(TokStart, End-1);
+      StrVal.assign(TokStart, End - 1);
       CurPtr = End;
       return lltok::LabelStr;
     }
@@ -1085,7 +1153,7 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
   // Check to see if this really is a string label, e.g. "-1:".
   if (isLabelChar(CurPtr[0]) || CurPtr[0] == ':') {
     if (const char *End = isLabelTail(CurPtr)) {
-      StrVal.assign(TokStart, End-1);
+      StrVal.assign(TokStart, End - 1);
       CurPtr = End;
       return lltok::LabelStr;
     }
@@ -1103,19 +1171,21 @@ lltok::Kind LLLexer::LexDigitOrNegative() {
   ++CurPtr;
 
   // Skip over [0-9]*([eE][-+]?[0-9]+)?
-  while (isdigit(static_cast<unsigned char>(CurPtr[0]))) ++CurPtr;
+  while (isdigit(static_cast<unsigned char>(CurPtr[0])))
+    ++CurPtr;
 
   if (CurPtr[0] == 'e' || CurPtr[0] == 'E') {
     if (isdigit(static_cast<unsigned char>(CurPtr[1])) ||
         ((CurPtr[1] == '-' || CurPtr[1] == '+') &&
-          isdigit(static_cast<unsigned char>(CurPtr[2])))) {
+         isdigit(static_cast<unsigned char>(CurPtr[2])))) {
       CurPtr += 2;
-      while (isdigit(static_cast<unsigned char>(CurPtr[0]))) ++CurPtr;
+      while (isdigit(static_cast<unsigned char>(CurPtr[0])))
+        ++CurPtr;
     }
   }
 
-  APFloatVal = APFloat(APFloat::IEEEdouble(),
-                       StringRef(TokStart, CurPtr - TokStart));
+  APFloatVal =
+      APFloat(APFloat::IEEEdouble(), StringRef(TokStart, CurPtr - TokStart));
   return lltok::APFloat;
 }
 
@@ -1133,25 +1203,27 @@ lltok::Kind LLLexer::LexPositive() {
 
   // At this point, we need a '.'.
   if (CurPtr[0] != '.') {
-    CurPtr = TokStart+1;
+    CurPtr = TokStart + 1;
     return lltok::Error;
   }
 
   ++CurPtr;
 
   // Skip over [0-9]*([eE][-+]?[0-9]+)?
-  while (isdigit(static_cast<unsigned char>(CurPtr[0]))) ++CurPtr;
+  while (isdigit(static_cast<unsigned char>(CurPtr[0])))
+    ++CurPtr;
 
   if (CurPtr[0] == 'e' || CurPtr[0] == 'E') {
     if (isdigit(static_cast<unsigned char>(CurPtr[1])) ||
         ((CurPtr[1] == '-' || CurPtr[1] == '+') &&
-        isdigit(static_cast<unsigned char>(CurPtr[2])))) {
+         isdigit(static_cast<unsigned char>(CurPtr[2])))) {
       CurPtr += 2;
-      while (isdigit(static_cast<unsigned char>(CurPtr[0]))) ++CurPtr;
+      while (isdigit(static_cast<unsigned char>(CurPtr[0])))
+        ++CurPtr;
     }
   }
 
-  APFloatVal = APFloat(APFloat::IEEEdouble(),
-                       StringRef(TokStart, CurPtr - TokStart));
+  APFloatVal =
+      APFloat(APFloat::IEEEdouble(), StringRef(TokStart, CurPtr - TokStart));
   return lltok::APFloat;
 }
diff --git a/llvm/lib/AsmParser/LLParser.cpp b/llvm/lib/AsmParser/LLParser.cpp
index f1f0cdf746ee12a..51436667d304926 100644
--- a/llvm/lib/AsmParser/LLParser.cpp
+++ b/llvm/lib/AsmParser/LLParser.cpp
@@ -13,8 +13,8 @@
 #include "llvm/AsmParser/LLParser.h"
 #include "llvm/ADT/APSInt.h"
 #include "llvm/ADT/DenseMap.h"
-#include "llvm/ADT/ScopeExit.h"
 #include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/ScopeExit.h"
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/AsmParser/LLToken.h"
 #include "llvm/AsmParser/SlotMapping.h"
@@ -180,7 +180,7 @@ bool LLParser::validateEndOfModule(bool UpgradeDebugInfo) {
     } else if (auto *GV = dyn_cast<GlobalVariable>(V)) {
       AttrBuilder Attrs(M->getContext(), GV->getAttributes());
       Attrs.merge(B);
-      GV->setAttributes(AttributeSet::get(Context,Attrs));
+      GV->setAttributes(AttributeSet::get(Context, Attrs));
     } else {
       llvm_unreachable("invalid object with forward attribute group reference");
     }
@@ -235,8 +235,9 @@ bool LLParser::validateEndOfModule(bool UpgradeDebugInfo) {
       return error(NT.second.second,
                    "use of undefined type '%" + Twine(NT.first) + "'");
 
-  for (StringMap<std::pair<Type*, LocTy> >::iterator I =
-       NamedTypes.begin(), E = NamedTypes.end(); I != E; ++I)
+  for (StringMap<std::pair<Type *, LocTy>>::iterator I = NamedTypes.begin(),
+                                                     E = NamedTypes.end();
+       I != E; ++I)
     if (I->second.second.isValid())
       return error(I->second.second,
                    "use of undefined type named '" + I->getKey() + "'");
@@ -390,7 +391,8 @@ bool LLParser::parseTopLevelEntities() {
     switch (Lex.getKind()) {
     default:
       return tokError("expected top-level entity");
-    case lltok::Eof: return false;
+    case lltok::Eof:
+      return false;
     case lltok::kw_declare:
       if (parseDeclare())
         return true;
@@ -419,7 +421,10 @@ bool LLParser::parseTopLevelEntities() {
       if (parseNamedGlobal())
         return true;
       break;
-    case lltok::ComdatVar:  if (parseComdat()) return true; break;
+    case lltok::ComdatVar:
+      if (parseComdat())
+        return true;
+      break;
     case lltok::exclaim:
       if (parseStandaloneMetadata())
         return true;
@@ -520,7 +525,7 @@ bool LLParser::parseUnnamedType() {
     return true;
 
   if (!isa<StructType>(Result)) {
-    std::pair<Type*, LocTy> &Entry = NumberedTypes[TypeID];
+    std::pair<Type *, LocTy> &Entry = NumberedTypes[TypeID];
     if (Entry.first)
       return error(TypeLoc, "non-struct types may not be recursive");
     Entry.first = Result;
@@ -535,7 +540,7 @@ bool LLParser::parseUnnamedType() {
 bool LLParser::parseNamedType() {
   std::string Name = Lex.getStrVal();
   LocTy NameLoc = Lex.getLoc();
-  Lex.Lex();  // eat LocalVar.
+  Lex.Lex(); // eat LocalVar.
 
   if (parseToken(lltok::equal, "expected '=' after name") ||
       parseToken(lltok::kw_type, "expected 'type' after name"))
@@ -546,7 +551,7 @@ bool LLParser::parseNamedType() {
     return true;
 
   if (!isa<StructType>(Result)) {
-    std::pair<Type*, LocTy> &Entry = NamedTypes[Name];
+    std::pair<Type *, LocTy> &Entry = NamedTypes[Name];
     if (Entry.first)
       return error(NameLoc, "non-struct types may not be recursive");
     Entry.first = Result;
@@ -964,7 +969,8 @@ static bool isValidVisibilityForLinkage(unsigned V, unsigned L) {
 }
 static bool isValidDLLStorageClassForLinkage(unsigned S, unsigned L) {
   return !GlobalValue::isLocalLinkage((GlobalValue::LinkageTypes)L) ||
-         (GlobalValue::DLLStorageClassTypes)S == GlobalValue::DefaultStorageClass;
+         (GlobalValue::DLLStorageClassTypes)S ==
+             GlobalValue::DefaultStorageClass;
 }
 
 // If there was an explicit dso_local, update GV. In the absence of an explicit
@@ -1002,9 +1008,9 @@ bool LLParser::parseAliasOrIFunc(const std::string &Name, LocTy NameLoc,
     llvm_unreachable("Not an alias or ifunc!");
   Lex.Lex();
 
-  GlobalValue::LinkageTypes Linkage = (GlobalValue::LinkageTypes) L;
+  GlobalValue::LinkageTypes Linkage = (GlobalValue::LinkageTypes)L;
 
-  if(IsAlias && !GlobalAlias::isValidLinkage(Linkage))
+  if (IsAlias && !GlobalAlias::isValidLinkage(Linkage))
     return error(NameLoc, "invalid linkage type for alias");
 
   if (!isValidVisibilityForLinkage(Visibility, L))
@@ -1208,9 +1214,8 @@ bool LLParser::parseGlobal(const std::string &Name, LocTy NameLoc,
   // If the linkage is specified and is external, then no initializer is
   // present.
   Constant *Init = nullptr;
-  if (!HasLinkage ||
-      !GlobalValue::isValidDeclarationLinkage(
-          (GlobalValue::LinkageTypes)Linkage)) {
+  if (!HasLinkage || !GlobalValue::isValidDeclarationLinkage(
+                         (GlobalValue::LinkageTypes)Linkage)) {
     if (parseGlobalValue(Ty, Init))
       return true;
   }
@@ -1352,8 +1357,8 @@ bool LLParser::parseUnnamedAttrGrp() {
 static Attribute::AttrKind tokenToAttribute(lltok::Kind Kind) {
   switch (Kind) {
 #define GET_ATTR_NAMES
-#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME) \
-  case lltok::kw_##DISPLAY_NAME: \
+#define ATTRIBUTE_ENUM(ENUM_NAME, DISPLAY_NAME)                                \
+  case lltok::kw_##DISPLAY_NAME:                                               \
     return Attribute::ENUM_NAME;
 #include "llvm/IR/Attributes.inc"
   default:
@@ -1596,7 +1601,7 @@ GlobalValue *LLParser::getGlobalVal(const std::string &Name, Type *Ty,
 
   // Look this name up in the normal function symbol table.
   GlobalValue *Val =
-    cast_or_null<GlobalValue>(M->getValueSymbolTable().lookup(Name));
+      cast_or_null<GlobalValue>(M->getValueSymbolTable().lookup(Name));
 
   // If this is a forward reference for the value, see if we already created a
   // forward ref record.
@@ -1690,7 +1695,7 @@ bool LLParser::parseStringConstant(std::string &Result) {
 bool LLParser::parseUInt32(uint32_t &Val) {
   if (Lex.getKind() != lltok::APSInt || Lex.getAPSIntVal().isSigned())
     return tokError("expected integer");
-  uint64_t Val64 = Lex.getAPSIntVal().getLimitedValue(0xFFFFFFFFULL+1);
+  uint64_t Val64 = Lex.getAPSIntVal().getLimitedValue(0xFFFFFFFFULL + 1);
   if (Val64 != unsigned(Val64))
     return tokError("expected 32-bit integer (too large)");
   Val = Val64;
@@ -1714,17 +1719,17 @@ bool LLParser::parseUInt64(uint64_t &Val) {
 ///   := 'localexec'
 bool LLParser::parseTLSModel(GlobalVariable::ThreadLocalMode &TLM) {
   switch (Lex.getKind()) {
-    default:
-      return tokError("expected localdynamic, initialexec or localexec");
-    case lltok::kw_localdynamic:
-      TLM = GlobalVariable::LocalDynamicTLSModel;
-      break;
-    case lltok::kw_initialexec:
-      TLM = GlobalVariable::InitialExecTLSModel;
-      break;
-    case lltok::kw_localexec:
-      TLM = GlobalVariable::LocalExecTLSModel;
-      break;
+  default:
+    return tokError("expected localdynamic, initialexec or localexec");
+  case lltok::kw_localdynamic:
+    TLM = GlobalVariable::LocalDynamicTLSModel;
+    break;
+  case lltok::kw_initialexec:
+    TLM = GlobalVariable::InitialExecTLSModel;
+    break;
+  case lltok::kw_localexec:
+    TLM = GlobalVariable::LocalExecTLSModel;
+    break;
   }
 
   Lex.Lex();
@@ -2003,20 +2008,48 @@ void LLParser::parseOptionalDLLStorageClass(unsigned &Res) {
 ///
 bool LLParser::parseOptionalCallingConv(unsigned &CC) {
   switch (Lex.getKind()) {
-  default:                       CC = CallingConv::C; return false;
-  case lltok::kw_ccc:            CC = CallingConv::C; break;
-  case lltok::kw_fastcc:         CC = CallingConv::Fast; break;
-  case lltok::kw_coldcc:         CC = CallingConv::Cold; break;
-  case lltok::kw_cfguard_checkcc: CC = CallingConv::CFGuard_Check; break;
-  case lltok::kw_x86_stdcallcc:  CC = CallingConv::X86_StdCall; break;
-  case lltok::kw_x86_fastcallcc: CC = CallingConv::X86_FastCall; break;
-  case lltok::kw_x86_regcallcc:  CC = CallingConv::X86_RegCall; break;
-  case lltok::kw_x86_thiscallcc: CC = CallingConv::X86_ThisCall; break;
-  case lltok::kw_x86_vectorcallcc:CC = CallingConv::X86_VectorCall; break;
-  case lltok::kw_arm_apcscc:     CC = CallingConv::ARM_APCS; break;
-  case lltok::kw_arm_aapcscc:    CC = CallingConv::ARM_AAPCS; break;
-  case lltok::kw_arm_aapcs_vfpcc:CC = CallingConv::ARM_AAPCS_VFP; break;
-  case lltok::kw_aarch64_vector_pcs:CC = CallingConv::AArch64_VectorCall; break;
+  default:
+    CC = CallingConv::C;
+    return false;
+  case lltok::kw_ccc:
+    CC = CallingConv::C;
+    break;
+  case lltok::kw_fastcc:
+    CC = CallingConv::Fast;
+    break;
+  case lltok::kw_coldcc:
+    CC = CallingConv::Cold;
+    break;
+  case lltok::kw_cfguard_checkcc:
+    CC = CallingConv::CFGuard_Check;
+    break;
+  case lltok::kw_x86_stdcallcc:
+    CC = CallingConv::X86_StdCall;
+    break;
+  case lltok::kw_x86_fastcallcc:
+    CC = CallingConv::X86_FastCall;
+    break;
+  case lltok::kw_x86_regcallcc:
+    CC = CallingConv::X86_RegCall;
+    break;
+  case lltok::kw_x86_thiscallcc:
+    CC = CallingConv::X86_ThisCall;
+    break;
+  case lltok::kw_x86_vectorcallcc:
+    CC = CallingConv::X86_VectorCall;
+    break;
+  case lltok::kw_arm_apcscc:
+    CC = CallingConv::ARM_APCS;
+    break;
+  case lltok::kw_arm_aapcscc:
+    CC = CallingConv::ARM_AAPCS;
+    break;
+  case lltok::kw_arm_aapcs_vfpcc:
+    CC = CallingConv::ARM_AAPCS_VFP;
+    break;
+  case lltok::kw_aarch64_vector_pcs:
+    CC = CallingConv::AArch64_VectorCall;
+    break;
   case lltok::kw_aarch64_sve_vector_pcs:
     CC = CallingConv::AArch64_SVE_VectorCall;
     break;
@@ -2026,51 +2059,109 @@ bool LLParser::parseOptionalCallingConv(unsigned &CC) {
   case lltok::kw_aarch64_sme_preservemost_from_x2:
     CC = CallingConv::AArch64_SME_ABI_Support_Routines_PreserveMost_From_X2;
     break;
-  case lltok::kw_msp430_intrcc:  CC = CallingConv::MSP430_INTR; break;
-  case lltok::kw_avr_intrcc:     CC = CallingConv::AVR_INTR; break;
-  case lltok::kw_avr_signalcc:   CC = CallingConv::AVR_SIGNAL; break;
-  case lltok::kw_ptx_kernel:     CC = CallingConv::PTX_Kernel; break;
-  case lltok::kw_ptx_device:     CC = CallingConv::PTX_Device; break;
-  case lltok::kw_spir_kernel:    CC = CallingConv::SPIR_KERNEL; break;
-  case lltok::kw_spir_func:      CC = CallingConv::SPIR_FUNC; break;
-  case lltok::kw_intel_ocl_bicc: CC = CallingConv::Intel_OCL_BI; break;
-  case lltok::kw_x86_64_sysvcc:  CC = CallingConv::X86_64_SysV; break;
-  case lltok::kw_win64cc:        CC = CallingConv::Win64; break;
-  case lltok::kw_webkit_jscc:    CC = CallingConv::WebKit_JS; break;
-  case lltok::kw_anyregcc:       CC = CallingConv::AnyReg; break;
-  case lltok::kw_preserve_mostcc:CC = CallingConv::PreserveMost; break;
-  case lltok::kw_preserve_allcc: CC = CallingConv::PreserveAll; break;
-  case lltok::kw_ghccc:          CC = CallingConv::GHC; break;
-  case lltok::kw_swiftcc:        CC = CallingConv::Swift; break;
-  case lltok::kw_swifttailcc:    CC = CallingConv::SwiftTail; break;
-  case lltok::kw_x86_intrcc:     CC = CallingConv::X86_INTR; break;
+  case lltok::kw_msp430_intrcc:
+    CC = CallingConv::MSP430_INTR;
+    break;
+  case lltok::kw_avr_intrcc:
+    CC = CallingConv::AVR_INTR;
+    break;
+  case lltok::kw_avr_signalcc:
+    CC = CallingConv::AVR_SIGNAL;
+    break;
+  case lltok::kw_ptx_kernel:
+    CC = CallingConv::PTX_Kernel;
+    break;
+  case lltok::kw_ptx_device:
+    CC = CallingConv::PTX_Device;
+    break;
+  case lltok::kw_spir_kernel:
+    CC = CallingConv::SPIR_KERNEL;
+    break;
+  case lltok::kw_spir_func:
+    CC = CallingConv::SPIR_FUNC;
+    break;
+  case lltok::kw_intel_ocl_bicc:
+    CC = CallingConv::Intel_OCL_BI;
+    break;
+  case lltok::kw_x86_64_sysvcc:
+    CC = CallingConv::X86_64_SysV;
+    break;
+  case lltok::kw_win64cc:
+    CC = CallingConv::Win64;
+    break;
+  case lltok::kw_webkit_jscc:
+    CC = CallingConv::WebKit_JS;
+    break;
+  case lltok::kw_anyregcc:
+    CC = CallingConv::AnyReg;
+    break;
+  case lltok::kw_preserve_mostcc:
+    CC = CallingConv::PreserveMost;
+    break;
+  case lltok::kw_preserve_allcc:
+    CC = CallingConv::PreserveAll;
+    break;
+  case lltok::kw_ghccc:
+    CC = CallingConv::GHC;
+    break;
+  case lltok::kw_swiftcc:
+    CC = CallingConv::Swift;
+    break;
+  case lltok::kw_swifttailcc:
+    CC = CallingConv::SwiftTail;
+    break;
+  case lltok::kw_x86_intrcc:
+    CC = CallingConv::X86_INTR;
+    break;
   case lltok::kw_hhvmcc:
     CC = CallingConv::DUMMY_HHVM;
     break;
   case lltok::kw_hhvm_ccc:
     CC = CallingConv::DUMMY_HHVM_C;
     break;
-  case lltok::kw_cxx_fast_tlscc: CC = CallingConv::CXX_FAST_TLS; break;
-  case lltok::kw_amdgpu_vs:      CC = CallingConv::AMDGPU_VS; break;
-  case lltok::kw_amdgpu_gfx:     CC = CallingConv::AMDGPU_Gfx; break;
-  case lltok::kw_amdgpu_ls:      CC = CallingConv::AMDGPU_LS; break;
-  case lltok::kw_amdgpu_hs:      CC = CallingConv::AMDGPU_HS; break;
-  case lltok::kw_amdgpu_es:      CC = CallingConv::AMDGPU_ES; break;
-  case lltok::kw_amdgpu_gs:      CC = CallingConv::AMDGPU_GS; break;
-  case lltok::kw_amdgpu_ps:      CC = CallingConv::AMDGPU_PS; break;
-  case lltok::kw_amdgpu_cs:      CC = CallingConv::AMDGPU_CS; break;
+  case lltok::kw_cxx_fast_tlscc:
+    CC = CallingConv::CXX_FAST_TLS;
+    break;
+  case lltok::kw_amdgpu_vs:
+    CC = CallingConv::AMDGPU_VS;
+    break;
+  case lltok::kw_amdgpu_gfx:
+    CC = CallingConv::AMDGPU_Gfx;
+    break;
+  case lltok::kw_amdgpu_ls:
+    CC = CallingConv::AMDGPU_LS;
+    break;
+  case lltok::kw_amdgpu_hs:
+    CC = CallingConv::AMDGPU_HS;
+    break;
+  case lltok::kw_amdgpu_es:
+    CC = CallingConv::AMDGPU_ES;
+    break;
+  case lltok::kw_amdgpu_gs:
+    CC = CallingConv::AMDGPU_GS;
+    break;
+  case lltok::kw_amdgpu_ps:
+    CC = CallingConv::AMDGPU_PS;
+    break;
+  case lltok::kw_amdgpu_cs:
+    CC = CallingConv::AMDGPU_CS;
+    break;
   case lltok::kw_amdgpu_cs_chain:
     CC = CallingConv::AMDGPU_CS_Chain;
     break;
   case lltok::kw_amdgpu_cs_chain_preserve:
     CC = CallingConv::AMDGPU_CS_ChainPreserve;
     break;
-  case lltok::kw_amdgpu_kernel:  CC = CallingConv::AMDGPU_KERNEL; break;
-  case lltok::kw_tailcc:         CC = CallingConv::Tail; break;
+  case lltok::kw_amdgpu_kernel:
+    CC = CallingConv::AMDGPU_KERNEL;
+    break;
+  case lltok::kw_tailcc:
+    CC = CallingConv::Tail;
+    break;
   case lltok::kw_cc: {
-      Lex.Lex();
-      return parseUInt32(CC);
-    }
+    Lex.Lex();
+    return parseUInt32(CC);
+  }
   }
 
   Lex.Lex();
@@ -2557,13 +2648,23 @@ bool LLParser::parseOrdering(AtomicOrdering &Ordering) {
   switch (Lex.getKind()) {
   default:
     return tokError("Expected ordering on atomic instruction");
-  case lltok::kw_unordered: Ordering = AtomicOrdering::Unordered; break;
-  case lltok::kw_monotonic: Ordering = AtomicOrdering::Monotonic; break;
+  case lltok::kw_unordered:
+    Ordering = AtomicOrdering::Unordered;
+    break;
+  case lltok::kw_monotonic:
+    Ordering = AtomicOrdering::Monotonic;
+    break;
   // Not specified yet:
   // case lltok::kw_consume: Ordering = AtomicOrdering::Consume; break;
-  case lltok::kw_acquire: Ordering = AtomicOrdering::Acquire; break;
-  case lltok::kw_release: Ordering = AtomicOrdering::Release; break;
-  case lltok::kw_acq_rel: Ordering = AtomicOrdering::AcquireRelease; break;
+  case lltok::kw_acquire:
+    Ordering = AtomicOrdering::Acquire;
+    break;
+  case lltok::kw_release:
+    Ordering = AtomicOrdering::Release;
+    break;
+  case lltok::kw_acq_rel:
+    Ordering = AtomicOrdering::AcquireRelease;
+    break;
   case lltok::kw_seq_cst:
     Ordering = AtomicOrdering::SequentiallyConsistent;
     break;
@@ -2689,7 +2790,7 @@ bool LLParser::parseType(Type *&Result, const Twine &Msg, bool AllowVoid) {
     break;
   case lltok::LocalVar: {
     // Type ::= %foo
-    std::pair<Type*, LocTy> &Entry = NamedTypes[Lex.getStrVal()];
+    std::pair<Type *, LocTy> &Entry = NamedTypes[Lex.getStrVal()];
 
     // If the type hasn't been defined yet, create a forward definition and
     // remember where that forward def'n was seen (in case it never is defined).
@@ -2704,7 +2805,7 @@ bool LLParser::parseType(Type *&Result, const Twine &Msg, bool AllowVoid) {
 
   case lltok::LocalVarID: {
     // Type ::= %4
-    std::pair<Type*, LocTy> &Entry = NumberedTypes[Lex.getUIntVal()];
+    std::pair<Type *, LocTy> &Entry = NumberedTypes[Lex.getUIntVal()];
 
     // If the type hasn't been defined yet, create a forward definition and
     // remember where that forward def'n was seen (in case it never is defined).
@@ -2789,7 +2890,7 @@ bool LLParser::parseParameterList(SmallVectorImpl<ParamInfo> &ArgList,
         return tokError(Twine(Msg) + "non-musttail call");
       if (!InVarArgsFunc)
         return tokError(Twine(Msg) + "musttail call in non-varargs function");
-      Lex.Lex();  // Lex the '...', it is purely for readability.
+      Lex.Lex(); // Lex the '...', it is purely for readability.
       return parseToken(lltok::rparen, "expected ')' at end of argument list");
     }
 
@@ -2810,15 +2911,15 @@ bool LLParser::parseParameterList(SmallVectorImpl<ParamInfo> &ArgList,
       if (parseOptionalParamAttrs(ArgAttrs) || parseValue(ArgTy, V, PFS))
         return true;
     }
-    ArgList.push_back(ParamInfo(
-        ArgLoc, V, AttributeSet::get(V->getContext(), ArgAttrs)));
+    ArgList.push_back(
+        ParamInfo(ArgLoc, V, AttributeSet::get(V->getContext(), ArgAttrs)));
   }
 
   if (IsMustTailCall && InVarArgsFunc)
     return tokError("expected '...' at end of argument list for musttail call "
                     "in varargs function");
 
-  Lex.Lex();  // Lex the ')'.
+  Lex.Lex(); // Lex the ')'.
   return false;
 }
 
@@ -3008,7 +3109,7 @@ bool LLParser::parseFunctionType(Type *&Result) {
                    "argument attributes invalid in function type");
   }
 
-  SmallVector<Type*, 16> ArgListTy;
+  SmallVector<Type *, 16> ArgListTy;
   for (unsigned i = 0, e = ArgList.size(); i != e; ++i)
     ArgListTy.push_back(ArgList[i].Ty);
 
@@ -3019,7 +3120,7 @@ bool LLParser::parseFunctionType(Type *&Result) {
 /// parseAnonStructType - parse an anonymous struct type, which is inlined into
 /// other structs.
 bool LLParser::parseAnonStructType(Type *&Result, bool Packed) {
-  SmallVector<Type*, 8> Elts;
+  SmallVector<Type *, 8> Elts;
   if (parseStructBody(Elts))
     return true;
 
@@ -3073,7 +3174,7 @@ bool LLParser::parseStructDefinition(SMLoc TypeLoc, StringRef Name,
 
   StructType *STy = cast<StructType>(Entry.first);
 
-  SmallVector<Type*, 8> Body;
+  SmallVector<Type *, 8> Body;
   if (parseStructBody(Body) ||
       (isPacked && parseToken(lltok::greater, "expected '>' in packed struct")))
     return true;
@@ -3175,7 +3276,8 @@ bool LLParser::parseArrayVectorType(Type *&Result, bool IsVector) {
 
 /// parseTargetExtType - handle target extension type syntax
 ///   TargetExtType
-///     ::= 'target' '(' STRINGCONSTANT TargetExtTypeParams TargetExtIntParams ')'
+///     ::= 'target' '(' STRINGCONSTANT TargetExtTypeParams TargetExtIntParams
+///     ')'
 ///
 ///   TargetExtTypeParams
 ///     ::= /*empty*/
@@ -3233,7 +3335,7 @@ bool LLParser::parseTargetExtType(Type *&Result) {
 
 LLParser::PerFunctionState::PerFunctionState(LLParser &p, Function &f,
                                              int functionNumber)
-  : P(p), F(f), FunctionNumber(functionNumber) {
+    : P(p), F(f), FunctionNumber(functionNumber) {
 
   // Insert unnamed arguments into the NumberedVals list.
   for (Argument &A : F.args())
@@ -3484,19 +3586,19 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
   switch (Lex.getKind()) {
   default:
     return tokError("expected value token");
-  case lltok::GlobalID:  // @42
+  case lltok::GlobalID: // @42
     ID.UIntVal = Lex.getUIntVal();
     ID.Kind = ValID::t_GlobalID;
     break;
-  case lltok::GlobalVar:  // @foo
+  case lltok::GlobalVar: // @foo
     ID.StrVal = Lex.getStrVal();
     ID.Kind = ValID::t_GlobalName;
     break;
-  case lltok::LocalVarID:  // %42
+  case lltok::LocalVarID: // %42
     ID.UIntVal = Lex.getUIntVal();
     ID.Kind = ValID::t_LocalID;
     break;
-  case lltok::LocalVar:  // %foo
+  case lltok::LocalVar: // %foo
     ID.StrVal = Lex.getStrVal();
     ID.Kind = ValID::t_LocalName;
     break;
@@ -3516,16 +3618,26 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
     ID.ConstantVal = ConstantInt::getFalse(Context);
     ID.Kind = ValID::t_Constant;
     break;
-  case lltok::kw_null: ID.Kind = ValID::t_Null; break;
-  case lltok::kw_undef: ID.Kind = ValID::t_Undef; break;
-  case lltok::kw_poison: ID.Kind = ValID::t_Poison; break;
-  case lltok::kw_zeroinitializer: ID.Kind = ValID::t_Zero; break;
-  case lltok::kw_none: ID.Kind = ValID::t_None; break;
+  case lltok::kw_null:
+    ID.Kind = ValID::t_Null;
+    break;
+  case lltok::kw_undef:
+    ID.Kind = ValID::t_Undef;
+    break;
+  case lltok::kw_poison:
+    ID.Kind = ValID::t_Poison;
+    break;
+  case lltok::kw_zeroinitializer:
+    ID.Kind = ValID::t_Zero;
+    break;
+  case lltok::kw_none:
+    ID.Kind = ValID::t_None;
+    break;
 
   case lltok::lbrace: {
     // ValID ::= '{' ConstVector '}'
     Lex.Lex();
-    SmallVector<Constant*, 16> Elts;
+    SmallVector<Constant *, 16> Elts;
     if (parseGlobalValueVector(Elts) ||
         parseToken(lltok::rbrace, "expected end of struct constant"))
       return true;
@@ -3543,7 +3655,7 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
     Lex.Lex();
     bool isPackedStruct = EatIfPresent(lltok::lbrace);
 
-    SmallVector<Constant*, 16> Elts;
+    SmallVector<Constant *, 16> Elts;
     LocTy FirstEltLoc = Lex.getLoc();
     if (parseGlobalValueVector(Elts) ||
         (isPackedStruct &&
@@ -3581,9 +3693,9 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
     ID.Kind = ValID::t_Constant;
     return false;
   }
-  case lltok::lsquare: {   // Array Constant
+  case lltok::lsquare: { // Array Constant
     Lex.Lex();
-    SmallVector<Constant*, 16> Elts;
+    SmallVector<Constant *, 16> Elts;
     LocTy FirstEltLoc = Lex.getLoc();
     if (parseGlobalValueVector(Elts) ||
         parseToken(lltok::rsquare, "expected end of array constant"))
@@ -3615,10 +3727,10 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
     ID.Kind = ValID::t_Constant;
     return false;
   }
-  case lltok::kw_c:  // c "foo"
+  case lltok::kw_c: // c "foo"
     Lex.Lex();
-    ID.ConstantVal = ConstantDataArray::getString(Context, Lex.getStrVal(),
-                                                  false);
+    ID.ConstantVal =
+        ConstantDataArray::getString(Context, Lex.getStrVal(), false);
     if (parseToken(lltok::StringConstant, "expected string"))
       return true;
     ID.Kind = ValID::t_Constant;
@@ -3684,9 +3796,9 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
     if (!F) {
       // Make a global variable as a placeholder for this reference.
       GlobalValue *&FwdRef =
-          ForwardRefBlockAddresses.insert(std::make_pair(
-                                              std::move(Fn),
-                                              std::map<ValID, GlobalValue *>()))
+          ForwardRefBlockAddresses
+              .insert(std::make_pair(std::move(Fn),
+                                     std::map<ValID, GlobalValue *>()))
               .first->second.insert(std::make_pair(std::move(Label), nullptr))
               .first->second;
       if (!FwdRef) {
@@ -3829,8 +3941,8 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
       return error(ID.Loc, "invalid cast opcode for cast from '" +
                                getTypeString(SrcVal->getType()) + "' to '" +
                                getTypeString(DestTy) + "'");
-    ID.ConstantVal = ConstantExpr::getCast((Instruction::CastOps)Opc,
-                                                 SrcVal, DestTy);
+    ID.ConstantVal =
+        ConstantExpr::getCast((Instruction::CastOps)Opc, SrcVal, DestTy);
     ID.Kind = ValID::t_Constant;
     return false;
   }
@@ -3955,12 +4067,16 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
       if (!Val0->getType()->isFPOrFPVectorTy())
         return error(ID.Loc, "constexpr requires fp operands");
       break;
-    default: llvm_unreachable("Unknown binary operator!");
+    default:
+      llvm_unreachable("Unknown binary operator!");
     }
     unsigned Flags = 0;
-    if (NUW)   Flags |= OverflowingBinaryOperator::NoUnsignedWrap;
-    if (NSW)   Flags |= OverflowingBinaryOperator::NoSignedWrap;
-    if (Exact) Flags |= PossiblyExactOperator::IsExact;
+    if (NUW)
+      Flags |= OverflowingBinaryOperator::NoUnsignedWrap;
+    if (NSW)
+      Flags |= OverflowingBinaryOperator::NoSignedWrap;
+    if (Exact)
+      Flags |= PossiblyExactOperator::IsExact;
     Constant *C = ConstantExpr::get(Opc, Val0, Val1, Flags);
     ID.ConstantVal = C;
     ID.Kind = ValID::t_Constant;
@@ -3993,7 +4109,7 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
   case lltok::kw_insertelement:
   case lltok::kw_extractelement: {
     unsigned Opc = Lex.getUIntVal();
-    SmallVector<Constant*, 16> Elts;
+    SmallVector<Constant *, 16> Elts;
     bool InBounds = false;
     Type *Ty;
     Lex.Lex();
@@ -4017,8 +4133,7 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
       return true;
 
     if (Opc == Instruction::GetElementPtr) {
-      if (Elts.size() == 0 ||
-          !Elts[0]->getType()->isPtrOrPtrVectorTy())
+      if (Elts.size() == 0 || !Elts[0]->getType()->isPtrOrPtrVectorTy())
         return error(ID.Loc, "base of getelementptr must be a pointer");
 
       Type *BaseType = Elts[0]->getType();
@@ -4044,7 +4159,7 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
         }
       }
 
-      SmallPtrSet<Type*, 4> Visited;
+      SmallPtrSet<Type *, 4> Visited;
       if (!Indices.empty() && !Ty->isSized(&Visited))
         return error(ID.Loc, "base element of getelementptr must be sized");
 
@@ -4081,7 +4196,7 @@ bool LLParser::parseValID(ValID &ID, PerFunctionState *PFS, Type *ExpectedTy) {
       if (!InsertElementInst::isValidOperands(Elts[0], Elts[1], Elts[2]))
         return error(ID.Loc, "invalid insertelement operands");
       ID.ConstantVal =
-                 ConstantExpr::getInsertElement(Elts[0], Elts[1],Elts[2]);
+          ConstantExpr::getInsertElement(Elts[0], Elts[1], Elts[2]);
     }
 
     ID.Kind = ValID::t_Constant;
@@ -4139,10 +4254,8 @@ bool LLParser::parseOptionalComdat(StringRef GlobalName, Comdat *&C) {
 bool LLParser::parseGlobalValueVector(SmallVectorImpl<Constant *> &Elts,
                                       std::optional<unsigned> *InRangeOp) {
   // Empty list.
-  if (Lex.getKind() == lltok::rbrace ||
-      Lex.getKind() == lltok::rsquare ||
-      Lex.getKind() == lltok::greater ||
-      Lex.getKind() == lltok::rparen)
+  if (Lex.getKind() == lltok::rbrace || Lex.getKind() == lltok::rsquare ||
+      Lex.getKind() == lltok::greater || Lex.getKind() == lltok::rparen)
     return false;
 
   do {
@@ -4214,11 +4327,7 @@ template <class FieldTypeA, class FieldTypeB> struct MDEitherFieldImpl {
   FieldTypeB B;
   bool Seen;
 
-  enum {
-    IsInvalid = 0,
-    IsTypeA = 1,
-    IsTypeB = 2
-  } WhatIs;
+  enum { IsInvalid = 0, IsTypeA = 1, IsTypeB = 2 } WhatIs;
 
   void assign(FieldTypeA A) {
     Seen = true;
@@ -4261,7 +4370,7 @@ struct DwarfTagField : public MDUnsignedField {
 struct DwarfMacinfoTypeField : public MDUnsignedField {
   DwarfMacinfoTypeField() : MDUnsignedField(0, dwarf::DW_MACINFO_vendor_ext) {}
   DwarfMacinfoTypeField(dwarf::MacinfoRecordType DefaultType)
-    : MDUnsignedField(DefaultType, dwarf::DW_MACINFO_vendor_ext) {}
+      : MDUnsignedField(DefaultType, dwarf::DW_MACINFO_vendor_ext) {}
 };
 
 struct DwarfAttEncodingField : public MDUnsignedField {
@@ -4307,8 +4416,7 @@ struct MDSignedField : public MDFieldImpl<int64_t> {
   int64_t Min = INT64_MIN;
   int64_t Max = INT64_MAX;
 
-  MDSignedField(int64_t Default = 0)
-      : ImplTy(Default) {}
+  MDSignedField(int64_t Default = 0) : ImplTy(Default) {}
   MDSignedField(int64_t Default, int64_t Min, int64_t Max)
       : ImplTy(Default), Min(Min), Max(Max) {}
 };
@@ -5288,9 +5396,8 @@ bool LLParser::parseDICommonBlock(MDNode *&Result, bool IsDistinct) {
   PARSE_MD_FIELDS();
 #undef VISIT_MD_FIELDS
 
-  Result = GET_OR_DISTINCT(DICommonBlock,
-                           (Context, scope.Val, declaration.Val, name.Val,
-                            file.Val, line.Val));
+  Result = GET_OR_DISTINCT(DICommonBlock, (Context, scope.Val, declaration.Val,
+                                           name.Val, file.Val, line.Val));
   return false;
 }
 
@@ -5423,12 +5530,11 @@ bool LLParser::parseDIGlobalVariable(MDNode *&Result, bool IsDistinct) {
   PARSE_MD_FIELDS();
 #undef VISIT_MD_FIELDS
 
-  Result =
-      GET_OR_DISTINCT(DIGlobalVariable,
-                      (Context, scope.Val, name.Val, linkageName.Val, file.Val,
-                       line.Val, type.Val, isLocal.Val, isDefinition.Val,
-                       declaration.Val, templateParams.Val, align.Val,
-                       annotations.Val));
+  Result = GET_OR_DISTINCT(DIGlobalVariable,
+                           (Context, scope.Val, name.Val, linkageName.Val,
+                            file.Val, line.Val, type.Val, isLocal.Val,
+                            isDefinition.Val, declaration.Val,
+                            templateParams.Val, align.Val, annotations.Val));
   return false;
 }
 
@@ -5453,10 +5559,10 @@ bool LLParser::parseDILocalVariable(MDNode *&Result, bool IsDistinct) {
   PARSE_MD_FIELDS();
 #undef VISIT_MD_FIELDS
 
-  Result = GET_OR_DISTINCT(DILocalVariable,
-                           (Context, scope.Val, name.Val, file.Val, line.Val,
-                            type.Val, arg.Val, flags.Val, align.Val,
-                            annotations.Val));
+  Result =
+      GET_OR_DISTINCT(DILocalVariable, (Context, scope.Val, name.Val, file.Val,
+                                        line.Val, type.Val, arg.Val, flags.Val,
+                                        align.Val, annotations.Val));
   return false;
 }
 
@@ -5770,8 +5876,8 @@ bool LLParser::convertValIDToValue(Type *Ty, ValID &ID, Value *&V,
         ID.APFloatVal.convert(APFloat::BFloat(), APFloat::rmNearestTiesToEven,
                               &Ignored);
       else if (Ty->isFloatTy())
-        ID.APFloatVal.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven,
-                              &Ignored);
+        ID.APFloatVal.convert(APFloat::IEEEsingle(),
+                              APFloat::rmNearestTiesToEven, &Ignored);
       if (IsSNAN) {
         // The convert call above may quiet an SNaN, so manufacture another
         // SNaN. The bitcast works because the payload (significand) parameter
@@ -5888,8 +5994,7 @@ bool LLParser::parseConstantValue(Type *Ty, Constant *&C) {
 bool LLParser::parseValue(Type *Ty, Value *&V, PerFunctionState *PFS) {
   V = nullptr;
   ValID ID;
-  return parseValID(ID, PFS, Ty) ||
-         convertValIDToValue(Ty, ID, V, PFS);
+  return parseValID(ID, PFS, Ty) || convertValIDToValue(Ty, ID, V, PFS);
 }
 
 bool LLParser::parseTypeAndValue(Value *&V, PerFunctionState *PFS) {
@@ -5971,7 +6076,7 @@ bool LLParser::parseFunctionHeader(Function *&Fn, bool IsDefine) {
   std::string FunctionName;
   if (Lex.getKind() == lltok::GlobalVar) {
     FunctionName = Lex.getStrVal();
-  } else if (Lex.getKind() == lltok::GlobalID) {     // @42 is ok.
+  } else if (Lex.getKind() == lltok::GlobalID) { // @42 is ok.
     unsigned NameID = Lex.getUIntVal();
 
     if (NameID != NumberedVals.size())
@@ -6029,7 +6134,7 @@ bool LLParser::parseFunctionHeader(Function *&Fn, bool IsDefine) {
 
   // Okay, if we got here, the function is syntactically valid.  Convert types
   // and do semantic checks.
-  std::vector<Type*> ParamTypeList;
+  std::vector<Type *> ParamTypeList;
   SmallVector<AttributeSet, 8> Attrs;
 
   for (unsigned i = 0, e = ArgList.size(); i != e; ++i) {
@@ -6111,7 +6216,8 @@ bool LLParser::parseFunctionHeader(Function *&Fn, bool IsDefine) {
   Fn->setPartition(Partition);
   Fn->setComdat(C);
   Fn->setPersonalityFn(PersonalityFn);
-  if (!GC.empty()) Fn->setGC(GC);
+  if (!GC.empty())
+    Fn->setGC(GC);
   Fn->setPrefixData(Prefix);
   Fn->setPrologueData(Prologue);
   ForwardRefAttrGroups[Fn] = FwdRefAttrGrps;
@@ -6120,7 +6226,8 @@ bool LLParser::parseFunctionHeader(Function *&Fn, bool IsDefine) {
   Function::arg_iterator ArgIt = Fn->arg_begin();
   for (unsigned i = 0, e = ArgList.size(); i != e; ++i, ++ArgIt) {
     // If the argument has a name, insert it into the argument symbol table.
-    if (ArgList[i].Name.empty()) continue;
+    if (ArgList[i].Name.empty())
+      continue;
 
     // Set the name, if it conflicted, it will be auto-renamed.
     ArgIt->setName(ArgList[i].Name);
@@ -6200,10 +6307,11 @@ bool LLParser::PerFunctionState::resolveForwardRefBlockAddresses() {
 bool LLParser::parseFunctionBody(Function &Fn) {
   if (Lex.getKind() != lltok::lbrace)
     return tokError("expected '{' in function body");
-  Lex.Lex();  // eat the {.
+  Lex.Lex(); // eat the {.
 
   int FunctionNumber = -1;
-  if (!Fn.hasName()) FunctionNumber = NumberedVals.size()-1;
+  if (!Fn.hasName())
+    FunctionNumber = NumberedVals.size() - 1;
 
   PerFunctionState PFS(*this, Fn, FunctionNumber);
 
@@ -6278,7 +6386,8 @@ bool LLParser::parseBasicBlock(PerFunctionState &PFS) {
     switch (parseInstruction(Inst, BB, PFS)) {
     default:
       llvm_unreachable("Unknown parseInstruction result!");
-    case InstError: return true;
+    case InstError:
+      return true;
     case InstNormal:
       Inst->insertInto(BB, BB->end());
 
@@ -6319,13 +6428,15 @@ int LLParser::parseInstruction(Instruction *&Inst, BasicBlock *BB,
     return tokError("found end of file when expecting more instructions");
   LocTy Loc = Lex.getLoc();
   unsigned KeywordVal = Lex.getUIntVal();
-  Lex.Lex();  // Eat the keyword.
+  Lex.Lex(); // Eat the keyword.
 
   switch (Token) {
   default:
     return error(Loc, "expected instruction opcode");
   // Terminator Instructions.
-  case lltok::kw_unreachable: Inst = new UnreachableInst(Context); return false;
+  case lltok::kw_unreachable:
+    Inst = new UnreachableInst(Context);
+    return false;
   case lltok::kw_ret:
     return parseRet(Inst, BB, PFS);
   case lltok::kw_br:
@@ -6367,13 +6478,16 @@ int LLParser::parseInstruction(Instruction *&Inst, BasicBlock *BB,
   case lltok::kw_shl: {
     bool NUW = EatIfPresent(lltok::kw_nuw);
     bool NSW = EatIfPresent(lltok::kw_nsw);
-    if (!NUW) NUW = EatIfPresent(lltok::kw_nuw);
+    if (!NUW)
+      NUW = EatIfPresent(lltok::kw_nuw);
 
     if (parseArithmetic(Inst, PFS, KeywordVal, /*IsFP*/ false))
       return true;
 
-    if (NUW) cast<BinaryOperator>(Inst)->setHasNoUnsignedWrap(true);
-    if (NSW) cast<BinaryOperator>(Inst)->setHasNoSignedWrap(true);
+    if (NUW)
+      cast<BinaryOperator>(Inst)->setHasNoUnsignedWrap(true);
+    if (NSW)
+      cast<BinaryOperator>(Inst)->setHasNoSignedWrap(true);
     return false;
   }
   case lltok::kw_fadd:
@@ -6398,7 +6512,8 @@ int LLParser::parseInstruction(Instruction *&Inst, BasicBlock *BB,
 
     if (parseArithmetic(Inst, PFS, KeywordVal, /*IsFP*/ false))
       return true;
-    if (Exact) cast<BinaryOperator>(Inst)->setIsExact(true);
+    if (Exact)
+      cast<BinaryOperator>(Inst)->setIsExact(true);
     return false;
   }
 
@@ -6513,37 +6628,89 @@ bool LLParser::parseCmpPredicate(unsigned &P, unsigned Opc) {
     switch (Lex.getKind()) {
     default:
       return tokError("expected fcmp predicate (e.g. 'oeq')");
-    case lltok::kw_oeq: P = CmpInst::FCMP_OEQ; break;
-    case lltok::kw_one: P = CmpInst::FCMP_ONE; break;
-    case lltok::kw_olt: P = CmpInst::FCMP_OLT; break;
-    case lltok::kw_ogt: P = CmpInst::FCMP_OGT; break;
-    case lltok::kw_ole: P = CmpInst::FCMP_OLE; break;
-    case lltok::kw_oge: P = CmpInst::FCMP_OGE; break;
-    case lltok::kw_ord: P = CmpInst::FCMP_ORD; break;
-    case lltok::kw_uno: P = CmpInst::FCMP_UNO; break;
-    case lltok::kw_ueq: P = CmpInst::FCMP_UEQ; break;
-    case lltok::kw_une: P = CmpInst::FCMP_UNE; break;
-    case lltok::kw_ult: P = CmpInst::FCMP_ULT; break;
-    case lltok::kw_ugt: P = CmpInst::FCMP_UGT; break;
-    case lltok::kw_ule: P = CmpInst::FCMP_ULE; break;
-    case lltok::kw_uge: P = CmpInst::FCMP_UGE; break;
-    case lltok::kw_true: P = CmpInst::FCMP_TRUE; break;
-    case lltok::kw_false: P = CmpInst::FCMP_FALSE; break;
+    case lltok::kw_oeq:
+      P = CmpInst::FCMP_OEQ;
+      break;
+    case lltok::kw_one:
+      P = CmpInst::FCMP_ONE;
+      break;
+    case lltok::kw_olt:
+      P = CmpInst::FCMP_OLT;
+      break;
+    case lltok::kw_ogt:
+      P = CmpInst::FCMP_OGT;
+      break;
+    case lltok::kw_ole:
+      P = CmpInst::FCMP_OLE;
+      break;
+    case lltok::kw_oge:
+      P = CmpInst::FCMP_OGE;
+      break;
+    case lltok::kw_ord:
+      P = CmpInst::FCMP_ORD;
+      break;
+    case lltok::kw_uno:
+      P = CmpInst::FCMP_UNO;
+      break;
+    case lltok::kw_ueq:
+      P = CmpInst::FCMP_UEQ;
+      break;
+    case lltok::kw_une:
+      P = CmpInst::FCMP_UNE;
+      break;
+    case lltok::kw_ult:
+      P = CmpInst::FCMP_ULT;
+      break;
+    case lltok::kw_ugt:
+      P = CmpInst::FCMP_UGT;
+      break;
+    case lltok::kw_ule:
+      P = CmpInst::FCMP_ULE;
+      break;
+    case lltok::kw_uge:
+      P = CmpInst::FCMP_UGE;
+      break;
+    case lltok::kw_true:
+      P = CmpInst::FCMP_TRUE;
+      break;
+    case lltok::kw_false:
+      P = CmpInst::FCMP_FALSE;
+      break;
     }
   } else {
     switch (Lex.getKind()) {
     default:
       return tokError("expected icmp predicate (e.g. 'eq')");
-    case lltok::kw_eq:  P = CmpInst::ICMP_EQ; break;
-    case lltok::kw_ne:  P = CmpInst::ICMP_NE; break;
-    case lltok::kw_slt: P = CmpInst::ICMP_SLT; break;
-    case lltok::kw_sgt: P = CmpInst::ICMP_SGT; break;
-    case lltok::kw_sle: P = CmpInst::ICMP_SLE; break;
-    case lltok::kw_sge: P = CmpInst::ICMP_SGE; break;
-    case lltok::kw_ult: P = CmpInst::ICMP_ULT; break;
-    case lltok::kw_ugt: P = CmpInst::ICMP_UGT; break;
-    case lltok::kw_ule: P = CmpInst::ICMP_ULE; break;
-    case lltok::kw_uge: P = CmpInst::ICMP_UGE; break;
+    case lltok::kw_eq:
+      P = CmpInst::ICMP_EQ;
+      break;
+    case lltok::kw_ne:
+      P = CmpInst::ICMP_NE;
+      break;
+    case lltok::kw_slt:
+      P = CmpInst::ICMP_SLT;
+      break;
+    case lltok::kw_sgt:
+      P = CmpInst::ICMP_SGT;
+      break;
+    case lltok::kw_sle:
+      P = CmpInst::ICMP_SLE;
+      break;
+    case lltok::kw_sge:
+      P = CmpInst::ICMP_SGE;
+      break;
+    case lltok::kw_ult:
+      P = CmpInst::ICMP_ULT;
+      break;
+    case lltok::kw_ugt:
+      P = CmpInst::ICMP_UGT;
+      break;
+    case lltok::kw_ule:
+      P = CmpInst::ICMP_ULE;
+      break;
+    case lltok::kw_uge:
+      P = CmpInst::ICMP_UGE;
+      break;
     }
   }
   Lex.Lex();
@@ -6634,8 +6801,8 @@ bool LLParser::parseSwitch(Instruction *&Inst, PerFunctionState &PFS) {
     return error(CondLoc, "switch condition must have integer type");
 
   // parse the jump table pairs.
-  SmallPtrSet<Value*, 32> SeenCases;
-  SmallVector<std::pair<ConstantInt*, BasicBlock*>, 32> Table;
+  SmallPtrSet<Value *, 32> SeenCases;
+  SmallVector<std::pair<ConstantInt *, BasicBlock *>, 32> Table;
   while (Lex.getKind() != lltok::rsquare) {
     Value *Constant;
     BasicBlock *DestBB;
@@ -6653,7 +6820,7 @@ bool LLParser::parseSwitch(Instruction *&Inst, PerFunctionState &PFS) {
     Table.push_back(std::make_pair(cast<ConstantInt>(Constant), DestBB));
   }
 
-  Lex.Lex();  // Eat the ']'.
+  Lex.Lex(); // Eat the ']'.
 
   SwitchInst *SI = SwitchInst::Create(Cond, DefaultBB, Table.size());
   for (unsigned i = 0, e = Table.size(); i != e; ++i)
@@ -6677,7 +6844,7 @@ bool LLParser::parseIndirectBr(Instruction *&Inst, PerFunctionState &PFS) {
     return error(AddrLoc, "indirectbr address must have pointer type");
 
   // parse the destination list.
-  SmallVector<BasicBlock*, 16> DestList;
+  SmallVector<BasicBlock *, 16> DestList;
 
   if (Lex.getKind() != lltok::rsquare) {
     BasicBlock *DestBB;
@@ -6711,7 +6878,7 @@ bool LLParser::resolveFunctionType(Type *RetType,
   FuncTy = dyn_cast<FunctionType>(RetType);
   if (!FuncTy) {
     // Pull out the types of all of the arguments...
-    std::vector<Type*> ParamTypes;
+    std::vector<Type *> ParamTypes;
     for (unsigned i = 0, e = ArgList.size(); i != e; ++i)
       ParamTypes.push_back(ArgList[i].V->getType());
 
@@ -6811,7 +6978,8 @@ bool LLParser::parseInvoke(Instruction *&Inst, PerFunctionState &PFS) {
 /// parseResume
 ///   ::= 'resume' TypeAndValue
 bool LLParser::parseResume(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Exn; LocTy ExnLoc;
+  Value *Exn;
+  LocTy ExnLoc;
   if (parseTypeAndValue(Exn, ExnLoc, PFS))
     return true;
 
@@ -6848,7 +7016,7 @@ bool LLParser::parseExceptionArgs(SmallVectorImpl<Value *> &Args,
     Args.push_back(V);
   }
 
-  Lex.Lex();  // Lex the ']'.
+  Lex.Lex(); // Lex the ']'.
   return false;
 }
 
@@ -7006,7 +7174,8 @@ bool LLParser::parseCleanupPad(Instruction *&Inst, PerFunctionState &PFS) {
 /// operand is allowed.
 bool LLParser::parseUnaryOp(Instruction *&Inst, PerFunctionState &PFS,
                             unsigned Opc, bool IsFP) {
-  LocTy Loc; Value *LHS;
+  LocTy Loc;
+  Value *LHS;
   if (parseTypeAndValue(LHS, Loc, PFS))
     return true;
 
@@ -7112,9 +7281,8 @@ bool LLParser::parseCallBr(Instruction *&Inst, PerFunctionState &PFS) {
       AttributeList::get(Context, AttributeSet::get(Context, FnAttrs),
                          AttributeSet::get(Context, RetAttrs), ArgAttrs);
 
-  CallBrInst *CBI =
-      CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests, Args,
-                         BundleList);
+  CallBrInst *CBI = CallBrInst::Create(Ty, Callee, DefaultDest, IndirectDests,
+                                       Args, BundleList);
   CBI->setCallingConv(CC);
   CBI->setAttributes(PAL);
   ForwardRefAttrGroups[CBI] = FwdRefAttrGrps;
@@ -7133,7 +7301,8 @@ bool LLParser::parseCallBr(Instruction *&Inst, PerFunctionState &PFS) {
 /// operand is allowed.
 bool LLParser::parseArithmetic(Instruction *&Inst, PerFunctionState &PFS,
                                unsigned Opc, bool IsFP) {
-  LocTy Loc; Value *LHS, *RHS;
+  LocTy Loc;
+  Value *LHS, *RHS;
   if (parseTypeAndValue(LHS, Loc, PFS) ||
       parseToken(lltok::comma, "expected ',' in arithmetic operation") ||
       parseValue(LHS->getType(), RHS, PFS))
@@ -7153,7 +7322,8 @@ bool LLParser::parseArithmetic(Instruction *&Inst, PerFunctionState &PFS,
 ///  ::= ArithmeticOps TypeAndValue ',' Value {
 bool LLParser::parseLogical(Instruction *&Inst, PerFunctionState &PFS,
                             unsigned Opc) {
-  LocTy Loc; Value *LHS, *RHS;
+  LocTy Loc;
+  Value *LHS, *RHS;
   if (parseTypeAndValue(LHS, Loc, PFS) ||
       parseToken(lltok::comma, "expected ',' in logical operation") ||
       parseValue(LHS->getType(), RHS, PFS))
@@ -7316,7 +7486,8 @@ bool LLParser::parseShuffleVector(Instruction *&Inst, PerFunctionState &PFS) {
 /// parsePHI
 ///   ::= 'phi' Type '[' Value ',' Value ']' (',' '[' Value ',' Value ']')*
 int LLParser::parsePHI(Instruction *&Inst, PerFunctionState &PFS) {
-  Type *Ty = nullptr;  LocTy TypeLoc;
+  Type *Ty = nullptr;
+  LocTy TypeLoc;
   Value *Op0, *Op1;
 
   if (parseType(Ty, TypeLoc))
@@ -7327,7 +7498,7 @@ int LLParser::parsePHI(Instruction *&Inst, PerFunctionState &PFS) {
 
   bool First = true;
   bool AteExtraComma = false;
-  SmallVector<std::pair<Value*, BasicBlock*>, 16> PHIVals;
+  SmallVector<std::pair<Value *, BasicBlock *>, 16> PHIVals;
 
   while (true) {
     if (First) {
@@ -7366,7 +7537,8 @@ int LLParser::parsePHI(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'filter'
 ///   ::= 'filter' TypeAndValue ( ',' TypeAndValue )*
 bool LLParser::parseLandingPad(Instruction *&Inst, PerFunctionState &PFS) {
-  Type *Ty = nullptr; LocTy TyLoc;
+  Type *Ty = nullptr;
+  LocTy TyLoc;
 
   if (parseType(Ty, TyLoc))
     return true;
@@ -7374,7 +7546,8 @@ bool LLParser::parseLandingPad(Instruction *&Inst, PerFunctionState &PFS) {
   std::unique_ptr<LandingPadInst> LP(LandingPadInst::Create(Ty, 0));
   LP->setCleanup(EatIfPresent(lltok::kw_cleanup));
 
-  while (Lex.getKind() == lltok::kw_catch || Lex.getKind() == lltok::kw_filter){
+  while (Lex.getKind() == lltok::kw_catch ||
+         Lex.getKind() == lltok::kw_filter) {
     LandingPadInst::ClauseType CT;
     if (EatIfPresent(lltok::kw_catch))
       CT = LandingPadInst::Catch;
@@ -7478,7 +7651,7 @@ bool LLParser::parseCall(Instruction *&Inst, PerFunctionState &PFS,
   // Set up the Attribute for the function.
   SmallVector<AttributeSet, 8> Attrs;
 
-  SmallVector<Value*, 8> Args;
+  SmallVector<Value *, 8> Args;
 
   // Loop through FunctionType's arguments and ensure they are specified
   // correctly.  Also, gather any parameter attributes.
@@ -7600,7 +7773,8 @@ int LLParser::parseAlloc(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'load' 'atomic' 'volatile'? TypeAndValue
 ///       'singlethread'? AtomicOrdering (',' 'align' i32)?
 int LLParser::parseLoad(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Val; LocTy Loc;
+  Value *Val;
+  LocTy Loc;
   MaybeAlign Alignment;
   bool AteExtraComma = false;
   bool isAtomic = false;
@@ -7650,7 +7824,8 @@ int LLParser::parseLoad(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'store' 'atomic' 'volatile'? TypeAndValue ',' TypeAndValue
 ///       'singlethread'? AtomicOrdering (',' 'align' i32)?
 int LLParser::parseStore(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Val, *Ptr; LocTy Loc, PtrLoc;
+  Value *Val, *Ptr;
+  LocTy Loc, PtrLoc;
   MaybeAlign Alignment;
   bool AteExtraComma = false;
   bool isAtomic = false;
@@ -7699,7 +7874,8 @@ int LLParser::parseStore(Instruction *&Inst, PerFunctionState &PFS) {
 ///       TypeAndValue 'singlethread'? AtomicOrdering AtomicOrdering ','
 ///       'Align'?
 int LLParser::parseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Ptr, *Cmp, *New; LocTy PtrLoc, CmpLoc, NewLoc;
+  Value *Ptr, *Cmp, *New;
+  LocTy PtrLoc, CmpLoc, NewLoc;
   bool AteExtraComma = false;
   AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
   AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
@@ -7753,7 +7929,8 @@ int LLParser::parseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'atomicrmw' 'volatile'? BinOp TypeAndValue ',' TypeAndValue
 ///       'singlethread'? AtomicOrdering
 int LLParser::parseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
+  Value *Ptr, *Val;
+  LocTy PtrLoc, ValLoc;
   bool AteExtraComma = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
   SyncScope::ID SSID = SyncScope::System;
@@ -7768,17 +7945,39 @@ int LLParser::parseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
   switch (Lex.getKind()) {
   default:
     return tokError("expected binary operation in atomicrmw");
-  case lltok::kw_xchg: Operation = AtomicRMWInst::Xchg; break;
-  case lltok::kw_add: Operation = AtomicRMWInst::Add; break;
-  case lltok::kw_sub: Operation = AtomicRMWInst::Sub; break;
-  case lltok::kw_and: Operation = AtomicRMWInst::And; break;
-  case lltok::kw_nand: Operation = AtomicRMWInst::Nand; break;
-  case lltok::kw_or: Operation = AtomicRMWInst::Or; break;
-  case lltok::kw_xor: Operation = AtomicRMWInst::Xor; break;
-  case lltok::kw_max: Operation = AtomicRMWInst::Max; break;
-  case lltok::kw_min: Operation = AtomicRMWInst::Min; break;
-  case lltok::kw_umax: Operation = AtomicRMWInst::UMax; break;
-  case lltok::kw_umin: Operation = AtomicRMWInst::UMin; break;
+  case lltok::kw_xchg:
+    Operation = AtomicRMWInst::Xchg;
+    break;
+  case lltok::kw_add:
+    Operation = AtomicRMWInst::Add;
+    break;
+  case lltok::kw_sub:
+    Operation = AtomicRMWInst::Sub;
+    break;
+  case lltok::kw_and:
+    Operation = AtomicRMWInst::And;
+    break;
+  case lltok::kw_nand:
+    Operation = AtomicRMWInst::Nand;
+    break;
+  case lltok::kw_or:
+    Operation = AtomicRMWInst::Or;
+    break;
+  case lltok::kw_xor:
+    Operation = AtomicRMWInst::Xor;
+    break;
+  case lltok::kw_max:
+    Operation = AtomicRMWInst::Max;
+    break;
+  case lltok::kw_min:
+    Operation = AtomicRMWInst::Min;
+    break;
+  case lltok::kw_umax:
+    Operation = AtomicRMWInst::UMax;
+    break;
+  case lltok::kw_umin:
+    Operation = AtomicRMWInst::UMin;
+    break;
   case lltok::kw_uinc_wrap:
     Operation = AtomicRMWInst::UIncWrap;
     break;
@@ -7802,7 +8001,7 @@ int LLParser::parseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
     IsFP = true;
     break;
   }
-  Lex.Lex();  // Eat the operation.
+  Lex.Lex(); // Eat the operation.
 
   if (parseTypeAndValue(Ptr, PtrLoc, PFS) ||
       parseToken(lltok::comma, "expected ',' after atomicrmw address") ||
@@ -7889,11 +8088,12 @@ int LLParser::parseGetElementPtr(Instruction *&Inst, PerFunctionState &PFS) {
     return true;
 
   Type *BaseType = Ptr->getType();
-  PointerType *BasePointerType = dyn_cast<PointerType>(BaseType->getScalarType());
+  PointerType *BasePointerType =
+      dyn_cast<PointerType>(BaseType->getScalarType());
   if (!BasePointerType)
     return error(Loc, "base of getelementptr must be a pointer");
 
-  SmallVector<Value*, 16> Indices;
+  SmallVector<Value *, 16> Indices;
   bool AteExtraComma = false;
   // GEP returns a vector of pointers if at least one of parameters is a vector.
   // All vector parameters should have the same vector width.
@@ -7922,7 +8122,7 @@ int LLParser::parseGetElementPtr(Instruction *&Inst, PerFunctionState &PFS) {
     Indices.push_back(Val);
   }
 
-  SmallPtrSet<Type*, 4> Visited;
+  SmallPtrSet<Type *, 4> Visited;
   if (!Indices.empty() && !Ty->isSized(&Visited))
     return error(Loc, "base element of getelementptr must be sized");
 
@@ -7942,7 +8142,8 @@ int LLParser::parseGetElementPtr(Instruction *&Inst, PerFunctionState &PFS) {
 /// parseExtractValue
 ///   ::= 'extractvalue' TypeAndValue (',' uint32)+
 int LLParser::parseExtractValue(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Val; LocTy Loc;
+  Value *Val;
+  LocTy Loc;
   SmallVector<unsigned, 4> Indices;
   bool AteExtraComma;
   if (parseTypeAndValue(Val, Loc, PFS) ||
@@ -7961,7 +8162,8 @@ int LLParser::parseExtractValue(Instruction *&Inst, PerFunctionState &PFS) {
 /// parseInsertValue
 ///   ::= 'insertvalue' TypeAndValue ',' TypeAndValue (',' uint32)+
 int LLParser::parseInsertValue(Instruction *&Inst, PerFunctionState &PFS) {
-  Value *Val0, *Val1; LocTy Loc0, Loc1;
+  Value *Val0, *Val1;
+  LocTy Loc0, Loc1;
   SmallVector<unsigned, 4> Indices;
   bool AteExtraComma;
   if (parseTypeAndValue(Val0, Loc0, PFS) ||
@@ -7973,7 +8175,8 @@ int LLParser::parseInsertValue(Instruction *&Inst, PerFunctionState &PFS) {
   if (!Val0->getType()->isAggregateType())
     return error(Loc0, "insertvalue operand must be aggregate type");
 
-  Type *IndexedType = ExtractValueInst::getIndexedType(Val0->getType(), Indices);
+  Type *IndexedType =
+      ExtractValueInst::getIndexedType(Val0->getType(), Indices);
   if (!IndexedType)
     return error(Loc0, "invalid indices for insertvalue");
   if (IndexedType != Val1->getType())
diff --git a/llvm/lib/BinaryFormat/AMDGPUMetadataVerifier.cpp b/llvm/lib/BinaryFormat/AMDGPUMetadataVerifier.cpp
index 35a79ec04b6e767..1e54714092743cc 100644
--- a/llvm/lib/BinaryFormat/AMDGPUMetadataVerifier.cpp
+++ b/llvm/lib/BinaryFormat/AMDGPUMetadataVerifier.cpp
@@ -76,8 +76,7 @@ bool MetadataVerifier::verifyEntry(
 
 bool MetadataVerifier::verifyScalarEntry(
     msgpack::MapDocNode &MapNode, StringRef Key, bool Required,
-    msgpack::Type SKind,
-    function_ref<bool(msgpack::DocNode &)> verifyValue) {
+    msgpack::Type SKind, function_ref<bool(msgpack::DocNode &)> verifyValue) {
   return verifyEntry(MapNode, Key, Required, [=](msgpack::DocNode &Node) {
     return verifyScalar(Node, SKind, verifyValue);
   });
@@ -95,11 +94,9 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
     return false;
   auto &ArgsMap = Node.getMap();
 
-  if (!verifyScalarEntry(ArgsMap, ".name", false,
-                         msgpack::Type::String))
+  if (!verifyScalarEntry(ArgsMap, ".name", false, msgpack::Type::String))
     return false;
-  if (!verifyScalarEntry(ArgsMap, ".type_name", false,
-                         msgpack::Type::String))
+  if (!verifyScalarEntry(ArgsMap, ".type_name", false, msgpack::Type::String))
     return false;
   if (!verifyIntegerEntry(ArgsMap, ".size", true))
     return false;
@@ -144,8 +141,7 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
   if (!verifyIntegerEntry(ArgsMap, ".pointee_align", false))
     return false;
   if (!verifyScalarEntry(ArgsMap, ".address_space", false,
-                         msgpack::Type::String,
-                         [](msgpack::DocNode &SNode) {
+                         msgpack::Type::String, [](msgpack::DocNode &SNode) {
                            return StringSwitch<bool>(SNode.getString())
                                .Case("private", true)
                                .Case("global", true)
@@ -156,8 +152,7 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
                                .Default(false);
                          }))
     return false;
-  if (!verifyScalarEntry(ArgsMap, ".access", false,
-                         msgpack::Type::String,
+  if (!verifyScalarEntry(ArgsMap, ".access", false, msgpack::Type::String,
                          [](msgpack::DocNode &SNode) {
                            return StringSwitch<bool>(SNode.getString())
                                .Case("read_only", true)
@@ -167,8 +162,7 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
                          }))
     return false;
   if (!verifyScalarEntry(ArgsMap, ".actual_access", false,
-                         msgpack::Type::String,
-                         [](msgpack::DocNode &SNode) {
+                         msgpack::Type::String, [](msgpack::DocNode &SNode) {
                            return StringSwitch<bool>(SNode.getString())
                                .Case("read_only", true)
                                .Case("write_only", true)
@@ -176,8 +170,7 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
                                .Default(false);
                          }))
     return false;
-  if (!verifyScalarEntry(ArgsMap, ".is_const", false,
-                         msgpack::Type::Boolean))
+  if (!verifyScalarEntry(ArgsMap, ".is_const", false, msgpack::Type::Boolean))
     return false;
   if (!verifyScalarEntry(ArgsMap, ".is_restrict", false,
                          msgpack::Type::Boolean))
@@ -185,8 +178,7 @@ bool MetadataVerifier::verifyKernelArgs(msgpack::DocNode &Node) {
   if (!verifyScalarEntry(ArgsMap, ".is_volatile", false,
                          msgpack::Type::Boolean))
     return false;
-  if (!verifyScalarEntry(ArgsMap, ".is_pipe", false,
-                         msgpack::Type::Boolean))
+  if (!verifyScalarEntry(ArgsMap, ".is_pipe", false, msgpack::Type::Boolean))
     return false;
 
   return true;
@@ -197,14 +189,11 @@ bool MetadataVerifier::verifyKernel(msgpack::DocNode &Node) {
     return false;
   auto &KernelMap = Node.getMap();
 
-  if (!verifyScalarEntry(KernelMap, ".name", true,
-                         msgpack::Type::String))
+  if (!verifyScalarEntry(KernelMap, ".name", true, msgpack::Type::String))
     return false;
-  if (!verifyScalarEntry(KernelMap, ".symbol", true,
-                         msgpack::Type::String))
+  if (!verifyScalarEntry(KernelMap, ".symbol", true, msgpack::Type::String))
     return false;
-  if (!verifyScalarEntry(KernelMap, ".language", false,
-                         msgpack::Type::String,
+  if (!verifyScalarEntry(KernelMap, ".language", false, msgpack::Type::String,
                          [](msgpack::DocNode &SNode) {
                            return StringSwitch<bool>(SNode.getString())
                                .Case("OpenCL C", true)
@@ -216,12 +205,15 @@ bool MetadataVerifier::verifyKernel(msgpack::DocNode &Node) {
                                .Default(false);
                          }))
     return false;
-  if (!verifyEntry(
-          KernelMap, ".language_version", false, [this](msgpack::DocNode &Node) {
-            return verifyArray(
-                Node,
-                [this](msgpack::DocNode &Node) { return verifyInteger(Node); }, 2);
-          }))
+  if (!verifyEntry(KernelMap, ".language_version", false,
+                   [this](msgpack::DocNode &Node) {
+                     return verifyArray(
+                         Node,
+                         [this](msgpack::DocNode &Node) {
+                           return verifyInteger(Node);
+                         },
+                         2);
+                   }))
     return false;
   if (!verifyEntry(KernelMap, ".args", false, [this](msgpack::DocNode &Node) {
         return verifyArray(Node, [this](msgpack::DocNode &Node) {
@@ -231,20 +223,22 @@ bool MetadataVerifier::verifyKernel(msgpack::DocNode &Node) {
     return false;
   if (!verifyEntry(KernelMap, ".reqd_workgroup_size", false,
                    [this](msgpack::DocNode &Node) {
-                     return verifyArray(Node,
-                                        [this](msgpack::DocNode &Node) {
-                                          return verifyInteger(Node);
-                                        },
-                                        3);
+                     return verifyArray(
+                         Node,
+                         [this](msgpack::DocNode &Node) {
+                           return verifyInteger(Node);
+                         },
+                         3);
                    }))
     return false;
   if (!verifyEntry(KernelMap, ".workgroup_size_hint", false,
                    [this](msgpack::DocNode &Node) {
-                     return verifyArray(Node,
-                                        [this](msgpack::DocNode &Node) {
-                                          return verifyInteger(Node);
-                                        },
-                                        3);
+                     return verifyArray(
+                         Node,
+                         [this](msgpack::DocNode &Node) {
+                           return verifyInteger(Node);
+                         },
+                         3);
                    }))
     return false;
   if (!verifyScalarEntry(KernelMap, ".vec_type_hint", false,
@@ -281,7 +275,6 @@ bool MetadataVerifier::verifyKernel(msgpack::DocNode &Node) {
   if (!verifyIntegerEntry(KernelMap, ".uniform_work_group_size", false))
     return false;
 
-
   return true;
 }
 
@@ -294,15 +287,16 @@ bool MetadataVerifier::verify(msgpack::DocNode &HSAMetadataRoot) {
           RootMap, "amdhsa.version", true, [this](msgpack::DocNode &Node) {
             return verifyArray(
                 Node,
-                [this](msgpack::DocNode &Node) { return verifyInteger(Node); }, 2);
+                [this](msgpack::DocNode &Node) { return verifyInteger(Node); },
+                2);
           }))
     return false;
-  if (!verifyEntry(
-          RootMap, "amdhsa.printf", false, [this](msgpack::DocNode &Node) {
-            return verifyArray(Node, [this](msgpack::DocNode &Node) {
-              return verifyScalar(Node, msgpack::Type::String);
-            });
-          }))
+  if (!verifyEntry(RootMap, "amdhsa.printf", false,
+                   [this](msgpack::DocNode &Node) {
+                     return verifyArray(Node, [this](msgpack::DocNode &Node) {
+                       return verifyScalar(Node, msgpack::Type::String);
+                     });
+                   }))
     return false;
   if (!verifyEntry(RootMap, "amdhsa.kernels", true,
                    [this](msgpack::DocNode &Node) {
diff --git a/llvm/lib/BinaryFormat/CMakeLists.txt b/llvm/lib/BinaryFormat/CMakeLists.txt
index 38ba2d9e85a068c..224643b984a126b 100644
--- a/llvm/lib/BinaryFormat/CMakeLists.txt
+++ b/llvm/lib/BinaryFormat/CMakeLists.txt
@@ -1,23 +1,11 @@
-add_llvm_component_library(LLVMBinaryFormat
-  AMDGPUMetadataVerifier.cpp
-  COFF.cpp
-  Dwarf.cpp
-  DXContainer.cpp
-  ELF.cpp
-  MachO.cpp
-  Magic.cpp
-  Minidump.cpp
-  MsgPackDocument.cpp
-  MsgPackDocumentYAML.cpp
-  MsgPackReader.cpp
-  MsgPackWriter.cpp
-  Wasm.cpp
-  XCOFF.cpp
+add_llvm_component_library(LLVMBinaryFormat
+  AMDGPUMetadataVerifier.cpp COFF.cpp Dwarf.cpp
+  DXContainer.cpp ELF.cpp MachO.cpp Magic.cpp Minidump.cpp
+  MsgPackDocument.cpp MsgPackDocumentYAML.cpp MsgPackReader.cpp
+  MsgPackWriter.cpp Wasm.cpp XCOFF.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/BinaryFormat
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/BinaryFormat
+
 
-  LINK_COMPONENTS
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS Support TargetParser)
diff --git a/llvm/lib/BinaryFormat/Dwarf.cpp b/llvm/lib/BinaryFormat/Dwarf.cpp
index e4e5b5dd8c0e051..3844e54cc3989f8 100644
--- a/llvm/lib/BinaryFormat/Dwarf.cpp
+++ b/llvm/lib/BinaryFormat/Dwarf.cpp
@@ -569,15 +569,17 @@ StringRef llvm::dwarf::LocListEncodingString(unsigned Encoding) {
 }
 
 StringRef llvm::dwarf::CallFrameString(unsigned Encoding,
-    Triple::ArchType Arch) {
+                                       Triple::ArchType Arch) {
   assert(Arch != llvm::Triple::ArchType::UnknownArch);
-#define SELECT_AARCH64 (Arch == llvm::Triple::aarch64_be || Arch == llvm::Triple::aarch64)
+#define SELECT_AARCH64                                                         \
+  (Arch == llvm::Triple::aarch64_be || Arch == llvm::Triple::aarch64)
 #define SELECT_MIPS64 Arch == llvm::Triple::mips64
-#define SELECT_SPARC (Arch == llvm::Triple::sparc || Arch == llvm::Triple::sparcv9)
+#define SELECT_SPARC                                                           \
+  (Arch == llvm::Triple::sparc || Arch == llvm::Triple::sparcv9)
 #define SELECT_X86 (Arch == llvm::Triple::x86 || Arch == llvm::Triple::x86_64)
 #define HANDLE_DW_CFA(ID, NAME)
-#define HANDLE_DW_CFA_PRED(ID, NAME, PRED) \
-  if (ID == Encoding && PRED) \
+#define HANDLE_DW_CFA_PRED(ID, NAME, PRED)                                     \
+  if (ID == Encoding && PRED)                                                  \
     return "DW_CFA_" #NAME;
 #include "llvm/BinaryFormat/Dwarf.def"
 
diff --git a/llvm/lib/BinaryFormat/MsgPackDocumentYAML.cpp b/llvm/lib/BinaryFormat/MsgPackDocumentYAML.cpp
index 3de3dccce0c6c94..a58546aafd8dc02 100644
--- a/llvm/lib/BinaryFormat/MsgPackDocumentYAML.cpp
+++ b/llvm/lib/BinaryFormat/MsgPackDocumentYAML.cpp
@@ -245,4 +245,3 @@ bool msgpack::Document::fromYAML(StringRef S) {
   Yin >> getRoot();
   return !Yin.error();
 }
-
diff --git a/llvm/lib/Bitcode/CMakeLists.txt b/llvm/lib/Bitcode/CMakeLists.txt
index ff7e290cad1bbdc..6403e1ac766ffc6 100644
--- a/llvm/lib/Bitcode/CMakeLists.txt
+++ b/llvm/lib/Bitcode/CMakeLists.txt
@@ -1,2 +1,2 @@
+add_subdirectory(Reader)
+add_subdirectory(Writer)
diff --git a/llvm/lib/Bitcode/Reader/BitReader.cpp b/llvm/lib/Bitcode/Reader/BitReader.cpp
index da2cf0770ec5aba..e369dc18ac8d1ca 100644
--- a/llvm/lib/Bitcode/Reader/BitReader.cpp
+++ b/llvm/lib/Bitcode/Reader/BitReader.cpp
@@ -41,9 +41,8 @@ LLVMBool LLVMParseBitcodeInContext(LLVMContextRef ContextRef,
   Expected<std::unique_ptr<Module>> ModuleOrErr = parseBitcodeFile(Buf, Ctx);
   if (Error Err = ModuleOrErr.takeError()) {
     std::string Message;
-    handleAllErrors(std::move(Err), [&](ErrorInfoBase &EIB) {
-      Message = EIB.message();
-    });
+    handleAllErrors(std::move(Err),
+                    [&](ErrorInfoBase &EIB) { Message = EIB.message(); });
     if (OutMessage)
       *OutMessage = strdup(Message.c_str());
     *OutModule = wrap((Module *)nullptr);
@@ -87,9 +86,8 @@ LLVMBool LLVMGetBitcodeModuleInContext(LLVMContextRef ContextRef,
 
   if (Error Err = ModuleOrErr.takeError()) {
     std::string Message;
-    handleAllErrors(std::move(Err), [&](ErrorInfoBase &EIB) {
-      Message = EIB.message();
-    });
+    handleAllErrors(std::move(Err),
+                    [&](ErrorInfoBase &EIB) { Message = EIB.message(); });
     if (OutMessage)
       *OutMessage = strdup(Message.c_str());
     *OutM = wrap((Module *)nullptr);
diff --git a/llvm/lib/Bitcode/Reader/BitcodeAnalyzer.cpp b/llvm/lib/Bitcode/Reader/BitcodeAnalyzer.cpp
index 7005011980ebc95..cb77df8561459d8 100644
--- a/llvm/lib/Bitcode/Reader/BitcodeAnalyzer.cpp
+++ b/llvm/lib/Bitcode/Reader/BitcodeAnalyzer.cpp
@@ -993,4 +993,3 @@ Error BitcodeAnalyzer::parseBlock(unsigned BlockID, unsigned IndentLevel,
       return Skipped.takeError();
   }
 }
-
diff --git a/llvm/lib/Bitcode/Reader/BitcodeReader.cpp b/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
index 1d1ec988a93d847..f1785771810bb24 100644
--- a/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
+++ b/llvm/lib/Bitcode/Reader/BitcodeReader.cpp
@@ -216,9 +216,9 @@ static Expected<std::string> readIdentificationBlock(BitstreamCursor &Stream) {
     case bitc::IDENTIFICATION_CODE_EPOCH: { // EPOCH: [epoch#]
       unsigned epoch = (unsigned)Record[0];
       if (epoch != bitc::BITCODE_CURRENT_EPOCH) {
-        return error(
-          Twine("Incompatible epoch: Bitcode '") + Twine(epoch) +
-          "' vs current: '" + Twine(bitc::BITCODE_CURRENT_EPOCH) + "'");
+        return error(Twine("Incompatible epoch: Bitcode '") + Twine(epoch) +
+                     "' vs current: '" + Twine(bitc::BITCODE_CURRENT_EPOCH) +
+                     "'");
       }
     }
     }
@@ -366,8 +366,9 @@ static Expected<std::string> readModuleTriple(BitstreamCursor &Stream) {
     if (!MaybeRecord)
       return MaybeRecord.takeError();
     switch (MaybeRecord.get()) {
-    default: break;  // Default behavior, ignore unknown content.
-    case bitc::MODULE_CODE_TRIPLE: {  // TRIPLE: [strchr x N]
+    default:
+      break; // Default behavior, ignore unknown content.
+    case bitc::MODULE_CODE_TRIPLE: { // TRIPLE: [strchr x N]
       std::string S;
       if (convertToString(Record, 0, S))
         return error("Invalid triple record");
@@ -619,11 +620,11 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
 
   /// While parsing a function body, this is a list of the basic blocks for the
   /// function.
-  std::vector<BasicBlock*> FunctionBBs;
+  std::vector<BasicBlock *> FunctionBBs;
 
   // When reading the module header, this list is populated with functions that
   // have bodies later in the file.
-  std::vector<Function*> FunctionsWithBodies;
+  std::vector<Function *> FunctionsWithBodies;
 
   // When intrinsic functions are encountered which require upgrading they are
   // stored here with their replacement function.
@@ -637,7 +638,7 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
 
   /// When function bodies are initially scanned, this map contains info about
   /// where to find deferred function body in the stream.
-  DenseMap<Function*, uint64_t> DeferredFunctionInfo;
+  DenseMap<Function *, uint64_t> DeferredFunctionInfo;
 
   /// When Metadata block is initially scanned when parsing the module, we may
   /// choose to defer parsing of the metadata. This vector contains info about
@@ -727,13 +728,14 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   }
 
   BasicBlock *getBasicBlock(unsigned ID) const {
-    if (ID >= FunctionBBs.size()) return nullptr; // Invalid ID
+    if (ID >= FunctionBBs.size())
+      return nullptr; // Invalid ID
     return FunctionBBs[ID];
   }
 
   AttributeList getAttributes(unsigned i) const {
-    if (i-1 < MAttributes.size())
-      return MAttributes[i-1];
+    if (i - 1 < MAttributes.size())
+      return MAttributes[i - 1];
     return AttributeList();
   }
 
@@ -743,7 +745,8 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   bool getValueTypePair(const SmallVectorImpl<uint64_t> &Record, unsigned &Slot,
                         unsigned InstNum, Value *&ResVal, unsigned &TypeID,
                         BasicBlock *ConstExprInsertBB) {
-    if (Slot == Record.size()) return true;
+    if (Slot == Record.size())
+      return true;
     unsigned ValNo = (unsigned)Record[Slot++];
     // Adjust the ValNo, if it was encoded relative to the InstNum.
     if (UseRelativeIDs)
@@ -761,8 +764,8 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
       return true;
 
     TypeID = (unsigned)Record[Slot++];
-    ResVal = getFnValueByID(ValNo, getTypeByID(TypeID), TypeID,
-                            ConstExprInsertBB);
+    ResVal =
+        getFnValueByID(ValNo, getTypeByID(TypeID), TypeID, ConstExprInsertBB);
     return ResVal == nullptr;
   }
 
@@ -792,7 +795,8 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   Value *getValue(const SmallVectorImpl<uint64_t> &Record, unsigned Slot,
                   unsigned InstNum, Type *Ty, unsigned TyID,
                   BasicBlock *ConstExprInsertBB) {
-    if (Slot == Record.size()) return nullptr;
+    if (Slot == Record.size())
+      return nullptr;
     unsigned ValNo = (unsigned)Record[Slot];
     // Adjust the ValNo, if it was encoded relative to the InstNum.
     if (UseRelativeIDs)
@@ -804,7 +808,8 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   Value *getValueSigned(const SmallVectorImpl<uint64_t> &Record, unsigned Slot,
                         unsigned InstNum, Type *Ty, unsigned TyID,
                         BasicBlock *ConstExprInsertBB) {
-    if (Slot == Record.size()) return nullptr;
+    if (Slot == Record.size())
+      return nullptr;
     unsigned ValNo = (unsigned)decodeSignRotatedValue(Record[Slot]);
     // Adjust the ValNo, if it was encoded relative to the InstNum.
     if (UseRelativeIDs)
@@ -1055,7 +1060,7 @@ static GlobalValue::LinkageTypes getDecodedLinkage(unsigned Val) {
     return GlobalValue::PrivateLinkage; // Obsolete LinkerPrivateWeakLinkage
   case 15:
     return GlobalValue::ExternalLinkage; // Obsolete LinkOnceODRAutoHideLinkage
-  case 1: // Old value with implicit comdat.
+  case 1:                                // Old value with implicit comdat.
   case 16:
     return GlobalValue::WeakAnyLinkage;
   case 10: // Old value with implicit comdat.
@@ -1094,7 +1099,7 @@ static GlobalValueSummary::GVFlags getDecodedGVSummaryFlags(uint64_t RawFlags,
   // Summary were not emitted before LLVM 3.9, we don't need to upgrade Linkage
   // like getDecodedLinkage() above. Any future change to the linkage enum and
   // to getDecodedLinkage() will need to be taken into account here as above.
-  auto Linkage = GlobalValue::LinkageTypes(RawFlags & 0xF); // 4 bits
+  auto Linkage = GlobalValue::LinkageTypes(RawFlags & 0xF);            // 4 bits
   auto Visibility = GlobalValue::VisibilityTypes((RawFlags >> 8) & 3); // 2 bits
   RawFlags = RawFlags >> 4;
   bool NotEligibleToImport = (RawFlags & 0x1) || Version < 3;
@@ -1120,9 +1125,12 @@ static GlobalVarSummary::GVarFlags getDecodedGVarFlags(uint64_t RawFlags) {
 static GlobalValue::VisibilityTypes getDecodedVisibility(unsigned Val) {
   switch (Val) {
   default: // Map unknown visibilities to default.
-  case 0: return GlobalValue::DefaultVisibility;
-  case 1: return GlobalValue::HiddenVisibility;
-  case 2: return GlobalValue::ProtectedVisibility;
+  case 0:
+    return GlobalValue::DefaultVisibility;
+  case 1:
+    return GlobalValue::HiddenVisibility;
+  case 2:
+    return GlobalValue::ProtectedVisibility;
   }
 }
 
@@ -1130,56 +1138,83 @@ static GlobalValue::DLLStorageClassTypes
 getDecodedDLLStorageClass(unsigned Val) {
   switch (Val) {
   default: // Map unknown values to default.
-  case 0: return GlobalValue::DefaultStorageClass;
-  case 1: return GlobalValue::DLLImportStorageClass;
-  case 2: return GlobalValue::DLLExportStorageClass;
+  case 0:
+    return GlobalValue::DefaultStorageClass;
+  case 1:
+    return GlobalValue::DLLImportStorageClass;
+  case 2:
+    return GlobalValue::DLLExportStorageClass;
   }
 }
 
 static bool getDecodedDSOLocal(unsigned Val) {
-  switch(Val) {
+  switch (Val) {
   default: // Map unknown values to preemptable.
-  case 0:  return false;
-  case 1:  return true;
+  case 0:
+    return false;
+  case 1:
+    return true;
   }
 }
 
 static GlobalVariable::ThreadLocalMode getDecodedThreadLocalMode(unsigned Val) {
   switch (Val) {
-    case 0: return GlobalVariable::NotThreadLocal;
-    default: // Map unknown non-zero value to general dynamic.
-    case 1: return GlobalVariable::GeneralDynamicTLSModel;
-    case 2: return GlobalVariable::LocalDynamicTLSModel;
-    case 3: return GlobalVariable::InitialExecTLSModel;
-    case 4: return GlobalVariable::LocalExecTLSModel;
+  case 0:
+    return GlobalVariable::NotThreadLocal;
+  default: // Map unknown non-zero value to general dynamic.
+  case 1:
+    return GlobalVariable::GeneralDynamicTLSModel;
+  case 2:
+    return GlobalVariable::LocalDynamicTLSModel;
+  case 3:
+    return GlobalVariable::InitialExecTLSModel;
+  case 4:
+    return GlobalVariable::LocalExecTLSModel;
   }
 }
 
 static GlobalVariable::UnnamedAddr getDecodedUnnamedAddrType(unsigned Val) {
   switch (Val) {
-    default: // Map unknown to UnnamedAddr::None.
-    case 0: return GlobalVariable::UnnamedAddr::None;
-    case 1: return GlobalVariable::UnnamedAddr::Global;
-    case 2: return GlobalVariable::UnnamedAddr::Local;
+  default: // Map unknown to UnnamedAddr::None.
+  case 0:
+    return GlobalVariable::UnnamedAddr::None;
+  case 1:
+    return GlobalVariable::UnnamedAddr::Global;
+  case 2:
+    return GlobalVariable::UnnamedAddr::Local;
   }
 }
 
 static int getDecodedCastOpcode(unsigned Val) {
   switch (Val) {
-  default: return -1;
-  case bitc::CAST_TRUNC   : return Instruction::Trunc;
-  case bitc::CAST_ZEXT    : return Instruction::ZExt;
-  case bitc::CAST_SEXT    : return Instruction::SExt;
-  case bitc::CAST_FPTOUI  : return Instruction::FPToUI;
-  case bitc::CAST_FPTOSI  : return Instruction::FPToSI;
-  case bitc::CAST_UITOFP  : return Instruction::UIToFP;
-  case bitc::CAST_SITOFP  : return Instruction::SIToFP;
-  case bitc::CAST_FPTRUNC : return Instruction::FPTrunc;
-  case bitc::CAST_FPEXT   : return Instruction::FPExt;
-  case bitc::CAST_PTRTOINT: return Instruction::PtrToInt;
-  case bitc::CAST_INTTOPTR: return Instruction::IntToPtr;
-  case bitc::CAST_BITCAST : return Instruction::BitCast;
-  case bitc::CAST_ADDRSPACECAST: return Instruction::AddrSpaceCast;
+  default:
+    return -1;
+  case bitc::CAST_TRUNC:
+    return Instruction::Trunc;
+  case bitc::CAST_ZEXT:
+    return Instruction::ZExt;
+  case bitc::CAST_SEXT:
+    return Instruction::SExt;
+  case bitc::CAST_FPTOUI:
+    return Instruction::FPToUI;
+  case bitc::CAST_FPTOSI:
+    return Instruction::FPToSI;
+  case bitc::CAST_UITOFP:
+    return Instruction::UIToFP;
+  case bitc::CAST_SITOFP:
+    return Instruction::SIToFP;
+  case bitc::CAST_FPTRUNC:
+    return Instruction::FPTrunc;
+  case bitc::CAST_FPEXT:
+    return Instruction::FPExt;
+  case bitc::CAST_PTRTOINT:
+    return Instruction::PtrToInt;
+  case bitc::CAST_INTTOPTR:
+    return Instruction::IntToPtr;
+  case bitc::CAST_BITCAST:
+    return Instruction::BitCast;
+  case bitc::CAST_ADDRSPACECAST:
+    return Instruction::AddrSpaceCast;
   }
 }
 
@@ -1237,22 +1272,38 @@ static int getDecodedBinaryOpcode(unsigned Val, Type *Ty) {
 
 static AtomicRMWInst::BinOp getDecodedRMWOperation(unsigned Val) {
   switch (Val) {
-  default: return AtomicRMWInst::BAD_BINOP;
-  case bitc::RMW_XCHG: return AtomicRMWInst::Xchg;
-  case bitc::RMW_ADD: return AtomicRMWInst::Add;
-  case bitc::RMW_SUB: return AtomicRMWInst::Sub;
-  case bitc::RMW_AND: return AtomicRMWInst::And;
-  case bitc::RMW_NAND: return AtomicRMWInst::Nand;
-  case bitc::RMW_OR: return AtomicRMWInst::Or;
-  case bitc::RMW_XOR: return AtomicRMWInst::Xor;
-  case bitc::RMW_MAX: return AtomicRMWInst::Max;
-  case bitc::RMW_MIN: return AtomicRMWInst::Min;
-  case bitc::RMW_UMAX: return AtomicRMWInst::UMax;
-  case bitc::RMW_UMIN: return AtomicRMWInst::UMin;
-  case bitc::RMW_FADD: return AtomicRMWInst::FAdd;
-  case bitc::RMW_FSUB: return AtomicRMWInst::FSub;
-  case bitc::RMW_FMAX: return AtomicRMWInst::FMax;
-  case bitc::RMW_FMIN: return AtomicRMWInst::FMin;
+  default:
+    return AtomicRMWInst::BAD_BINOP;
+  case bitc::RMW_XCHG:
+    return AtomicRMWInst::Xchg;
+  case bitc::RMW_ADD:
+    return AtomicRMWInst::Add;
+  case bitc::RMW_SUB:
+    return AtomicRMWInst::Sub;
+  case bitc::RMW_AND:
+    return AtomicRMWInst::And;
+  case bitc::RMW_NAND:
+    return AtomicRMWInst::Nand;
+  case bitc::RMW_OR:
+    return AtomicRMWInst::Or;
+  case bitc::RMW_XOR:
+    return AtomicRMWInst::Xor;
+  case bitc::RMW_MAX:
+    return AtomicRMWInst::Max;
+  case bitc::RMW_MIN:
+    return AtomicRMWInst::Min;
+  case bitc::RMW_UMAX:
+    return AtomicRMWInst::UMax;
+  case bitc::RMW_UMIN:
+    return AtomicRMWInst::UMin;
+  case bitc::RMW_FADD:
+    return AtomicRMWInst::FAdd;
+  case bitc::RMW_FSUB:
+    return AtomicRMWInst::FSub;
+  case bitc::RMW_FMAX:
+    return AtomicRMWInst::FMax;
+  case bitc::RMW_FMIN:
+    return AtomicRMWInst::FMin;
   case bitc::RMW_UINC_WRAP:
     return AtomicRMWInst::UIncWrap;
   case bitc::RMW_UDEC_WRAP:
@@ -1262,14 +1313,21 @@ static AtomicRMWInst::BinOp getDecodedRMWOperation(unsigned Val) {
 
 static AtomicOrdering getDecodedOrdering(unsigned Val) {
   switch (Val) {
-  case bitc::ORDERING_NOTATOMIC: return AtomicOrdering::NotAtomic;
-  case bitc::ORDERING_UNORDERED: return AtomicOrdering::Unordered;
-  case bitc::ORDERING_MONOTONIC: return AtomicOrdering::Monotonic;
-  case bitc::ORDERING_ACQUIRE: return AtomicOrdering::Acquire;
-  case bitc::ORDERING_RELEASE: return AtomicOrdering::Release;
-  case bitc::ORDERING_ACQREL: return AtomicOrdering::AcquireRelease;
+  case bitc::ORDERING_NOTATOMIC:
+    return AtomicOrdering::NotAtomic;
+  case bitc::ORDERING_UNORDERED:
+    return AtomicOrdering::Unordered;
+  case bitc::ORDERING_MONOTONIC:
+    return AtomicOrdering::Monotonic;
+  case bitc::ORDERING_ACQUIRE:
+    return AtomicOrdering::Acquire;
+  case bitc::ORDERING_RELEASE:
+    return AtomicOrdering::Release;
+  case bitc::ORDERING_ACQREL:
+    return AtomicOrdering::AcquireRelease;
   default: // Map unknown orderings to sequentially-consistent.
-  case bitc::ORDERING_SEQCST: return AtomicOrdering::SequentiallyConsistent;
+  case bitc::ORDERING_SEQCST:
+    return AtomicOrdering::SequentiallyConsistent;
   }
 }
 
@@ -1315,8 +1373,12 @@ static void upgradeDLLImportExportLinkage(GlobalValue *GV, unsigned Val) {
   if (GV->hasLocalLinkage())
     return;
   switch (Val) {
-  case 5: GV->setDLLStorageClass(GlobalValue::DLLImportStorageClass); break;
-  case 6: GV->setDLLStorageClass(GlobalValue::DLLExportStorageClass); break;
+  case 5:
+    GV->setDLLStorageClass(GlobalValue::DLLImportStorageClass);
+    break;
+  case 6:
+    GV->setDLLStorageClass(GlobalValue::DLLExportStorageClass);
+    break;
   }
 }
 
@@ -1679,61 +1741,114 @@ static uint64_t getRawAttributeMask(Attribute::AttrKind Val) {
   case Attribute::TombstoneKey:
     llvm_unreachable("Synthetic enumerators which should never get here");
 
-  case Attribute::None:            return 0;
-  case Attribute::ZExt:            return 1 << 0;
-  case Attribute::SExt:            return 1 << 1;
-  case Attribute::NoReturn:        return 1 << 2;
-  case Attribute::InReg:           return 1 << 3;
-  case Attribute::StructRet:       return 1 << 4;
-  case Attribute::NoUnwind:        return 1 << 5;
-  case Attribute::NoAlias:         return 1 << 6;
-  case Attribute::ByVal:           return 1 << 7;
-  case Attribute::Nest:            return 1 << 8;
-  case Attribute::ReadNone:        return 1 << 9;
-  case Attribute::ReadOnly:        return 1 << 10;
-  case Attribute::NoInline:        return 1 << 11;
-  case Attribute::AlwaysInline:    return 1 << 12;
-  case Attribute::OptimizeForSize: return 1 << 13;
-  case Attribute::StackProtect:    return 1 << 14;
-  case Attribute::StackProtectReq: return 1 << 15;
-  case Attribute::Alignment:       return 31 << 16;
-  case Attribute::NoCapture:       return 1 << 21;
-  case Attribute::NoRedZone:       return 1 << 22;
-  case Attribute::NoImplicitFloat: return 1 << 23;
-  case Attribute::Naked:           return 1 << 24;
-  case Attribute::InlineHint:      return 1 << 25;
-  case Attribute::StackAlignment:  return 7 << 26;
-  case Attribute::ReturnsTwice:    return 1 << 29;
-  case Attribute::UWTable:         return 1 << 30;
-  case Attribute::NonLazyBind:     return 1U << 31;
-  case Attribute::SanitizeAddress: return 1ULL << 32;
-  case Attribute::MinSize:         return 1ULL << 33;
-  case Attribute::NoDuplicate:     return 1ULL << 34;
-  case Attribute::StackProtectStrong: return 1ULL << 35;
-  case Attribute::SanitizeThread:  return 1ULL << 36;
-  case Attribute::SanitizeMemory:  return 1ULL << 37;
-  case Attribute::NoBuiltin:       return 1ULL << 38;
-  case Attribute::Returned:        return 1ULL << 39;
-  case Attribute::Cold:            return 1ULL << 40;
-  case Attribute::Builtin:         return 1ULL << 41;
-  case Attribute::OptimizeNone:    return 1ULL << 42;
-  case Attribute::InAlloca:        return 1ULL << 43;
-  case Attribute::NonNull:         return 1ULL << 44;
-  case Attribute::JumpTable:       return 1ULL << 45;
-  case Attribute::Convergent:      return 1ULL << 46;
-  case Attribute::SafeStack:       return 1ULL << 47;
-  case Attribute::NoRecurse:       return 1ULL << 48;
+  case Attribute::None:
+    return 0;
+  case Attribute::ZExt:
+    return 1 << 0;
+  case Attribute::SExt:
+    return 1 << 1;
+  case Attribute::NoReturn:
+    return 1 << 2;
+  case Attribute::InReg:
+    return 1 << 3;
+  case Attribute::StructRet:
+    return 1 << 4;
+  case Attribute::NoUnwind:
+    return 1 << 5;
+  case Attribute::NoAlias:
+    return 1 << 6;
+  case Attribute::ByVal:
+    return 1 << 7;
+  case Attribute::Nest:
+    return 1 << 8;
+  case Attribute::ReadNone:
+    return 1 << 9;
+  case Attribute::ReadOnly:
+    return 1 << 10;
+  case Attribute::NoInline:
+    return 1 << 11;
+  case Attribute::AlwaysInline:
+    return 1 << 12;
+  case Attribute::OptimizeForSize:
+    return 1 << 13;
+  case Attribute::StackProtect:
+    return 1 << 14;
+  case Attribute::StackProtectReq:
+    return 1 << 15;
+  case Attribute::Alignment:
+    return 31 << 16;
+  case Attribute::NoCapture:
+    return 1 << 21;
+  case Attribute::NoRedZone:
+    return 1 << 22;
+  case Attribute::NoImplicitFloat:
+    return 1 << 23;
+  case Attribute::Naked:
+    return 1 << 24;
+  case Attribute::InlineHint:
+    return 1 << 25;
+  case Attribute::StackAlignment:
+    return 7 << 26;
+  case Attribute::ReturnsTwice:
+    return 1 << 29;
+  case Attribute::UWTable:
+    return 1 << 30;
+  case Attribute::NonLazyBind:
+    return 1U << 31;
+  case Attribute::SanitizeAddress:
+    return 1ULL << 32;
+  case Attribute::MinSize:
+    return 1ULL << 33;
+  case Attribute::NoDuplicate:
+    return 1ULL << 34;
+  case Attribute::StackProtectStrong:
+    return 1ULL << 35;
+  case Attribute::SanitizeThread:
+    return 1ULL << 36;
+  case Attribute::SanitizeMemory:
+    return 1ULL << 37;
+  case Attribute::NoBuiltin:
+    return 1ULL << 38;
+  case Attribute::Returned:
+    return 1ULL << 39;
+  case Attribute::Cold:
+    return 1ULL << 40;
+  case Attribute::Builtin:
+    return 1ULL << 41;
+  case Attribute::OptimizeNone:
+    return 1ULL << 42;
+  case Attribute::InAlloca:
+    return 1ULL << 43;
+  case Attribute::NonNull:
+    return 1ULL << 44;
+  case Attribute::JumpTable:
+    return 1ULL << 45;
+  case Attribute::Convergent:
+    return 1ULL << 46;
+  case Attribute::SafeStack:
+    return 1ULL << 47;
+  case Attribute::NoRecurse:
+    return 1ULL << 48;
   // 1ULL << 49 is InaccessibleMemOnly, which is upgraded separately.
   // 1ULL << 50 is InaccessibleMemOrArgMemOnly, which is upgraded separately.
-  case Attribute::SwiftSelf:       return 1ULL << 51;
-  case Attribute::SwiftError:      return 1ULL << 52;
-  case Attribute::WriteOnly:       return 1ULL << 53;
-  case Attribute::Speculatable:    return 1ULL << 54;
-  case Attribute::StrictFP:        return 1ULL << 55;
-  case Attribute::SanitizeHWAddress: return 1ULL << 56;
-  case Attribute::NoCfCheck:       return 1ULL << 57;
-  case Attribute::OptForFuzzing:   return 1ULL << 58;
-  case Attribute::ShadowCallStack: return 1ULL << 59;
+  case Attribute::SwiftSelf:
+    return 1ULL << 51;
+  case Attribute::SwiftError:
+    return 1ULL << 52;
+  case Attribute::WriteOnly:
+    return 1ULL << 53;
+  case Attribute::Speculatable:
+    return 1ULL << 54;
+  case Attribute::StrictFP:
+    return 1ULL << 55;
+  case Attribute::SanitizeHWAddress:
+    return 1ULL << 56;
+  case Attribute::NoCfCheck:
+    return 1ULL << 57;
+  case Attribute::OptForFuzzing:
+    return 1ULL << 58;
+  case Attribute::ShadowCallStack:
+    return 1ULL << 59;
   case Attribute::SpeculativeLoadHardening:
     return 1ULL << 60;
   case Attribute::ImmArg:
@@ -1751,7 +1866,8 @@ static uint64_t getRawAttributeMask(Attribute::AttrKind Val) {
 }
 
 static void addRawAttributeValue(AttrBuilder &B, uint64_t Val) {
-  if (!Val) return;
+  if (!Val)
+    return;
 
   for (Attribute::AttrKind I = Attribute::None; I != Attribute::EndAttrKinds;
        I = Attribute::AttrKind(I + 1)) {
@@ -1759,7 +1875,7 @@ static void addRawAttributeValue(AttrBuilder &B, uint64_t Val) {
       if (I == Attribute::Alignment)
         B.addAlignmentAttr(1ULL << ((A >> 16) - 1));
       else if (I == Attribute::StackAlignment)
-        B.addStackAlignmentAttr(1ULL << ((A >> 26)-1));
+        B.addStackAlignmentAttr(1ULL << ((A >> 26) - 1));
       else if (Attribute::isTypeAttrKind(I))
         B.addTypeAttr(I, nullptr); // Type will be auto-upgraded.
       else
@@ -1783,8 +1899,8 @@ static void decodeLLVMAttributesForBitcode(AttrBuilder &B,
   if (Alignment)
     B.addAlignmentAttr(Alignment);
 
-  uint64_t Attrs = ((EncodedAttrs & (0xfffffULL << 32)) >> 11) |
-                   (EncodedAttrs & 0xffff);
+  uint64_t Attrs =
+      ((EncodedAttrs & (0xfffffULL << 32)) >> 11) | (EncodedAttrs & 0xffff);
 
   if (AttrIdx == AttributeList::FunctionIndex) {
     // Upgrade old memory attributes.
@@ -1856,7 +1972,7 @@ Error BitcodeReader::parseAttributeBlock() {
     if (!MaybeRecord)
       return MaybeRecord.takeError();
     switch (MaybeRecord.get()) {
-    default:  // Default behavior: ignore.
+    default: // Default behavior: ignore.
       break;
     case bitc::PARAMATTR_CODE_ENTRY_OLD: // ENTRY: [paramidx0, attr0, ...]
       // Deprecated, but still needed to read old bitcode files.
@@ -1865,7 +1981,7 @@ Error BitcodeReader::parseAttributeBlock() {
 
       for (unsigned i = 0, e = Record.size(); i != e; i += 2) {
         AttrBuilder B(Context);
-        decodeLLVMAttributesForBitcode(B, Record[i+1], Record[i]);
+        decodeLLVMAttributesForBitcode(B, Record[i + 1], Record[i]);
         Attrs.push_back(AttributeList::get(Context, Record[i], B));
       }
 
@@ -2134,7 +2250,7 @@ Error BitcodeReader::parseAttributeGroupBlock() {
     if (!MaybeRecord)
       return MaybeRecord.takeError();
     switch (MaybeRecord.get()) {
-    default:  // Default behavior: ignore.
+    default: // Default behavior: ignore.
       break;
     case bitc::PARAMATTR_GRP_CODE_ENTRY: { // ENTRY: [grpid, idx, a0, a1, ...]
       if (Record.size() < 3)
@@ -2146,7 +2262,7 @@ Error BitcodeReader::parseAttributeGroupBlock() {
       AttrBuilder B(Context);
       MemoryEffects ME = MemoryEffects::unknown();
       for (unsigned i = 2, e = Record.size(); i != e; ++i) {
-        if (Record[i] == 0) {        // Enum attribute
+        if (Record[i] == 0) { // Enum attribute
           Attribute::AttrKind Kind;
           uint64_t EncodedKind = Record[++i];
           if (Idx == AttributeList::FunctionIndex &&
@@ -2294,43 +2410,43 @@ Error BitcodeReader::parseTypeTableBody() {
         return error("Invalid numentry record");
       TypeList.resize(Record[0]);
       continue;
-    case bitc::TYPE_CODE_VOID:      // VOID
+    case bitc::TYPE_CODE_VOID: // VOID
       ResultTy = Type::getVoidTy(Context);
       break;
-    case bitc::TYPE_CODE_HALF:     // HALF
+    case bitc::TYPE_CODE_HALF: // HALF
       ResultTy = Type::getHalfTy(Context);
       break;
-    case bitc::TYPE_CODE_BFLOAT:    // BFLOAT
+    case bitc::TYPE_CODE_BFLOAT: // BFLOAT
       ResultTy = Type::getBFloatTy(Context);
       break;
-    case bitc::TYPE_CODE_FLOAT:     // FLOAT
+    case bitc::TYPE_CODE_FLOAT: // FLOAT
       ResultTy = Type::getFloatTy(Context);
       break;
-    case bitc::TYPE_CODE_DOUBLE:    // DOUBLE
+    case bitc::TYPE_CODE_DOUBLE: // DOUBLE
       ResultTy = Type::getDoubleTy(Context);
       break;
-    case bitc::TYPE_CODE_X86_FP80:  // X86_FP80
+    case bitc::TYPE_CODE_X86_FP80: // X86_FP80
       ResultTy = Type::getX86_FP80Ty(Context);
       break;
-    case bitc::TYPE_CODE_FP128:     // FP128
+    case bitc::TYPE_CODE_FP128: // FP128
       ResultTy = Type::getFP128Ty(Context);
       break;
     case bitc::TYPE_CODE_PPC_FP128: // PPC_FP128
       ResultTy = Type::getPPC_FP128Ty(Context);
       break;
-    case bitc::TYPE_CODE_LABEL:     // LABEL
+    case bitc::TYPE_CODE_LABEL: // LABEL
       ResultTy = Type::getLabelTy(Context);
       break;
-    case bitc::TYPE_CODE_METADATA:  // METADATA
+    case bitc::TYPE_CODE_METADATA: // METADATA
       ResultTy = Type::getMetadataTy(Context);
       break;
-    case bitc::TYPE_CODE_X86_MMX:   // X86_MMX
+    case bitc::TYPE_CODE_X86_MMX: // X86_MMX
       ResultTy = Type::getX86_MMXTy(Context);
       break;
-    case bitc::TYPE_CODE_X86_AMX:   // X86_AMX
+    case bitc::TYPE_CODE_X86_AMX: // X86_AMX
       ResultTy = Type::getX86_AMXTy(Context);
       break;
-    case bitc::TYPE_CODE_TOKEN:     // TOKEN
+    case bitc::TYPE_CODE_TOKEN: // TOKEN
       ResultTy = Type::getTokenTy(Context);
       break;
     case bitc::TYPE_CODE_INTEGER: { // INTEGER: [width]
@@ -2352,8 +2468,7 @@ Error BitcodeReader::parseTypeTableBody() {
       if (Record.size() == 2)
         AddressSpace = Record[1];
       ResultTy = getTypeByID(Record[0]);
-      if (!ResultTy ||
-          !PointerType::isValidElementType(ResultTy))
+      if (!ResultTy || !PointerType::isValidElementType(ResultTy))
         return error("Invalid type");
       ContainedIDs.push_back(Record[0]);
       ResultTy = PointerType::get(ResultTy, AddressSpace);
@@ -2371,7 +2486,7 @@ Error BitcodeReader::parseTypeTableBody() {
       // FUNCTION: [vararg, attrid, retty, paramty x N]
       if (Record.size() < 3)
         return error("Invalid function record");
-      SmallVector<Type*, 8> ArgTys;
+      SmallVector<Type *, 8> ArgTys;
       for (unsigned i = 3, e = Record.size(); i != e; ++i) {
         if (Type *T = getTypeByID(Record[i]))
           ArgTys.push_back(T);
@@ -2380,7 +2495,7 @@ Error BitcodeReader::parseTypeTableBody() {
       }
 
       ResultTy = getTypeByID(Record[2]);
-      if (!ResultTy || ArgTys.size() < Record.size()-3)
+      if (!ResultTy || ArgTys.size() < Record.size() - 3)
         return error("Invalid type");
 
       ContainedIDs.append(Record.begin() + 2, Record.end());
@@ -2391,42 +2506,41 @@ Error BitcodeReader::parseTypeTableBody() {
       // FUNCTION: [vararg, retty, paramty x N]
       if (Record.size() < 2)
         return error("Invalid function record");
-      SmallVector<Type*, 8> ArgTys;
+      SmallVector<Type *, 8> ArgTys;
       for (unsigned i = 2, e = Record.size(); i != e; ++i) {
         if (Type *T = getTypeByID(Record[i])) {
           if (!FunctionType::isValidArgumentType(T))
             return error("Invalid function argument type");
           ArgTys.push_back(T);
-        }
-        else
+        } else
           break;
       }
 
       ResultTy = getTypeByID(Record[1]);
-      if (!ResultTy || ArgTys.size() < Record.size()-2)
+      if (!ResultTy || ArgTys.size() < Record.size() - 2)
         return error("Invalid type");
 
       ContainedIDs.append(Record.begin() + 1, Record.end());
       ResultTy = FunctionType::get(ResultTy, ArgTys, Record[0]);
       break;
     }
-    case bitc::TYPE_CODE_STRUCT_ANON: {  // STRUCT: [ispacked, eltty x N]
+    case bitc::TYPE_CODE_STRUCT_ANON: { // STRUCT: [ispacked, eltty x N]
       if (Record.empty())
         return error("Invalid anon struct record");
-      SmallVector<Type*, 8> EltTys;
+      SmallVector<Type *, 8> EltTys;
       for (unsigned i = 1, e = Record.size(); i != e; ++i) {
         if (Type *T = getTypeByID(Record[i]))
           EltTys.push_back(T);
         else
           break;
       }
-      if (EltTys.size() != Record.size()-1)
+      if (EltTys.size() != Record.size() - 1)
         return error("Invalid type");
       ContainedIDs.append(Record.begin() + 1, Record.end());
       ResultTy = StructType::get(Context, EltTys, Record[0]);
       break;
     }
-    case bitc::TYPE_CODE_STRUCT_NAME:   // STRUCT_NAME: [strchr x N]
+    case bitc::TYPE_CODE_STRUCT_NAME: // STRUCT_NAME: [strchr x N]
       if (convertToString(Record, 0, TypeName))
         return error("Invalid struct name record");
       continue;
@@ -2443,25 +2557,25 @@ Error BitcodeReader::parseTypeTableBody() {
       if (Res) {
         Res->setName(TypeName);
         TypeList[NumRecords] = nullptr;
-      } else  // Otherwise, create a new struct.
+      } else // Otherwise, create a new struct.
         Res = createIdentifiedStructType(Context, TypeName);
       TypeName.clear();
 
-      SmallVector<Type*, 8> EltTys;
+      SmallVector<Type *, 8> EltTys;
       for (unsigned i = 1, e = Record.size(); i != e; ++i) {
         if (Type *T = getTypeByID(Record[i]))
           EltTys.push_back(T);
         else
           break;
       }
-      if (EltTys.size() != Record.size()-1)
+      if (EltTys.size() != Record.size() - 1)
         return error("Invalid named struct record");
       Res->setBody(EltTys, Record[0]);
       ContainedIDs.append(Record.begin() + 1, Record.end());
       ResultTy = Res;
       break;
     }
-    case bitc::TYPE_CODE_OPAQUE: {       // OPAQUE: []
+    case bitc::TYPE_CODE_OPAQUE: { // OPAQUE: []
       if (Record.size() != 1)
         return error("Invalid opaque type record");
 
@@ -2473,7 +2587,7 @@ Error BitcodeReader::parseTypeTableBody() {
       if (Res) {
         Res->setName(TypeName);
         TypeList[NumRecords] = nullptr;
-      } else  // Otherwise, create a new struct with no body.
+      } else // Otherwise, create a new struct with no body.
         Res = createIdentifiedStructType(Context, TypeName);
       TypeName.clear();
       ResultTy = Res;
@@ -2508,7 +2622,7 @@ Error BitcodeReader::parseTypeTableBody() {
       TypeName.clear();
       break;
     }
-    case bitc::TYPE_CODE_ARRAY:     // ARRAY: [numelts, eltty]
+    case bitc::TYPE_CODE_ARRAY: // ARRAY: [numelts, eltty]
       if (Record.size() < 2)
         return error("Invalid array type record");
       ResultTy = getTypeByID(Record[1]);
@@ -2517,8 +2631,8 @@ Error BitcodeReader::parseTypeTableBody() {
       ContainedIDs.push_back(Record[1]);
       ResultTy = ArrayType::get(ResultTy, Record[0]);
       break;
-    case bitc::TYPE_CODE_VECTOR:    // VECTOR: [numelts, eltty] or
-                                    //         [numelts, eltty, scalable]
+    case bitc::TYPE_CODE_VECTOR: // VECTOR: [numelts, eltty] or
+                                 //         [numelts, eltty, scalable]
       if (Record.size() < 2)
         return error("Invalid vector type record");
       if (Record[0] == 0)
@@ -2803,9 +2917,9 @@ Error BitcodeReader::parseValueSymbolTable(uint64_t Offset) {
     if (!MaybeRecord)
       return MaybeRecord.takeError();
     switch (MaybeRecord.get()) {
-    default:  // Default behavior: unknown type.
+    default: // Default behavior: unknown type.
       break;
-    case bitc::VST_CODE_ENTRY: {  // VST_CODE_ENTRY: [valueid, namechar x N]
+    case bitc::VST_CODE_ENTRY: { // VST_CODE_ENTRY: [valueid, namechar x N]
       Expected<Value *> ValOrErr = recordValue(Record, 1, TT);
       if (Error Err = ValOrErr.takeError())
         return Err;
@@ -2944,8 +3058,7 @@ Error BitcodeReader::resolveGlobalAndIndirectSymbolInits() {
 
 APInt llvm::readWideAPInt(ArrayRef<uint64_t> Vals, unsigned TypeBits) {
   SmallVector<uint64_t, 8> Words(Vals.size());
-  transform(Vals, Words.begin(),
-                 BitcodeReader::decodeSignRotatedValue);
+  transform(Vals, Words.begin(), BitcodeReader::decodeSignRotatedValue);
 
   return APInt(TypeBits, Words);
 }
@@ -2990,14 +3103,14 @@ Error BitcodeReader::parseConstants() {
     if (!MaybeBitCode)
       return MaybeBitCode.takeError();
     switch (unsigned BitCode = MaybeBitCode.get()) {
-    default:  // Default behavior: unknown constant
-    case bitc::CST_CODE_UNDEF:     // UNDEF
+    default:                   // Default behavior: unknown constant
+    case bitc::CST_CODE_UNDEF: // UNDEF
       V = UndefValue::get(CurTy);
       break;
-    case bitc::CST_CODE_POISON:    // POISON
+    case bitc::CST_CODE_POISON: // POISON
       V = PoisonValue::get(CurTy);
       break;
-    case bitc::CST_CODE_SETTYPE:   // SETTYPE: [typeid]
+    case bitc::CST_CODE_SETTYPE: // SETTYPE: [typeid]
       if (Record.empty())
         return error("Invalid settype record");
       if (Record[0] >= TypeList.size() || !TypeList[Record[0]])
@@ -3007,8 +3120,8 @@ Error BitcodeReader::parseConstants() {
       CurTyID = Record[0];
       CurTy = TypeList[CurTyID];
       CurElemTy = getPtrElementTypeByID(CurTyID);
-      continue;  // Skip the ValueList manipulation.
-    case bitc::CST_CODE_NULL:      // NULL
+      continue;               // Skip the ValueList manipulation.
+    case bitc::CST_CODE_NULL: // NULL
       if (CurTy->isVoidTy() || CurTy->isFunctionTy() || CurTy->isLabelTy())
         return error("Invalid type for a constant null value");
       if (auto *TETy = dyn_cast<TargetExtType>(CurTy))
@@ -3016,12 +3129,12 @@ Error BitcodeReader::parseConstants() {
           return error("Invalid type for a constant null value");
       V = Constant::getNullValue(CurTy);
       break;
-    case bitc::CST_CODE_INTEGER:   // INTEGER: [intval]
+    case bitc::CST_CODE_INTEGER: // INTEGER: [intval]
       if (!CurTy->isIntegerTy() || Record.empty())
         return error("Invalid integer const record");
       V = ConstantInt::get(CurTy, decodeSignRotatedValue(Record[0]));
       break;
-    case bitc::CST_CODE_WIDE_INTEGER: {// WIDE_INTEGER: [n x intval]
+    case bitc::CST_CODE_WIDE_INTEGER: { // WIDE_INTEGER: [n x intval]
       if (!CurTy->isIntegerTy() || Record.empty())
         return error("Invalid wide integer const record");
 
@@ -3031,7 +3144,7 @@ Error BitcodeReader::parseConstants() {
 
       break;
     }
-    case bitc::CST_CODE_FLOAT: {    // FLOAT: [fpval]
+    case bitc::CST_CODE_FLOAT: { // FLOAT: [fpval]
       if (Record.empty())
         return error("Invalid float const record");
       if (CurTy->isHalfTy())
@@ -3044,8 +3157,8 @@ Error BitcodeReader::parseConstants() {
         V = ConstantFP::get(Context, APFloat(APFloat::IEEEsingle(),
                                              APInt(32, (uint32_t)Record[0])));
       else if (CurTy->isDoubleTy())
-        V = ConstantFP::get(Context, APFloat(APFloat::IEEEdouble(),
-                                             APInt(64, Record[0])));
+        V = ConstantFP::get(
+            Context, APFloat(APFloat::IEEEdouble(), APInt(64, Record[0])));
       else if (CurTy->isX86_FP80Ty()) {
         // Bits are not stored the same way as a normal i80 APInt, compensate.
         uint64_t Rearrange[2];
@@ -3054,17 +3167,17 @@ Error BitcodeReader::parseConstants() {
         V = ConstantFP::get(Context, APFloat(APFloat::x87DoubleExtended(),
                                              APInt(80, Rearrange)));
       } else if (CurTy->isFP128Ty())
-        V = ConstantFP::get(Context, APFloat(APFloat::IEEEquad(),
-                                             APInt(128, Record)));
+        V = ConstantFP::get(Context,
+                            APFloat(APFloat::IEEEquad(), APInt(128, Record)));
       else if (CurTy->isPPC_FP128Ty())
-        V = ConstantFP::get(Context, APFloat(APFloat::PPCDoubleDouble(),
-                                             APInt(128, Record)));
+        V = ConstantFP::get(
+            Context, APFloat(APFloat::PPCDoubleDouble(), APInt(128, Record)));
       else
         V = UndefValue::get(CurTy);
       break;
     }
 
-    case bitc::CST_CODE_AGGREGATE: {// AGGREGATE: [n x value number]
+    case bitc::CST_CODE_AGGREGATE: { // AGGREGATE: [n x value number]
       if (Record.empty())
         return error("Invalid aggregate record");
 
@@ -3097,7 +3210,7 @@ Error BitcodeReader::parseConstants() {
                                        BitCode == bitc::CST_CODE_CSTRING);
       break;
     }
-    case bitc::CST_CODE_DATA: {// DATA: [n x value]
+    case bitc::CST_CODE_DATA: { // DATA: [n x value]
       if (Record.empty())
         return error("Invalid data record");
 
@@ -3159,38 +3272,34 @@ Error BitcodeReader::parseConstants() {
       }
       break;
     }
-    case bitc::CST_CODE_CE_UNOP: {  // CE_UNOP: [opcode, opval]
+    case bitc::CST_CODE_CE_UNOP: { // CE_UNOP: [opcode, opval]
       if (Record.size() < 2)
         return error("Invalid unary op constexpr record");
       int Opc = getDecodedUnaryOpcode(Record[0], CurTy);
       if (Opc < 0) {
-        V = UndefValue::get(CurTy);  // Unknown unop.
+        V = UndefValue::get(CurTy); // Unknown unop.
       } else {
         V = BitcodeConstant::create(Alloc, CurTy, Opc, (unsigned)Record[1]);
       }
       break;
     }
-    case bitc::CST_CODE_CE_BINOP: {  // CE_BINOP: [opcode, opval, opval]
+    case bitc::CST_CODE_CE_BINOP: { // CE_BINOP: [opcode, opval, opval]
       if (Record.size() < 3)
         return error("Invalid binary op constexpr record");
       int Opc = getDecodedBinaryOpcode(Record[0], CurTy);
       if (Opc < 0) {
-        V = UndefValue::get(CurTy);  // Unknown binop.
+        V = UndefValue::get(CurTy); // Unknown binop.
       } else {
         uint8_t Flags = 0;
         if (Record.size() >= 4) {
-          if (Opc == Instruction::Add ||
-              Opc == Instruction::Sub ||
-              Opc == Instruction::Mul ||
-              Opc == Instruction::Shl) {
+          if (Opc == Instruction::Add || Opc == Instruction::Sub ||
+              Opc == Instruction::Mul || Opc == Instruction::Shl) {
             if (Record[3] & (1 << bitc::OBO_NO_SIGNED_WRAP))
               Flags |= OverflowingBinaryOperator::NoSignedWrap;
             if (Record[3] & (1 << bitc::OBO_NO_UNSIGNED_WRAP))
               Flags |= OverflowingBinaryOperator::NoUnsignedWrap;
-          } else if (Opc == Instruction::SDiv ||
-                     Opc == Instruction::UDiv ||
-                     Opc == Instruction::LShr ||
-                     Opc == Instruction::AShr) {
+          } else if (Opc == Instruction::SDiv || Opc == Instruction::UDiv ||
+                     Opc == Instruction::LShr || Opc == Instruction::AShr) {
             if (Record[3] & (1 << bitc::PEO_EXACT))
               Flags |= PossiblyExactOperator::IsExact;
           }
@@ -3200,12 +3309,12 @@ Error BitcodeReader::parseConstants() {
       }
       break;
     }
-    case bitc::CST_CODE_CE_CAST: {  // CE_CAST: [opcode, opty, opval]
+    case bitc::CST_CODE_CE_CAST: { // CE_CAST: [opcode, opty, opval]
       if (Record.size() < 3)
         return error("Invalid cast constexpr record");
       int Opc = getDecodedCastOpcode(Record[0]);
       if (Opc < 0) {
-        V = UndefValue::get(CurTy);  // Unknown cast.
+        V = UndefValue::get(CurTy); // Unknown cast.
       } else {
         unsigned OpTyID = Record[1];
         Type *OpTy = getTypeByID(OpTyID);
@@ -3215,8 +3324,8 @@ Error BitcodeReader::parseConstants() {
       }
       break;
     }
-    case bitc::CST_CODE_CE_INBOUNDS_GEP: // [ty, n x operands]
-    case bitc::CST_CODE_CE_GEP: // [ty, n x operands]
+    case bitc::CST_CODE_CE_INBOUNDS_GEP:             // [ty, n x operands]
+    case bitc::CST_CODE_CE_GEP:                      // [ty, n x operands]
     case bitc::CST_CODE_CE_GEP_WITH_INRANGE_INDEX: { // [ty, flags, n x
                                                      // operands]
       if (Record.size() < 2)
@@ -3271,7 +3380,7 @@ Error BitcodeReader::parseConstants() {
                                   Elts);
       break;
     }
-    case bitc::CST_CODE_CE_SELECT: {  // CE_SELECT: [opval#, opval#, opval#]
+    case bitc::CST_CODE_CE_SELECT: { // CE_SELECT: [opval#, opval#, opval#]
       if (Record.size() < 3)
         return error("Invalid select constexpr record");
 
@@ -3280,13 +3389,12 @@ Error BitcodeReader::parseConstants() {
           {(unsigned)Record[0], (unsigned)Record[1], (unsigned)Record[2]});
       break;
     }
-    case bitc::CST_CODE_CE_EXTRACTELT
-        : { // CE_EXTRACTELT: [opty, opval, opty, opval]
+    case bitc::CST_CODE_CE_EXTRACTELT: { // CE_EXTRACTELT: [opty, opval, opty,
+                                         // opval]
       if (Record.size() < 3)
         return error("Invalid extractelement constexpr record");
       unsigned OpTyID = Record[0];
-      VectorType *OpTy =
-        dyn_cast_or_null<VectorType>(getTypeByID(OpTyID));
+      VectorType *OpTy = dyn_cast_or_null<VectorType>(getTypeByID(OpTyID));
       if (!OpTy)
         return error("Invalid extractelement constexpr record");
       unsigned IdxRecord;
@@ -3304,8 +3412,8 @@ Error BitcodeReader::parseConstants() {
                                   {(unsigned)Record[1], IdxRecord});
       break;
     }
-    case bitc::CST_CODE_CE_INSERTELT
-        : { // CE_INSERTELT: [opval, opval, opty, opval]
+    case bitc::CST_CODE_CE_INSERTELT: { // CE_INSERTELT: [opval, opval, opty,
+                                        // opval]
       VectorType *OpTy = dyn_cast<VectorType>(CurTy);
       if (Record.size() < 3 || !OpTy)
         return error("Invalid insertelement constexpr record");
@@ -3336,8 +3444,7 @@ Error BitcodeReader::parseConstants() {
     }
     case bitc::CST_CODE_CE_SHUFVEC_EX: { // [opty, opval, opval, opval]
       VectorType *RTy = dyn_cast<VectorType>(CurTy);
-      VectorType *OpTy =
-        dyn_cast_or_null<VectorType>(getTypeByID(Record[0]));
+      VectorType *OpTy = dyn_cast_or_null<VectorType>(getTypeByID(Record[0]));
       if (Record.size() < 4 || !RTy || !OpTy)
         return error("Invalid shufflevector constexpr record");
       V = BitcodeConstant::create(
@@ -3345,7 +3452,7 @@ Error BitcodeReader::parseConstants() {
           {(unsigned)Record[1], (unsigned)Record[2], (unsigned)Record[3]});
       break;
     }
-    case bitc::CST_CODE_CE_CMP: {     // CE_CMP: [opty, opval, opval, pred]
+    case bitc::CST_CODE_CE_CMP: { // CE_CMP: [opty, opval, opval, pred]
       if (Record.size() < 4)
         return error("Invalid cmp constexpt record");
       unsigned OpTyID = Record[0];
@@ -3369,16 +3476,16 @@ Error BitcodeReader::parseConstants() {
       bool HasSideEffects = Record[0] & 1;
       bool IsAlignStack = Record[0] >> 1;
       unsigned AsmStrSize = Record[1];
-      if (2+AsmStrSize >= Record.size())
+      if (2 + AsmStrSize >= Record.size())
         return error("Invalid inlineasm record");
-      unsigned ConstStrSize = Record[2+AsmStrSize];
-      if (3+AsmStrSize+ConstStrSize > Record.size())
+      unsigned ConstStrSize = Record[2 + AsmStrSize];
+      if (3 + AsmStrSize + ConstStrSize > Record.size())
         return error("Invalid inlineasm record");
 
       for (unsigned i = 0; i != AsmStrSize; ++i)
-        AsmStr += (char)Record[2+i];
+        AsmStr += (char)Record[2 + i];
       for (unsigned i = 0; i != ConstStrSize; ++i)
-        ConstrStr += (char)Record[3+AsmStrSize+i];
+        ConstrStr += (char)Record[3 + AsmStrSize + i];
       UpgradeInlineAsmString(&AsmStr);
       if (!CurElemTy)
         return error("Missing element type for old-style inlineasm");
@@ -3396,16 +3503,16 @@ Error BitcodeReader::parseConstants() {
       bool IsAlignStack = (Record[0] >> 1) & 1;
       unsigned AsmDialect = Record[0] >> 2;
       unsigned AsmStrSize = Record[1];
-      if (2+AsmStrSize >= Record.size())
+      if (2 + AsmStrSize >= Record.size())
         return error("Invalid inlineasm record");
-      unsigned ConstStrSize = Record[2+AsmStrSize];
-      if (3+AsmStrSize+ConstStrSize > Record.size())
+      unsigned ConstStrSize = Record[2 + AsmStrSize];
+      if (3 + AsmStrSize + ConstStrSize > Record.size())
         return error("Invalid inlineasm record");
 
       for (unsigned i = 0; i != AsmStrSize; ++i)
-        AsmStr += (char)Record[2+i];
+        AsmStr += (char)Record[2 + i];
       for (unsigned i = 0; i != ConstStrSize; ++i)
-        ConstrStr += (char)Record[3+AsmStrSize+i];
+        ConstrStr += (char)Record[3 + AsmStrSize + i];
       UpgradeInlineAsmString(&AsmStr);
       if (!CurElemTy)
         return error("Missing element type for old-style inlineasm");
@@ -3479,7 +3586,7 @@ Error BitcodeReader::parseConstants() {
                          InlineAsm::AsmDialect(AsmDialect), CanThrow);
       break;
     }
-    case bitc::CST_CODE_BLOCKADDRESS:{
+    case bitc::CST_CODE_BLOCKADDRESS: {
       if (Record.size() < 3)
         return error("Invalid blockaddress record");
       unsigned FnTyID = Record[0];
@@ -3554,7 +3661,7 @@ Error BitcodeReader::parseUseLists() {
     if (!MaybeRecord)
       return MaybeRecord.takeError();
     switch (MaybeRecord.get()) {
-    default:  // Default behavior: unknown type.
+    default: // Default behavior: unknown type.
       break;
     case bitc::USELIST_CODE_BB:
       IsBB = true;
@@ -3703,7 +3810,8 @@ Error BitcodeReader::rememberAndSkipFunctionBodies() {
     return error("Could not find function in stream");
 
   if (!SeenFirstFunctionBody)
-    return error("Trying to materialize functions before seeing function blocks");
+    return error(
+        "Trying to materialize functions before seeing function blocks");
 
   // An old bitcode file with the symbol table at the end would have
   // finished the parse greedily.
@@ -3969,8 +4077,8 @@ Error BitcodeReader::parseFunctionRecord(ArrayRef<uint64_t> Record) {
   // argument's pointee type. There should be no opaque pointers where the byval
   // type is implicit.
   for (unsigned i = 0; i != Func->arg_size(); ++i) {
-    for (Attribute::AttrKind Kind : {Attribute::ByVal, Attribute::StructRet,
-                                     Attribute::InAlloca}) {
+    for (Attribute::AttrKind Kind :
+         {Attribute::ByVal, Attribute::StructRet, Attribute::InAlloca}) {
       if (!Func->hasParamAttribute(i, Kind))
         continue;
 
@@ -4003,8 +4111,8 @@ Error BitcodeReader::parseFunctionRecord(ArrayRef<uint64_t> Record) {
     }
   }
 
-  if (Func->getCallingConv() == CallingConv::X86_INTR &&
-      !Func->arg_empty() && !Func->hasParamAttribute(0, Attribute::ByVal)) {
+  if (Func->getCallingConv() == CallingConv::X86_INTR && !Func->arg_empty() &&
+      !Func->hasParamAttribute(0, Attribute::ByVal)) {
     unsigned ParamTypeID = getContainedTypeID(FTyID, 1);
     Type *ByValTy = getPtrElementTypeByID(ParamTypeID);
     if (!ByValTy)
@@ -4156,8 +4264,7 @@ Error BitcodeReader::parseGlobalIndirectSymbolRecord(
       // A GlobalValue with local linkage cannot have a DLL storage class.
       if (!NewGA->hasLocalLinkage())
         NewGA->setDLLStorageClass(getDecodedDLLStorageClass(S));
-    }
-    else
+    } else
       upgradeDLLImportExportLinkage(NewGA, Linkage);
     if (OpNum != Record.size())
       NewGA->setThreadLocalMode(getDecodedThreadLocalMode(Record[OpNum++]));
@@ -4245,7 +4352,7 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
 
     case BitstreamEntry::SubBlock:
       switch (Entry.ID) {
-      default:  // Skip unknown content.
+      default: // Skip unknown content.
         if (Error Err = Stream.SkipBlock())
           return Err;
         break;
@@ -4385,7 +4492,8 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
     if (!MaybeBitCode)
       return MaybeBitCode.takeError();
     switch (unsigned BitCode = MaybeBitCode.get()) {
-    default: break;  // Default behavior, ignore unknown content.
+    default:
+      break; // Default behavior, ignore unknown content.
     case bitc::MODULE_CODE_VERSION: {
       Expected<unsigned> VersionOrErr = parseVersionRecord(Record);
       if (!VersionOrErr)
@@ -4393,7 +4501,7 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
       UseRelativeIDs = *VersionOrErr >= 1;
       break;
     }
-    case bitc::MODULE_CODE_TRIPLE: {  // TRIPLE: [strchr x N]
+    case bitc::MODULE_CODE_TRIPLE: { // TRIPLE: [strchr x N]
       if (ResolvedDataLayout)
         return error("target triple too late in module");
       std::string S;
@@ -4402,21 +4510,21 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
       TheModule->setTargetTriple(S);
       break;
     }
-    case bitc::MODULE_CODE_DATALAYOUT: {  // DATALAYOUT: [strchr x N]
+    case bitc::MODULE_CODE_DATALAYOUT: { // DATALAYOUT: [strchr x N]
       if (ResolvedDataLayout)
         return error("datalayout too late in module");
       if (convertToString(Record, 0, TentativeDataLayoutStr))
         return error("Invalid record");
       break;
     }
-    case bitc::MODULE_CODE_ASM: {  // ASM: [strchr x N]
+    case bitc::MODULE_CODE_ASM: { // ASM: [strchr x N]
       std::string S;
       if (convertToString(Record, 0, S))
         return error("Invalid record");
       TheModule->setModuleInlineAsm(S);
       break;
     }
-    case bitc::MODULE_CODE_DEPLIB: {  // DEPLIB: [strchr x N]
+    case bitc::MODULE_CODE_DEPLIB: { // DEPLIB: [strchr x N]
       // Deprecated, but still needed to read old bitcode files.
       std::string S;
       if (convertToString(Record, 0, S))
@@ -4424,14 +4532,14 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
       // Ignore value.
       break;
     }
-    case bitc::MODULE_CODE_SECTIONNAME: {  // SECTIONNAME: [strchr x N]
+    case bitc::MODULE_CODE_SECTIONNAME: { // SECTIONNAME: [strchr x N]
       std::string S;
       if (convertToString(Record, 0, S))
         return error("Invalid record");
       SectionTable.push_back(S);
       break;
     }
-    case bitc::MODULE_CODE_GCNAME: {  // SECTIONNAME: [strchr x N]
+    case bitc::MODULE_CODE_GCNAME: { // SECTIONNAME: [strchr x N]
       std::string S;
       if (convertToString(Record, 0, S))
         return error("Invalid record");
@@ -4512,8 +4620,8 @@ Error BitcodeReader::propagateAttributeTypes(CallBase *CB,
                                              ArrayRef<unsigned> ArgTyIDs) {
   AttributeList Attrs = CB->getAttributes();
   for (unsigned i = 0; i != CB->arg_size(); ++i) {
-    for (Attribute::AttrKind Kind : {Attribute::ByVal, Attribute::StructRet,
-                                     Attribute::InAlloca}) {
+    for (Attribute::AttrKind Kind :
+         {Attribute::ByVal, Attribute::StructRet, Attribute::InAlloca}) {
       if (!Attrs.hasParamAttr(i, Kind) ||
           Attrs.getParamAttr(i, Kind).getValueAsType())
         continue;
@@ -4632,7 +4740,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
   // Edge blocks for phi nodes into which constant expressions have been
   // expanded.
   SmallMapVector<std::pair<BasicBlock *, BasicBlock *>, BasicBlock *, 4>
-    ConstExprEdgeBBs;
+      ConstExprEdgeBBs;
 
   DebugLoc LastLoc;
   auto getLastInstruction = [&]() -> Instruction * {
@@ -4663,7 +4771,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
 
     case BitstreamEntry::SubBlock:
       switch (Entry.ID) {
-      default:  // Skip unknown content.
+      default: // Skip unknown content.
         if (Error Err = Stream.SkipBlock())
           return Err;
         break;
@@ -4708,7 +4816,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     switch (unsigned BitCode = MaybeBitCode.get()) {
     default: // Default behavior: reject
       return error("Invalid value");
-    case bitc::FUNC_CODE_DECLAREBLOCKS: {   // DECLAREBLOCKS: [nblocks]
+    case bitc::FUNC_CODE_DECLAREBLOCKS: { // DECLAREBLOCKS: [nblocks]
       if (Record.empty() || Record[0] == 0)
         return error("Invalid record");
       // Create all the basic blocks for the function.
@@ -4768,7 +4876,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
 
       continue;
 
-    case bitc::FUNC_CODE_DEBUG_LOC_AGAIN:  // DEBUG_LOC_AGAIN
+    case bitc::FUNC_CODE_DEBUG_LOC_AGAIN: // DEBUG_LOC_AGAIN
       // This record indicates that the last instruction is at the same
       // location as the previous instruction with a location.
       I = getLastInstruction();
@@ -4779,7 +4887,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       I = nullptr;
       continue;
 
-    case bitc::FUNC_CODE_DEBUG_LOC: {      // DEBUG_LOC: [line, col, scope, ia]
+    case bitc::FUNC_CODE_DEBUG_LOC: { // DEBUG_LOC: [line, col, scope, ia]
       I = getLastInstruction();
       if (!I || Record.size() < 4)
         return error("Invalid record");
@@ -4807,12 +4915,12 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       I = nullptr;
       continue;
     }
-    case bitc::FUNC_CODE_INST_UNOP: {    // UNOP: [opval, ty, opcode]
+    case bitc::FUNC_CODE_INST_UNOP: { // UNOP: [opval, ty, opcode]
       unsigned OpNum = 0;
       Value *LHS;
       unsigned TypeID;
       if (getValueTypePair(Record, OpNum, NextValueNo, LHS, TypeID, CurBB) ||
-          OpNum+1 > Record.size())
+          OpNum + 1 > Record.size())
         return error("Invalid record");
 
       int Opc = getDecodedUnaryOpcode(Record[OpNum++], LHS->getType());
@@ -4830,14 +4938,14 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       }
       break;
     }
-    case bitc::FUNC_CODE_INST_BINOP: {    // BINOP: [opval, ty, opval, opcode]
+    case bitc::FUNC_CODE_INST_BINOP: { // BINOP: [opval, ty, opval, opcode]
       unsigned OpNum = 0;
       Value *LHS, *RHS;
       unsigned TypeID;
       if (getValueTypePair(Record, OpNum, NextValueNo, LHS, TypeID, CurBB) ||
           popValue(Record, OpNum, NextValueNo, LHS->getType(), TypeID, RHS,
                    CurBB) ||
-          OpNum+1 > Record.size())
+          OpNum + 1 > Record.size())
         return error("Invalid record");
 
       int Opc = getDecodedBinaryOpcode(Record[OpNum++], LHS->getType());
@@ -4847,18 +4955,14 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       ResTypeID = TypeID;
       InstructionList.push_back(I);
       if (OpNum < Record.size()) {
-        if (Opc == Instruction::Add ||
-            Opc == Instruction::Sub ||
-            Opc == Instruction::Mul ||
-            Opc == Instruction::Shl) {
+        if (Opc == Instruction::Add || Opc == Instruction::Sub ||
+            Opc == Instruction::Mul || Opc == Instruction::Shl) {
           if (Record[OpNum] & (1 << bitc::OBO_NO_SIGNED_WRAP))
             cast<BinaryOperator>(I)->setHasNoSignedWrap(true);
           if (Record[OpNum] & (1 << bitc::OBO_NO_UNSIGNED_WRAP))
             cast<BinaryOperator>(I)->setHasNoUnsignedWrap(true);
-        } else if (Opc == Instruction::SDiv ||
-                   Opc == Instruction::UDiv ||
-                   Opc == Instruction::LShr ||
-                   Opc == Instruction::AShr) {
+        } else if (Opc == Instruction::SDiv || Opc == Instruction::UDiv ||
+                   Opc == Instruction::LShr || Opc == Instruction::AShr) {
           if (Record[OpNum] & (1 << bitc::PEO_EXACT))
             cast<BinaryOperator>(I)->setIsExact(true);
         } else if (isa<FPMathOperator>(I)) {
@@ -4866,16 +4970,15 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
           if (FMF.any())
             I->setFastMathFlags(FMF);
         }
-
       }
       break;
     }
-    case bitc::FUNC_CODE_INST_CAST: {    // CAST: [opval, opty, destty, castopc]
+    case bitc::FUNC_CODE_INST_CAST: { // CAST: [opval, opty, destty, castopc]
       unsigned OpNum = 0;
       Value *Op;
       unsigned OpTypeID;
       if (getValueTypePair(Record, OpNum, NextValueNo, Op, OpTypeID, CurBB) ||
-          OpNum+2 != Record.size())
+          OpNum + 2 != Record.size())
         return error("Invalid record");
 
       ResTypeID = Record[OpNum];
@@ -4931,7 +5034,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
         Ty = getTypeByID(TyID);
       }
 
-      SmallVector<Value*, 16> GEPIdx;
+      SmallVector<Value *, 16> GEPIdx;
       while (OpNum != Record.size()) {
         Value *Op;
         unsigned OpTypeID;
@@ -4972,7 +5075,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     }
 
     case bitc::FUNC_CODE_INST_EXTRACTVAL: {
-                                       // EXTRACTVAL: [opty, opval, n x indices]
+      // EXTRACTVAL: [opty, opval, n x indices]
       unsigned OpNum = 0;
       Value *Agg;
       unsigned AggTypeID;
@@ -5016,7 +5119,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     }
 
     case bitc::FUNC_CODE_INST_INSERTVAL: {
-                           // INSERTVAL: [opty, opval, opty, opval, n x indices]
+      // INSERTVAL: [opty, opval, opty, opval, n x indices]
       unsigned OpNum = 0;
       Value *Agg;
       unsigned AggTypeID;
@@ -5084,7 +5187,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
 
-    case bitc::FUNC_CODE_INST_VSELECT: {// VSELECT: [ty,opval,opval,predty,pred]
+    case bitc::FUNC_CODE_INST_VSELECT: { // VSELECT:
+                                         // [ty,opval,opval,predty,pred]
       // new form of select
       // handles select i1 or select [N x i1]
       unsigned OpNum = 0;
@@ -5098,8 +5202,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
         return error("Invalid record");
 
       // select condition can be either i1 or [N x i1]
-      if (VectorType* vector_type =
-          dyn_cast<VectorType>(Cond->getType())) {
+      if (VectorType *vector_type = dyn_cast<VectorType>(Cond->getType())) {
         // expect <n x i1>
         if (vector_type->getElementType() != Type::getInt1Ty(Context))
           return error("Invalid type for value");
@@ -5154,7 +5257,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
 
-    case bitc::FUNC_CODE_INST_SHUFFLEVEC: {// SHUFFLEVEC: [opval,ty,opval,opval]
+    case bitc::FUNC_CODE_INST_SHUFFLEVEC: { // SHUFFLEVEC:
+                                            // [opval,ty,opval,opval]
       unsigned OpNum = 0;
       Value *Vec1, *Vec2, *Mask;
       unsigned Vec1TypeID;
@@ -5177,10 +5281,11 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
 
-    case bitc::FUNC_CODE_INST_CMP:   // CMP: [opty, opval, opval, pred]
-      // Old form of ICmp/FCmp returning bool
-      // Existed to differentiate between icmp/fcmp and vicmp/vfcmp which were
-      // both legal on vectors but had different behaviour.
+    case bitc::FUNC_CODE_INST_CMP: // CMP: [opty, opval, opval, pred]
+                                   // Old form of ICmp/FCmp returning bool
+                                   // Existed to differentiate between icmp/fcmp
+                                   // and vicmp/vfcmp which were both legal on
+                                   // vectors but had different behaviour.
     case bitc::FUNC_CODE_INST_CMP2: { // CMP2: [opty, opval, opval, pred]
       // FCmp/ICmp returning bool or vector of bool
 
@@ -5199,10 +5304,10 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       unsigned PredVal = Record[OpNum];
       bool IsFP = LHS->getType()->isFPOrFPVectorTy();
       FastMathFlags FMF;
-      if (IsFP && Record.size() > OpNum+1)
+      if (IsFP && Record.size() > OpNum + 1)
         FMF = getDecodedFastMathFlags(Record[++OpNum]);
 
-      if (OpNum+1 != Record.size())
+      if (OpNum + 1 != Record.size())
         return error("Invalid record");
 
       if (LHS->getType()->isFPOrFPVectorTy())
@@ -5221,26 +5326,26 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     }
 
     case bitc::FUNC_CODE_INST_RET: // RET: [opty,opval<optional>]
-      {
-        unsigned Size = Record.size();
-        if (Size == 0) {
-          I = ReturnInst::Create(Context);
-          InstructionList.push_back(I);
-          break;
-        }
-
-        unsigned OpNum = 0;
-        Value *Op = nullptr;
-        unsigned OpTypeID;
-        if (getValueTypePair(Record, OpNum, NextValueNo, Op, OpTypeID, CurBB))
-          return error("Invalid record");
-        if (OpNum != Record.size())
-          return error("Invalid record");
-
-        I = ReturnInst::Create(Context, Op);
+    {
+      unsigned Size = Record.size();
+      if (Size == 0) {
+        I = ReturnInst::Create(Context);
         InstructionList.push_back(I);
         break;
       }
+
+      unsigned OpNum = 0;
+      Value *Op = nullptr;
+      unsigned OpTypeID;
+      if (getValueTypePair(Record, OpNum, NextValueNo, Op, OpTypeID, CurBB))
+        return error("Invalid record");
+      if (OpNum != Record.size())
+        return error("Invalid record");
+
+      I = ReturnInst::Create(Context, Op);
+      InstructionList.push_back(I);
+      break;
+    }
     case bitc::FUNC_CODE_INST_BR: { // BR: [bb#, bb#, opval] or [bb#]
       if (Record.size() != 1 && Record.size() != 3)
         return error("Invalid record");
@@ -5251,8 +5356,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       if (Record.size() == 1) {
         I = BranchInst::Create(TrueDest);
         InstructionList.push_back(I);
-      }
-      else {
+      } else {
         BasicBlock *FalseDest = getBasicBlock(Record[1]);
         Type *CondType = Type::getInt1Ty(Context);
         Value *Cond = getValue(Record, 2, NextValueNo, CondType,
@@ -5399,7 +5503,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
 
         unsigned CurIdx = 5;
         for (unsigned i = 0; i != NumCases; ++i) {
-          SmallVector<ConstantInt*, 1> CaseVals;
+          SmallVector<ConstantInt *, 1> CaseVals;
           unsigned NumItems = Record[CurIdx++];
           for (unsigned ci = 0; ci != NumItems; ++ci) {
             bool isSingleNumber = Record[CurIdx++];
@@ -5424,7 +5528,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
               // compared as signed or unsigned values. The partially
               // implemented changes that used this format in the past used
               // unsigned comparisons.
-              for ( ; Low.ule(High); ++Low)
+              for (; Low.ule(High); ++Low)
                 CaseVals.push_back(ConstantInt::get(Context, Low));
             } else
               CaseVals.push_back(ConstantInt::get(Context, Low));
@@ -5447,13 +5551,13 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       BasicBlock *Default = getBasicBlock(Record[2]);
       if (!OpTy || !Cond || !Default)
         return error("Invalid record");
-      unsigned NumCases = (Record.size()-3)/2;
+      unsigned NumCases = (Record.size() - 3) / 2;
       SwitchInst *SI = SwitchInst::Create(Cond, Default, NumCases);
       InstructionList.push_back(SI);
       for (unsigned i = 0, e = NumCases; i != e; ++i) {
         ConstantInt *CaseVal = dyn_cast_or_null<ConstantInt>(
-            getFnValueByID(Record[3+i*2], OpTy, OpTyID, nullptr));
-        BasicBlock *DestBB = getBasicBlock(Record[1+3+i*2]);
+            getFnValueByID(Record[3 + i * 2], OpTy, OpTyID, nullptr));
+        BasicBlock *DestBB = getBasicBlock(Record[1 + 3 + i * 2]);
         if (!CaseVal || !DestBB) {
           delete SI;
           return error("Invalid record");
@@ -5471,11 +5575,11 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       Value *Address = getValue(Record, 1, NextValueNo, OpTy, OpTyID, CurBB);
       if (!OpTy || !Address)
         return error("Invalid record");
-      unsigned NumDests = Record.size()-2;
+      unsigned NumDests = Record.size() - 2;
       IndirectBrInst *IBI = IndirectBrInst::Create(Address, NumDests);
       InstructionList.push_back(IBI);
       for (unsigned i = 0, e = NumDests; i != e; ++i) {
-        if (BasicBlock *DestBB = getBasicBlock(Record[2+i])) {
+        if (BasicBlock *DestBB = getBasicBlock(Record[2 + i])) {
           IBI->addDestination(DestBB);
         } else {
           delete IBI;
@@ -5523,7 +5627,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       if (Record.size() < FTy->getNumParams() + OpNum)
         return error("Insufficient operands to call");
 
-      SmallVector<Value*, 16> Ops;
+      SmallVector<Value *, 16> Ops;
       SmallVector<unsigned, 16> ArgTyIDs;
       for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i, ++OpNum) {
         unsigned ArgTyID = getContainedTypeID(FTyID, i + 1);
@@ -5617,7 +5721,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       if (Record.size() < FTy->getNumParams() + OpNum)
         return error("Insufficient operands to call");
 
-      SmallVector<Value*, 16> Args;
+      SmallVector<Value *, 16> Args;
       SmallVector<unsigned, 16> ArgTyIDs;
       // Read the fixed params.
       for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i, ++OpNum) {
@@ -5845,7 +5949,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       LP->setCleanup(IsCleanup);
       for (unsigned J = 0; J != NumClauses; ++J) {
         LandingPadInst::ClauseType CT =
-          LandingPadInst::ClauseType(Record[Idx++]); (void)CT;
+            LandingPadInst::ClauseType(Record[Idx++]);
+        (void)CT;
         Value *Val;
         unsigned ValTypeID;
 
@@ -5855,12 +5960,12 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
           return error("Invalid record");
         }
 
-        assert((CT != LandingPadInst::Catch ||
-                !isa<ArrayType>(Val->getType())) &&
-               "Catch clause has a invalid type!");
-        assert((CT != LandingPadInst::Filter ||
-                isa<ArrayType>(Val->getType())) &&
-               "Filter clause has invalid type!");
+        assert(
+            (CT != LandingPadInst::Catch || !isa<ArrayType>(Val->getType())) &&
+            "Catch clause has an invalid type!");
+        assert(
+            (CT != LandingPadInst::Filter || isa<ArrayType>(Val->getType())) &&
+            "Filter clause has invalid type!");
         LP->addClause(cast<Constant>(Val));
       }
 
@@ -5952,7 +6057,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
     case bitc::FUNC_CODE_INST_LOADATOMIC: {
-       // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
+      // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Op;
       unsigned OpTypeID;
@@ -5996,7 +6101,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
     case bitc::FUNC_CODE_INST_STORE:
-    case bitc::FUNC_CODE_INST_STORE_OLD: { // STORE2:[ptrty, ptr, val, align, vol]
+    case bitc::FUNC_CODE_INST_STORE_OLD: { // STORE2:[ptrty, ptr, val, align,
+                                           // vol]
       unsigned OpNum = 0;
       Value *Val, *Ptr;
       unsigned PtrTypeID, ValTypeID;
@@ -6092,8 +6198,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
         return error("Invalid record");
 
       Value *New = nullptr;
-      if (popValue(Record, OpNum, NextValueNo, Cmp->getType(), CmpTypeID,
-                   New, CurBB) ||
+      if (popValue(Record, OpNum, NextValueNo, Cmp->getType(), CmpTypeID, New,
+                   CurBB) ||
           NumRecords < OpNum + 3 || NumRecords > OpNum + 5)
         return error("Invalid record");
 
@@ -6225,8 +6331,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       unsigned ValTypeID = InvalidTypeID;
       if (BitCode == bitc::FUNC_CODE_INST_ATOMICRMW_OLD) {
         ValTypeID = getContainedTypeID(PtrTypeID);
-        if (popValue(Record, OpNum, NextValueNo,
-                     getTypeByID(ValTypeID), ValTypeID, Val, CurBB))
+        if (popValue(Record, OpNum, NextValueNo, getTypeByID(ValTypeID),
+                     ValTypeID, Val, CurBB))
           return error("Invalid record");
       } else {
         if (getValueTypePair(Record, OpNum, NextValueNo, Val, ValTypeID, CurBB))
@@ -6325,7 +6431,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       if (Record.size() < FTy->getNumParams() + OpNum)
         return error("Insufficient operands to call");
 
-      SmallVector<Value*, 16> Args;
+      SmallVector<Value *, 16> Args;
       SmallVector<unsigned, 16> ArgTyIDs;
       // Read the fixed params.
       for (unsigned i = 0, e = FTy->getNumParams(); i != e; ++i, ++OpNum) {
@@ -6476,7 +6582,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
   if (Argument *A = dyn_cast<Argument>(ValueList.back())) {
     if (!A->getParent()) {
       // We found at least one unresolved value.  Nuke them all to avoid leaks.
-      for (unsigned i = ModuleValueListSize, e = ValueList.size(); i != e; ++i){
+      for (unsigned i = ModuleValueListSize, e = ValueList.size(); i != e;
+           ++i) {
         if ((A = dyn_cast_or_null<Argument>(ValueList[i])) && !A->getParent()) {
           A->replaceAllUsesWith(PoisonValue::get(A->getType()));
           delete A;
@@ -6506,7 +6613,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
   // Trim the value list down to the size it was before we parsed this function.
   ValueList.shrinkTo(ModuleValueListSize);
   MDLoader->shrinkTo(ModuleMDLoaderSize);
-  std::vector<BasicBlock*>().swap(FunctionBBs);
+  std::vector<BasicBlock *>().swap(FunctionBBs);
   return Error::success();
 }
 
@@ -6546,7 +6653,7 @@ Error BitcodeReader::materialize(GlobalValue *GV) {
   if (!F || !F->isMaterializable())
     return Error::success();
 
-  DenseMap<Function*, uint64_t>::iterator DFII = DeferredFunctionInfo.find(F);
+  DenseMap<Function *, uint64_t>::iterator DFII = DeferredFunctionInfo.find(F);
   assert(DFII != DeferredFunctionInfo.end() && "Deferred function not found!");
   // If its position is recorded as 0, its body is somewhere in the stream
   // but we haven't seen it yet.
@@ -6907,71 +7014,71 @@ Error ModuleSummaryIndexBitcodeReader::parseModule() {
       continue;
 
     case BitstreamEntry::Record: {
-        Record.clear();
-        Expected<unsigned> MaybeBitCode = Stream.readRecord(Entry.ID, Record);
-        if (!MaybeBitCode)
-          return MaybeBitCode.takeError();
-        switch (MaybeBitCode.get()) {
-        default:
-          break; // Default behavior, ignore unknown content.
-        case bitc::MODULE_CODE_VERSION: {
-          if (Error Err = parseVersionRecord(Record).takeError())
-            return Err;
-          break;
-        }
-        /// MODULE_CODE_SOURCE_FILENAME: [namechar x N]
-        case bitc::MODULE_CODE_SOURCE_FILENAME: {
-          SmallString<128> ValueName;
-          if (convertToString(Record, 0, ValueName))
-            return error("Invalid record");
-          SourceFileName = ValueName.c_str();
-          break;
+      Record.clear();
+      Expected<unsigned> MaybeBitCode = Stream.readRecord(Entry.ID, Record);
+      if (!MaybeBitCode)
+        return MaybeBitCode.takeError();
+      switch (MaybeBitCode.get()) {
+      default:
+        break; // Default behavior, ignore unknown content.
+      case bitc::MODULE_CODE_VERSION: {
+        if (Error Err = parseVersionRecord(Record).takeError())
+          return Err;
+        break;
+      }
+      /// MODULE_CODE_SOURCE_FILENAME: [namechar x N]
+      case bitc::MODULE_CODE_SOURCE_FILENAME: {
+        SmallString<128> ValueName;
+        if (convertToString(Record, 0, ValueName))
+          return error("Invalid record");
+        SourceFileName = ValueName.c_str();
+        break;
+      }
+      /// MODULE_CODE_HASH: [5*i32]
+      case bitc::MODULE_CODE_HASH: {
+        if (Record.size() != 5)
+          return error("Invalid hash length " + Twine(Record.size()).str());
+        auto &Hash = getThisModule()->second;
+        int Pos = 0;
+        for (auto &Val : Record) {
+          assert(!(Val >> 32) && "Unexpected high bits set");
+          Hash[Pos++] = Val;
         }
-        /// MODULE_CODE_HASH: [5*i32]
-        case bitc::MODULE_CODE_HASH: {
-          if (Record.size() != 5)
-            return error("Invalid hash length " + Twine(Record.size()).str());
-          auto &Hash = getThisModule()->second;
-          int Pos = 0;
-          for (auto &Val : Record) {
-            assert(!(Val >> 32) && "Unexpected high bits set");
-            Hash[Pos++] = Val;
-          }
+        break;
+      }
+      /// MODULE_CODE_VSTOFFSET: [offset]
+      case bitc::MODULE_CODE_VSTOFFSET:
+        if (Record.empty())
+          return error("Invalid record");
+        // Note that we subtract 1 here because the offset is relative to one
+        // word before the start of the identification or module block, which
+        // was historically always the start of the regular bitcode header.
+        VSTOffset = Record[0] - 1;
+        break;
+      // v1 GLOBALVAR: [pointer type, isconst,     initid,       linkage, ...]
+      // v1 FUNCTION:  [type,         callingconv, isproto,      linkage, ...]
+      // v1 ALIAS:     [alias type,   addrspace,   aliasee val#, linkage, ...]
+      // v2: [strtab offset, strtab size, v1]
+      case bitc::MODULE_CODE_GLOBALVAR:
+      case bitc::MODULE_CODE_FUNCTION:
+      case bitc::MODULE_CODE_ALIAS: {
+        StringRef Name;
+        ArrayRef<uint64_t> GVRecord;
+        std::tie(Name, GVRecord) = readNameFromStrtab(Record);
+        if (GVRecord.size() <= 3)
+          return error("Invalid record");
+        uint64_t RawLinkage = GVRecord[3];
+        GlobalValue::LinkageTypes Linkage = getDecodedLinkage(RawLinkage);
+        if (!UseStrtab) {
+          ValueIdToLinkageMap[ValueId++] = Linkage;
           break;
         }
-        /// MODULE_CODE_VSTOFFSET: [offset]
-        case bitc::MODULE_CODE_VSTOFFSET:
-          if (Record.empty())
-            return error("Invalid record");
-          // Note that we subtract 1 here because the offset is relative to one
-          // word before the start of the identification or module block, which
-          // was historically always the start of the regular bitcode header.
-          VSTOffset = Record[0] - 1;
-          break;
-        // v1 GLOBALVAR: [pointer type, isconst,     initid,       linkage, ...]
-        // v1 FUNCTION:  [type,         callingconv, isproto,      linkage, ...]
-        // v1 ALIAS:     [alias type,   addrspace,   aliasee val#, linkage, ...]
-        // v2: [strtab offset, strtab size, v1]
-        case bitc::MODULE_CODE_GLOBALVAR:
-        case bitc::MODULE_CODE_FUNCTION:
-        case bitc::MODULE_CODE_ALIAS: {
-          StringRef Name;
-          ArrayRef<uint64_t> GVRecord;
-          std::tie(Name, GVRecord) = readNameFromStrtab(Record);
-          if (GVRecord.size() <= 3)
-            return error("Invalid record");
-          uint64_t RawLinkage = GVRecord[3];
-          GlobalValue::LinkageTypes Linkage = getDecodedLinkage(RawLinkage);
-          if (!UseStrtab) {
-            ValueIdToLinkageMap[ValueId++] = Linkage;
-            break;
-          }
 
-          setValueGUID(ValueId++, Name, Linkage, SourceFileName);
-          break;
-        }
-        }
+        setValueGUID(ValueId++, Name, Linkage, SourceFileName);
+        break;
+      }
       }
+    }
       continue;
     }
   }
@@ -7154,8 +7261,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
   if (Version < 1 || Version > ModuleSummaryIndex::BitcodeSummaryVersion)
     return error("Invalid summary version " + Twine(Version) +
                  ". Version should be in the range [1-" +
-                 Twine(ModuleSummaryIndex::BitcodeSummaryVersion) +
-                 "].");
+                 Twine(ModuleSummaryIndex::BitcodeSummaryVersion) + "].");
   Record.clear();
 
   // Keep around the last seen summary to be used when we see an optional
@@ -7207,7 +7313,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
     switch (unsigned BitCode = MaybeBitCode.get()) {
     default: // Default behavior: ignore.
       break;
-    case bitc::FS_FLAGS: {  // [flags]
+    case bitc::FS_FLAGS: { // [flags]
       TheIndex.setFlags(Record[0]);
       break;
     }
@@ -7272,8 +7378,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       // the prevailing copy of a symbol. The linker doesn't resolve local
       // linkage values so don't check whether those are prevailing.
       auto LT = (GlobalValue::LinkageTypes)Flags.Linkage;
-      if (IsPrevailing &&
-          !GlobalValue::isLocalLinkage(LT) &&
+      if (IsPrevailing && !GlobalValue::isLocalLinkage(LT) &&
           !IsPrevailing(std::get<2>(VIAndOriginalGUID))) {
         PendingCallsites.clear();
         PendingAllocs.clear();
@@ -7310,7 +7415,8 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       AS->setModulePath(getThisModule()->first());
 
       auto AliaseeVI = std::get<0>(getValueInfoFromValueId(AliaseeID));
-      auto AliaseeInModule = TheIndex.findSummaryInModule(AliaseeVI, ModulePath);
+      auto AliaseeInModule =
+          TheIndex.findSummaryInModule(AliaseeVI, ModulePath);
       if (!AliaseeInModule)
         return error("Alias expects aliasee summary to be parsed");
       AS->setAliasee(AliaseeVI, AliaseeInModule);
@@ -7336,8 +7442,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       }
       std::vector<ValueInfo> Refs =
           makeRefList(ArrayRef<uint64_t>(Record).slice(RefArrayStart));
-      auto FS =
-          std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
+      auto FS = std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
       FS->setModulePath(getThisModule()->first());
       auto GUID = getValueInfoFromValueId(ValueID);
       FS->setOriginalName(std::get<1>(GUID));
@@ -7363,8 +7468,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
         uint64_t Offset = Record[++I];
         VTableFuncs.push_back({Callee, Offset});
       }
-      auto VS =
-          std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
+      auto VS = std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
       VS->setModulePath(getThisModule()->first());
       VS->setVTableFuncs(VTableFuncs);
       auto GUID = getValueInfoFromValueId(ValueID);
@@ -7451,7 +7555,8 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       AS->setModulePath(ModuleIdMap[ModuleId]);
 
       auto AliaseeVI = std::get<0>(getValueInfoFromValueId(AliaseeValueId));
-      auto AliaseeInModule = TheIndex.findSummaryInModule(AliaseeVI, AS->modulePath());
+      auto AliaseeInModule =
+          TheIndex.findSummaryInModule(AliaseeVI, AS->modulePath());
       AS->setAliasee(AliaseeVI, AliaseeInModule);
 
       ValueInfo VI = std::get<0>(getValueInfoFromValueId(ValueID));
@@ -7476,8 +7581,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       }
       std::vector<ValueInfo> Refs =
           makeRefList(ArrayRef<uint64_t>(Record).slice(RefArrayStart));
-      auto FS =
-          std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
+      auto FS = std::make_unique<GlobalVarSummary>(Flags, GVF, std::move(Refs));
       LastSeenSummary = FS.get();
       FS->setModulePath(ModuleIdMap[ModuleId]);
       ValueInfo VI = std::get<0>(getValueInfoFromValueId(ValueID));
@@ -7505,13 +7609,13 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
     case bitc::FS_TYPE_TEST_ASSUME_VCALLS:
       assert(PendingTypeTestAssumeVCalls.empty());
       for (unsigned I = 0; I != Record.size(); I += 2)
-        PendingTypeTestAssumeVCalls.push_back({Record[I], Record[I+1]});
+        PendingTypeTestAssumeVCalls.push_back({Record[I], Record[I + 1]});
       break;
 
     case bitc::FS_TYPE_CHECKED_LOAD_VCALLS:
       assert(PendingTypeCheckedLoadVCalls.empty());
       for (unsigned I = 0; I != Record.size(); I += 2)
-        PendingTypeCheckedLoadVCalls.push_back({Record[I], Record[I+1]});
+        PendingTypeCheckedLoadVCalls.push_back({Record[I], Record[I + 1]});
       break;
 
     case bitc::FS_TYPE_TEST_ASSUME_CONST_VCALL:
@@ -7641,8 +7745,7 @@ Error ModuleSummaryIndexBitcodeReader::parseEntireSummary(unsigned ID) {
       SmallVector<uint8_t> Versions;
       for (unsigned J = 0; J < NumVersions; J++)
         Versions.push_back(Record[I++]);
-      PendingAllocs.push_back(
-          AllocInfo(std::move(Versions), std::move(MIBs)));
+      PendingAllocs.push_back(AllocInfo(std::move(Versions), std::move(MIBs)));
       break;
     }
     }
@@ -7724,9 +7827,7 @@ namespace {
 // will be removed once this transition is complete. Clients should prefer to
 // deal with the Error value directly, rather than converting to error_code.
 class BitcodeErrorCategoryType : public std::error_category {
-  const char *name() const noexcept override {
-    return "llvm.bitcode";
-  }
+  const char *name() const noexcept override { return "llvm.bitcode"; }
 
   std::string message(int IE) const override {
     BitcodeError E = static_cast<BitcodeError>(IE);
@@ -7993,16 +8094,15 @@ Expected<std::unique_ptr<ModuleSummaryIndex>> BitcodeModule::getSummary() {
 }
 
 static Expected<std::pair<bool, bool>>
-getEnableSplitLTOUnitAndUnifiedFlag(BitstreamCursor &Stream,
-                                                 unsigned ID,
-                                                 BitcodeLTOInfo &LTOInfo) {
+getEnableSplitLTOUnitAndUnifiedFlag(BitstreamCursor &Stream, unsigned ID,
+                                    BitcodeLTOInfo &LTOInfo) {
   if (Error Err = Stream.EnterSubBlock(ID))
     return std::move(Err);
   SmallVector<uint64_t, 64> Record;
 
   while (true) {
     BitstreamEntry Entry;
-    std::pair<bool, bool> Result = {false,false};
+    std::pair<bool, bool> Result = {false, false};
     if (Error E = Stream.advanceSkippingSubblocks().moveInto(Entry))
       return std::move(E);
 
diff --git a/llvm/lib/Bitcode/Reader/CMakeLists.txt b/llvm/lib/Bitcode/Reader/CMakeLists.txt
index 7a385613105acde..65c15fc20859a22 100644
--- a/llvm/lib/Bitcode/Reader/CMakeLists.txt
+++ b/llvm/lib/Bitcode/Reader/CMakeLists.txt
@@ -1,19 +1,11 @@
-add_llvm_component_library(LLVMBitReader
-  BitcodeAnalyzer.cpp
-  BitReader.cpp
-  BitcodeReader.cpp
-  MetadataLoader.cpp
-  ValueList.cpp
+add_llvm_component_library(LLVMBitReader
+  BitcodeAnalyzer.cpp BitReader.cpp BitcodeReader.cpp
+  MetadataLoader.cpp ValueList.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitcode
+  ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitcode
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  BitstreamReader
-  Core
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS BitstreamReader Core Support TargetParser)
diff --git a/llvm/lib/Bitcode/Reader/MetadataLoader.h b/llvm/lib/Bitcode/Reader/MetadataLoader.h
index fbee7e49f8dff14..21ae46af74d2856 100644
--- a/llvm/lib/Bitcode/Reader/MetadataLoader.h
+++ b/llvm/lib/Bitcode/Reader/MetadataLoader.h
@@ -92,6 +92,6 @@ class MetadataLoader {
   /// Perform bitcode upgrades on llvm.dbg.* calls.
   void upgradeDebugIntrinsics(Function &F);
 };
-}
+} // namespace llvm
 
 #endif // LLVM_LIB_BITCODE_READER_METADATALOADER_H
diff --git a/llvm/lib/Bitcode/Reader/ValueList.h b/llvm/lib/Bitcode/Reader/ValueList.h
index a5b3f6e207077a7..5875501f60f1921 100644
--- a/llvm/lib/Bitcode/Reader/ValueList.h
+++ b/llvm/lib/Bitcode/Reader/ValueList.h
@@ -46,16 +46,12 @@ class BitcodeReaderValueList {
 
   // vector compatibility methods
   unsigned size() const { return ValuePtrs.size(); }
-  void resize(unsigned N) {
-    ValuePtrs.resize(N);
-  }
+  void resize(unsigned N) { ValuePtrs.resize(N); }
   void push_back(Value *V, unsigned TypeID) {
     ValuePtrs.emplace_back(V, TypeID);
   }
 
-  void clear() {
-    ValuePtrs.clear();
-  }
+  void clear() { ValuePtrs.clear(); }
 
   Value *operator[](unsigned i) const {
     assert(i < ValuePtrs.size());
@@ -68,9 +64,7 @@ class BitcodeReaderValueList {
   }
 
   Value *back() const { return ValuePtrs.back().first; }
-  void pop_back() {
-    ValuePtrs.pop_back();
-  }
+  void pop_back() { ValuePtrs.pop_back(); }
   bool empty() const { return ValuePtrs.empty(); }
 
   void shrinkTo(unsigned N) {
diff --git a/llvm/lib/Bitcode/Writer/BitWriter.cpp b/llvm/lib/Bitcode/Writer/BitWriter.cpp
index be59c1f928360a8..1d2850bb65230b4 100644
--- a/llvm/lib/Bitcode/Writer/BitWriter.cpp
+++ b/llvm/lib/Bitcode/Writer/BitWriter.cpp
@@ -14,7 +14,6 @@
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-
 /*===-- Operations on modules ---------------------------------------------===*/
 
 int LLVMWriteBitcodeToFile(LLVMModuleRef M, const char *Path) {
diff --git a/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp b/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
index f53fbd73667762c..765e7271c1825a9 100644
--- a/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
+++ b/llvm/lib/Bitcode/Writer/BitcodeWriter.cpp
@@ -352,8 +352,8 @@ class ModuleBitcodeWriter : public ModuleBitcodeWriterBase {
                              unsigned Abbrev);
   void writeDILocalVariable(const DILocalVariable *N,
                             SmallVectorImpl<uint64_t> &Record, unsigned Abbrev);
-  void writeDILabel(const DILabel *N,
-                    SmallVectorImpl<uint64_t> &Record, unsigned Abbrev);
+  void writeDILabel(const DILabel *N, SmallVectorImpl<uint64_t> &Record,
+                    unsigned Abbrev);
   void writeDIExpression(const DIExpression *N,
                          SmallVectorImpl<uint64_t> &Record, unsigned Abbrev);
   void writeDIGlobalVariableExpression(const DIGlobalVariableExpression *N,
@@ -403,9 +403,7 @@ class ModuleBitcodeWriter : public ModuleBitcodeWriterBase {
   void writeBlockInfo();
   void writeModuleHash(size_t BlockStartPos);
 
-  unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
-    return unsigned(SSID);
-  }
+  unsigned getEncodedSyncScopeID(SyncScope::ID SSID) { return unsigned(SSID); }
 
   unsigned getEncodedAlign(MaybeAlign Alignment) { return encode(Alignment); }
 };
@@ -479,8 +477,7 @@ class IndexBitcodeWriter : public BitcodeWriterBase {
   /// Calls the callback for each value GUID and summary to be written to
   /// bitcode. This hides the details of whether they are being pulled from the
   /// entire index or just those in a provided ModuleToSummariesForIndex map.
-  template<typename Functor>
-  void forEachSummary(Functor Callback) {
+  template <typename Functor> void forEachSummary(Functor Callback) {
     if (ModuleToSummariesForIndex) {
       for (auto &M : *ModuleToSummariesForIndex)
         for (auto &Summary : M.second) {
@@ -550,72 +547,118 @@ class IndexBitcodeWriter : public BitcodeWriterBase {
 
 static unsigned getEncodedCastOpcode(unsigned Opcode) {
   switch (Opcode) {
-  default: llvm_unreachable("Unknown cast instruction!");
-  case Instruction::Trunc   : return bitc::CAST_TRUNC;
-  case Instruction::ZExt    : return bitc::CAST_ZEXT;
-  case Instruction::SExt    : return bitc::CAST_SEXT;
-  case Instruction::FPToUI  : return bitc::CAST_FPTOUI;
-  case Instruction::FPToSI  : return bitc::CAST_FPTOSI;
-  case Instruction::UIToFP  : return bitc::CAST_UITOFP;
-  case Instruction::SIToFP  : return bitc::CAST_SITOFP;
-  case Instruction::FPTrunc : return bitc::CAST_FPTRUNC;
-  case Instruction::FPExt   : return bitc::CAST_FPEXT;
-  case Instruction::PtrToInt: return bitc::CAST_PTRTOINT;
-  case Instruction::IntToPtr: return bitc::CAST_INTTOPTR;
-  case Instruction::BitCast : return bitc::CAST_BITCAST;
-  case Instruction::AddrSpaceCast: return bitc::CAST_ADDRSPACECAST;
+  default:
+    llvm_unreachable("Unknown cast instruction!");
+  case Instruction::Trunc:
+    return bitc::CAST_TRUNC;
+  case Instruction::ZExt:
+    return bitc::CAST_ZEXT;
+  case Instruction::SExt:
+    return bitc::CAST_SEXT;
+  case Instruction::FPToUI:
+    return bitc::CAST_FPTOUI;
+  case Instruction::FPToSI:
+    return bitc::CAST_FPTOSI;
+  case Instruction::UIToFP:
+    return bitc::CAST_UITOFP;
+  case Instruction::SIToFP:
+    return bitc::CAST_SITOFP;
+  case Instruction::FPTrunc:
+    return bitc::CAST_FPTRUNC;
+  case Instruction::FPExt:
+    return bitc::CAST_FPEXT;
+  case Instruction::PtrToInt:
+    return bitc::CAST_PTRTOINT;
+  case Instruction::IntToPtr:
+    return bitc::CAST_INTTOPTR;
+  case Instruction::BitCast:
+    return bitc::CAST_BITCAST;
+  case Instruction::AddrSpaceCast:
+    return bitc::CAST_ADDRSPACECAST;
   }
 }
 
 static unsigned getEncodedUnaryOpcode(unsigned Opcode) {
   switch (Opcode) {
-  default: llvm_unreachable("Unknown binary instruction!");
-  case Instruction::FNeg: return bitc::UNOP_FNEG;
+  default:
+    llvm_unreachable("Unknown binary instruction!");
+  case Instruction::FNeg:
+    return bitc::UNOP_FNEG;
   }
 }
 
 static unsigned getEncodedBinaryOpcode(unsigned Opcode) {
   switch (Opcode) {
-  default: llvm_unreachable("Unknown binary instruction!");
+  default:
+    llvm_unreachable("Unknown binary instruction!");
   case Instruction::Add:
-  case Instruction::FAdd: return bitc::BINOP_ADD;
+  case Instruction::FAdd:
+    return bitc::BINOP_ADD;
   case Instruction::Sub:
-  case Instruction::FSub: return bitc::BINOP_SUB;
+  case Instruction::FSub:
+    return bitc::BINOP_SUB;
   case Instruction::Mul:
-  case Instruction::FMul: return bitc::BINOP_MUL;
-  case Instruction::UDiv: return bitc::BINOP_UDIV;
+  case Instruction::FMul:
+    return bitc::BINOP_MUL;
+  case Instruction::UDiv:
+    return bitc::BINOP_UDIV;
   case Instruction::FDiv:
-  case Instruction::SDiv: return bitc::BINOP_SDIV;
-  case Instruction::URem: return bitc::BINOP_UREM;
+  case Instruction::SDiv:
+    return bitc::BINOP_SDIV;
+  case Instruction::URem:
+    return bitc::BINOP_UREM;
   case Instruction::FRem:
-  case Instruction::SRem: return bitc::BINOP_SREM;
-  case Instruction::Shl:  return bitc::BINOP_SHL;
-  case Instruction::LShr: return bitc::BINOP_LSHR;
-  case Instruction::AShr: return bitc::BINOP_ASHR;
-  case Instruction::And:  return bitc::BINOP_AND;
-  case Instruction::Or:   return bitc::BINOP_OR;
-  case Instruction::Xor:  return bitc::BINOP_XOR;
+  case Instruction::SRem:
+    return bitc::BINOP_SREM;
+  case Instruction::Shl:
+    return bitc::BINOP_SHL;
+  case Instruction::LShr:
+    return bitc::BINOP_LSHR;
+  case Instruction::AShr:
+    return bitc::BINOP_ASHR;
+  case Instruction::And:
+    return bitc::BINOP_AND;
+  case Instruction::Or:
+    return bitc::BINOP_OR;
+  case Instruction::Xor:
+    return bitc::BINOP_XOR;
   }
 }
 
 static unsigned getEncodedRMWOperation(AtomicRMWInst::BinOp Op) {
   switch (Op) {
-  default: llvm_unreachable("Unknown RMW operation!");
-  case AtomicRMWInst::Xchg: return bitc::RMW_XCHG;
-  case AtomicRMWInst::Add: return bitc::RMW_ADD;
-  case AtomicRMWInst::Sub: return bitc::RMW_SUB;
-  case AtomicRMWInst::And: return bitc::RMW_AND;
-  case AtomicRMWInst::Nand: return bitc::RMW_NAND;
-  case AtomicRMWInst::Or: return bitc::RMW_OR;
-  case AtomicRMWInst::Xor: return bitc::RMW_XOR;
-  case AtomicRMWInst::Max: return bitc::RMW_MAX;
-  case AtomicRMWInst::Min: return bitc::RMW_MIN;
-  case AtomicRMWInst::UMax: return bitc::RMW_UMAX;
-  case AtomicRMWInst::UMin: return bitc::RMW_UMIN;
-  case AtomicRMWInst::FAdd: return bitc::RMW_FADD;
-  case AtomicRMWInst::FSub: return bitc::RMW_FSUB;
-  case AtomicRMWInst::FMax: return bitc::RMW_FMAX;
-  case AtomicRMWInst::FMin: return bitc::RMW_FMIN;
+  default:
+    llvm_unreachable("Unknown RMW operation!");
+  case AtomicRMWInst::Xchg:
+    return bitc::RMW_XCHG;
+  case AtomicRMWInst::Add:
+    return bitc::RMW_ADD;
+  case AtomicRMWInst::Sub:
+    return bitc::RMW_SUB;
+  case AtomicRMWInst::And:
+    return bitc::RMW_AND;
+  case AtomicRMWInst::Nand:
+    return bitc::RMW_NAND;
+  case AtomicRMWInst::Or:
+    return bitc::RMW_OR;
+  case AtomicRMWInst::Xor:
+    return bitc::RMW_XOR;
+  case AtomicRMWInst::Max:
+    return bitc::RMW_MAX;
+  case AtomicRMWInst::Min:
+    return bitc::RMW_MIN;
+  case AtomicRMWInst::UMax:
+    return bitc::RMW_UMAX;
+  case AtomicRMWInst::UMin:
+    return bitc::RMW_UMIN;
+  case AtomicRMWInst::FAdd:
+    return bitc::RMW_FADD;
+  case AtomicRMWInst::FSub:
+    return bitc::RMW_FSUB;
+  case AtomicRMWInst::FMax:
+    return bitc::RMW_FMAX;
+  case AtomicRMWInst::FMin:
+    return bitc::RMW_FMIN;
   case AtomicRMWInst::UIncWrap:
     return bitc::RMW_UINC_WRAP;
   case AtomicRMWInst::UDecWrap:
@@ -625,13 +668,20 @@ static unsigned getEncodedRMWOperation(AtomicRMWInst::BinOp Op) {
 
 static unsigned getEncodedOrdering(AtomicOrdering Ordering) {
   switch (Ordering) {
-  case AtomicOrdering::NotAtomic: return bitc::ORDERING_NOTATOMIC;
-  case AtomicOrdering::Unordered: return bitc::ORDERING_UNORDERED;
-  case AtomicOrdering::Monotonic: return bitc::ORDERING_MONOTONIC;
-  case AtomicOrdering::Acquire: return bitc::ORDERING_ACQUIRE;
-  case AtomicOrdering::Release: return bitc::ORDERING_RELEASE;
-  case AtomicOrdering::AcquireRelease: return bitc::ORDERING_ACQREL;
-  case AtomicOrdering::SequentiallyConsistent: return bitc::ORDERING_SEQCST;
+  case AtomicOrdering::NotAtomic:
+    return bitc::ORDERING_NOTATOMIC;
+  case AtomicOrdering::Unordered:
+    return bitc::ORDERING_UNORDERED;
+  case AtomicOrdering::Monotonic:
+    return bitc::ORDERING_MONOTONIC;
+  case AtomicOrdering::Acquire:
+    return bitc::ORDERING_ACQUIRE;
+  case AtomicOrdering::Release:
+    return bitc::ORDERING_RELEASE;
+  case AtomicOrdering::AcquireRelease:
+    return bitc::ORDERING_ACQREL;
+  case AtomicOrdering::SequentiallyConsistent:
+    return bitc::ORDERING_SEQCST;
   }
   llvm_unreachable("Invalid ordering");
 }
@@ -836,7 +886,8 @@ static uint64_t getAttrKindEncoding(Attribute::AttrKind Kind) {
 void ModuleBitcodeWriter::writeAttributeGroupTable() {
   const std::vector<ValueEnumerator::IndexAndAttrSet> &AttrGrps =
       VE.getAttributeGroups();
-  if (AttrGrps.empty()) return;
+  if (AttrGrps.empty())
+    return;
 
   Stream.EnterSubblock(bitc::PARAMATTR_GROUP_BLOCK_ID, 3);
 
@@ -885,7 +936,8 @@ void ModuleBitcodeWriter::writeAttributeGroupTable() {
 
 void ModuleBitcodeWriter::writeAttributeTable() {
   const std::vector<AttributeList> &Attrs = VE.getAttributeLists();
-  if (Attrs.empty()) return;
+  if (Attrs.empty())
+    return;
 
   Stream.EnterSubblock(bitc::PARAMATTR_BLOCK_ID, 3);
 
@@ -922,7 +974,7 @@ void ModuleBitcodeWriter::writeTypeTable() {
   // Abbrev for TYPE_CODE_FUNCTION.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_FUNCTION));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1));  // isvararg
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // isvararg
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, NumBits));
   unsigned FunctionAbbrev = Stream.EmitAbbrev(std::move(Abbv));
@@ -930,7 +982,7 @@ void ModuleBitcodeWriter::writeTypeTable() {
   // Abbrev for TYPE_CODE_STRUCT_ANON.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_STRUCT_ANON));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1));  // ispacked
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // ispacked
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, NumBits));
   unsigned StructAnonAbbrev = Stream.EmitAbbrev(std::move(Abbv));
@@ -945,7 +997,7 @@ void ModuleBitcodeWriter::writeTypeTable() {
   // Abbrev for TYPE_CODE_STRUCT_NAMED.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_STRUCT_NAMED));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1));  // ispacked
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // ispacked
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, NumBits));
   unsigned StructNamedAbbrev = Stream.EmitAbbrev(std::move(Abbv));
@@ -953,7 +1005,7 @@ void ModuleBitcodeWriter::writeTypeTable() {
   // Abbrev for TYPE_CODE_ARRAY.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::TYPE_CODE_ARRAY));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // size
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // size
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, NumBits));
   unsigned ArrayAbbrev = Stream.EmitAbbrev(std::move(Abbv));
 
@@ -968,19 +1020,45 @@ void ModuleBitcodeWriter::writeTypeTable() {
     unsigned Code = 0;
 
     switch (T->getTypeID()) {
-    case Type::VoidTyID:      Code = bitc::TYPE_CODE_VOID;      break;
-    case Type::HalfTyID:      Code = bitc::TYPE_CODE_HALF;      break;
-    case Type::BFloatTyID:    Code = bitc::TYPE_CODE_BFLOAT;    break;
-    case Type::FloatTyID:     Code = bitc::TYPE_CODE_FLOAT;     break;
-    case Type::DoubleTyID:    Code = bitc::TYPE_CODE_DOUBLE;    break;
-    case Type::X86_FP80TyID:  Code = bitc::TYPE_CODE_X86_FP80;  break;
-    case Type::FP128TyID:     Code = bitc::TYPE_CODE_FP128;     break;
-    case Type::PPC_FP128TyID: Code = bitc::TYPE_CODE_PPC_FP128; break;
-    case Type::LabelTyID:     Code = bitc::TYPE_CODE_LABEL;     break;
-    case Type::MetadataTyID:  Code = bitc::TYPE_CODE_METADATA;  break;
-    case Type::X86_MMXTyID:   Code = bitc::TYPE_CODE_X86_MMX;   break;
-    case Type::X86_AMXTyID:   Code = bitc::TYPE_CODE_X86_AMX;   break;
-    case Type::TokenTyID:     Code = bitc::TYPE_CODE_TOKEN;     break;
+    case Type::VoidTyID:
+      Code = bitc::TYPE_CODE_VOID;
+      break;
+    case Type::HalfTyID:
+      Code = bitc::TYPE_CODE_HALF;
+      break;
+    case Type::BFloatTyID:
+      Code = bitc::TYPE_CODE_BFLOAT;
+      break;
+    case Type::FloatTyID:
+      Code = bitc::TYPE_CODE_FLOAT;
+      break;
+    case Type::DoubleTyID:
+      Code = bitc::TYPE_CODE_DOUBLE;
+      break;
+    case Type::X86_FP80TyID:
+      Code = bitc::TYPE_CODE_X86_FP80;
+      break;
+    case Type::FP128TyID:
+      Code = bitc::TYPE_CODE_FP128;
+      break;
+    case Type::PPC_FP128TyID:
+      Code = bitc::TYPE_CODE_PPC_FP128;
+      break;
+    case Type::LabelTyID:
+      Code = bitc::TYPE_CODE_LABEL;
+      break;
+    case Type::MetadataTyID:
+      Code = bitc::TYPE_CODE_METADATA;
+      break;
+    case Type::X86_MMXTyID:
+      Code = bitc::TYPE_CODE_X86_MMX;
+      break;
+    case Type::X86_AMXTyID:
+      Code = bitc::TYPE_CODE_X86_AMX;
+      break;
+    case Type::TokenTyID:
+      Code = bitc::TYPE_CODE_TOKEN;
+      break;
     case Type::IntegerTyID:
       // INTEGER: [width]
       Code = bitc::TYPE_CODE_INTEGER;
@@ -1153,29 +1231,40 @@ static uint64_t getEncodedGVarFlags(GlobalVarSummary::GVarFlags Flags) {
 
 static unsigned getEncodedVisibility(const GlobalValue &GV) {
   switch (GV.getVisibility()) {
-  case GlobalValue::DefaultVisibility:   return 0;
-  case GlobalValue::HiddenVisibility:    return 1;
-  case GlobalValue::ProtectedVisibility: return 2;
+  case GlobalValue::DefaultVisibility:
+    return 0;
+  case GlobalValue::HiddenVisibility:
+    return 1;
+  case GlobalValue::ProtectedVisibility:
+    return 2;
   }
   llvm_unreachable("Invalid visibility");
 }
 
 static unsigned getEncodedDLLStorageClass(const GlobalValue &GV) {
   switch (GV.getDLLStorageClass()) {
-  case GlobalValue::DefaultStorageClass:   return 0;
-  case GlobalValue::DLLImportStorageClass: return 1;
-  case GlobalValue::DLLExportStorageClass: return 2;
+  case GlobalValue::DefaultStorageClass:
+    return 0;
+  case GlobalValue::DLLImportStorageClass:
+    return 1;
+  case GlobalValue::DLLExportStorageClass:
+    return 2;
   }
   llvm_unreachable("Invalid DLL storage class");
 }
 
 static unsigned getEncodedThreadLocalMode(const GlobalValue &GV) {
   switch (GV.getThreadLocalMode()) {
-    case GlobalVariable::NotThreadLocal:         return 0;
-    case GlobalVariable::GeneralDynamicTLSModel: return 1;
-    case GlobalVariable::LocalDynamicTLSModel:   return 2;
-    case GlobalVariable::InitialExecTLSModel:    return 3;
-    case GlobalVariable::LocalExecTLSModel:      return 4;
+  case GlobalVariable::NotThreadLocal:
+    return 0;
+  case GlobalVariable::GeneralDynamicTLSModel:
+    return 1;
+  case GlobalVariable::LocalDynamicTLSModel:
+    return 2;
+  case GlobalVariable::InitialExecTLSModel:
+    return 3;
+  case GlobalVariable::LocalExecTLSModel:
+    return 4;
   }
   llvm_unreachable("Invalid TLS model");
 }
@@ -1198,9 +1287,12 @@ static unsigned getEncodedComdatSelectionKind(const Comdat &C) {
 
 static unsigned getEncodedUnnamedAddr(const GlobalValue &GV) {
   switch (GV.getUnnamedAddr()) {
-  case GlobalValue::UnnamedAddr::None:   return 0;
-  case GlobalValue::UnnamedAddr::Local:  return 2;
-  case GlobalValue::UnnamedAddr::Global: return 1;
+  case GlobalValue::UnnamedAddr::None:
+    return 0;
+  case GlobalValue::UnnamedAddr::Local:
+    return 2;
+  case GlobalValue::UnnamedAddr::Global:
+    return 1;
   }
   llvm_unreachable("Invalid unnamed_addr");
 }
@@ -1270,8 +1362,8 @@ static_assert(sizeof(GlobalValue::SanitizerMetadata) <= sizeof(unsigned),
               "Sanitizer Metadata is too large for naive serialization.");
 static unsigned
 serializeSanitizerMetadata(const GlobalValue::SanitizerMetadata &Meta) {
-  return Meta.NoAddress | (Meta.NoHWAddress << 1) |
-         (Meta.Memtag << 2) | (Meta.IsDynInit << 3);
+  return Meta.NoAddress | (Meta.NoHWAddress << 1) | (Meta.Memtag << 2) |
+         (Meta.IsDynInit << 3);
 }
 
 /// Emit top-level description of module, including target triple, inline asm,
@@ -1306,8 +1398,8 @@ void ModuleBitcodeWriter::writeModuleInfo() {
       // Give section names unique ID's.
       unsigned &Entry = SectionMap[std::string(GV.getSection())];
       if (!Entry) {
-        writeStringRecord(Stream, bitc::MODULE_CODE_SECTIONNAME, GV.getSection(),
-                          0 /*TODO*/);
+        writeStringRecord(Stream, bitc::MODULE_CODE_SECTIONNAME,
+                          GV.getSection(), 0 /*TODO*/);
         Entry = SectionMap.size();
       }
     }
@@ -1343,7 +1435,7 @@ void ModuleBitcodeWriter::writeModuleInfo() {
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,
-                              Log2_32_Ceil(MaxGlobalType+1)));
+                              Log2_32_Ceil(MaxGlobalType + 1)));
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // AddrSpace << 2
                                                            //| explicitType << 1
                                                            //| constant
@@ -1354,13 +1446,13 @@ void ModuleBitcodeWriter::writeModuleInfo() {
     else {
       unsigned MaxEncAlignment = getEncodedAlign(MaxAlignment);
       Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,
-                               Log2_32_Ceil(MaxEncAlignment+1)));
+                                Log2_32_Ceil(MaxEncAlignment + 1)));
     }
-    if (SectionMap.empty())                                    // Section.
+    if (SectionMap.empty()) // Section.
       Abbv->Add(BitCodeAbbrevOp(0));
     else
       Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,
-                               Log2_32_Ceil(SectionMap.size()+1)));
+                                Log2_32_Ceil(SectionMap.size() + 1)));
     // Don't bother emitting vis + thread local.
     SimpleGVarAbbrev = Stream.EmitAbbrev(std::move(Abbv));
   }
@@ -1402,8 +1494,8 @@ void ModuleBitcodeWriter::writeModuleInfo() {
     Vals.push_back(GV.getName().size());
     Vals.push_back(VE.getTypeID(GV.getValueType()));
     Vals.push_back(GV.getType()->getAddressSpace() << 2 | 2 | GV.isConstant());
-    Vals.push_back(GV.isDeclaration() ? 0 :
-                   (VE.getValueID(GV.getInitializer()) + 1));
+    Vals.push_back(
+        GV.isDeclaration() ? 0 : (VE.getValueID(GV.getInitializer()) + 1));
     Vals.push_back(getEncodedLinkage(GV));
     Vals.push_back(getEncodedAlign(GV.getAlign()));
     Vals.push_back(GV.hasSection() ? SectionMap[std::string(GV.getSection())]
@@ -1459,8 +1551,8 @@ void ModuleBitcodeWriter::writeModuleInfo() {
     Vals.push_back(getEncodedVisibility(F));
     Vals.push_back(F.hasGC() ? GCMap[F.getGC()] : 0);
     Vals.push_back(getEncodedUnnamedAddr(F));
-    Vals.push_back(F.hasPrologueData() ? (VE.getValueID(F.getPrologueData()) + 1)
-                                       : 0);
+    Vals.push_back(
+        F.hasPrologueData() ? (VE.getValueID(F.getPrologueData()) + 1) : 0);
     Vals.push_back(getEncodedDLLStorageClass(F));
     Vals.push_back(F.hasComdat() ? VE.getComdatID(F.getComdat()) : 0);
     Vals.push_back(F.hasPrefixData() ? (VE.getValueID(F.getPrefixData()) + 1)
@@ -2077,9 +2169,9 @@ void ModuleBitcodeWriter::writeDILocalVariable(
   Record.clear();
 }
 
-void ModuleBitcodeWriter::writeDILabel(
-    const DILabel *N, SmallVectorImpl<uint64_t> &Record,
-    unsigned Abbrev) {
+void ModuleBitcodeWriter::writeDILabel(const DILabel *N,
+                                       SmallVectorImpl<uint64_t> &Record,
+                                       unsigned Abbrev) {
   Record.push_back((uint64_t)N->isDistinct());
   Record.push_back(VE.getMetadataOrNullID(N->getScope()));
   Record.push_back(VE.getMetadataOrNullID(N->getRawName()));
@@ -2230,7 +2322,7 @@ void ModuleBitcodeWriter::writeMetadataRecords(
   if (MDs.empty())
     return;
 
-  // Initialize MDNode abbreviations.
+    // Initialize MDNode abbreviations.
 #define HANDLE_MDNODE_LEAF(CLASS) unsigned CLASS##Abbrev = 0;
 #include "llvm/IR/Metadata.def"
 
@@ -2395,7 +2487,8 @@ void ModuleBitcodeWriter::writeFunctionMetadataAttachment(const Function &F) {
       I.getAllMetadataOtherThanDebugLoc(MDs);
 
       // If no metadata, ignore instruction.
-      if (MDs.empty()) continue;
+      if (MDs.empty())
+        continue;
 
       Record.push_back(VE.getInstructionID(&I));
 
@@ -2418,7 +2511,8 @@ void ModuleBitcodeWriter::writeModuleMetadataKinds() {
   SmallVector<StringRef, 8> Names;
   M.getMDKindNames(Names);
 
-  if (Names.empty()) return;
+  if (Names.empty())
+    return;
 
   Stream.EnterSubblock(bitc::METADATA_KIND_BLOCK_ID, 3);
 
@@ -2481,7 +2575,8 @@ void ModuleBitcodeWriter::writeSyncScopeNames() {
 
 void ModuleBitcodeWriter::writeConstants(unsigned FirstVal, unsigned LastVal,
                                          bool isGlobal) {
-  if (FirstVal == LastVal) return;
+  if (FirstVal == LastVal)
+    return;
 
   Stream.EnterSubblock(bitc::CONSTANTS_BLOCK_ID, 4);
 
@@ -2495,7 +2590,8 @@ void ModuleBitcodeWriter::writeConstants(unsigned FirstVal, unsigned LastVal,
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::CST_CODE_AGGREGATE));
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, Log2_32_Ceil(LastVal+1)));
+    Abbv->Add(
+        BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, Log2_32_Ceil(LastVal + 1)));
     AggregateAbbrev = Stream.EmitAbbrev(std::move(Abbv));
 
     // Abbrev for CST_CODE_STRING.
@@ -2567,7 +2663,7 @@ void ModuleBitcodeWriter::writeConstants(unsigned FirstVal, unsigned LastVal,
         emitSignedInt64(Record, V);
         Code = bitc::CST_CODE_INTEGER;
         AbbrevToUse = CONSTANTS_INTEGER_ABBREV;
-      } else {                             // Wide integers, > 64 bits in size.
+      } else { // Wide integers, > 64 bits in size.
         emitWideAPInt(Record, IV->getValue());
         Code = bitc::CST_CODE_WIDE_INTEGER;
       }
@@ -2600,7 +2696,7 @@ void ModuleBitcodeWriter::writeConstants(unsigned FirstVal, unsigned LastVal,
       // If this is a null-terminated string, use the denser CSTRING encoding.
       if (Str->isCString()) {
         Code = bitc::CST_CODE_CSTRING;
-        --NumElts;  // Don't encode the null, which isn't allowed by char6.
+        --NumElts; // Don't encode the null, which isn't allowed by char6.
       } else {
         Code = bitc::CST_CODE_STRING;
         AbbrevToUse = String8Abbrev;
@@ -2620,7 +2716,7 @@ void ModuleBitcodeWriter::writeConstants(unsigned FirstVal, unsigned LastVal,
       else if (isCStr7)
         AbbrevToUse = CString7Abbrev;
     } else if (const ConstantDataSequential *CDS =
-                  dyn_cast<ConstantDataSequential>(C)) {
+                   dyn_cast<ConstantDataSequential>(C)) {
       Code = bitc::CST_CODE_DATA;
       Type *EltTy = CDS->getElementType();
       if (isa<IntegerType>(EltTy)) {
@@ -2919,45 +3015,39 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     break;
   }
 
-  case Instruction::Ret:
-    {
-      Code = bitc::FUNC_CODE_INST_RET;
-      unsigned NumOperands = I.getNumOperands();
-      if (NumOperands == 0)
-        AbbrevToUse = FUNCTION_INST_RET_VOID_ABBREV;
-      else if (NumOperands == 1) {
-        if (!pushValueAndType(I.getOperand(0), InstID, Vals))
-          AbbrevToUse = FUNCTION_INST_RET_VAL_ABBREV;
-      } else {
-        for (unsigned i = 0, e = NumOperands; i != e; ++i)
-          pushValueAndType(I.getOperand(i), InstID, Vals);
-      }
+  case Instruction::Ret: {
+    Code = bitc::FUNC_CODE_INST_RET;
+    unsigned NumOperands = I.getNumOperands();
+    if (NumOperands == 0)
+      AbbrevToUse = FUNCTION_INST_RET_VOID_ABBREV;
+    else if (NumOperands == 1) {
+      if (!pushValueAndType(I.getOperand(0), InstID, Vals))
+        AbbrevToUse = FUNCTION_INST_RET_VAL_ABBREV;
+    } else {
+      for (unsigned i = 0, e = NumOperands; i != e; ++i)
+        pushValueAndType(I.getOperand(i), InstID, Vals);
     }
-    break;
-  case Instruction::Br:
-    {
-      Code = bitc::FUNC_CODE_INST_BR;
-      const BranchInst &II = cast<BranchInst>(I);
-      Vals.push_back(VE.getValueID(II.getSuccessor(0)));
-      if (II.isConditional()) {
-        Vals.push_back(VE.getValueID(II.getSuccessor(1)));
-        pushValue(II.getCondition(), InstID, Vals);
-      }
+  } break;
+  case Instruction::Br: {
+    Code = bitc::FUNC_CODE_INST_BR;
+    const BranchInst &II = cast<BranchInst>(I);
+    Vals.push_back(VE.getValueID(II.getSuccessor(0)));
+    if (II.isConditional()) {
+      Vals.push_back(VE.getValueID(II.getSuccessor(1)));
+      pushValue(II.getCondition(), InstID, Vals);
     }
-    break;
-  case Instruction::Switch:
-    {
-      Code = bitc::FUNC_CODE_INST_SWITCH;
-      const SwitchInst &SI = cast<SwitchInst>(I);
-      Vals.push_back(VE.getTypeID(SI.getCondition()->getType()));
-      pushValue(SI.getCondition(), InstID, Vals);
-      Vals.push_back(VE.getValueID(SI.getDefaultDest()));
-      for (auto Case : SI.cases()) {
-        Vals.push_back(VE.getValueID(Case.getCaseValue()));
-        Vals.push_back(VE.getValueID(Case.getCaseSuccessor()));
-      }
+  } break;
+  case Instruction::Switch: {
+    Code = bitc::FUNC_CODE_INST_SWITCH;
+    const SwitchInst &SI = cast<SwitchInst>(I);
+    Vals.push_back(VE.getTypeID(SI.getCondition()->getType()));
+    pushValue(SI.getCondition(), InstID, Vals);
+    Vals.push_back(VE.getValueID(SI.getDefaultDest()));
+    for (auto Case : SI.cases()) {
+      Vals.push_back(VE.getValueID(Case.getCaseValue()));
+      Vals.push_back(VE.getValueID(Case.getCaseSuccessor()));
     }
-    break;
+  } break;
   case Instruction::IndirectBr:
     Code = bitc::FUNC_CODE_INST_INDIRECTBR;
     Vals.push_back(VE.getTypeID(I.getOperand(0)->getType()));
@@ -3250,9 +3340,9 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
   }
   case Instruction::VAArg:
     Code = bitc::FUNC_CODE_INST_VAARG;
-    Vals.push_back(VE.getTypeID(I.getOperand(0)->getType()));   // valistty
-    pushValue(I.getOperand(0), InstID, Vals);                   // valist.
-    Vals.push_back(VE.getTypeID(I.getType())); // restype.
+    Vals.push_back(VE.getTypeID(I.getOperand(0)->getType())); // valistty
+    pushValue(I.getOperand(0), InstID, Vals);                 // valist.
+    Vals.push_back(VE.getTypeID(I.getType()));                // restype.
     break;
   case Instruction::Freeze:
     Code = bitc::FUNC_CODE_INST_FREEZE;
@@ -3267,7 +3357,7 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
 /// Write a GlobalValue VST to the module. The purpose of this data structure is
 /// to allow clients to efficiently find the function body.
 void ModuleBitcodeWriter::writeGlobalValueSymbolTable(
-  DenseMap<const Function *, uint64_t> &FunctionToBitcodeIndex) {
+    DenseMap<const Function *, uint64_t> &FunctionToBitcodeIndex) {
   // Get the offset of the VST we are writing, and backpatch it into
   // the VST forward declaration record.
   uint64_t VSTOffset = Stream.GetCurrentBitNo();
@@ -3564,10 +3654,10 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // CE_CAST abbrev for CONSTANTS_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::CST_CODE_CE_CAST));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4));  // cast opc
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,       // typeid
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // cast opc
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,      // typeid
                               VE.computeBitsRequiredForTypeIndicies()));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));    // value id
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // value id
 
     if (Stream.EmitBlockInfoAbbrev(bitc::CONSTANTS_BLOCK_ID, Abbv) !=
         CONSTANTS_CE_CAST_Abbrev)
@@ -3589,7 +3679,7 @@ void ModuleBitcodeWriter::writeBlockInfo() {
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // Ptr
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,    // dest ty
                               VE.computeBitsRequiredForTypeIndicies()));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // Align
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // Align
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 1)); // volatile
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
         FUNCTION_INST_LOAD_ABBREV)
@@ -3598,7 +3688,7 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // INST_UNOP abbrev for FUNCTION_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::FUNC_CODE_INST_UNOP));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // LHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // LHS
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // opc
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
         FUNCTION_INST_UNOP_ABBREV)
@@ -3607,7 +3697,7 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // INST_UNOP_FLAGS abbrev for FUNCTION_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::FUNC_CODE_INST_UNOP));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // LHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // LHS
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // opc
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 8)); // flags
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
@@ -3617,8 +3707,8 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // INST_BINOP abbrev for FUNCTION_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::FUNC_CODE_INST_BINOP));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // LHS
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // RHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // LHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // RHS
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // opc
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
         FUNCTION_INST_BINOP_ABBREV)
@@ -3627,8 +3717,8 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // INST_BINOP_FLAGS abbrev for FUNCTION_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::FUNC_CODE_INST_BINOP));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // LHS
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // RHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // LHS
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // RHS
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // opc
     Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 8)); // flags
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
@@ -3638,10 +3728,10 @@ void ModuleBitcodeWriter::writeBlockInfo() {
   { // INST_CAST abbrev for FUNCTION_BLOCK.
     auto Abbv = std::make_shared<BitCodeAbbrev>();
     Abbv->Add(BitCodeAbbrevOp(bitc::FUNC_CODE_INST_CAST));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));    // OpVal
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,       // dest ty
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // OpVal
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed,    // dest ty
                               VE.computeBitsRequiredForTypeIndicies()));
-    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4));  // opc
+    Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Fixed, 4)); // opc
     if (Stream.EmitBlockInfoAbbrev(bitc::FUNCTION_BLOCK_ID, Abbv) !=
         FUNCTION_INST_CAST_ABBREV)
       llvm_unreachable("Unexpected abbrev ordering!");
@@ -4119,13 +4209,13 @@ void ModuleBitcodeWriterBase::writePerModuleGlobalValueSummary() {
   // Abbrev for FS_PERMODULE_PROFILE.
   auto Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_PERMODULE_PROFILE));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // instcount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // fflags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // numrefs
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // rorefcnt
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // worefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // instcount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // fflags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // numrefs
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // rorefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // worefcnt
   // numrefs x valueid, n x (valueid, hotness)
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
@@ -4137,13 +4227,13 @@ void ModuleBitcodeWriterBase::writePerModuleGlobalValueSummary() {
     Abbv->Add(BitCodeAbbrevOp(bitc::FS_PERMODULE_RELBF));
   else
     Abbv->Add(BitCodeAbbrevOp(bitc::FS_PERMODULE));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // instcount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // fflags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // numrefs
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // rorefcnt
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // worefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // instcount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // fflags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // numrefs
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // rorefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // worefcnt
   // numrefs x valueid, n x (valueid [, rel_block_freq])
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
@@ -4172,9 +4262,9 @@ void ModuleBitcodeWriterBase::writePerModuleGlobalValueSummary() {
   // Abbrev for FS_ALIAS.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_ALIAS));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
   unsigned FSAliasAbbrev = Stream.EmitAbbrev(std::move(Abbv));
 
   // Abbrev for FS_TYPE_ID_METADATA
@@ -4284,8 +4374,8 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
     StackIdAbbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
     StackIdAbbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
     unsigned StackIdAbbvId = Stream.EmitAbbrev(std::move(StackIdAbbv));
-    // Write the stack ids used by this index, which will be a subset of those in
-    // the full index in the case of distributed indexes.
+    // Write the stack ids used by this index, which will be a subset of those
+    // in the full index in the case of distributed indexes.
     std::vector<uint64_t> StackIds;
     for (auto &I : StackIdIndices)
       StackIds.push_back(Index.getStackIdAtIndex(I));
@@ -4295,15 +4385,15 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
   // Abbrev for FS_COMBINED.
   auto Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_COMBINED));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // modid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // instcount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // fflags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // entrycount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // numrefs
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // rorefcnt
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // worefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // modid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // instcount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // fflags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // entrycount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // numrefs
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // rorefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // worefcnt
   // numrefs x valueid, n x (valueid)
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
@@ -4312,15 +4402,15 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
   // Abbrev for FS_COMBINED_PROFILE.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_COMBINED_PROFILE));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // modid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // instcount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // fflags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // entrycount
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // numrefs
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // rorefcnt
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4));   // worefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // modid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // instcount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // fflags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // entrycount
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // numrefs
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // rorefcnt
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 4)); // worefcnt
   // numrefs x valueid, n x (valueid, hotness)
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
@@ -4329,20 +4419,20 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
   // Abbrev for FS_COMBINED_GLOBALVAR_INIT_REFS.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_COMBINED_GLOBALVAR_INIT_REFS));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // modid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));    // valueids
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // modid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::Array));  // valueids
   Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));
   unsigned FSModRefsAbbrev = Stream.EmitAbbrev(std::move(Abbv));
 
   // Abbrev for FS_COMBINED_ALIAS.
   Abbv = std::make_shared<BitCodeAbbrev>();
   Abbv->Add(BitCodeAbbrevOp(bitc::FS_COMBINED_ALIAS));
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // modid
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6));   // flags
-  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8));   // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // modid
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 6)); // flags
+  Abbv->Add(BitCodeAbbrevOp(BitCodeAbbrevOp::VBR, 8)); // valueid
   unsigned FSAliasAbbrev = Stream.EmitAbbrev(std::move(Abbv));
 
   Abbv = std::make_shared<BitCodeAbbrev>();
@@ -4454,7 +4544,8 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
     writeFunctionHeapProfileRecords(
         Stream, FS, CallsiteAbbrev, AllocAbbrev,
         /*PerModule*/ false,
-        /*GetValueId*/ [&](const ValueInfo &VI) -> unsigned {
+        /*GetValueId*/
+        [&](const ValueInfo &VI) -> unsigned {
           std::optional<unsigned> ValueID = GetValueId(VI);
           // This can happen in shared index files for distributed ThinLTO if
           // the callee function summary is not included. Record 0 which we
@@ -4464,7 +4555,8 @@ void IndexBitcodeWriter::writeCombinedGlobalValueSummary() {
             return 0;
           return *ValueID;
         },
-        /*GetStackIndex*/ [&](unsigned I) {
+        /*GetStackIndex*/
+        [&](unsigned I) {
           // Get the corresponding index into the list of StackIdIndices
           // actually being written for this combined index (which may be a
           // subset in the case of distributed indexes).
@@ -4730,10 +4822,10 @@ static void emitDarwinBCHeaderAndTrailer(SmallVectorImpl<char> &Buffer,
   // number from /usr/include/mach/machine.h.  It is ok to reproduce the
   // specific constants here because they are implicitly part of the Darwin ABI.
   enum {
-    DARWIN_CPU_ARCH_ABI64      = 0x01000000,
-    DARWIN_CPU_TYPE_X86        = 7,
-    DARWIN_CPU_TYPE_ARM        = 12,
-    DARWIN_CPU_TYPE_POWERPC    = 18
+    DARWIN_CPU_ARCH_ABI64 = 0x01000000,
+    DARWIN_CPU_TYPE_X86 = 7,
+    DARWIN_CPU_TYPE_ARM = 12,
+    DARWIN_CPU_TYPE_POWERPC = 18
   };
 
   Triple::ArchType Arch = TT.getArch();
@@ -4882,7 +4974,7 @@ void llvm::WriteBitcodeToFile(const Module &M, raw_ostream &Out,
                               const ModuleSummaryIndex *Index,
                               bool GenerateHash, ModuleHash *ModHash) {
   SmallVector<char, 0> Buffer;
-  Buffer.reserve(256*1024);
+  Buffer.reserve(256 * 1024);
 
   // If this is darwin or another generic macho target, reserve space for the
   // header.
diff --git a/llvm/lib/Bitcode/Writer/BitcodeWriterPass.cpp b/llvm/lib/Bitcode/Writer/BitcodeWriterPass.cpp
index 536d04f2fe269fa..ac02a0071cd876b 100644
--- a/llvm/lib/Bitcode/Writer/BitcodeWriterPass.cpp
+++ b/llvm/lib/Bitcode/Writer/BitcodeWriterPass.cpp
@@ -27,44 +27,44 @@ PreservedAnalyses BitcodeWriterPass::run(Module &M, ModuleAnalysisManager &AM) {
 }
 
 namespace {
-  class WriteBitcodePass : public ModulePass {
-    raw_ostream &OS; // raw_ostream to print on
-    bool ShouldPreserveUseListOrder;
-    bool EmitSummaryIndex;
-    bool EmitModuleHash;
+class WriteBitcodePass : public ModulePass {
+  raw_ostream &OS; // raw_ostream to print on
+  bool ShouldPreserveUseListOrder;
+  bool EmitSummaryIndex;
+  bool EmitModuleHash;
 
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    WriteBitcodePass() : ModulePass(ID), OS(dbgs()) {
-      initializeWriteBitcodePassPass(*PassRegistry::getPassRegistry());
-    }
+public:
+  static char ID; // Pass identification, replacement for typeid
+  WriteBitcodePass() : ModulePass(ID), OS(dbgs()) {
+    initializeWriteBitcodePassPass(*PassRegistry::getPassRegistry());
+  }
 
-    explicit WriteBitcodePass(raw_ostream &o, bool ShouldPreserveUseListOrder,
-                              bool EmitSummaryIndex, bool EmitModuleHash)
-        : ModulePass(ID), OS(o),
-          ShouldPreserveUseListOrder(ShouldPreserveUseListOrder),
-          EmitSummaryIndex(EmitSummaryIndex), EmitModuleHash(EmitModuleHash) {
-      initializeWriteBitcodePassPass(*PassRegistry::getPassRegistry());
-    }
+  explicit WriteBitcodePass(raw_ostream &o, bool ShouldPreserveUseListOrder,
+                            bool EmitSummaryIndex, bool EmitModuleHash)
+      : ModulePass(ID), OS(o),
+        ShouldPreserveUseListOrder(ShouldPreserveUseListOrder),
+        EmitSummaryIndex(EmitSummaryIndex), EmitModuleHash(EmitModuleHash) {
+    initializeWriteBitcodePassPass(*PassRegistry::getPassRegistry());
+  }
 
-    StringRef getPassName() const override { return "Bitcode Writer"; }
+  StringRef getPassName() const override { return "Bitcode Writer"; }
 
-    bool runOnModule(Module &M) override {
-      const ModuleSummaryIndex *Index =
-          EmitSummaryIndex
-              ? &(getAnalysis<ModuleSummaryIndexWrapperPass>().getIndex())
-              : nullptr;
-      WriteBitcodeToFile(M, OS, ShouldPreserveUseListOrder, Index,
-                         EmitModuleHash);
-      return false;
-    }
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesAll();
-      if (EmitSummaryIndex)
-        AU.addRequired<ModuleSummaryIndexWrapperPass>();
-    }
-  };
-}
+  bool runOnModule(Module &M) override {
+    const ModuleSummaryIndex *Index =
+        EmitSummaryIndex
+            ? &(getAnalysis<ModuleSummaryIndexWrapperPass>().getIndex())
+            : nullptr;
+    WriteBitcodeToFile(M, OS, ShouldPreserveUseListOrder, Index,
+                       EmitModuleHash);
+    return false;
+  }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesAll();
+    if (EmitSummaryIndex)
+      AU.addRequired<ModuleSummaryIndexWrapperPass>();
+  }
+};
+} // namespace
 
 char WriteBitcodePass::ID = 0;
 INITIALIZE_PASS_BEGIN(WriteBitcodePass, "write-bitcode", "Write Bitcode", false,
@@ -75,9 +75,10 @@ INITIALIZE_PASS_END(WriteBitcodePass, "write-bitcode", "Write Bitcode", false,
 
 ModulePass *llvm::createBitcodeWriterPass(raw_ostream &Str,
                                           bool ShouldPreserveUseListOrder,
-                                          bool EmitSummaryIndex, bool EmitModuleHash) {
-  return new WriteBitcodePass(Str, ShouldPreserveUseListOrder,
-                              EmitSummaryIndex, EmitModuleHash);
+                                          bool EmitSummaryIndex,
+                                          bool EmitModuleHash) {
+  return new WriteBitcodePass(Str, ShouldPreserveUseListOrder, EmitSummaryIndex,
+                              EmitModuleHash);
 }
 
 bool llvm::isBitcodeWriterPass(Pass *P) {
diff --git a/llvm/lib/Bitcode/Writer/CMakeLists.txt b/llvm/lib/Bitcode/Writer/CMakeLists.txt
index 1cc1802bc9aaf0c..fae6f443844b731 100644
--- a/llvm/lib/Bitcode/Writer/CMakeLists.txt
+++ b/llvm/lib/Bitcode/Writer/CMakeLists.txt
@@ -1,17 +1,7 @@
-add_llvm_component_library(LLVMBitWriter
-  BitWriter.cpp
-  BitcodeWriter.cpp
-  BitcodeWriterPass.cpp
-  ValueEnumerator.cpp
+add_llvm_component_library(LLVMBitWriter BitWriter.cpp BitcodeWriter.cpp
+                           BitcodeWriterPass.cpp ValueEnumerator.cpp
 
-  DEPENDS
-  intrinsics_gen
+                           DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  Core
-  MC
-  Object
-  Support
-  TargetParser
-  )
+                           LINK_COMPONENTS Analysis Core MC Object Support
+                           TargetParser)
diff --git a/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp b/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
index 998f629aaa4ea9c..11ca7c57bb350fd 100644
--- a/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
+++ b/llvm/lib/Bitcode/Writer/ValueEnumerator.cpp
@@ -54,9 +54,7 @@ struct OrderMap {
 
   OrderMap() = default;
 
-  bool isGlobalValue(unsigned ID) const {
-    return ID <= LastGlobalValueID;
-  }
+  bool isGlobalValue(unsigned ID) const { return ID <= LastGlobalValueID; }
 
   unsigned size() const { return IDs.size(); }
   std::pair<unsigned, bool> &operator[](const Value *V) { return IDs[V]; }
@@ -315,7 +313,7 @@ static UseListOrderStack predictUseListOrder(const Module &M) {
   return Stack;
 }
 
-static bool isIntOrIntVectorValue(const std::pair<const Value*, unsigned> &V) {
+static bool isIntOrIntVectorValue(const std::pair<const Value *, unsigned> &V) {
   return V.first->getType()->isIntOrIntVectorTy();
 }
 
@@ -332,7 +330,7 @@ ValueEnumerator::ValueEnumerator(const Module &M,
   }
 
   // Enumerate the functions.
-  for (const Function & F : M) {
+  for (const Function &F : M) {
     EnumerateValue(&F);
     EnumerateType(F.getValueType());
     EnumerateAttributes(F.getAttributes());
@@ -485,7 +483,7 @@ unsigned ValueEnumerator::getValueID(const Value *V) const {
 
   ValueMapType::const_iterator I = ValueMap.find(V);
   assert(I != ValueMap.end() && "Value not in slotcalculator!");
-  return I->second-1;
+  return I->second - 1;
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
@@ -514,13 +512,12 @@ void ValueEnumerator::print(raw_ostream &OS, const ValueMapType &Map,
     for (const Use &U : V->uses()) {
       if (&U != &*V->use_begin())
         OS << ",";
-      if(U->hasName())
+      if (U->hasName())
         OS << " " << U->getName();
       else
         OS << " [null]";
-
     }
-    OS <<  "\n\n";
+    OS << "\n\n";
   }
 }
 
@@ -539,7 +536,8 @@ void ValueEnumerator::print(raw_ostream &OS, const MetadataMapType &Map,
 
 /// OptimizeConstants - Reorder constant pool for denser encoding.
 void ValueEnumerator::OptimizeConstants(unsigned CstStart, unsigned CstEnd) {
-  if (CstStart == CstEnd || CstStart+1 == CstEnd) return;
+  if (CstStart == CstEnd || CstStart + 1 == CstEnd)
+    return;
 
   if (ShouldPreserveUseListOrder)
     // Optimizing constants makes the use-list order difficult to predict.
@@ -549,12 +547,13 @@ void ValueEnumerator::OptimizeConstants(unsigned CstStart, unsigned CstEnd) {
   std::stable_sort(Values.begin() + CstStart, Values.begin() + CstEnd,
                    [this](const std::pair<const Value *, unsigned> &LHS,
                           const std::pair<const Value *, unsigned> &RHS) {
-    // Sort by plane.
-    if (LHS.first->getType() != RHS.first->getType())
-      return getTypeID(LHS.first->getType()) < getTypeID(RHS.first->getType());
-    // Then by frequency.
-    return LHS.second > RHS.second;
-  });
+                     // Sort by plane.
+                     if (LHS.first->getType() != RHS.first->getType())
+                       return getTypeID(LHS.first->getType()) <
+                              getTypeID(RHS.first->getType());
+                     // Then by frequency.
+                     return LHS.second > RHS.second;
+                   });
 
   // Ensure that integer and vector of integer constants are at the start of the
   // constant pool.  This is important so that GEP structure indices come before
@@ -564,7 +563,7 @@ void ValueEnumerator::OptimizeConstants(unsigned CstStart, unsigned CstEnd) {
 
   // Rebuild the modified portion of ValueMap.
   for (; CstStart != CstEnd; ++CstStart)
-    ValueMap[Values[CstStart].first] = CstStart+1;
+    ValueMap[Values[CstStart].first] = CstStart + 1;
 }
 
 /// EnumerateValueSymbolTable - Insert all of the values in the specified symbol
@@ -683,7 +682,8 @@ void ValueEnumerator::EnumerateMetadata(unsigned F, const Metadata *MD) {
   }
 }
 
-const MDNode *ValueEnumerator::enumerateMetadataImpl(unsigned F, const Metadata *MD) {
+const MDNode *ValueEnumerator::enumerateMetadataImpl(unsigned F,
+                                                     const Metadata *MD) {
   if (!MD)
     return nullptr;
 
@@ -871,7 +871,7 @@ void ValueEnumerator::EnumerateValue(const Value *V) {
   unsigned &ValueID = ValueMap[V];
   if (ValueID) {
     // Increment use count.
-    Values[ValueID-1].second++;
+    Values[ValueID - 1].second++;
     return;
   }
 
@@ -894,8 +894,8 @@ void ValueEnumerator::EnumerateValue(const Value *V) {
       // itself.  This makes it more likely that we can avoid forward references
       // in the reader.  We know that there can be no cycles in the constants
       // graph that don't go through a global variable.
-      for (User::const_op_iterator I = C->op_begin(), E = C->op_end();
-           I != E; ++I)
+      for (User::const_op_iterator I = C->op_begin(), E = C->op_end(); I != E;
+           ++I)
         if (!isa<BasicBlock>(*I)) // Don't enumerate BB operand to BlockAddress.
           EnumerateValue(*I);
       if (auto *CE = dyn_cast<ConstantExpr>(C)) {
@@ -918,7 +918,6 @@ void ValueEnumerator::EnumerateValue(const Value *V) {
   ValueID = Values.size();
 }
 
-
 void ValueEnumerator::EnumerateType(Type *Ty) {
   unsigned *TypeID = &TypeMap[Ty];
 
@@ -990,7 +989,8 @@ void ValueEnumerator::EnumerateOperandType(const Value *V) {
 }
 
 void ValueEnumerator::EnumerateAttributes(AttributeList PAL) {
-  if (PAL.isEmpty()) return;  // null is always 0.
+  if (PAL.isEmpty())
+    return; // null is always 0.
 
   // Do a lookup.
   unsigned &Entry = AttributeListMap[PAL];
@@ -1119,8 +1119,8 @@ void ValueEnumerator::purgeFunction() {
   NumMDStrings = 0;
 }
 
-static void IncorporateFunctionInfoGlobalBBIDs(const Function *F,
-                                 DenseMap<const BasicBlock*, unsigned> &IDMap) {
+static void IncorporateFunctionInfoGlobalBBIDs(
+    const Function *F, DenseMap<const BasicBlock *, unsigned> &IDMap) {
   unsigned Counter = 0;
   for (const BasicBlock &BB : *F)
     IDMap[&BB] = ++Counter;
@@ -1132,7 +1132,7 @@ static void IncorporateFunctionInfoGlobalBBIDs(const Function *F,
 unsigned ValueEnumerator::getGlobalBasicBlockID(const BasicBlock *BB) const {
   unsigned &Idx = GlobalBasicBlockIDs[BB];
   if (Idx != 0)
-    return Idx-1;
+    return Idx - 1;
 
   IncorporateFunctionInfoGlobalBBIDs(BB->getParent(), GlobalBasicBlockIDs);
   return getGlobalBasicBlockID(BB);
diff --git a/llvm/lib/Bitcode/Writer/ValueEnumerator.h b/llvm/lib/Bitcode/Writer/ValueEnumerator.h
index 4b45503595f6f45..2f92f58d726329e 100644
--- a/llvm/lib/Bitcode/Writer/ValueEnumerator.h
+++ b/llvm/lib/Bitcode/Writer/ValueEnumerator.h
@@ -115,7 +115,7 @@ class ValueEnumerator {
 
   /// GlobalBasicBlockIDs - This map memoizes the basic block ID's referenced by
   /// the "getGlobalBasicBlockID" method.
-  mutable DenseMap<const BasicBlock*, unsigned> GlobalBasicBlockIDs;
+  mutable DenseMap<const BasicBlock *, unsigned> GlobalBasicBlockIDs;
 
   using InstructionMapType = DenseMap<const Instruction *, unsigned>;
   InstructionMapType InstructionMap;
@@ -123,7 +123,7 @@ class ValueEnumerator {
 
   /// BasicBlocks - This contains all the basic blocks for the currently
   /// incorporated function.  Their reverse mapping is stored in ValueMap.
-  std::vector<const BasicBlock*> BasicBlocks;
+  std::vector<const BasicBlock *> BasicBlocks;
 
   /// When a function is incorporated, this is the size of the Values list
   /// before incorporation.
@@ -166,14 +166,15 @@ class ValueEnumerator {
   unsigned getTypeID(Type *T) const {
     TypeMapType::const_iterator I = TypeMap.find(T);
     assert(I != TypeMap.end() && "Type not in ValueEnumerator!");
-    return I->second-1;
+    return I->second - 1;
   }
 
   unsigned getInstructionID(const Instruction *I) const;
   void setInstructionID(const Instruction *I);
 
   unsigned getAttributeListID(AttributeList PAL) const {
-    if (PAL.isEmpty()) return 0;  // Null maps to zero.
+    if (PAL.isEmpty())
+      return 0; // Null maps to zero.
     AttributeListMapType::const_iterator I = AttributeListMap.find(PAL);
     assert(I != AttributeListMap.end() && "Attribute not in ValueEnumerator!");
     return I->second;
@@ -211,11 +212,13 @@ class ValueEnumerator {
 
   const TypeList &getTypes() const { return Types; }
 
-  const std::vector<const BasicBlock*> &getBasicBlocks() const {
+  const std::vector<const BasicBlock *> &getBasicBlocks() const {
     return BasicBlocks;
   }
 
-  const std::vector<AttributeList> &getAttributeLists() const { return AttributeLists; }
+  const std::vector<AttributeList> &getAttributeLists() const {
+    return AttributeLists;
+  }
 
   const std::vector<IndexAndAttrSet> &getAttributeGroups() const {
     return AttributeGroups;
diff --git a/llvm/lib/Bitstream/CMakeLists.txt b/llvm/lib/Bitstream/CMakeLists.txt
index 49def158f690453..6114fde207bfb54 100644
--- a/llvm/lib/Bitstream/CMakeLists.txt
+++ b/llvm/lib/Bitstream/CMakeLists.txt
@@ -1,2 +1,2 @@
 add_subdirectory(Reader)
-# The writer is header-only.
+# The writer is header-only.
diff --git a/llvm/lib/Bitstream/Reader/BitstreamReader.cpp b/llvm/lib/Bitstream/Reader/BitstreamReader.cpp
index 3cc9dfdf7b85864..e3f7cd8e11b1dc2 100644
--- a/llvm/lib/Bitstream/Reader/BitstreamReader.cpp
+++ b/llvm/lib/Bitstream/Reader/BitstreamReader.cpp
@@ -154,7 +154,7 @@ Expected<unsigned> BitstreamCursor::skipRecord(unsigned AbbrevID) {
       unsigned NumElts = MaybeNum.get();
 
       // Get the element encoding.
-      assert(i+2 == e && "array op not second to last?");
+      assert(i + 2 == e && "array op not second to last?");
       const BitCodeAbbrevOp &EltEnc = Abbv->getOperandInfo(++i);
 
       // Read all the elements.
@@ -192,14 +192,14 @@ Expected<unsigned> BitstreamCursor::skipRecord(unsigned AbbrevID) {
     if (!MaybeNum)
       return MaybeNum.takeError();
     unsigned NumElts = MaybeNum.get();
-    SkipToFourByteBoundary();  // 32-bit alignment
+    SkipToFourByteBoundary(); // 32-bit alignment
 
     // Figure out where the end of this blob will be including tail padding.
     const size_t NewEnd = GetCurrentBitNo() + alignTo(NumElts, 4) * 8;
 
     // If this would read off the end of the bitcode file, just set the
     // record to empty and return.
-    if (!canSkipToPos(NewEnd/8)) {
+    if (!canSkipToPos(NewEnd / 8)) {
       skipToEnd();
       break;
     }
@@ -291,8 +291,7 @@ Expected<unsigned> BitstreamCursor::readRecord(unsigned AbbrevID,
         return error("Array op not second to last");
       const BitCodeAbbrevOp &EltEnc = Abbv->getOperandInfo(++i);
       if (!EltEnc.isEncoding())
-        return error(
-            "Array element type has to be an encoding of a type");
+        return error("Array element type has to be an encoding of a type");
 
       // Read all the elements.
       switch (EltEnc.getEncoding()) {
@@ -330,14 +329,14 @@ Expected<unsigned> BitstreamCursor::readRecord(unsigned AbbrevID,
     if (!MaybeNumElts)
       return MaybeNumElts.takeError();
     uint32_t NumElts = MaybeNumElts.get();
-    SkipToFourByteBoundary();  // 32-bit alignment
+    SkipToFourByteBoundary(); // 32-bit alignment
 
     // Figure out where the end of this blob will be including tail padding.
     size_t CurBitPos = GetCurrentBitNo();
     const size_t NewEnd = CurBitPos + alignTo(NumElts, 4) * 8;
 
     // Make sure the bitstream is large enough to contain the blob.
-    if (!canSkipToPos(NewEnd/8))
+    if (!canSkipToPos(NewEnd / 8))
       return error("Blob ends too soon");
 
     // Otherwise, inform the streamer that we need these bytes in memory.  Skip
@@ -482,7 +481,7 @@ BitstreamCursor::ReadBlockInfoBlock(bool ReadBlockInfoNames) {
       CurBlockInfo->Name = std::string(Record.begin(), Record.end());
       break;
     }
-      case bitc::BLOCKINFO_CODE_SETRECORDNAME: {
+    case bitc::BLOCKINFO_CODE_SETRECORDNAME: {
       if (!CurBlockInfo)
         return std::nullopt;
       if (!ReadBlockInfoNames)
@@ -490,7 +489,7 @@ BitstreamCursor::ReadBlockInfoBlock(bool ReadBlockInfoNames) {
       CurBlockInfo->RecordNames.emplace_back(
           (unsigned)Record[0], std::string(Record.begin() + 1, Record.end()));
       break;
-      }
-      }
+    }
+    }
   }
 }
diff --git a/llvm/lib/Bitstream/Reader/CMakeLists.txt b/llvm/lib/Bitstream/Reader/CMakeLists.txt
index 832ccecfb7c9092..3835c30900eb8a4 100644
--- a/llvm/lib/Bitstream/Reader/CMakeLists.txt
+++ b/llvm/lib/Bitstream/Reader/CMakeLists.txt
@@ -1,10 +1,7 @@
-add_llvm_component_library(LLVMBitstreamReader
-  BitstreamReader.cpp
+add_llvm_component_library(LLVMBitstreamReader BitstreamReader.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitcode
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitstream
+                           ADDITIONAL_HEADER_DIRS
+                           ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitcode
+                           ${LLVM_MAIN_INCLUDE_DIR}/llvm/Bitstream
 
-  LINK_COMPONENTS
-  Support
-  )
+                               LINK_COMPONENTS Support)
diff --git a/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp b/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
index c5367221cae7a72..53432f3d8b8f125 100644
--- a/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
+++ b/llvm/lib/CodeGen/AggressiveAntiDepBreaker.cpp
@@ -42,14 +42,14 @@ using namespace llvm;
 
 // If DebugDiv > 0 then only break antidep with (ID % DebugDiv) == DebugMod
 static cl::opt<int>
-DebugDiv("agg-antidep-debugdiv",
-         cl::desc("Debug control for aggressive anti-dep breaker"),
-         cl::init(0), cl::Hidden);
+    DebugDiv("agg-antidep-debugdiv",
+             cl::desc("Debug control for aggressive anti-dep breaker"),
+             cl::init(0), cl::Hidden);
 
 static cl::opt<int>
-DebugMod("agg-antidep-debugmod",
-         cl::desc("Debug control for aggressive anti-dep breaker"),
-         cl::init(0), cl::Hidden);
+    DebugMod("agg-antidep-debugmod",
+             cl::desc("Debug control for aggressive anti-dep breaker"),
+             cl::init(0), cl::Hidden);
 
 AggressiveAntiDepState::AggressiveAntiDepState(const unsigned TargetRegs,
                                                MachineBasicBlock *BB)
@@ -76,10 +76,9 @@ unsigned AggressiveAntiDepState::GetGroup(unsigned Reg) {
 }
 
 void AggressiveAntiDepState::GetGroupRegs(
-  unsigned Group,
-  std::vector<unsigned> &Regs,
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> *RegRefs)
-{
+    unsigned Group, std::vector<unsigned> &Regs,
+    std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>
+        *RegRefs) {
   for (unsigned Reg = 0; Reg != NumTargetRegs; ++Reg) {
     if ((GetGroup(Reg) == Group) && (RegRefs->count(Reg) > 0))
       Regs.push_back(Reg);
@@ -114,7 +113,7 @@ unsigned AggressiveAntiDepState::LeaveGroup(unsigned Reg) {
 bool AggressiveAntiDepState::IsLive(unsigned Reg) {
   // KillIndex must be defined and DefIndex not defined for a register
   // to be live.
-  return((KillIndices[Reg] != ~0u) && (DefIndices[Reg] == ~0u));
+  return ((KillIndices[Reg] != ~0u) && (DefIndices[Reg] == ~0u));
 }
 
 AggressiveAntiDepBreaker::AggressiveAntiDepBreaker(
@@ -130,18 +129,16 @@ AggressiveAntiDepBreaker::AggressiveAntiDepBreaker(
       CriticalPathSet = CPSet;
     else
       CriticalPathSet |= CPSet;
-   }
+  }
 
-   LLVM_DEBUG(dbgs() << "AntiDep Critical-Path Registers:");
-   LLVM_DEBUG(for (unsigned r
-                   : CriticalPathSet.set_bits()) dbgs()
-              << " " << printReg(r, TRI));
-   LLVM_DEBUG(dbgs() << '\n');
+  LLVM_DEBUG(dbgs() << "AntiDep Critical-Path Registers:");
+  LLVM_DEBUG(for (unsigned r
+                  : CriticalPathSet.set_bits()) dbgs()
+             << " " << printReg(r, TRI));
+  LLVM_DEBUG(dbgs() << '\n');
 }
 
-AggressiveAntiDepBreaker::~AggressiveAntiDepBreaker() {
-  delete State;
-}
+AggressiveAntiDepBreaker::~AggressiveAntiDepBreaker() { delete State; }
 
 void AggressiveAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
   assert(!State);
@@ -167,8 +164,7 @@ void AggressiveAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
   // callee-saved register that is not saved in the prolog.
   const MachineFrameInfo &MFI = MF.getFrameInfo();
   BitVector Pristine = MFI.getPristineRegs(MF);
-  for (const MCPhysReg *I = MF.getRegInfo().getCalleeSavedRegs(); *I;
-       ++I) {
+  for (const MCPhysReg *I = MF.getRegInfo().getCalleeSavedRegs(); *I; ++I) {
     unsigned Reg = *I;
     if (!IsReturnBlock && !Pristine.test(Reg))
       continue;
@@ -212,8 +208,8 @@ void AggressiveAntiDepBreaker::Observe(MachineInstr &MI, unsigned Count,
                  << " " << printReg(Reg, TRI) << "=g" << State->GetGroup(Reg)
                  << "->g0(region live-out)");
       State->UnionGroups(Reg, 0);
-    } else if ((DefIndices[Reg] < InsertPosIndex)
-               && (DefIndices[Reg] >= Count)) {
+    } else if ((DefIndices[Reg] < InsertPosIndex) &&
+               (DefIndices[Reg] >= Count)) {
       DefIndices[Reg] = Count;
     }
   }
@@ -235,14 +231,15 @@ bool AggressiveAntiDepBreaker::IsImplicitDefUse(MachineInstr &MI,
   else
     Op = MI.findRegisterDefOperand(Reg);
 
-  return(Op && Op->isImplicit());
+  return (Op && Op->isImplicit());
 }
 
 void AggressiveAntiDepBreaker::GetPassthruRegs(
     MachineInstr &MI, std::set<unsigned> &PassthruRegs) {
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg()) continue;
+    if (!MO.isReg())
+      continue;
     if ((MO.isDef() && MI.isRegTiedToUseOperand(i)) ||
         IsImplicitDefUse(MI, MO)) {
       const Register Reg = MO.getReg();
@@ -294,8 +291,8 @@ void AggressiveAntiDepBreaker::HandleLastUse(unsigned Reg, unsigned KillIdx,
                                              const char *footer) {
   std::vector<unsigned> &KillIndices = State->GetKillIndices();
   std::vector<unsigned> &DefIndices = State->GetDefIndices();
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>&
-    RegRefs = State->GetRegRefs();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> &RegRefs =
+      State->GetRegRefs();
 
   // FIXME: We must leave subregisters of live super registers as live, so that
   // we don't clear out the register tracking information for subregisters of
@@ -343,8 +340,8 @@ void AggressiveAntiDepBreaker::HandleLastUse(unsigned Reg, unsigned KillIdx,
 void AggressiveAntiDepBreaker::PrescanInstruction(
     MachineInstr &MI, unsigned Count, std::set<unsigned> &PassthruRegs) {
   std::vector<unsigned> &DefIndices = State->GetDefIndices();
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>&
-    RegRefs = State->GetRegRefs();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> &RegRefs =
+      State->GetRegRefs();
 
   // Handle dead defs by simulating a last-use of the register just
   // after the def. A dead def can occur because the def is truly
@@ -353,7 +350,8 @@ void AggressiveAntiDepBreaker::PrescanInstruction(
   // previous def.
   for (const MachineOperand &MO : MI.all_defs()) {
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
 
     HandleLastUse(Reg, Count + 1, "", "\tDead Def: ", "\n");
   }
@@ -361,9 +359,11 @@ void AggressiveAntiDepBreaker::PrescanInstruction(
   LLVM_DEBUG(dbgs() << "\tDef Groups:");
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg() || !MO.isDef()) continue;
+    if (!MO.isReg() || !MO.isDef())
+      continue;
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
 
     LLVM_DEBUG(dbgs() << " " << printReg(Reg, TRI) << "=g"
                       << State->GetGroup(Reg));
@@ -394,7 +394,7 @@ void AggressiveAntiDepBreaker::PrescanInstruction(
     const TargetRegisterClass *RC = nullptr;
     if (i < MI.getDesc().getNumOperands())
       RC = TII->getRegClass(MI.getDesc(), i, TRI, MF);
-    AggressiveAntiDepState::RegisterReference RR = { &MO, RC };
+    AggressiveAntiDepState::RegisterReference RR = {&MO, RC};
     RegRefs.insert(std::make_pair(Reg, RR));
   }
 
@@ -403,9 +403,11 @@ void AggressiveAntiDepBreaker::PrescanInstruction(
   // Scan the register defs for this instruction and update
   // live-ranges.
   for (const MachineOperand &MO : MI.operands()) {
-    if (!MO.isReg() || !MO.isDef()) continue;
+    if (!MO.isReg() || !MO.isDef())
+      continue;
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
     // Ignore KILLs and passthru registers for liveness...
     if (MI.isKill() || (PassthruRegs.count(Reg) != 0))
       continue;
@@ -429,8 +431,8 @@ void AggressiveAntiDepBreaker::PrescanInstruction(
 void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr &MI,
                                                unsigned Count) {
   LLVM_DEBUG(dbgs() << "\tUse Groups:");
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>&
-    RegRefs = State->GetRegRefs();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> &RegRefs =
+      State->GetRegRefs();
 
   // If MI's uses have special allocation requirement, don't allow
   // any use registers to be changed. Also assume all registers
@@ -456,9 +458,11 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr &MI,
   // live-ranges, groups and RegRefs.
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg() || !MO.isUse()) continue;
+    if (!MO.isReg() || !MO.isUse())
+      continue;
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
 
     LLVM_DEBUG(dbgs() << " " << printReg(Reg, TRI) << "=g"
                       << State->GetGroup(Reg));
@@ -477,7 +481,7 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr &MI,
     const TargetRegisterClass *RC = nullptr;
     if (i < MI.getDesc().getNumOperands())
       RC = TII->getRegClass(MI.getDesc(), i, TRI, MF);
-    AggressiveAntiDepState::RegisterReference RR = { &MO, RC };
+    AggressiveAntiDepState::RegisterReference RR = {&MO, RC};
     RegRefs.insert(std::make_pair(Reg, RR));
   }
 
@@ -490,9 +494,11 @@ void AggressiveAntiDepBreaker::ScanInstruction(MachineInstr &MI,
 
     unsigned FirstReg = 0;
     for (const MachineOperand &MO : MI.operands()) {
-      if (!MO.isReg()) continue;
+      if (!MO.isReg())
+        continue;
       Register Reg = MO.getReg();
-      if (Reg == 0) continue;
+      if (Reg == 0)
+        continue;
 
       if (FirstReg != 0) {
         LLVM_DEBUG(dbgs() << "=" << printReg(Reg, TRI));
@@ -516,7 +522,8 @@ BitVector AggressiveAntiDepBreaker::GetRenameRegisters(unsigned Reg) {
   // that are appropriate for renaming.
   for (const auto &Q : make_range(State->GetRegRefs().equal_range(Reg))) {
     const TargetRegisterClass *RC = Q.second.RC;
-    if (!RC) continue;
+    if (!RC)
+      continue;
 
     BitVector RCBV = TRI->getAllocatableSet(MF, RC);
     if (first) {
@@ -537,8 +544,8 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
     std::map<unsigned, unsigned> &RenameMap) {
   std::vector<unsigned> &KillIndices = State->GetKillIndices();
   std::vector<unsigned> &DefIndices = State->GetDefIndices();
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>&
-    RegRefs = State->GetRegRefs();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> &RegRefs =
+      State->GetRegRefs();
 
   // Collect all referenced registers in the same group as
   // AntiDepReg. These all need to be renamed together if we are to
@@ -574,7 +581,8 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
 
   // All group registers should be a subreg of SuperReg.
   for (unsigned Reg : Regs) {
-    if (Reg == SuperReg) continue;
+    if (Reg == SuperReg)
+      continue;
     bool IsSub = TRI->isSubRegister(SuperReg, Reg);
     // FIXME: remove this once PR18663 has been properly fixed. For now,
     // return a conservative answer:
@@ -604,7 +612,7 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
   // check every use of the register and find the largest register class
   // that can be used in all of them.
   const TargetRegisterClass *SuperRC =
-    TRI->getMinimalPhysRegClass(SuperReg, MVT::Other);
+      TRI->getMinimalPhysRegClass(SuperReg, MVT::Other);
 
   ArrayRef<MCPhysReg> Order = RegClassInfo.getOrder(SuperRC);
   if (Order.empty()) {
@@ -620,13 +628,16 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
   unsigned EndR = ((OrigR == Order.size()) ? 0 : OrigR);
   unsigned R = OrigR;
   do {
-    if (R == 0) R = Order.size();
+    if (R == 0)
+      R = Order.size();
     --R;
     const unsigned NewSuperReg = Order[R];
     // Don't consider non-allocatable registers
-    if (!MRI.isAllocatable(NewSuperReg)) continue;
+    if (!MRI.isAllocatable(NewSuperReg))
+      continue;
     // Don't replace a register with itself.
-    if (NewSuperReg == SuperReg) continue;
+    if (NewSuperReg == SuperReg)
+      continue;
 
     LLVM_DEBUG(dbgs() << " [" << printReg(NewSuperReg, TRI) << ':');
     RenameMap.clear();
@@ -727,19 +738,18 @@ bool AggressiveAntiDepBreaker::FindSuitableFreeRegisters(
 /// BreakAntiDependencies - Identifiy anti-dependencies within the
 /// ScheduleDAG and break them by renaming registers.
 unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
-                              const std::vector<SUnit> &SUnits,
-                              MachineBasicBlock::iterator Begin,
-                              MachineBasicBlock::iterator End,
-                              unsigned InsertPosIndex,
-                              DbgValueVector &DbgValues) {
+    const std::vector<SUnit> &SUnits, MachineBasicBlock::iterator Begin,
+    MachineBasicBlock::iterator End, unsigned InsertPosIndex,
+    DbgValueVector &DbgValues) {
   std::vector<unsigned> &KillIndices = State->GetKillIndices();
   std::vector<unsigned> &DefIndices = State->GetDefIndices();
-  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>&
-    RegRefs = State->GetRegRefs();
+  std::multimap<unsigned, AggressiveAntiDepState::RegisterReference> &RegRefs =
+      State->GetRegRefs();
 
   // The code below assumes that there is at least one instruction,
   // so just duck out immediately if the block is empty.
-  if (SUnits.empty()) return 0;
+  if (SUnits.empty())
+    return 0;
 
   // For each regclass the next register to use for renaming.
   RenameOrderType RenameOrder;
@@ -783,8 +793,7 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
   // to help determine which registers are available.
   unsigned Broken = 0;
   unsigned Count = InsertPosIndex - 1;
-  for (MachineBasicBlock::iterator I = End, E = Begin;
-       I != E; --Count) {
+  for (MachineBasicBlock::iterator I = End, E = Begin; I != E; --Count) {
     MachineInstr &MI = *--I;
 
     if (MI.isDebugInstr())
@@ -823,7 +832,8 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
         SUnit *NextSU = Edge->getSUnit();
 
         if ((Edge->getKind() != SDep::Anti) &&
-            (Edge->getKind() != SDep::Output)) continue;
+            (Edge->getKind() != SDep::Output))
+          continue;
 
         unsigned AntiDepReg = Edge->getReg();
         LLVM_DEBUG(dbgs() << "\tAntidep reg: " << printReg(AntiDepReg, TRI));
@@ -925,7 +935,8 @@ unsigned AggressiveAntiDepBreaker::BreakAntiDependencies(
               // information related to the anti-dependency register, make
               // sure to update that as well.
               const SUnit *SU = MISUnitMap[Q.second.Operand->getParent()];
-              if (!SU) continue;
+              if (!SU)
+                continue;
               UpdateDbgValues(DbgValues, Q.second.Operand->getParent(),
                               AntiDepReg, NewReg);
             }
diff --git a/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h b/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
index 06c4c6957ba043a..472912f360c6e59 100644
--- a/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
+++ b/llvm/lib/CodeGen/AggressiveAntiDepBreaker.h
@@ -36,151 +36,148 @@ class TargetInstrInfo;
 class TargetRegisterClass;
 class TargetRegisterInfo;
 
-  /// Contains all the state necessary for anti-dep breaking.
+/// Contains all the state necessary for anti-dep breaking.
 class LLVM_LIBRARY_VISIBILITY AggressiveAntiDepState {
-  public:
-    /// Information about a register reference within a liverange
-    struct RegisterReference {
-      /// The registers operand
-      MachineOperand *Operand;
-
-      /// The register class
-      const TargetRegisterClass *RC;
-    };
-
-  private:
-    /// Number of non-virtual target registers (i.e. TRI->getNumRegs()).
-    const unsigned NumTargetRegs;
-
-    /// Implements a disjoint-union data structure to
-    /// form register groups. A node is represented by an index into
-    /// the vector. A node can "point to" itself to indicate that it
-    /// is the parent of a group, or point to another node to indicate
-    /// that it is a member of the same group as that node.
-    std::vector<unsigned> GroupNodes;
-
-    /// For each register, the index of the GroupNode
-    /// currently representing the group that the register belongs to.
-    /// Register 0 is always represented by the 0 group, a group
-    /// composed of registers that are not eligible for anti-aliasing.
-    std::vector<unsigned> GroupNodeIndices;
-
-    /// Map registers to all their references within a live range.
-    std::multimap<unsigned, RegisterReference> RegRefs;
-
-    /// The index of the most recent kill (proceeding bottom-up),
-    /// or ~0u if the register is not live.
-    std::vector<unsigned> KillIndices;
-
-    /// The index of the most recent complete def (proceeding bottom
-    /// up), or ~0u if the register is live.
-    std::vector<unsigned> DefIndices;
-
-  public:
-    AggressiveAntiDepState(const unsigned TargetRegs, MachineBasicBlock *BB);
-
-    /// Return the kill indices.
-    std::vector<unsigned> &GetKillIndices() { return KillIndices; }
-
-    /// Return the define indices.
-    std::vector<unsigned> &GetDefIndices() { return DefIndices; }
-
-    /// Return the RegRefs map.
-    std::multimap<unsigned, RegisterReference>& GetRegRefs() { return RegRefs; }
-
-    // Get the group for a register. The returned value is
-    // the index of the GroupNode representing the group.
-    unsigned GetGroup(unsigned Reg);
-
-    // Return a vector of the registers belonging to a group.
-    // If RegRefs is non-NULL then only included referenced registers.
-    void GetGroupRegs(
-       unsigned Group,
-       std::vector<unsigned> &Regs,
-       std::multimap<unsigned,
-         AggressiveAntiDepState::RegisterReference> *RegRefs);
-
-    // Union Reg1's and Reg2's groups to form a new group.
-    // Return the index of the GroupNode representing the group.
-    unsigned UnionGroups(unsigned Reg1, unsigned Reg2);
-
-    // Remove a register from its current group and place
-    // it alone in its own group. Return the index of the GroupNode
-    // representing the registers new group.
-    unsigned LeaveGroup(unsigned Reg);
-
-    /// Return true if Reg is live.
-    bool IsLive(unsigned Reg);
+public:
+  /// Information about a register reference within a liverange
+  struct RegisterReference {
+    /// The registers operand
+    MachineOperand *Operand;
+
+    /// The register class
+    const TargetRegisterClass *RC;
   };
 
-  class LLVM_LIBRARY_VISIBILITY AggressiveAntiDepBreaker
-      : public AntiDepBreaker {
-    MachineFunction &MF;
-    MachineRegisterInfo &MRI;
-    const TargetInstrInfo *TII;
-    const TargetRegisterInfo *TRI;
-    const RegisterClassInfo &RegClassInfo;
-
-    /// The set of registers that should only be
-    /// renamed if they are on the critical path.
-    BitVector CriticalPathSet;
-
-    /// The state used to identify and rename anti-dependence registers.
-    AggressiveAntiDepState *State = nullptr;
-
-  public:
-    AggressiveAntiDepBreaker(MachineFunction &MFi,
-                          const RegisterClassInfo &RCI,
-                          TargetSubtargetInfo::RegClassVector& CriticalPathRCs);
-    AggressiveAntiDepBreaker &
-    operator=(const AggressiveAntiDepBreaker &other) = delete;
-    AggressiveAntiDepBreaker(const AggressiveAntiDepBreaker &other) = delete;
-    ~AggressiveAntiDepBreaker() override;
-
-    /// Initialize anti-dep breaking for a new basic block.
-    void StartBlock(MachineBasicBlock *BB) override;
-
-    /// Identifiy anti-dependencies along the critical path
-    /// of the ScheduleDAG and break them by renaming registers.
-    unsigned BreakAntiDependencies(const std::vector<SUnit> &SUnits,
-                                   MachineBasicBlock::iterator Begin,
-                                   MachineBasicBlock::iterator End,
-                                   unsigned InsertPosIndex,
-                                   DbgValueVector &DbgValues) override;
-
-    /// Update liveness information to account for the current
-    /// instruction, which will not be scheduled.
-    void Observe(MachineInstr &MI, unsigned Count,
-                 unsigned InsertPosIndex) override;
-
-    /// Finish anti-dep breaking for a basic block.
-    void FinishBlock() override;
-
-  private:
-    /// Keep track of a position in the allocation order for each regclass.
-    using RenameOrderType = std::map<const TargetRegisterClass *, unsigned>;
-
-    /// Return true if MO represents a register
-    /// that is both implicitly used and defined in MI
-    bool IsImplicitDefUse(MachineInstr &MI, MachineOperand &MO);
-
-    /// If MI implicitly def/uses a register, then
-    /// return that register and all subregisters.
-    void GetPassthruRegs(MachineInstr &MI, std::set<unsigned> &PassthruRegs);
-
-    void HandleLastUse(unsigned Reg, unsigned KillIdx, const char *tag,
-                       const char *header = nullptr,
-                       const char *footer = nullptr);
-
-    void PrescanInstruction(MachineInstr &MI, unsigned Count,
-                            std::set<unsigned> &PassthruRegs);
-    void ScanInstruction(MachineInstr &MI, unsigned Count);
-    BitVector GetRenameRegisters(unsigned Reg);
-    bool FindSuitableFreeRegisters(unsigned SuperReg,
-                                   unsigned AntiDepGroupIndex,
-                                   RenameOrderType &RenameOrder,
-                                   std::map<unsigned, unsigned> &RenameMap);
-  };
+private:
+  /// Number of non-virtual target registers (i.e. TRI->getNumRegs()).
+  const unsigned NumTargetRegs;
+
+  /// Implements a disjoint-union data structure to
+  /// form register groups. A node is represented by an index into
+  /// the vector. A node can "point to" itself to indicate that it
+  /// is the parent of a group, or point to another node to indicate
+  /// that it is a member of the same group as that node.
+  std::vector<unsigned> GroupNodes;
+
+  /// For each register, the index of the GroupNode
+  /// currently representing the group that the register belongs to.
+  /// Register 0 is always represented by the 0 group, a group
+  /// composed of registers that are not eligible for anti-aliasing.
+  std::vector<unsigned> GroupNodeIndices;
+
+  /// Map registers to all their references within a live range.
+  std::multimap<unsigned, RegisterReference> RegRefs;
+
+  /// The index of the most recent kill (proceeding bottom-up),
+  /// or ~0u if the register is not live.
+  std::vector<unsigned> KillIndices;
+
+  /// The index of the most recent complete def (proceeding bottom
+  /// up), or ~0u if the register is live.
+  std::vector<unsigned> DefIndices;
+
+public:
+  AggressiveAntiDepState(const unsigned TargetRegs, MachineBasicBlock *BB);
+
+  /// Return the kill indices.
+  std::vector<unsigned> &GetKillIndices() { return KillIndices; }
+
+  /// Return the define indices.
+  std::vector<unsigned> &GetDefIndices() { return DefIndices; }
+
+  /// Return the RegRefs map.
+  std::multimap<unsigned, RegisterReference> &GetRegRefs() { return RegRefs; }
+
+  // Get the group for a register. The returned value is
+  // the index of the GroupNode representing the group.
+  unsigned GetGroup(unsigned Reg);
+
+  // Return a vector of the registers belonging to a group.
+  // If RegRefs is non-NULL then only included referenced registers.
+  void GetGroupRegs(
+      unsigned Group, std::vector<unsigned> &Regs,
+      std::multimap<unsigned, AggressiveAntiDepState::RegisterReference>
+          *RegRefs);
+
+  // Union Reg1's and Reg2's groups to form a new group.
+  // Return the index of the GroupNode representing the group.
+  unsigned UnionGroups(unsigned Reg1, unsigned Reg2);
+
+  // Remove a register from its current group and place
+  // it alone in its own group. Return the index of the GroupNode
+  // representing the registers new group.
+  unsigned LeaveGroup(unsigned Reg);
+
+  /// Return true if Reg is live.
+  bool IsLive(unsigned Reg);
+};
+
+class LLVM_LIBRARY_VISIBILITY AggressiveAntiDepBreaker : public AntiDepBreaker {
+  MachineFunction &MF;
+  MachineRegisterInfo &MRI;
+  const TargetInstrInfo *TII;
+  const TargetRegisterInfo *TRI;
+  const RegisterClassInfo &RegClassInfo;
+
+  /// The set of registers that should only be
+  /// renamed if they are on the critical path.
+  BitVector CriticalPathSet;
+
+  /// The state used to identify and rename anti-dependence registers.
+  AggressiveAntiDepState *State = nullptr;
+
+public:
+  AggressiveAntiDepBreaker(
+      MachineFunction &MFi, const RegisterClassInfo &RCI,
+      TargetSubtargetInfo::RegClassVector &CriticalPathRCs);
+  AggressiveAntiDepBreaker &
+  operator=(const AggressiveAntiDepBreaker &other) = delete;
+  AggressiveAntiDepBreaker(const AggressiveAntiDepBreaker &other) = delete;
+  ~AggressiveAntiDepBreaker() override;
+
+  /// Initialize anti-dep breaking for a new basic block.
+  void StartBlock(MachineBasicBlock *BB) override;
+
+  /// Identifiy anti-dependencies along the critical path
+  /// of the ScheduleDAG and break them by renaming registers.
+  unsigned BreakAntiDependencies(const std::vector<SUnit> &SUnits,
+                                 MachineBasicBlock::iterator Begin,
+                                 MachineBasicBlock::iterator End,
+                                 unsigned InsertPosIndex,
+                                 DbgValueVector &DbgValues) override;
+
+  /// Update liveness information to account for the current
+  /// instruction, which will not be scheduled.
+  void Observe(MachineInstr &MI, unsigned Count,
+               unsigned InsertPosIndex) override;
+
+  /// Finish anti-dep breaking for a basic block.
+  void FinishBlock() override;
+
+private:
+  /// Keep track of a position in the allocation order for each regclass.
+  using RenameOrderType = std::map<const TargetRegisterClass *, unsigned>;
+
+  /// Return true if MO represents a register
+  /// that is both implicitly used and defined in MI
+  bool IsImplicitDefUse(MachineInstr &MI, MachineOperand &MO);
+
+  /// If MI implicitly def/uses a register, then
+  /// return that register and all subregisters.
+  void GetPassthruRegs(MachineInstr &MI, std::set<unsigned> &PassthruRegs);
+
+  void HandleLastUse(unsigned Reg, unsigned KillIdx, const char *tag,
+                     const char *header = nullptr,
+                     const char *footer = nullptr);
+
+  void PrescanInstruction(MachineInstr &MI, unsigned Count,
+                          std::set<unsigned> &PassthruRegs);
+  void ScanInstruction(MachineInstr &MI, unsigned Count);
+  BitVector GetRenameRegisters(unsigned Reg);
+  bool FindSuitableFreeRegisters(unsigned SuperReg, unsigned AntiDepGroupIndex,
+                                 RenameOrderType &RenameOrder,
+                                 std::map<unsigned, unsigned> &RenameMap);
+};
 
 } // end namespace llvm
 
diff --git a/llvm/lib/CodeGen/Analysis.cpp b/llvm/lib/CodeGen/Analysis.cpp
index 1994e6aec84b237..d639fbcb57968cb 100644
--- a/llvm/lib/CodeGen/Analysis.cpp
+++ b/llvm/lib/CodeGen/Analysis.cpp
@@ -30,8 +30,7 @@ using namespace llvm;
 /// Compute the linearized index of a member in a nested aggregate/struct/array
 /// by recursing and accumulating CurIndex as long as there are indices in the
 /// index list.
-unsigned llvm::ComputeLinearIndex(Type *Ty,
-                                  const unsigned *Indices,
+unsigned llvm::ComputeLinearIndex(Type *Ty, const unsigned *Indices,
                                   const unsigned *IndicesEnd,
                                   unsigned CurIndex) {
   // Base case: We're done.
@@ -59,10 +58,10 @@ unsigned llvm::ComputeLinearIndex(Type *Ty,
       assert(*Indices < NumElts && "Unexpected out of bound");
       // If the indice is inside the array, compute the index to the requested
       // elt and recurse inside the element with the end of the indices list
-      CurIndex += EltLinearOffset* *Indices;
-      return ComputeLinearIndex(EltTy, Indices+1, IndicesEnd, CurIndex);
+      CurIndex += EltLinearOffset * *Indices;
+      return ComputeLinearIndex(EltTy, Indices + 1, IndicesEnd, CurIndex);
     }
-    CurIndex += EltLinearOffset*NumElts;
+    CurIndex += EltLinearOffset * NumElts;
     return CurIndex;
   }
   // We haven't found the type we're looking for, so keep searching.
@@ -87,8 +86,7 @@ void llvm::ComputeValueVTs(const TargetLowering &TLI, const DataLayout &DL,
     // us to support structs with scalable vectors for operations that don't
     // need offsets.
     const StructLayout *SL = Offsets ? DL.getStructLayout(STy) : nullptr;
-    for (StructType::element_iterator EB = STy->element_begin(),
-                                      EI = EB,
+    for (StructType::element_iterator EB = STy->element_begin(), EI = EB,
                                       EE = STy->element_end();
          EI != EE; ++EI) {
       // Don't compute the element offset if we didn't get a StructLayout above.
@@ -221,7 +219,8 @@ GlobalValue *llvm::ExtractTypeInfo(Value *V) {
            "The EH catch-all value must have an initializer");
     Value *Init = Var->getInitializer();
     GV = dyn_cast<GlobalValue>(Init);
-    if (!GV) V = cast<ConstantPointerNull>(Init);
+    if (!GV)
+      V = cast<ConstantPointerNull>(Init);
   }
 
   assert((GV || isa<ConstantPointerNull>(V)) &&
@@ -235,50 +234,90 @@ GlobalValue *llvm::ExtractTypeInfo(Value *V) {
 ///
 ISD::CondCode llvm::getFCmpCondCode(FCmpInst::Predicate Pred) {
   switch (Pred) {
-  case FCmpInst::FCMP_FALSE: return ISD::SETFALSE;
-  case FCmpInst::FCMP_OEQ:   return ISD::SETOEQ;
-  case FCmpInst::FCMP_OGT:   return ISD::SETOGT;
-  case FCmpInst::FCMP_OGE:   return ISD::SETOGE;
-  case FCmpInst::FCMP_OLT:   return ISD::SETOLT;
-  case FCmpInst::FCMP_OLE:   return ISD::SETOLE;
-  case FCmpInst::FCMP_ONE:   return ISD::SETONE;
-  case FCmpInst::FCMP_ORD:   return ISD::SETO;
-  case FCmpInst::FCMP_UNO:   return ISD::SETUO;
-  case FCmpInst::FCMP_UEQ:   return ISD::SETUEQ;
-  case FCmpInst::FCMP_UGT:   return ISD::SETUGT;
-  case FCmpInst::FCMP_UGE:   return ISD::SETUGE;
-  case FCmpInst::FCMP_ULT:   return ISD::SETULT;
-  case FCmpInst::FCMP_ULE:   return ISD::SETULE;
-  case FCmpInst::FCMP_UNE:   return ISD::SETUNE;
-  case FCmpInst::FCMP_TRUE:  return ISD::SETTRUE;
-  default: llvm_unreachable("Invalid FCmp predicate opcode!");
+  case FCmpInst::FCMP_FALSE:
+    return ISD::SETFALSE;
+  case FCmpInst::FCMP_OEQ:
+    return ISD::SETOEQ;
+  case FCmpInst::FCMP_OGT:
+    return ISD::SETOGT;
+  case FCmpInst::FCMP_OGE:
+    return ISD::SETOGE;
+  case FCmpInst::FCMP_OLT:
+    return ISD::SETOLT;
+  case FCmpInst::FCMP_OLE:
+    return ISD::SETOLE;
+  case FCmpInst::FCMP_ONE:
+    return ISD::SETONE;
+  case FCmpInst::FCMP_ORD:
+    return ISD::SETO;
+  case FCmpInst::FCMP_UNO:
+    return ISD::SETUO;
+  case FCmpInst::FCMP_UEQ:
+    return ISD::SETUEQ;
+  case FCmpInst::FCMP_UGT:
+    return ISD::SETUGT;
+  case FCmpInst::FCMP_UGE:
+    return ISD::SETUGE;
+  case FCmpInst::FCMP_ULT:
+    return ISD::SETULT;
+  case FCmpInst::FCMP_ULE:
+    return ISD::SETULE;
+  case FCmpInst::FCMP_UNE:
+    return ISD::SETUNE;
+  case FCmpInst::FCMP_TRUE:
+    return ISD::SETTRUE;
+  default:
+    llvm_unreachable("Invalid FCmp predicate opcode!");
   }
 }
 
 ISD::CondCode llvm::getFCmpCodeWithoutNaN(ISD::CondCode CC) {
   switch (CC) {
-    case ISD::SETOEQ: case ISD::SETUEQ: return ISD::SETEQ;
-    case ISD::SETONE: case ISD::SETUNE: return ISD::SETNE;
-    case ISD::SETOLT: case ISD::SETULT: return ISD::SETLT;
-    case ISD::SETOLE: case ISD::SETULE: return ISD::SETLE;
-    case ISD::SETOGT: case ISD::SETUGT: return ISD::SETGT;
-    case ISD::SETOGE: case ISD::SETUGE: return ISD::SETGE;
-    default: return CC;
+  case ISD::SETOEQ:
+  case ISD::SETUEQ:
+    return ISD::SETEQ;
+  case ISD::SETONE:
+  case ISD::SETUNE:
+    return ISD::SETNE;
+  case ISD::SETOLT:
+  case ISD::SETULT:
+    return ISD::SETLT;
+  case ISD::SETOLE:
+  case ISD::SETULE:
+    return ISD::SETLE;
+  case ISD::SETOGT:
+  case ISD::SETUGT:
+    return ISD::SETGT;
+  case ISD::SETOGE:
+  case ISD::SETUGE:
+    return ISD::SETGE;
+  default:
+    return CC;
   }
 }
 
 ISD::CondCode llvm::getICmpCondCode(ICmpInst::Predicate Pred) {
   switch (Pred) {
-  case ICmpInst::ICMP_EQ:  return ISD::SETEQ;
-  case ICmpInst::ICMP_NE:  return ISD::SETNE;
-  case ICmpInst::ICMP_SLE: return ISD::SETLE;
-  case ICmpInst::ICMP_ULE: return ISD::SETULE;
-  case ICmpInst::ICMP_SGE: return ISD::SETGE;
-  case ICmpInst::ICMP_UGE: return ISD::SETUGE;
-  case ICmpInst::ICMP_SLT: return ISD::SETLT;
-  case ICmpInst::ICMP_ULT: return ISD::SETULT;
-  case ICmpInst::ICMP_SGT: return ISD::SETGT;
-  case ICmpInst::ICMP_UGT: return ISD::SETUGT;
+  case ICmpInst::ICMP_EQ:
+    return ISD::SETEQ;
+  case ICmpInst::ICMP_NE:
+    return ISD::SETNE;
+  case ICmpInst::ICMP_SLE:
+    return ISD::SETLE;
+  case ICmpInst::ICMP_ULE:
+    return ISD::SETULE;
+  case ICmpInst::ICMP_SGE:
+    return ISD::SETGE;
+  case ICmpInst::ICMP_UGE:
+    return ISD::SETUGE;
+  case ICmpInst::ICMP_SLT:
+    return ISD::SETLT;
+  case ICmpInst::ICMP_ULT:
+    return ISD::SETULT;
+  case ICmpInst::ICMP_SGT:
+    return ISD::SETGT;
+  case ICmpInst::ICMP_UGT:
+    return ISD::SETUGT;
   default:
     llvm_unreachable("Invalid ICmp predicate opcode!");
   }
@@ -311,8 +350,7 @@ ICmpInst::Predicate llvm::getICmpCondCode(ISD::CondCode Pred) {
   }
 }
 
-static bool isNoopBitcast(Type *T1, Type *T2,
-                          const TargetLoweringBase& TLI) {
+static bool isNoopBitcast(Type *T1, Type *T2, const TargetLoweringBase &TLI) {
   return T1 == T2 || (T1->isPointerTy() && T2->isPointerTy()) ||
          (isa<VectorType>(T1) && isa<VectorType>(T2) &&
           TLI.isTypeLegal(EVT::getEVT(T1)) && TLI.isTypeLegal(EVT::getEVT(T2)));
@@ -338,7 +376,8 @@ static const Value *getNoopInput(const Value *V,
     // Try to look through V1; if V1 is not an instruction, it can't be looked
     // through.
     const Instruction *I = dyn_cast<Instruction>(V);
-    if (!I || I->getNumOperands() == 0) return V;
+    if (!I || I->getNumOperands() == 0)
+      return V;
     const Value *NoopInput = nullptr;
 
     Value *Op = I->getOperand(0);
@@ -562,7 +601,6 @@ static bool nextRealType(SmallVectorImpl<Type *> &SubTypes,
   return true;
 }
 
-
 /// Test if the given instruction is in a position to be optimized
 /// with a tail-call. This roughly means that it's in a block with
 /// a return and there's nothing that needs to be scheduled
@@ -696,11 +734,13 @@ bool llvm::returnTypeIsEligibleForTailCall(const Function *F,
                                            const TargetLoweringBase &TLI) {
   // If the block ends with a void return or unreachable, it doesn't matter
   // what the call's return type is.
-  if (!Ret || Ret->getNumOperands() == 0) return true;
+  if (!Ret || Ret->getNumOperands() == 0)
+    return true;
 
   // If the return value is undef, it doesn't matter what the call's
   // return type is.
-  if (isa<UndefValue>(Ret->getOperand(0))) return true;
+  if (isa<UndefValue>(Ret->getOperand(0)))
+    return true;
 
   // Make sure the attributes attached to each return are compatible.
   bool AllowDifferingSizes;
@@ -769,8 +809,8 @@ bool llvm::returnTypeIsEligibleForTailCall(const Function *F,
                               F->getParent()->getDataLayout()))
       return false;
 
-    CallEmpty  = !nextRealType(CallSubTypes, CallPath);
-  } while(nextRealType(RetSubTypes, RetPath));
+    CallEmpty = !nextRealType(CallSubTypes, CallPath);
+  } while (nextRealType(RetSubTypes, RetPath));
 
   return true;
 }
diff --git a/llvm/lib/CodeGen/AsmPrinter/ARMException.cpp b/llvm/lib/CodeGen/AsmPrinter/ARMException.cpp
index de6ebcf0c3419f7..56f48655bc65e04 100644
--- a/llvm/lib/CodeGen/AsmPrinter/ARMException.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/ARMException.cpp
@@ -62,12 +62,12 @@ void ARMException::endFunction(const MachineFunction *MF) {
   if (F.hasPersonalityFn())
     Per = dyn_cast<Function>(F.getPersonalityFn()->stripPointerCasts());
   bool forceEmitPersonality =
-    F.hasPersonalityFn() && !isNoOpWithoutInvoke(classifyEHPersonality(Per)) &&
-    F.needsUnwindTableEntry();
-  bool shouldEmitPersonality = forceEmitPersonality ||
-    !MF->getLandingPads().empty();
-  if (!Asm->MF->getFunction().needsUnwindTableEntry() &&
-      !shouldEmitPersonality)
+      F.hasPersonalityFn() &&
+      !isNoOpWithoutInvoke(classifyEHPersonality(Per)) &&
+      F.needsUnwindTableEntry();
+  bool shouldEmitPersonality =
+      forceEmitPersonality || !MF->getLandingPads().empty();
+  if (!Asm->MF->getFunction().needsUnwindTableEntry() && !shouldEmitPersonality)
     ATS.emitCantUnwind();
   else if (shouldEmitPersonality) {
     // Emit references to personality.
@@ -117,8 +117,9 @@ void ARMException::emitTypeInfos(unsigned TTypeEncoding,
     Asm->OutStreamer->addBlankLine();
     Entry = 0;
   }
-  for (std::vector<unsigned>::const_iterator
-         I = FilterIds.begin(), E = FilterIds.end(); I < E; ++I) {
+  for (std::vector<unsigned>::const_iterator I = FilterIds.begin(),
+                                             E = FilterIds.end();
+       I < E; ++I) {
     unsigned TypeID = *I;
     if (VerboseAsm) {
       --Entry;
diff --git a/llvm/lib/CodeGen/AsmPrinter/AddressPool.h b/llvm/lib/CodeGen/AsmPrinter/AddressPool.h
index f1edc6c330d51e4..ddd3f64e2691c16 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AddressPool.h
+++ b/llvm/lib/CodeGen/AsmPrinter/AddressPool.h
@@ -48,7 +48,9 @@ class AddressPool {
 
   bool hasBeenUsed() const { return HasBeenUsed; }
 
-  void resetUsedFlag(bool HasBeenUsed = false) { this->HasBeenUsed = HasBeenUsed; }
+  void resetUsedFlag(bool HasBeenUsed = false) {
+    this->HasBeenUsed = HasBeenUsed;
+  }
 
   MCSymbol *getLabel() { return AddressTableBaseSym; }
   void setLabel(MCSymbol *Sym) { AddressTableBaseSym = Sym; }
diff --git a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
index 2ce08a2ff43955b..2b76f8c64775d19 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/AsmPrinter.cpp
@@ -165,9 +165,7 @@ class AddrLabelMapCallbackPtr final : CallbackVH {
   AddrLabelMapCallbackPtr() = default;
   AddrLabelMapCallbackPtr(Value *V) : CallbackVH(V) {}
 
-  void setPtr(BasicBlock *BB) {
-    ValueHandleBase::operator=(BB);
-  }
+  void setPtr(BasicBlock *BB) { ValueHandleBase::operator=(BB); }
 
   void setMap(AddrLabelMap *map) { Map = map; }
 
@@ -437,8 +435,8 @@ bool AsmPrinter::doInitialization(Module &M) {
   AddrLabelSymbols = nullptr;
 
   // Initialize TargetLoweringObjectFile.
-  const_cast<TargetLoweringObjectFile&>(getObjFileLowering())
-    .Initialize(OutContext, TM);
+  const_cast<TargetLoweringObjectFile &>(getObjFileLowering())
+      .Initialize(OutContext, TM);
 
   const_cast<TargetLoweringObjectFile &>(getObjFileLowering())
       .getModuleMetadata(M);
@@ -511,9 +509,8 @@ bool AsmPrinter::doInitialization(Module &M) {
   if (MAI->doesSupportDebugInformation()) {
     bool EmitCodeView = M.getCodeViewFlag();
     if (EmitCodeView && TM.getTargetTriple().isOSWindows()) {
-      Handlers.emplace_back(std::make_unique<CodeViewDebug>(this),
-                            DbgTimerName, DbgTimerDescription,
-                            CodeViewLineTablesGroupName,
+      Handlers.emplace_back(std::make_unique<CodeViewDebug>(this), DbgTimerName,
+                            DbgTimerDescription, CodeViewLineTablesGroupName,
                             CodeViewLineTablesGroupDescription);
     }
     if (!EmitCodeView || M.getDwarfVersion()) {
@@ -570,7 +567,8 @@ bool AsmPrinter::doInitialization(Module &M) {
     break;
   case ExceptionHandling::WinEH:
     switch (MAI->getWinEHEncodingType()) {
-    default: llvm_unreachable("unsupported unwinding information encoding");
+    default:
+      llvm_unreachable("unsupported unwinding information encoding");
     case WinEH::EncodingType::Invalid:
       break;
     case WinEH::EncodingType::X86:
@@ -643,7 +641,7 @@ void AsmPrinter::emitLinkage(const GlobalValue *GV, MCSymbol *GVSym) const {
     } else if (MAI->avoidWeakIfComdat() && GV->hasComdat()) {
       // .globl _foo
       OutStreamer->emitSymbolAttribute(GVSym, MCSA_Global);
-      //NOTE: linkonce is handled by the section the symbol was assigned to.
+      // NOTE: linkonce is handled by the section the symbol was assigned to.
     } else {
       // .weak _foo
       OutStreamer->emitSymbolAttribute(GVSym, MCSA_Weak);
@@ -680,7 +678,8 @@ MCSymbol *AsmPrinter::getSymbolPreferLocal(const GlobalValue &GV) const {
   // assembler would otherwise be conservative and assume a global default
   // visibility symbol can be interposable, even if the code generator already
   // assumed it.
-  if (TM.getTargetTriple().isOSBinFormatELF() && GV.canBenefitFromLocalAlias()) {
+  if (TM.getTargetTriple().isOSBinFormatELF() &&
+      GV.canBenefitFromLocalAlias()) {
     const Module &M = *GV.getParent();
     if (TM.getRelocationModel() != Reloc::Static &&
         M.getPIELevel() == PIELevel::Default && GV.isDSOLocal())
@@ -737,7 +736,7 @@ void AsmPrinter::emitGlobalVariable(const GlobalVariable *GV) {
     OutStreamer->emitSymbolAttribute(EmittedSym, MAI->getMemtagAttr());
   }
 
-  if (!GV->hasInitializer())   // External globals require no extra code.
+  if (!GV->hasInitializer()) // External globals require no extra code.
     return;
 
   GVSym->redefineIfPossible();
@@ -759,15 +758,15 @@ void AsmPrinter::emitGlobalVariable(const GlobalVariable *GV) {
   const Align Alignment = getGVAlignment(GV, DL);
 
   for (const HandlerInfo &HI : Handlers) {
-    NamedRegionTimer T(HI.TimerName, HI.TimerDescription,
-                       HI.TimerGroupName, HI.TimerGroupDescription,
-                       TimePassesIsEnabled);
+    NamedRegionTimer T(HI.TimerName, HI.TimerDescription, HI.TimerGroupName,
+                       HI.TimerGroupDescription, TimePassesIsEnabled);
     HI.Handler->setSymbolSize(GVSym, Size);
   }
 
   // Handle common symbols
   if (GVKind.isCommon()) {
-    if (Size == 0) Size = 1;   // .comm Foo, 0 is undefined, avoid it.
+    if (Size == 0)
+      Size = 1; // .comm Foo, 0 is undefined, avoid it.
     // .comm _foo, 42, 4
     OutStreamer->emitCommonSymbol(GVSym, Size, Alignment);
     return;
@@ -858,7 +857,7 @@ void AsmPrinter::emitGlobalVariable(const GlobalVariable *GV) {
     //   - pointer to mangled symbol above with initializer
     unsigned PtrSize = DL.getPointerTypeSize(GV->getType());
     OutStreamer->emitSymbolValue(GetExternalSymbolSymbol("_tlv_bootstrap"),
-                                PtrSize);
+                                 PtrSize);
     OutStreamer->emitIntValue(0, PtrSize);
     OutStreamer->emitSymbolValue(MangSym, PtrSize);
 
@@ -941,8 +940,8 @@ void AsmPrinter::emitFunctionHeader() {
     if (MAI->hasSubsectionsViaSymbols()) {
       // Preserving prefix data on platforms which use subsections-via-symbols
       // is a bit tricky. Here we introduce a symbol for the prefix data
-      // and use the .alt_entry attribute to mark the function's real entry point
-      // as an alternative entry point to the prefix-data symbol.
+      // and use the .alt_entry attribute to mark the function's real entry
+      // point as an alternative entry point to the prefix-data symbol.
       MCSymbol *PrefixSym = OutContext.createLinkerPrivateTempSymbol();
       OutStreamer->emitLabel(PrefixSym);
 
@@ -1010,7 +1009,7 @@ void AsmPrinter::emitFunctionHeader() {
   // If the function had address-taken blocks that got deleted, then we have
   // references to the dangling symbols.  Emit them at the start of the function
   // so that we don't get references to undefined symbols.
-  std::vector<MCSymbol*> DeadBlockSyms;
+  std::vector<MCSymbol *> DeadBlockSyms;
   takeDeletedSymbolsForFunction(&F, DeadBlockSyms);
   for (MCSymbol *DeadBlockSym : DeadBlockSyms) {
     OutStreamer->AddComment("Address taken block that was later removed");
@@ -1022,7 +1021,7 @@ void AsmPrinter::emitFunctionHeader() {
       MCSymbol *CurPos = OutContext.createTempSymbol();
       OutStreamer->emitLabel(CurPos);
       OutStreamer->emitAssignment(CurrentFnBegin,
-                                 MCSymbolRefExpr::create(CurPos, OutContext));
+                                  MCSymbolRefExpr::create(CurPos, OutContext));
     } else {
       OutStreamer->emitLabel(CurrentFnBegin);
     }
@@ -1330,7 +1329,7 @@ void AsmPrinter::emitFrameAlloc(const MachineInstr &MI) {
 
   // Emit a symbol assignment.
   OutStreamer->emitAssignment(FrameAllocSym,
-                             MCConstantExpr::create(FrameOffset, OutContext));
+                              MCConstantExpr::create(FrameOffset, OutContext));
 }
 
 /// Returns the BB metadata to be emitted in the SHT_LLVM_BB_ADDR_MAP section
@@ -1711,10 +1710,12 @@ void AsmPrinter::emitFunctionBody() {
         }
         break;
       case TargetOpcode::IMPLICIT_DEF:
-        if (isVerbose()) emitImplicitDef(&MI);
+        if (isVerbose())
+          emitImplicitDef(&MI);
         break;
       case TargetOpcode::KILL:
-        if (isVerbose()) emitKill(&MI, *this);
+        if (isVerbose())
+          emitKill(&MI, *this);
         break;
       case TargetOpcode::PSEUDO_PROBE:
         emitPseudoProbe(MI);
@@ -2333,7 +2334,7 @@ bool AsmPrinter::doFinalization(Module &M) {
 
   GCModuleInfo *MI = getAnalysisIfAvailable<GCModuleInfo>();
   assert(MI && "AsmPrinter didn't require GCModuleInfo?");
-  for (GCModuleInfo::iterator I = MI->end(), E = MI->begin(); I != E; )
+  for (GCModuleInfo::iterator I = MI->end(), E = MI->begin(); I != E;)
     if (GCMetadataPrinter *MP = getOrCreateGCPrinter(**--I))
       MP->finishAssembly(M, *MI, *this);
 
@@ -2453,9 +2454,9 @@ void AsmPrinter::SetupMachineFunction(MachineFunction &MF) {
   bool NeedsLocalForSize = MAI->needsLocalForSize();
   if (F.hasFnAttribute("patchable-function-entry") ||
       F.hasFnAttribute("function-instrument") ||
-      F.hasFnAttribute("xray-instruction-threshold") ||
-      needFuncLabels(MF) || NeedsLocalForSize ||
-      MF.getTarget().Options.EmitStackSizeSection || MF.hasBBLabels()) {
+      F.hasFnAttribute("xray-instruction-threshold") || needFuncLabels(MF) ||
+      NeedsLocalForSize || MF.getTarget().Options.EmitStackSizeSection ||
+      MF.hasBBLabels()) {
     CurrentFnBegin = createTempSymbol("func_begin");
     if (NeedsLocalForSize)
       CurrentFnSymForSize = CurrentFnBegin;
@@ -2467,13 +2468,13 @@ void AsmPrinter::SetupMachineFunction(MachineFunction &MF) {
 namespace {
 
 // Keep track the alignment, constpool entries per Section.
-  struct SectionCPs {
-    MCSection *S;
-    Align Alignment;
-    SmallVector<unsigned, 4> CPEs;
+struct SectionCPs {
+  MCSection *S;
+  Align Alignment;
+  SmallVector<unsigned, 4> CPEs;
 
-    SectionCPs(MCSection *s, Align a) : S(s), Alignment(a) {}
-  };
+  SectionCPs(MCSection *s, Align a) : S(s), Alignment(a) {}
+};
 
 } // end anonymous namespace
 
@@ -2484,7 +2485,8 @@ namespace {
 void AsmPrinter::emitConstantPool() {
   const MachineConstantPool *MCP = MF->getConstantPool();
   const std::vector<MachineConstantPoolEntry> &CP = MCP->getConstants();
-  if (CP.empty()) return;
+  if (CP.empty())
+    return;
 
   // Calculate sections for constant pool entries. We collect entries to go into
   // the same section together to reduce amount of section switch statements.
@@ -2561,10 +2563,13 @@ void AsmPrinter::emitConstantPool() {
 void AsmPrinter::emitJumpTableInfo() {
   const DataLayout &DL = MF->getDataLayout();
   const MachineJumpTableInfo *MJTI = MF->getJumpTableInfo();
-  if (!MJTI) return;
-  if (MJTI->getEntryKind() == MachineJumpTableInfo::EK_Inline) return;
+  if (!MJTI)
+    return;
+  if (MJTI->getEntryKind() == MachineJumpTableInfo::EK_Inline)
+    return;
   const std::vector<MachineJumpTableEntry> &JT = MJTI->getJumpTables();
-  if (JT.empty()) return;
+  if (JT.empty())
+    return;
 
   // Pick the directive to use to print the jump table entries, and switch to
   // the appropriate section.
@@ -2588,28 +2593,30 @@ void AsmPrinter::emitJumpTableInfo() {
     OutStreamer->emitDataRegion(MCDR_DataRegionJT32);
 
   for (unsigned JTI = 0, e = JT.size(); JTI != e; ++JTI) {
-    const std::vector<MachineBasicBlock*> &JTBBs = JT[JTI].MBBs;
+    const std::vector<MachineBasicBlock *> &JTBBs = JT[JTI].MBBs;
 
     // If this jump table was deleted, ignore it.
-    if (JTBBs.empty()) continue;
+    if (JTBBs.empty())
+      continue;
 
     // For the EK_LabelDifference32 entry, if using .set avoids a relocation,
     /// emit a .set directive for each unique entry.
     if (MJTI->getEntryKind() == MachineJumpTableInfo::EK_LabelDifference32 &&
         MAI->doesSetDirectiveSuppressReloc()) {
-      SmallPtrSet<const MachineBasicBlock*, 16> EmittedSets;
+      SmallPtrSet<const MachineBasicBlock *, 16> EmittedSets;
       const TargetLowering *TLI = MF->getSubtarget().getTargetLowering();
-      const MCExpr *Base = TLI->getPICJumpTableRelocBaseExpr(MF,JTI,OutContext);
+      const MCExpr *Base =
+          TLI->getPICJumpTableRelocBaseExpr(MF, JTI, OutContext);
       for (const MachineBasicBlock *MBB : JTBBs) {
         if (!EmittedSets.insert(MBB).second)
           continue;
 
         // .set LJTSet, LBB32-base
         const MCExpr *LHS =
-          MCSymbolRefExpr::create(MBB->getSymbol(), OutContext);
-        OutStreamer->emitAssignment(GetJTSetSymbol(JTI, MBB->getNumber()),
-                                    MCBinaryExpr::createSub(LHS, Base,
-                                                            OutContext));
+            MCSymbolRefExpr::create(MBB->getSymbol(), OutContext);
+        OutStreamer->emitAssignment(
+            GetJTSetSymbol(JTI, MBB->getNumber()),
+            MCBinaryExpr::createSub(LHS, Base, OutContext));
       }
     }
 
@@ -2623,7 +2630,7 @@ void AsmPrinter::emitJumpTableInfo() {
       // GetJTISymbol.
       OutStreamer->emitLabel(GetJTISymbol(JTI, true));
 
-    MCSymbol* JTISymbol = GetJTISymbol(JTI);
+    MCSymbol *JTISymbol = GetJTISymbol(JTI);
     OutStreamer->emitLabel(JTISymbol);
 
     for (const MachineBasicBlock *MBB : JTBBs)
@@ -2704,7 +2711,7 @@ void AsmPrinter::emitJumpTableEntry(const MachineJumpTableInfo *MJTI,
 /// do nothing and return false.
 bool AsmPrinter::emitSpecialLLVMGlobal(const GlobalVariable *GV) {
   if (GV->getName() == "llvm.used") {
-    if (MAI->hasNoDeadStrip())    // No need to emit this at all.
+    if (MAI->hasNoDeadStrip()) // No need to emit this at all.
       emitLLVMUsedList(cast<ConstantArray>(GV->getInitializer()));
     return true;
   }
@@ -2714,7 +2721,8 @@ bool AsmPrinter::emitSpecialLLVMGlobal(const GlobalVariable *GV) {
       GV->hasAvailableExternallyLinkage())
     return true;
 
-  if (!GV->hasAppendingLinkage()) return false;
+  if (!GV->hasAppendingLinkage())
+    return false;
 
   assert(GV->hasInitializer() && "Not a special LLVM global!");
 
@@ -2741,7 +2749,7 @@ void AsmPrinter::emitLLVMUsedList(const ConstantArray *InitList) {
   // Should be an array of 'i8*'.
   for (unsigned i = 0, e = InitList->getNumOperands(); i != e; ++i) {
     const GlobalValue *GV =
-      dyn_cast<GlobalValue>(InitList->getOperand(i)->stripPointerCasts());
+        dyn_cast<GlobalValue>(InitList->getOperand(i)->stripPointerCasts());
     if (GV)
       OutStreamer->emitSymbolAttribute(getSymbol(GV), MCSA_NoDeadStrip);
   }
@@ -3031,7 +3039,7 @@ const MCExpr *AsmPrinter::lowerConstant(const Constant *CV) {
     // integer type.  This promotes constant folding and simplifies this code.
     Constant *Op = CE->getOperand(0);
     Op = ConstantExpr::getIntegerCast(Op, DL.getIntPtrType(CV->getType()),
-                                      false/*ZExt*/);
+                                      false /*ZExt*/);
     return lowerConstant(Op);
   }
 
@@ -3133,7 +3141,8 @@ static int isRepeatedByteSequence(const ConstantDataSequential *V) {
   assert(!Data.empty() && "Empty aggregates should be CAZ node");
   char C = Data[0];
   for (unsigned i = 1, e = Data.size(); i != e; ++i)
-    if (Data[i] != C) return -1;
+    if (Data[i] != C)
+      return -1;
   return static_cast<uint8_t>(C); // Ensure 255 is not returned as -1.
 }
 
@@ -3277,7 +3286,8 @@ static void emitGlobalConstantVector(const DataLayout &DL,
     EmittedSize = DL.getTypeStoreSize(CV->getType());
   } else {
     for (unsigned I = 0, E = CV->getType()->getNumElements(); I != E; ++I) {
-      emitGlobalAliasInline(AP, DL.getTypeAllocSize(CV->getType()) * I, AliasList);
+      emitGlobalAliasInline(AP, DL.getTypeAllocSize(CV->getType()) * I,
+                            AliasList);
       emitGlobalConstantImpl(DL, CV->getOperand(I), AP);
     }
     EmittedSize =
@@ -3396,8 +3406,8 @@ static void emitGlobalConstantLargeInt(const ConstantInt *CI, AsmPrinter &AP) {
       // ExtraBits     0       1       (BitWidth / 64) - 1
       //       chu[nk1 chu][nk2 chu] ... [nkN-1 chunkN]
       ExtraBitsSize = alignTo(ExtraBitsSize, 8);
-      ExtraBits = Realigned.getRawData()[0] &
-        (((uint64_t)-1) >> (64 - ExtraBitsSize));
+      ExtraBits =
+          Realigned.getRawData()[0] & (((uint64_t)-1) >> (64 - ExtraBitsSize));
       if (BitWidth >= 64)
         Realigned.lshrInPlace(ExtraBitsSize);
     } else
@@ -3420,8 +3430,9 @@ static void emitGlobalConstantLargeInt(const ConstantInt *CI, AsmPrinter &AP) {
     uint64_t Size = AP.getDataLayout().getTypeStoreSize(CI->getType());
     Size -= (BitWidth / 64) * 8;
     assert(Size && Size * 8 >= ExtraBitsSize &&
-           (ExtraBits & (((uint64_t)-1) >> (64 - ExtraBitsSize)))
-           == ExtraBits && "Directive too small for extra bits.");
+           (ExtraBits & (((uint64_t)-1) >> (64 - ExtraBitsSize))) ==
+               ExtraBits &&
+           "Directive too small for extra bits.");
     AP.OutStreamer->emitIntValue(ExtraBits, Size);
   }
 }
@@ -3714,12 +3725,13 @@ MCSymbol *AsmPrinter::GetExternalSymbolSymbol(StringRef Sym) const {
 /// PrintParentLoopComment - Print comments about parent loops of this one.
 static void PrintParentLoopComment(raw_ostream &OS, const MachineLoop *Loop,
                                    unsigned FunctionNumber) {
-  if (!Loop) return;
+  if (!Loop)
+    return;
   PrintParentLoopComment(OS, Loop->getParentLoop(), FunctionNumber);
-  OS.indent(Loop->getLoopDepth()*2)
-    << "Parent Loop BB" << FunctionNumber << "_"
-    << Loop->getHeader()->getNumber()
-    << " Depth=" << Loop->getLoopDepth() << '\n';
+  OS.indent(Loop->getLoopDepth() * 2)
+      << "Parent Loop BB" << FunctionNumber << "_"
+      << Loop->getHeader()->getNumber() << " Depth=" << Loop->getLoopDepth()
+      << '\n';
 }
 
 /// PrintChildLoopComment - Print comments about child loops within
@@ -3728,10 +3740,10 @@ static void PrintChildLoopComment(raw_ostream &OS, const MachineLoop *Loop,
                                   unsigned FunctionNumber) {
   // Add child loop information
   for (const MachineLoop *CL : *Loop) {
-    OS.indent(CL->getLoopDepth()*2)
-      << "Child Loop BB" << FunctionNumber << "_"
-      << CL->getHeader()->getNumber() << " Depth " << CL->getLoopDepth()
-      << '\n';
+    OS.indent(CL->getLoopDepth() * 2)
+        << "Child Loop BB" << FunctionNumber << "_"
+        << CL->getHeader()->getNumber() << " Depth " << CL->getLoopDepth()
+        << '\n';
     PrintChildLoopComment(OS, CL, FunctionNumber);
   }
 }
@@ -3742,7 +3754,8 @@ static void emitBasicBlockLoopComments(const MachineBasicBlock &MBB,
                                        const AsmPrinter &AP) {
   // Add loop depth information
   const MachineLoop *Loop = LI->getLoopFor(&MBB);
-  if (!Loop) return;
+  if (!Loop)
+    return;
 
   MachineBasicBlock *Header = Loop->getHeader();
   assert(Header && "No header for loop");
@@ -3751,9 +3764,9 @@ static void emitBasicBlockLoopComments(const MachineBasicBlock &MBB,
   // and return.
   if (Header != &MBB) {
     AP.OutStreamer->AddComment("  in Loop: Header=BB" +
-                               Twine(AP.getFunctionNumber())+"_" +
-                               Twine(Loop->getHeader()->getNumber())+
-                               " Depth="+Twine(Loop->getLoopDepth()));
+                               Twine(AP.getFunctionNumber()) + "_" +
+                               Twine(Loop->getHeader()->getNumber()) +
+                               " Depth=" + Twine(Loop->getLoopDepth()));
     return;
   }
 
@@ -3764,7 +3777,7 @@ static void emitBasicBlockLoopComments(const MachineBasicBlock &MBB,
   PrintParentLoopComment(OS, Loop->getParentLoop(), AP.getFunctionNumber());
 
   OS << "=>";
-  OS.indent(Loop->getLoopDepth()*2-2);
+  OS.indent(Loop->getLoopDepth() * 2 - 2);
 
   OS << "This ";
   if (Loop->isInnermost())
@@ -3870,7 +3883,8 @@ void AsmPrinter::emitVisibility(MCSymbol *Sym, unsigned Visibility,
   MCSymbolAttr Attr = MCSA_Invalid;
 
   switch (Visibility) {
-  default: break;
+  default:
+    break;
   case GlobalValue::HiddenVisibility:
     if (IsDefinition)
       Attr = MAI->getHiddenVisibilityAttr();
@@ -3904,8 +3918,8 @@ bool AsmPrinter::shouldEmitLabelForBasicBlock(
 /// isBlockOnlyReachableByFallthough - Return true if the basic block has
 /// exactly one predecessor and the control transfer mechanism between
 /// the predecessor and this block is a fall-through.
-bool AsmPrinter::
-isBlockOnlyReachableByFallthrough(const MachineBasicBlock *MBB) const {
+bool AsmPrinter::isBlockOnlyReachableByFallthrough(
+    const MachineBasicBlock *MBB) const {
   // If this is a landing pad, it isn't a fall through.  If it has no preds,
   // then nothing falls through to it.
   if (MBB->isEHPad() || MBB->pred_empty())
@@ -4104,7 +4118,7 @@ void AsmPrinter::recordSled(MCSymbol *Sled, const MachineInstr &MI,
   auto Attr = F.getFnAttribute("function-instrument");
   bool LogArgs = F.hasFnAttribute("xray-log-args");
   bool AlwaysInstrument =
-    Attr.isStringAttribute() && Attr.getValueAsString() == "xray-always";
+      Attr.isStringAttribute() && Attr.getValueAsString() == "xray-always";
   if (Kind == SledKind::FUNCTION_ENTER && LogArgs)
     Kind = SledKind::LOG_ARGS_ENTER;
   Sleds.emplace_back(XRayFunctionEntry{Sled, CurrentFnSym, Kind,
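(Editorial aside, not part of the patch: among the `AsmPrinter.cpp` hunks above, `isRepeatedByteSequence` is a small scan that detects whether a byte array repeats a single value, so the printer can emit it with a fill directive instead of spelling out every byte. A standalone sketch of the same idea, with a simplified input type:)

```cpp
#include <cstdint>
#include <string_view>

// Returns the repeated byte as 0..255, or -1 if the bytes are not all equal.
// Follows the shape of isRepeatedByteSequence in the patch; unlike the
// original (which asserts non-empty input), an empty sequence yields -1 here.
int repeatedByte(std::string_view Data) {
  if (Data.empty())
    return -1;
  char C = Data[0];
  for (char B : Data)
    if (B != C)
      return -1;
  return static_cast<uint8_t>(C); // Cast so 0xFF comes back as 255, not -1.
}
```

The `uint8_t` cast is the subtle part the patch's comment calls out: without it, a buffer of `0xFF` bytes would collide with the -1 "not repeated" sentinel.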
diff --git a/llvm/lib/CodeGen/AsmPrinter/AsmPrinterDwarf.cpp b/llvm/lib/CodeGen/AsmPrinter/AsmPrinterDwarf.cpp
index 21d0d070c247f48..bfa5bd154709b65 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AsmPrinterDwarf.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/AsmPrinterDwarf.cpp
@@ -60,17 +60,17 @@ static const char *DecodeDWARFEncoding(unsigned Encoding) {
     return "pcrel udata8";
   case dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata8:
     return "pcrel sdata8";
-  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_udata4
-      :
+  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
+      dwarf::DW_EH_PE_udata4:
     return "indirect pcrel udata4";
-  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata4
-      :
+  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
+      dwarf::DW_EH_PE_sdata4:
     return "indirect pcrel sdata4";
-  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_udata8
-      :
+  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
+      dwarf::DW_EH_PE_udata8:
     return "indirect pcrel udata8";
-  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata8
-      :
+  case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
+      dwarf::DW_EH_PE_sdata8:
     return "indirect pcrel sdata8";
   case dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_datarel |
       dwarf::DW_EH_PE_sdata4:
@@ -90,8 +90,8 @@ static const char *DecodeDWARFEncoding(unsigned Encoding) {
 void AsmPrinter::emitEncodingByte(unsigned Val, const char *Desc) const {
   if (isVerbose()) {
     if (Desc)
-      OutStreamer->AddComment(Twine(Desc) + " Encoding = " +
-                              Twine(DecodeDWARFEncoding(Val)));
+      OutStreamer->AddComment(Twine(Desc) +
+                              " Encoding = " + Twine(DecodeDWARFEncoding(Val)));
     else
       OutStreamer->AddComment(Twine("Encoding = ") + DecodeDWARFEncoding(Val));
   }
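(Editorial aside, not part of the patch: the `DecodeDWARFEncoding` hunks above only rewrap `case` labels that OR together `DW_EH_PE_*` flags. For context, a DWARF EH pointer-encoding byte packs a value format in the low nibble, an application mode in bits 4-6, and an indirect flag in bit 7. A minimal decoder sketch, using the raw constant values from the LSB exception-handling conventions rather than LLVM's `dwarf::` enum:)

```cpp
#include <cstdint>
#include <string>

// Decode a DW_EH_PE_* pointer-encoding byte into a readable description.
// Layout: bit 7 = indirect, bits 4-6 = application, bits 0-3 = value format.
std::string decodeEHEncoding(uint8_t Enc) {
  std::string S;
  if (Enc & 0x80)                 // DW_EH_PE_indirect
    S += "indirect ";
  switch (Enc & 0x70) {           // application bits
  case 0x10: S += "pcrel ";   break;  // DW_EH_PE_pcrel
  case 0x30: S += "datarel "; break;  // DW_EH_PE_datarel
  default:   break;                   // absolute and others elided
  }
  switch (Enc & 0x0F) {           // value-format bits
  case 0x03: S += "udata4"; break;    // DW_EH_PE_udata4
  case 0x0B: S += "sdata4"; break;    // DW_EH_PE_sdata4
  case 0x0C: S += "sdata8"; break;    // DW_EH_PE_sdata8
  default:   S += "other";  break;
  }
  return S;
}
```

This is why each `case` in the patch is an OR of up to three flags: one byte carries all three independent fields at once.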
diff --git a/llvm/lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp b/llvm/lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
index bb210527754d22d..0eb3fc777285299 100644
--- a/llvm/lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/AsmPrinterInlineAsm.cpp
@@ -66,7 +66,6 @@ unsigned AsmPrinter::addInlineAsmDiagBuffer(StringRef AsmStr,
   return BufNum;
 }
 
-
 /// EmitInlineAsm - Emit a blob of inline asm to the output streamer.
 void AsmPrinter::emitInlineAsm(StringRef Str, const MCSubtargetInfo &STI,
                                const MCTargetOptions &MCOptions,
@@ -77,7 +76,7 @@ void AsmPrinter::emitInlineAsm(StringRef Str, const MCSubtargetInfo &STI,
   // Remember if the buffer is nul terminated or not so we can avoid a copy.
   bool isNullTerminated = Str.back() == 0;
   if (isNullTerminated)
-    Str = Str.substr(0, Str.size()-1);
+    Str = Str.substr(0, Str.size() - 1);
 
   // If the output streamer does not have mature MC support or the integrated
   // assembler has been disabled or not required, just emit the blob textually.
@@ -111,8 +110,8 @@ void AsmPrinter::emitInlineAsm(StringRef Str, const MCSubtargetInfo &STI,
   // because it's not subtarget dependent.
   std::unique_ptr<MCInstrInfo> MII(TM.getTarget().createMCInstrInfo());
   assert(MII && "Failed to create instruction info");
-  std::unique_ptr<MCTargetAsmParser> TAP(TM.getTarget().createMCAsmParser(
-      STI, *Parser, *MII, MCOptions));
+  std::unique_ptr<MCTargetAsmParser> TAP(
+      TM.getTarget().createMCAsmParser(STI, *Parser, *MII, MCOptions));
   if (!TAP)
     report_fatal_error("Inline asm not supported by this streamer because"
                        " we don't have an asm parser for this target\n");
@@ -159,7 +158,7 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
     switch (*LastEmitted) {
     default: {
       // Not a special case, emit the string section literally.
-      const char *LiteralEnd = LastEmitted+1;
+      const char *LiteralEnd = LastEmitted + 1;
       while (*LiteralEnd && *LiteralEnd != '{' && *LiteralEnd != '|' &&
              *LiteralEnd != '}' && *LiteralEnd != '$' && *LiteralEnd != '\n')
         ++LiteralEnd;
@@ -169,21 +168,23 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
       break;
     }
     case '\n':
-      ++LastEmitted;   // Consume newline character.
-      OS << '\n';      // Indent code with newline.
+      ++LastEmitted; // Consume newline character.
+      OS << '\n';    // Indent code with newline.
       break;
     case '$': {
-      ++LastEmitted;   // Consume '$' character.
+      ++LastEmitted; // Consume '$' character.
       bool Done = true;
 
       // Handle escapes.
       switch (*LastEmitted) {
-      default: Done = false; break;
-      case '$':     // $$ -> $
+      default:
+        Done = false;
+        break;
+      case '$': // $$ -> $
         if (!InputIsIntelDialect)
           if (CurVariant == -1 || CurVariant == AsmPrinterVariant)
             OS << '$';
-        ++LastEmitted;  // Consume second '$' character.
+        ++LastEmitted; // Consume second '$' character.
         break;
       case '(':        // $( -> same as GCC's { character.
         ++LastEmitted; // Consume '(' character.
@@ -207,11 +208,12 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
           CurVariant = -1;
         break;
       }
-      if (Done) break;
+      if (Done)
+        break;
 
       bool HasCurlyBraces = false;
-      if (*LastEmitted == '{') {     // ${variable}
-        ++LastEmitted;               // Consume '{' character.
+      if (*LastEmitted == '{') { // ${variable}
+        ++LastEmitted;           // Consume '{' character.
         HasCurlyBraces = true;
       }
 
@@ -224,10 +226,11 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
         const char *StrEnd = strchr(StrStart, '}');
         if (!StrEnd)
           report_fatal_error("Unterminated ${:foo} operand in inline asm"
-                             " string: '" + Twine(AsmStr) + "'");
+                             " string: '" +
+                             Twine(AsmStr) + "'");
         if (CurVariant == -1 || CurVariant == AsmPrinterVariant)
           AP->PrintSpecial(MI, OS, StringRef(StrStart, StrEnd - StrStart));
-        LastEmitted = StrEnd+1;
+        LastEmitted = StrEnd + 1;
         break;
       }
 
@@ -237,7 +240,7 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
         ++IDEnd;
 
       unsigned Val;
-      if (StringRef(IDStart, IDEnd-IDStart).getAsInteger(10, Val))
+      if (StringRef(IDStart, IDEnd - IDStart).getAsInteger(10, Val))
         report_fatal_error("Bad $ operand number in inline asm string: '" +
                            Twine(AsmStr) + "'");
       LastEmitted = IDEnd;
@@ -246,25 +249,25 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
         report_fatal_error("Invalid $ operand number in inline asm string: '" +
                            Twine(AsmStr) + "'");
 
-      char Modifier[2] = { 0, 0 };
+      char Modifier[2] = {0, 0};
 
       if (HasCurlyBraces) {
         // If we have curly braces, check for a modifier character.  This
         // supports syntax like ${0:u}, which correspond to "%u0" in GCC asm.
         if (*LastEmitted == ':') {
-          ++LastEmitted;    // Consume ':' character.
+          ++LastEmitted; // Consume ':' character.
           if (*LastEmitted == 0)
             report_fatal_error("Bad ${:} expression in inline asm string: '" +
                                Twine(AsmStr) + "'");
 
           Modifier[0] = *LastEmitted;
-          ++LastEmitted;    // Consume modifier character.
+          ++LastEmitted; // Consume modifier character.
         }
 
         if (*LastEmitted != '}')
           report_fatal_error("Bad ${} expression in inline asm string: '" +
                              Twine(AsmStr) + "'");
-        ++LastEmitted;    // Consume '}' character.
+        ++LastEmitted; // Consume '}' character.
       }
 
       // Okay, we finally have a value number.  Ask the target to print this
@@ -323,7 +326,7 @@ static void EmitInlineAsmStr(const char *AsmStr, const MachineInstr *MI,
   }
   if (InputIsIntelDialect)
     OS << "\n\t.att_syntax";
-  OS << '\n' << (char)0;  // null terminate string.
+  OS << '\n' << (char)0; // null terminate string.
 }
 
 /// This method formats and emits the specified machine instruction that is an
@@ -366,7 +369,7 @@ void AsmPrinter::emitInlineAsm(const MachineInstr *MI) const {
   SmallString<256> StringData;
   raw_svector_ostream OS(StringData);
 
-  AsmPrinter *AP = const_cast<AsmPrinter*>(this);
+  AsmPrinter *AP = const_cast<AsmPrinter *>(this);
   EmitInlineAsmStr(AsmStr, MI, MMI, MAI, AP, LocCookie, OS);
 
   // Emit warnings if we use reserved registers on the clobber list, as
@@ -451,7 +454,7 @@ void AsmPrinter::PrintSpecial(const MachineInstr *MI, raw_ostream &OS,
     std::string msg;
     raw_string_ostream Msg(msg);
     Msg << "Unknown special formatter '" << Code
-         << "' for machine instr: " << *MI;
+        << "' for machine instr: " << *MI;
     report_fatal_error(Twine(Msg.str()));
   }
 }
@@ -470,20 +473,21 @@ bool AsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNo,
                                  const char *ExtraCode, raw_ostream &O) {
   // Does this asm operand have a single letter operand modifier?
   if (ExtraCode && ExtraCode[0]) {
-    if (ExtraCode[1] != 0) return true; // Unknown modifier.
+    if (ExtraCode[1] != 0)
+      return true; // Unknown modifier.
 
     // https://gcc.gnu.org/onlinedocs/gccint/Output-Template.html
     const MachineOperand &MO = MI->getOperand(OpNo);
     switch (ExtraCode[0]) {
     default:
-      return true;  // Unknown modifier.
-    case 'a': // Print as memory address.
+      return true; // Unknown modifier.
+    case 'a':      // Print as memory address.
       if (MO.isReg()) {
         PrintAsmMemoryOperand(MI, OpNo, nullptr, O);
         return false;
       }
       [[fallthrough]]; // GCC allows '%a' to behave like '%c' with immediates.
-    case 'c': // Substitute immediate value without immediate syntax
+    case 'c':          // Substitute immediate value without immediate syntax
       if (MO.isImm()) {
         O << MO.getImm();
         return false;
@@ -493,12 +497,12 @@ bool AsmPrinter::PrintAsmOperand(const MachineInstr *MI, unsigned OpNo,
         return false;
       }
       return true;
-    case 'n':  // Negate the immediate constant.
+    case 'n': // Negate the immediate constant.
       if (!MO.isImm())
         return true;
       O << -MO.getImm();
       return false;
-    case 's':  // The GCC deprecated s modifier
+    case 's': // The GCC deprecated s modifier
       if (!MO.isImm())
         return true;
       O << ((32 - MO.getImm()) & 31);
diff --git a/llvm/lib/CodeGen/AsmPrinter/ByteStreamer.h b/llvm/lib/CodeGen/AsmPrinter/ByteStreamer.h
index bd2c60eadd61291..0a0f6e3fa8f4c66 100644
--- a/llvm/lib/CodeGen/AsmPrinter/ByteStreamer.h
+++ b/llvm/lib/CodeGen/AsmPrinter/ByteStreamer.h
@@ -22,12 +22,12 @@
 
 namespace llvm {
 class ByteStreamer {
- protected:
+protected:
   ~ByteStreamer() = default;
-  ByteStreamer(const ByteStreamer&) = default;
+  ByteStreamer(const ByteStreamer &) = default;
   ByteStreamer() = default;
 
- public:
+public:
   // For now we're just handling the calls we need for dwarf emission/hashing.
   virtual void emitInt8(uint8_t Byte, const Twine &Comment = "") = 0;
   virtual void emitSLEB128(uint64_t DWord, const Twine &Comment = "") = 0;
@@ -67,12 +67,13 @@ class APByteStreamer final : public ByteStreamer {
 };
 
 class HashingByteStreamer final : public ByteStreamer {
- private:
+private:
   DIEHash &Hash;
- public:
-   HashingByteStreamer(DIEHash &H) : Hash(H) {}
-   void emitInt8(uint8_t Byte, const Twine &Comment) override {
-     Hash.update(Byte);
+
+public:
+  HashingByteStreamer(DIEHash &H) : Hash(H) {}
+  void emitInt8(uint8_t Byte, const Twine &Comment) override {
+    Hash.update(Byte);
   }
   void emitSLEB128(uint64_t DWord, const Twine &Comment) override {
     Hash.addSLEB128(DWord);
@@ -116,7 +117,6 @@ class BufferByteStreamer final : public ByteStreamer {
       // with each other.
       for (size_t i = 1; i < Length; ++i)
         Comments.push_back("");
-
     }
   }
   void emitULEB128(uint64_t DWord, const Twine &Comment,
@@ -140,6 +140,6 @@ class BufferByteStreamer final : public ByteStreamer {
   }
 };
 
-}
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/AsmPrinter/CMakeLists.txt b/llvm/lib/CodeGen/AsmPrinter/CMakeLists.txt
index 0fe4b905831ff58..37aee1351fe4cd5 100644
--- a/llvm/lib/CodeGen/AsmPrinter/CMakeLists.txt
+++ b/llvm/lib/CodeGen/AsmPrinter/CMakeLists.txt
@@ -1,47 +1,15 @@
-add_llvm_component_library(LLVMAsmPrinter
-  AccelTable.cpp
-  AddressPool.cpp
-  AIXException.cpp
-  ARMException.cpp
-  AsmPrinter.cpp
-  AsmPrinterDwarf.cpp
-  AsmPrinterInlineAsm.cpp
-  DbgEntityHistoryCalculator.cpp
-  DebugHandlerBase.cpp
-  DebugLocStream.cpp
-  DIE.cpp
-  DIEHash.cpp
-  DwarfCFIException.cpp
-  DwarfCompileUnit.cpp
-  DwarfDebug.cpp
-  DwarfExpression.cpp
-  DwarfFile.cpp
-  DwarfStringPool.cpp
-  DwarfUnit.cpp
-  EHStreamer.cpp
-  ErlangGCPrinter.cpp
-  OcamlGCPrinter.cpp
-  PseudoProbePrinter.cpp
-  WinCFGuard.cpp
-  WinException.cpp
-  CodeViewDebug.cpp
-  WasmException.cpp
+add_llvm_component_library(LLVMAsmPrinter
+  AccelTable.cpp AddressPool.cpp AIXException.cpp ARMException.cpp
+  AsmPrinter.cpp AsmPrinterDwarf.cpp AsmPrinterInlineAsm.cpp
+  DbgEntityHistoryCalculator.cpp DebugHandlerBase.cpp DebugLocStream.cpp
+  DIE.cpp DIEHash.cpp DwarfCFIException.cpp DwarfCompileUnit.cpp
+  DwarfDebug.cpp DwarfExpression.cpp DwarfFile.cpp DwarfStringPool.cpp
+  DwarfUnit.cpp EHStreamer.cpp ErlangGCPrinter.cpp OcamlGCPrinter.cpp
+  PseudoProbePrinter.cpp WinCFGuard.cpp WinException.cpp CodeViewDebug.cpp
+  WasmException.cpp
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  BinaryFormat
-  CodeGen
-  CodeGenTypes
-  Core
-  DebugInfoCodeView
-  DebugInfoDWARF
-  MC
-  MCParser
-  Remarks
-  Support
-  Target
-  TargetParser
-  )
+  LINK_COMPONENTS Analysis BinaryFormat CodeGen CodeGenTypes Core
+  DebugInfoCodeView DebugInfoDWARF MC MCParser Remarks Support Target
+  TargetParser)
diff --git a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
index c43ca255505d77c..c2bb8c7a2df3ae3 100644
--- a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.cpp
@@ -415,7 +415,8 @@ TypeIndex CodeViewDebug::getFuncIdForSubprogram(const DISubprogram *SP) {
 }
 
 static bool isNonTrivial(const DICompositeType *DCTy) {
-  return ((DCTy->getFlags() & DINode::FlagNonTrivial) == DINode::FlagNonTrivial);
+  return ((DCTy->getFlags() & DINode::FlagNonTrivial) ==
+          DINode::FlagNonTrivial);
 }
 
 static FunctionOptions
@@ -439,8 +440,7 @@ getFunctionOptions(const DISubroutineType *Ty,
   if (ClassTy && isNonTrivial(ClassTy) && SPName == ClassTy->getName()) {
     FO |= FunctionOptions::Constructor;
 
-  // TODO: put the FunctionOptions::ConstructorWithVirtualBases flag.
-
+    // TODO: put the FunctionOptions::ConstructorWithVirtualBases flag.
   }
   return FO;
 }
@@ -869,12 +869,11 @@ void CodeViewDebug::emitCompilerInformation() {
   // Some Microsoft tools, like Binscope, expect a backend version number of at
   // least 8.something, so we'll coerce the LLVM version into a form that
   // guarantees it'll be big enough without really lying about the version.
-  int Major = 1000 * LLVM_VERSION_MAJOR +
-              10 * LLVM_VERSION_MINOR +
-              LLVM_VERSION_PATCH;
+  int Major =
+      1000 * LLVM_VERSION_MAJOR + 10 * LLVM_VERSION_MINOR + LLVM_VERSION_PATCH;
   // Clamp it for builds that use unusually large version numbers.
   Major = std::min<int>(Major, std::numeric_limits<uint16_t>::max());
-  Version BackVer = {{ Major, 0, 0, 0 }};
+  Version BackVer = {{Major, 0, 0, 0}};
   OS.AddComment("Backend version");
   for (int N : BackVer.Part)
     OS.emitInt16(N);
@@ -1059,8 +1058,7 @@ void CodeViewDebug::switchToDebugSectionForSymbol(const MCSymbol *GVSym) {
 
 // Emit an S_THUNK32/S_END symbol pair for a thunk routine.
 // The only supported thunk ordinal is currently the standard type.
-void CodeViewDebug::emitDebugInfoForThunk(const Function *GV,
-                                          FunctionInfo &FI,
+void CodeViewDebug::emitDebugInfoForThunk(const Function *GV, FunctionInfo &FI,
                                           const MCSymbol *Fn) {
   std::string FuncName =
       std::string(GlobalValue::dropLLVMManglingEscape(GV->getName()));
@@ -1256,8 +1254,8 @@ void CodeViewDebug::emitDebugInfoForFunction(const Function *GV,
   OS.emitCVLinetableDirective(FI.FuncId, Fn, FI.End);
 }
 
-CodeViewDebug::LocalVarDef
-CodeViewDebug::createDefRangeMem(uint16_t CVRegister, int Offset) {
+CodeViewDebug::LocalVarDef CodeViewDebug::createDefRangeMem(uint16_t CVRegister,
+                                                            int Offset) {
   LocalVarDef DR;
   DR.InMemory = -1;
   DR.DataOffset = Offset;
@@ -1355,8 +1353,7 @@ void CodeViewDebug::calculateRanges(
     // relatively common.
     std::optional<DbgVariableLocation> Location =
         DbgVariableLocation::extractFromMachineInstruction(*DVInst);
-    if (!Location)
-    {
+    if (!Location) {
       // When we don't have a location this is usually because LLVM has
       // transformed it into a constant and we only have an llvm.dbg.value. We
       // can't represent these well in CodeView since S_LOCAL only works on
@@ -1537,8 +1534,8 @@ void CodeViewDebug::beginFunctionImpl(const MachineFunction *MF) {
   }
   FPO |= FrameProcedureOptions(uint32_t(CurFn->EncodedLocalFramePtrReg) << 14U);
   FPO |= FrameProcedureOptions(uint32_t(CurFn->EncodedParamFramePtrReg) << 16U);
-  if (Asm->TM.getOptLevel() != CodeGenOpt::None &&
-      !GV.hasOptSize() && !GV.hasOptNone())
+  if (Asm->TM.getOptLevel() != CodeGenOpt::None && !GV.hasOptSize() &&
+      !GV.hasOptNone())
     FPO |= FrameProcedureOptions::OptimizedForSpeed;
   if (GV.hasProfileData()) {
     FPO |= FrameProcedureOptions::ValidProfileCounts;
@@ -1669,7 +1666,7 @@ TypeIndex CodeViewDebug::lowerType(const DIType *Ty, const DIType *ClassTy) {
   case dwarf::DW_TAG_restrict_type:
   case dwarf::DW_TAG_const_type:
   case dwarf::DW_TAG_volatile_type:
-  // TODO: add support for DW_TAG_atomic_type here
+    // TODO: add support for DW_TAG_atomic_type here
     return lowerTypeModifier(cast<DIDerivedType>(Ty));
   case dwarf::DW_TAG_subroutine_type:
     if (ClassTy) {
@@ -1807,57 +1804,115 @@ TypeIndex CodeViewDebug::lowerTypeBasic(const DIBasicType *Ty) {
     break;
   case dwarf::DW_ATE_boolean:
     switch (ByteSize) {
-    case 1:  STK = SimpleTypeKind::Boolean8;   break;
-    case 2:  STK = SimpleTypeKind::Boolean16;  break;
-    case 4:  STK = SimpleTypeKind::Boolean32;  break;
-    case 8:  STK = SimpleTypeKind::Boolean64;  break;
-    case 16: STK = SimpleTypeKind::Boolean128; break;
+    case 1:
+      STK = SimpleTypeKind::Boolean8;
+      break;
+    case 2:
+      STK = SimpleTypeKind::Boolean16;
+      break;
+    case 4:
+      STK = SimpleTypeKind::Boolean32;
+      break;
+    case 8:
+      STK = SimpleTypeKind::Boolean64;
+      break;
+    case 16:
+      STK = SimpleTypeKind::Boolean128;
+      break;
     }
     break;
   case dwarf::DW_ATE_complex_float:
     // The CodeView size for a complex represents the size of
     // an individual component.
     switch (ByteSize) {
-    case 4:  STK = SimpleTypeKind::Complex16;  break;
-    case 8:  STK = SimpleTypeKind::Complex32;  break;
-    case 16: STK = SimpleTypeKind::Complex64;  break;
-    case 20: STK = SimpleTypeKind::Complex80;  break;
-    case 32: STK = SimpleTypeKind::Complex128; break;
+    case 4:
+      STK = SimpleTypeKind::Complex16;
+      break;
+    case 8:
+      STK = SimpleTypeKind::Complex32;
+      break;
+    case 16:
+      STK = SimpleTypeKind::Complex64;
+      break;
+    case 20:
+      STK = SimpleTypeKind::Complex80;
+      break;
+    case 32:
+      STK = SimpleTypeKind::Complex128;
+      break;
     }
     break;
   case dwarf::DW_ATE_float:
     switch (ByteSize) {
-    case 2:  STK = SimpleTypeKind::Float16;  break;
-    case 4:  STK = SimpleTypeKind::Float32;  break;
-    case 6:  STK = SimpleTypeKind::Float48;  break;
-    case 8:  STK = SimpleTypeKind::Float64;  break;
-    case 10: STK = SimpleTypeKind::Float80;  break;
-    case 16: STK = SimpleTypeKind::Float128; break;
+    case 2:
+      STK = SimpleTypeKind::Float16;
+      break;
+    case 4:
+      STK = SimpleTypeKind::Float32;
+      break;
+    case 6:
+      STK = SimpleTypeKind::Float48;
+      break;
+    case 8:
+      STK = SimpleTypeKind::Float64;
+      break;
+    case 10:
+      STK = SimpleTypeKind::Float80;
+      break;
+    case 16:
+      STK = SimpleTypeKind::Float128;
+      break;
     }
     break;
   case dwarf::DW_ATE_signed:
     switch (ByteSize) {
-    case 1:  STK = SimpleTypeKind::SignedCharacter; break;
-    case 2:  STK = SimpleTypeKind::Int16Short;      break;
-    case 4:  STK = SimpleTypeKind::Int32;           break;
-    case 8:  STK = SimpleTypeKind::Int64Quad;       break;
-    case 16: STK = SimpleTypeKind::Int128Oct;       break;
+    case 1:
+      STK = SimpleTypeKind::SignedCharacter;
+      break;
+    case 2:
+      STK = SimpleTypeKind::Int16Short;
+      break;
+    case 4:
+      STK = SimpleTypeKind::Int32;
+      break;
+    case 8:
+      STK = SimpleTypeKind::Int64Quad;
+      break;
+    case 16:
+      STK = SimpleTypeKind::Int128Oct;
+      break;
     }
     break;
   case dwarf::DW_ATE_unsigned:
     switch (ByteSize) {
-    case 1:  STK = SimpleTypeKind::UnsignedCharacter; break;
-    case 2:  STK = SimpleTypeKind::UInt16Short;       break;
-    case 4:  STK = SimpleTypeKind::UInt32;            break;
-    case 8:  STK = SimpleTypeKind::UInt64Quad;        break;
-    case 16: STK = SimpleTypeKind::UInt128Oct;        break;
+    case 1:
+      STK = SimpleTypeKind::UnsignedCharacter;
+      break;
+    case 2:
+      STK = SimpleTypeKind::UInt16Short;
+      break;
+    case 4:
+      STK = SimpleTypeKind::UInt32;
+      break;
+    case 8:
+      STK = SimpleTypeKind::UInt64Quad;
+      break;
+    case 16:
+      STK = SimpleTypeKind::UInt128Oct;
+      break;
     }
     break;
   case dwarf::DW_ATE_UTF:
     switch (ByteSize) {
-    case 1: STK = SimpleTypeKind::Character8; break;
-    case 2: STK = SimpleTypeKind::Character16; break;
-    case 4: STK = SimpleTypeKind::Character32; break;
+    case 1:
+      STK = SimpleTypeKind::Character8;
+      break;
+    case 2:
+      STK = SimpleTypeKind::Character16;
+      break;
+    case 4:
+      STK = SimpleTypeKind::Character32;
+      break;
     }
     break;
   case dwarf::DW_ATE_signed_char:
@@ -1912,7 +1967,8 @@ TypeIndex CodeViewDebug::lowerTypePointer(const DIDerivedType *Ty,
       Ty->getSizeInBits() == 64 ? PointerKind::Near64 : PointerKind::Near32;
   PointerMode PM = PointerMode::Pointer;
   switch (Ty->getTag()) {
-  default: llvm_unreachable("not a pointer tag type");
+  default:
+    llvm_unreachable("not a pointer tag type");
   case dwarf::DW_TAG_pointer_type:
     PM = PointerMode::Pointer;
     break;
@@ -1971,8 +2027,8 @@ TypeIndex CodeViewDebug::lowerTypeMemberPointer(const DIDerivedType *Ty,
   TypeIndex ClassTI = getTypeIndex(Ty->getClassType());
   TypeIndex PointeeTI =
       getTypeIndex(Ty->getBaseType(), IsPMF ? Ty->getClassType() : nullptr);
-  PointerKind PK = getPointerSizeInBytes() == 8 ? PointerKind::Near64
-                                                : PointerKind::Near32;
+  PointerKind PK =
+      getPointerSizeInBytes() == 8 ? PointerKind::Near64 : PointerKind::Near32;
   PointerMode PM = IsPMF ? PointerMode::PointerToMemberFunction
                          : PointerMode::PointerToDataMember;
 
@@ -1988,12 +2044,18 @@ TypeIndex CodeViewDebug::lowerTypeMemberPointer(const DIDerivedType *Ty,
 /// have a translation, use the NearC convention.
 static CallingConvention dwarfCCToCodeView(unsigned DwarfCC) {
   switch (DwarfCC) {
-  case dwarf::DW_CC_normal:             return CallingConvention::NearC;
-  case dwarf::DW_CC_BORLAND_msfastcall: return CallingConvention::NearFast;
-  case dwarf::DW_CC_BORLAND_thiscall:   return CallingConvention::ThisCall;
-  case dwarf::DW_CC_BORLAND_stdcall:    return CallingConvention::NearStdCall;
-  case dwarf::DW_CC_BORLAND_pascal:     return CallingConvention::NearPascal;
-  case dwarf::DW_CC_LLVM_vectorcall:    return CallingConvention::NearVector;
+  case dwarf::DW_CC_normal:
+    return CallingConvention::NearC;
+  case dwarf::DW_CC_BORLAND_msfastcall:
+    return CallingConvention::NearFast;
+  case dwarf::DW_CC_BORLAND_thiscall:
+    return CallingConvention::ThisCall;
+  case dwarf::DW_CC_BORLAND_stdcall:
+    return CallingConvention::NearStdCall;
+  case dwarf::DW_CC_BORLAND_pascal:
+    return CallingConvention::NearPascal;
+  case dwarf::DW_CC_LLVM_vectorcall:
+    return CallingConvention::NearVector;
   }
   return CallingConvention::NearC;
 }
@@ -2143,9 +2205,12 @@ TypeIndex CodeViewDebug::lowerTypeVFTableShape(const DIDerivedType *Ty) {
 
 static MemberAccess translateAccessFlags(unsigned RecordTag, unsigned Flags) {
   switch (Flags & DINode::FlagAccessibility) {
-  case DINode::FlagPrivate:   return MemberAccess::Private;
-  case DINode::FlagPublic:    return MemberAccess::Public;
-  case DINode::FlagProtected: return MemberAccess::Protected;
+  case DINode::FlagPrivate:
+    return MemberAccess::Private;
+  case DINode::FlagPublic:
+    return MemberAccess::Public;
+  case DINode::FlagProtected:
+    return MemberAccess::Protected;
   case 0:
     // If there was no explicit access control, provide the default for the tag.
     return RecordTag == dwarf::DW_TAG_class_type ? MemberAccess::Private
@@ -2415,7 +2480,7 @@ static bool shouldAlwaysEmitCompleteClassType(const DICompositeType *Ty) {
   // This routine is used by lowerTypeClass and lowerTypeUnion to determine
   // if a complete type should be emitted instead of a forward reference.
   return Ty->getName().empty() && Ty->getIdentifier().empty() &&
-      !Ty->isForwardDecl();
+         !Ty->isForwardDecl();
 }
 
 TypeIndex CodeViewDebug::lowerTypeClass(const DICompositeType *Ty) {
@@ -2436,8 +2501,7 @@ TypeIndex CodeViewDebug::lowerTypeClass(const DICompositeType *Ty) {
   // First, construct the forward decl.  Don't look into Ty to compute the
   // forward decl options, since it might not be available in all TUs.
   TypeRecordKind Kind = getRecordKind(Ty);
-  ClassOptions CO =
-      ClassOptions::ForwardReference | getCommonClassOptions(Ty);
+  ClassOptions CO = ClassOptions::ForwardReference | getCommonClassOptions(Ty);
   std::string FullName = getFullyQualifiedName(Ty);
   ClassRecord CR(Kind, 0, CO, TypeIndex(), TypeIndex(), TypeIndex(), 0,
                  FullName, Ty->getIdentifier());
@@ -2489,8 +2553,7 @@ TypeIndex CodeViewDebug::lowerTypeUnion(const DICompositeType *Ty) {
   if (shouldAlwaysEmitCompleteClassType(Ty))
     return getCompleteTypeIndex(Ty);
 
-  ClassOptions CO =
-      ClassOptions::ForwardReference | getCommonClassOptions(Ty);
+  ClassOptions CO = ClassOptions::ForwardReference | getCommonClassOptions(Ty);
   std::string FullName = getFullyQualifiedName(Ty);
   UnionRecord UR(0, CO, TypeIndex(), 0, FullName, Ty->getIdentifier());
   TypeIndex FwdDeclTI = TypeTable.writeLeafType(UR);
@@ -2542,7 +2605,8 @@ CodeViewDebug::lowerRecordFieldList(const DICompositeType *Ty) {
       unsigned VBPtrOffset = I->getVBPtrOffset();
       // FIXME: Despite the accessor name, the offset is really in bytes.
       unsigned VBTableIndex = I->getOffsetInBits() / 4;
-      auto RecordKind = (I->getFlags() & DINode::FlagIndirectVirtualBase) == DINode::FlagIndirectVirtualBase
+      auto RecordKind = (I->getFlags() & DINode::FlagIndirectVirtualBase) ==
+                                DINode::FlagIndirectVirtualBase
                             ? TypeRecordKind::IndirectVirtualBaseClass
                             : TypeRecordKind::VirtualBaseClass;
       VirtualBaseClassRecord VBCR(
@@ -2920,7 +2984,7 @@ void CodeViewDebug::emitLocalVariable(const FunctionInfo &FI,
 }
 
 void CodeViewDebug::emitLexicalBlockList(ArrayRef<LexicalBlock *> Blocks,
-                                         const FunctionInfo& FI) {
+                                         const FunctionInfo &FI) {
   for (LexicalBlock *Block : Blocks)
     emitLexicalBlock(*Block, FI);
 }
@@ -2928,20 +2992,20 @@ void CodeViewDebug::emitLexicalBlockList(ArrayRef<LexicalBlock *> Blocks,
 /// Emit an S_BLOCK32 and S_END record pair delimiting the contents of a
 /// lexical block scope.
 void CodeViewDebug::emitLexicalBlock(const LexicalBlock &Block,
-                                     const FunctionInfo& FI) {
+                                     const FunctionInfo &FI) {
   MCSymbol *RecordEnd = beginSymbolRecord(SymbolKind::S_BLOCK32);
   OS.AddComment("PtrParent");
   OS.emitInt32(0); // PtrParent
   OS.AddComment("PtrEnd");
   OS.emitInt32(0); // PtrEnd
   OS.AddComment("Code size");
-  OS.emitAbsoluteSymbolDiff(Block.End, Block.Begin, 4);   // Code Size
+  OS.emitAbsoluteSymbolDiff(Block.End, Block.Begin, 4); // Code Size
   OS.AddComment("Function section relative address");
   OS.emitCOFFSecRel32(Block.Begin, /*Offset=*/0); // Func Offset
   OS.AddComment("Function section index");
   OS.emitCOFFSectionIndex(FI.Begin); // Func Symbol
   OS.AddComment("Lexical block name");
-  emitNullTerminatedSymbolName(OS, Block.Name);           // Name
+  emitNullTerminatedSymbolName(OS, Block.Name); // Name
   endSymbolRecord(RecordEnd);
 
   // Emit variables local to this lexical block.
@@ -2958,10 +3022,10 @@ void CodeViewDebug::emitLexicalBlock(const LexicalBlock &Block,
 /// Convenience routine for collecting lexical block information for a list
 /// of lexical scopes.
 void CodeViewDebug::collectLexicalBlockInfo(
-        SmallVectorImpl<LexicalScope *> &Scopes,
-        SmallVectorImpl<LexicalBlock *> &Blocks,
-        SmallVectorImpl<LocalVariable> &Locals,
-        SmallVectorImpl<CVGlobalVariable> &Globals) {
+    SmallVectorImpl<LexicalScope *> &Scopes,
+    SmallVectorImpl<LexicalBlock *> &Blocks,
+    SmallVectorImpl<LocalVariable> &Locals,
+    SmallVectorImpl<CVGlobalVariable> &Globals) {
   for (LexicalScope *Scope : Scopes)
     collectLexicalBlockInfo(*Scope, Blocks, Locals, Globals);
 }
@@ -2969,8 +3033,7 @@ void CodeViewDebug::collectLexicalBlockInfo(
 /// Populate the lexical blocks and local variable lists of the parent with
 /// information about the specified lexical scope.
 void CodeViewDebug::collectLexicalBlockInfo(
-    LexicalScope &Scope,
-    SmallVectorImpl<LexicalBlock *> &ParentBlocks,
+    LexicalScope &Scope, SmallVectorImpl<LexicalBlock *> &ParentBlocks,
     SmallVectorImpl<LocalVariable> &ParentLocals,
     SmallVectorImpl<CVGlobalVariable> &ParentGlobals) {
   if (Scope.isAbstractScope())
@@ -3019,9 +3082,7 @@ void CodeViewDebug::collectLexicalBlockInfo(
       ParentLocals.append(Locals->begin(), Locals->end());
     if (Globals)
       ParentGlobals.append(Globals->begin(), Globals->end());
-    collectLexicalBlockInfo(Scope.getChildren(),
-                            ParentBlocks,
-                            ParentLocals,
+    collectLexicalBlockInfo(Scope.getChildren(), ParentBlocks, ParentLocals,
                             ParentGlobals);
     return;
   }
@@ -3048,9 +3109,7 @@ void CodeViewDebug::collectLexicalBlockInfo(
   if (Globals)
     Block.Globals = std::move(*Globals);
   ParentBlocks.push_back(&Block);
-  collectLexicalBlockInfo(Scope.getChildren(),
-                          Block.Children,
-                          Block.Locals,
+  collectLexicalBlockInfo(Scope.getChildren(), Block.Children, Block.Locals,
                           Block.Globals);
 }
 
@@ -3063,9 +3122,7 @@ void CodeViewDebug::endFunctionImpl(const MachineFunction *MF) {
 
   // Build the lexical block structure to emit for this routine.
   if (LexicalScope *CFS = LScopes.getCurrentFunctionScope())
-    collectLexicalBlockInfo(*CFS,
-                            CurFn->ChildBlocks,
-                            CurFn->Locals,
+    collectLexicalBlockInfo(*CFS, CurFn->ChildBlocks, CurFn->Locals,
                             CurFn->Globals);
 
   // Clear the scope and variable information from the map which will not be
@@ -3107,9 +3164,7 @@ void CodeViewDebug::endFunctionImpl(const MachineFunction *MF) {
 // corresponds to optimized code that doesn't have a distinct source location.
 // In this case, we try to use the previous or next source location depending on
 // the context.
-static bool isUsableDebugLoc(DebugLoc DL) {
-  return DL && DL.getLine() != 0;
-}
+static bool isUsableDebugLoc(DebugLoc DL) { return DL && DL.getLine() != 0; }
 
 void CodeViewDebug::beginInstruction(const MachineInstr *MI) {
   DebugHandlerBase::beginInstruction(MI);
@@ -3232,7 +3287,8 @@ void CodeViewDebug::collectGlobalVariableInfo() {
       // generally the filename and line number, which isn't possible to output
       // in CodeView. String literals should be the only unnamed GlobalVariable
       // with debug info.
-      if (DIGV->getName().empty()) continue;
+      if (DIGV->getName().empty())
+        continue;
 
       if ((DIE->getNumElements() == 2) &&
           (DIE->getElement(0) == dwarf::DW_OP_plus_uconst))
@@ -3258,8 +3314,8 @@ void CodeViewDebug::collectGlobalVariableInfo() {
       if (Scope && isa<DILocalScope>(Scope)) {
         // Locate a global variable list for this scope, creating one if
         // necessary.
-        auto Insertion = ScopeGlobals.insert(
-            {Scope, std::unique_ptr<GlobalVariableList>()});
+        auto Insertion =
+            ScopeGlobals.insert({Scope, std::unique_ptr<GlobalVariableList>()});
         if (Insertion.second)
           Insertion.first->second = std::make_unique<GlobalVariableList>();
         VariableList = Insertion.first->second.get();
diff --git a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.h b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.h
index eb274d25de9197a..495168ac419884a 100644
--- a/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.h
+++ b/llvm/lib/CodeGen/AsmPrinter/CodeViewDebug.h
@@ -161,7 +161,7 @@ class LLVM_LIBRARY_VISIBILITY CodeViewDebug : public DebugHandlerBase {
     SmallVector<LocalVariable, 1> Locals;
     SmallVector<CVGlobalVariable, 1> Globals;
 
-    std::unordered_map<const DILexicalBlockBase*, LexicalBlock> LexicalBlocks;
+    std::unordered_map<const DILexicalBlockBase *, LexicalBlock> LexicalBlocks;
 
     // Lexical blocks containing local variables.
     SmallVector<LexicalBlock *, 1> ChildBlocks;
@@ -225,7 +225,7 @@ class LLVM_LIBRARY_VISIBILITY CodeViewDebug : public DebugHandlerBase {
   // Map to separate global variables according to the lexical scope they
   // belong in. A null local scope represents the global scope.
   typedef SmallVector<CVGlobalVariable, 1> GlobalVariableList;
-  DenseMap<const DIScope*, std::unique_ptr<GlobalVariableList> > ScopeGlobals;
+  DenseMap<const DIScope *, std::unique_ptr<GlobalVariableList>> ScopeGlobals;
 
   // Array of global variables which  need to be emitted into a COMDAT section.
   SmallVector<CVGlobalVariable, 1> ComdatVariables;
@@ -333,8 +333,7 @@ class LLVM_LIBRARY_VISIBILITY CodeViewDebug : public DebugHandlerBase {
 
   void emitInlineeLinesSubsection();
 
-  void emitDebugInfoForThunk(const Function *GV,
-                             FunctionInfo &FI,
+  void emitDebugInfoForThunk(const Function *GV, FunctionInfo &FI,
                              const MCSymbol *Fn);
 
   void emitDebugInfoForFunction(const Function *GV, FunctionInfo &FI);
@@ -384,10 +383,11 @@ class LLVM_LIBRARY_VISIBILITY CodeViewDebug : public DebugHandlerBase {
                                SmallVectorImpl<LexicalBlock *> &Blocks,
                                SmallVectorImpl<LocalVariable> &Locals,
                                SmallVectorImpl<CVGlobalVariable> &Globals);
-  void collectLexicalBlockInfo(LexicalScope &Scope,
-                               SmallVectorImpl<LexicalBlock *> &ParentBlocks,
-                               SmallVectorImpl<LocalVariable> &ParentLocals,
-                               SmallVectorImpl<CVGlobalVariable> &ParentGlobals);
+  void
+  collectLexicalBlockInfo(LexicalScope &Scope,
+                          SmallVectorImpl<LexicalBlock *> &ParentBlocks,
+                          SmallVectorImpl<LocalVariable> &ParentLocals,
+                          SmallVectorImpl<CVGlobalVariable> &ParentGlobals);
 
   /// Records information about a local variable in the appropriate scope. In
   /// particular, locals from inlined code live inside the inlining site.
@@ -402,10 +402,10 @@ class LLVM_LIBRARY_VISIBILITY CodeViewDebug : public DebugHandlerBase {
 
   /// Emits a sequence of lexical block scopes and their children.
   void emitLexicalBlockList(ArrayRef<LexicalBlock *> Blocks,
-                            const FunctionInfo& FI);
+                            const FunctionInfo &FI);
 
   /// Emit a lexical block scope and its children.
-  void emitLexicalBlock(const LexicalBlock &Block, const FunctionInfo& FI);
+  void emitLexicalBlock(const LexicalBlock &Block, const FunctionInfo &FI);
 
   /// Translates the DIType to codeview if necessary and returns a type index
   /// for it.
diff --git a/llvm/lib/CodeGen/AsmPrinter/DIE.cpp b/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
index 619155cafe9277d..980e6dd8deb3916 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DIE.cpp
@@ -101,18 +101,11 @@ void DIEAbbrev::Emit(const AsmPrinter *AP) const {
 
 LLVM_DUMP_METHOD
 void DIEAbbrev::print(raw_ostream &O) const {
-  O << "Abbreviation @"
-    << format("0x%lx", (long)(intptr_t)this)
-    << "  "
-    << dwarf::TagString(Tag)
-    << " "
-    << dwarf::ChildrenString(Children)
-    << '\n';
+  O << "Abbreviation @" << format("0x%lx", (long)(intptr_t)this) << "  "
+    << dwarf::TagString(Tag) << " " << dwarf::ChildrenString(Children) << '\n';
 
   for (unsigned i = 0, N = Data.size(); i < N; ++i) {
-    O << "  "
-      << dwarf::AttributeString(Data[i].getAttribute())
-      << "  "
+    O << "  " << dwarf::AttributeString(Data[i].getAttribute()) << "  "
       << dwarf::FormEncodingString(Data[i].getForm());
 
     if (Data[i].getForm() == dwarf::DW_FORM_implicit_const)
@@ -123,9 +116,7 @@ void DIEAbbrev::print(raw_ostream &O) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void DIEAbbrev::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void DIEAbbrev::dump() const { print(dbgs()); }
 #endif
 
 //===----------------------------------------------------------------------===//
@@ -239,7 +230,7 @@ static void printValues(raw_ostream &O, const DIEValueList &Values,
 LLVM_DUMP_METHOD
 void DIE::print(raw_ostream &O, unsigned IndentCount) const {
   const std::string Indent(IndentCount, ' ');
-  O << Indent << "Die: " << format("0x%lx", (long)(intptr_t) this)
+  O << Indent << "Die: " << format("0x%lx", (long)(intptr_t)this)
     << ", Offset: " << Offset << ", Size: " << Size << "\n";
 
   O << Indent << dwarf::TagString(getTag()) << " "
@@ -262,9 +253,7 @@ void DIE::print(raw_ostream &O, unsigned IndentCount) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void DIE::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void DIE::dump() const { print(dbgs()); }
 #endif
 
 unsigned DIE::computeOffsetsAndAbbrevs(const dwarf::FormParams &FormParams,
@@ -354,9 +343,7 @@ void DIEValue::print(raw_ostream &O) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void DIEValue::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void DIEValue::dump() const { print(dbgs()); }
 #endif
 
 //===----------------------------------------------------------------------===//
@@ -416,7 +403,8 @@ void DIEInteger::emitValue(const AsmPrinter *Asm, dwarf::Form Form) const {
   case dwarf::DW_FORM_sdata:
     Asm->emitSLEB128(Integer);
     return;
-  default: llvm_unreachable("DIE Value form not supported yet");
+  default:
+    llvm_unreachable("DIE Value form not supported yet");
   }
 }
 
@@ -439,7 +427,8 @@ unsigned DIEInteger::sizeOf(const dwarf::FormParams &FormParams,
     return getULEB128Size(Integer);
   case dwarf::DW_FORM_sdata:
     return getSLEB128Size(Integer);
-  default: llvm_unreachable("DIE Value form not supported yet");
+  default:
+    llvm_unreachable("DIE Value form not supported yet");
   }
 }
 
@@ -527,7 +516,9 @@ unsigned DIEBaseTypeRef::sizeOf(const dwarf::FormParams &, dwarf::Form) const {
 }
 
 LLVM_DUMP_METHOD
-void DIEBaseTypeRef::print(raw_ostream &O) const { O << "BaseTypeRef: " << Index; }
+void DIEBaseTypeRef::print(raw_ostream &O) const {
+  O << "BaseTypeRef: " << Index;
+}
 
 //===----------------------------------------------------------------------===//
 // DIEDelta Implementation
@@ -720,10 +711,17 @@ unsigned DIELoc::computeSize(const dwarf::FormParams &FormParams) const {
 ///
 void DIELoc::emitValue(const AsmPrinter *Asm, dwarf::Form Form) const {
   switch (Form) {
-  default: llvm_unreachable("Improper form for block");
-  case dwarf::DW_FORM_block1: Asm->emitInt8(Size);    break;
-  case dwarf::DW_FORM_block2: Asm->emitInt16(Size);   break;
-  case dwarf::DW_FORM_block4: Asm->emitInt32(Size);   break;
+  default:
+    llvm_unreachable("Improper form for block");
+  case dwarf::DW_FORM_block1:
+    Asm->emitInt8(Size);
+    break;
+  case dwarf::DW_FORM_block2:
+    Asm->emitInt16(Size);
+    break;
+  case dwarf::DW_FORM_block4:
+    Asm->emitInt32(Size);
+    break;
   case dwarf::DW_FORM_block:
   case dwarf::DW_FORM_exprloc:
     Asm->emitULEB128(Size);
@@ -738,13 +736,17 @@ void DIELoc::emitValue(const AsmPrinter *Asm, dwarf::Form Form) const {
 ///
 unsigned DIELoc::sizeOf(const dwarf::FormParams &, dwarf::Form Form) const {
   switch (Form) {
-  case dwarf::DW_FORM_block1: return Size + sizeof(int8_t);
-  case dwarf::DW_FORM_block2: return Size + sizeof(int16_t);
-  case dwarf::DW_FORM_block4: return Size + sizeof(int32_t);
+  case dwarf::DW_FORM_block1:
+    return Size + sizeof(int8_t);
+  case dwarf::DW_FORM_block2:
+    return Size + sizeof(int16_t);
+  case dwarf::DW_FORM_block4:
+    return Size + sizeof(int32_t);
   case dwarf::DW_FORM_block:
   case dwarf::DW_FORM_exprloc:
     return Size + getULEB128Size(Size);
-  default: llvm_unreachable("Improper form for block");
+  default:
+    llvm_unreachable("Improper form for block");
   }
 }
 
@@ -770,16 +772,25 @@ unsigned DIEBlock::computeSize(const dwarf::FormParams &FormParams) const {
 ///
 void DIEBlock::emitValue(const AsmPrinter *Asm, dwarf::Form Form) const {
   switch (Form) {
-  default: llvm_unreachable("Improper form for block");
-  case dwarf::DW_FORM_block1: Asm->emitInt8(Size);    break;
-  case dwarf::DW_FORM_block2: Asm->emitInt16(Size);   break;
-  case dwarf::DW_FORM_block4: Asm->emitInt32(Size);   break;
+  default:
+    llvm_unreachable("Improper form for block");
+  case dwarf::DW_FORM_block1:
+    Asm->emitInt8(Size);
+    break;
+  case dwarf::DW_FORM_block2:
+    Asm->emitInt16(Size);
+    break;
+  case dwarf::DW_FORM_block4:
+    Asm->emitInt32(Size);
+    break;
   case dwarf::DW_FORM_exprloc:
   case dwarf::DW_FORM_block:
     Asm->emitULEB128(Size);
     break;
-  case dwarf::DW_FORM_string: break;
-  case dwarf::DW_FORM_data16: break;
+  case dwarf::DW_FORM_string:
+    break;
+  case dwarf::DW_FORM_data16:
+    break;
   }
 
   for (const auto &V : values())
@@ -790,13 +801,19 @@ void DIEBlock::emitValue(const AsmPrinter *Asm, dwarf::Form Form) const {
 ///
 unsigned DIEBlock::sizeOf(const dwarf::FormParams &, dwarf::Form Form) const {
   switch (Form) {
-  case dwarf::DW_FORM_block1: return Size + sizeof(int8_t);
-  case dwarf::DW_FORM_block2: return Size + sizeof(int16_t);
-  case dwarf::DW_FORM_block4: return Size + sizeof(int32_t);
+  case dwarf::DW_FORM_block1:
+    return Size + sizeof(int8_t);
+  case dwarf::DW_FORM_block2:
+    return Size + sizeof(int16_t);
+  case dwarf::DW_FORM_block4:
+    return Size + sizeof(int32_t);
   case dwarf::DW_FORM_exprloc:
-  case dwarf::DW_FORM_block:  return Size + getULEB128Size(Size);
-  case dwarf::DW_FORM_data16: return 16;
-  default: llvm_unreachable("Improper form for block");
+  case dwarf::DW_FORM_block:
+    return Size + getULEB128Size(Size);
+  case dwarf::DW_FORM_data16:
+    return 16;
+  default:
+    llvm_unreachable("Improper form for block");
   }
 }
 
diff --git a/llvm/lib/CodeGen/AsmPrinter/DIEHash.cpp b/llvm/lib/CodeGen/AsmPrinter/DIEHash.cpp
index 08ed78eb20a168f..c61b35107dfaf69 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DIEHash.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DIEHash.cpp
@@ -336,8 +336,8 @@ void DIEHash::hashAttribute(const DIEValue &Value, dwarf::Tag Tag) {
 void DIEHash::hashAttributes(const DIEAttrs &Attrs, dwarf::Tag Tag) {
 #define HANDLE_DIE_HASH_ATTR(NAME)                                             \
   {                                                                            \
-    if (Attrs.NAME)                                                           \
-      hashAttribute(Attrs.NAME, Tag);                                         \
+    if (Attrs.NAME)                                                            \
+      hashAttribute(Attrs.NAME, Tag);                                          \
   }
 #include "DIEHashAttributes.def"
   // FIXME: Add the extended attributes.
@@ -377,7 +377,8 @@ void DIEHash::computeHash(const DIE &Die) {
   for (const auto &C : Die.children()) {
     // 7.27 Step 7
     // If C is a nested type entry or a member function entry, ...
-    if (isType(C.getTag()) || (C.getTag() == dwarf::DW_TAG_subprogram && isType(C.getParent()->getTag()))) {
+    if (isType(C.getTag()) || (C.getTag() == dwarf::DW_TAG_subprogram &&
+                               isType(C.getParent()->getTag()))) {
       StringRef Name = getDIEStringAttr(C, dwarf::DW_AT_name);
       // ... and has a DW_AT_name attribute
       if (!Name.empty()) {
diff --git a/llvm/lib/CodeGen/AsmPrinter/DIEHash.h b/llvm/lib/CodeGen/AsmPrinter/DIEHash.h
index 24a973b39271582..6551e87a88b01b2 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DIEHash.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DIEHash.h
@@ -107,6 +107,6 @@ class DIEHash {
   DwarfCompileUnit *CU;
   DenseMap<const DIE *, unsigned> Numbering;
 };
-}
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/AsmPrinter/DbgEntityHistoryCalculator.cpp b/llvm/lib/CodeGen/AsmPrinter/DbgEntityHistoryCalculator.cpp
index 55a0afcf7a33f16..f5eed3a7d6e0519 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DbgEntityHistoryCalculator.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DbgEntityHistoryCalculator.cpp
@@ -479,7 +479,7 @@ void llvm::calculateDbgEntityHistory(const MachineFunction *MF,
         assert(MI.getNumOperands() == 1 && "Invalid DBG_LABEL instruction!");
         const DILabel *RawLabel = MI.getDebugLabel();
         assert(RawLabel->isValidLocationForIntrinsic(MI.getDebugLoc()) &&
-            "Expected inlined-at fields to agree");
+               "Expected inlined-at fields to agree");
         // When collecting debug information for labels, there is no MCSymbol
         // generated for it. So, we keep MachineInstr in DbgLabels in order
         // to query MCSymbol afterward.
@@ -511,7 +511,7 @@ void llvm::calculateDbgEntityHistory(const MachineFunction *MF,
           // invalid outside of the function body.
           else if (MO.getReg() != FrameReg ||
                    (!MI.getFlag(MachineInstr::FrameDestroy) &&
-                   !MI.getFlag(MachineInstr::FrameSetup))) {
+                    !MI.getFlag(MachineInstr::FrameSetup))) {
             for (MCRegAliasIterator AI(MO.getReg(), TRI, true); AI.isValid();
                  ++AI)
               clobberRegisterUses(RegVars, *AI, DbgValues, LiveEntries, MI);
diff --git a/llvm/lib/CodeGen/AsmPrinter/DebugLocEntry.h b/llvm/lib/CodeGen/AsmPrinter/DebugLocEntry.h
index 726aba18bb804c5..cf280d82a8795bd 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DebugLocEntry.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DebugLocEntry.h
@@ -224,9 +224,9 @@ class DebugLocEntry {
   void addValues(ArrayRef<DbgValueLoc> Vals) {
     Values.append(Vals.begin(), Vals.end());
     sortUniqueValues();
-    assert((Values.size() == 1 || all_of(Values, [](DbgValueLoc V) {
-              return V.isFragment();
-            })) && "must either have a single value or multiple pieces");
+    assert((Values.size() == 1 ||
+            all_of(Values, [](DbgValueLoc V) { return V.isFragment(); })) &&
+           "must either have a single value or multiple pieces");
   }
 
   // Sort the pieces by offset.
@@ -246,10 +246,8 @@ class DebugLocEntry {
   }
 
   /// Lower this entry into a DWARF expression.
-  void finalize(const AsmPrinter &AP,
-                DebugLocStream::ListBuilder &List,
-                const DIBasicType *BT,
-                DwarfCompileUnit &TheCU);
+  void finalize(const AsmPrinter &AP, DebugLocStream::ListBuilder &List,
+                const DIBasicType *BT, DwarfCompileUnit &TheCU);
 };
 
 /// Compare two DbgValueLocEntries for equality.
@@ -279,12 +277,11 @@ inline bool operator==(const DbgValueLoc &A, const DbgValueLoc &B) {
 }
 
 /// Compare two fragments based on their offset.
-inline bool operator<(const DbgValueLoc &A,
-                      const DbgValueLoc &B) {
+inline bool operator<(const DbgValueLoc &A, const DbgValueLoc &B) {
   return A.getExpression()->getFragmentInfo()->OffsetInBits <
          B.getExpression()->getFragmentInfo()->OffsetInBits;
 }
 
-}
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/AsmPrinter/DebugLocStream.h b/llvm/lib/CodeGen/AsmPrinter/DebugLocStream.h
index a96bdd034918695..52d32b5cfe25b04 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DebugLocStream.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DebugLocStream.h
@@ -56,16 +56,12 @@ class DebugLocStream {
   bool GenerateComments;
 
 public:
-  DebugLocStream(bool GenerateComments) : GenerateComments(GenerateComments) { }
+  DebugLocStream(bool GenerateComments) : GenerateComments(GenerateComments) {}
   size_t getNumLists() const { return Lists.size(); }
   const List &getList(size_t LI) const { return Lists[LI]; }
   ArrayRef<List> getLists() const { return Lists; }
-  MCSymbol *getSym() const {
-    return Sym;
-  }
-  void setSym(MCSymbol *Sym) {
-    this->Sym = Sym;
-  }
+  MCSymbol *getSym() const { return Sym; }
+  void setSym(MCSymbol *Sym) { this->Sym = Sym; }
 
   class ListBuilder;
   class EntryBuilder;
@@ -166,9 +162,7 @@ class DebugLocStream::ListBuilder {
       : Locs(Locs), Asm(Asm), V(V), MI(MI), ListIndex(Locs.startList(&CU)),
         TagOffset(std::nullopt) {}
 
-  void setTagOffset(uint8_t TO) {
-    TagOffset = TO;
-  }
+  void setTagOffset(uint8_t TO) { TagOffset = TO; }
 
   /// Finalize the list.
   ///
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfCFIException.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfCFIException.cpp
index 10c844ddb14a146..90171cc856f8c38 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfCFIException.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfCFIException.cpp
@@ -86,8 +86,8 @@ void DwarfCFIException::beginFunction(const MachineFunction *MF) {
       Per;
 
   unsigned LSDAEncoding = TLOF.getLSDAEncoding();
-  shouldEmitLSDA = shouldEmitPersonality &&
-    LSDAEncoding != dwarf::DW_EH_PE_omit;
+  shouldEmitLSDA =
+      shouldEmitPersonality && LSDAEncoding != dwarf::DW_EH_PE_omit;
 
   const MCAsmInfo &MAI = *MF->getMMI().getContext().getAsmInfo();
   if (MAI.getExceptionHandlingType() != ExceptionHandling::None)
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
index 938538ef7980462..c80f6936e52785a 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.cpp
@@ -147,7 +147,7 @@ DIE *DwarfCompileUnit::getOrCreateGlobalVariableDIE(
 
   auto *CB = GVContext ? dyn_cast<DICommonBlock>(GVContext) : nullptr;
   DIE *ContextDIE = CB ? getOrCreateCommonBlock(CB, GlobalExprs)
-    : getOrCreateContextDIE(GVContext);
+                       : getOrCreateContextDIE(GVContext);
 
   // Add to map.
   DIE *VariableDIE = &createAndAddDIE(GV->getTag(), *ContextDIE, GV);
@@ -200,8 +200,9 @@ DIE *DwarfCompileUnit::getOrCreateGlobalVariableDIE(
   return VariableDIE;
 }
 
-void DwarfCompileUnit::addLocationAttribute(
-    DIE *VariableDIE, const DIGlobalVariable *GV, ArrayRef<GlobalExpr> GlobalExprs) {
+void DwarfCompileUnit::addLocationAttribute(DIE *VariableDIE,
+                                            const DIGlobalVariable *GV,
+                                            ArrayRef<GlobalExpr> GlobalExprs) {
   bool addToAccelTable = false;
   DIELoc *Loc = nullptr;
   std::optional<unsigned> NVPTXAddressSpace;
@@ -405,8 +406,7 @@ void DwarfCompileUnit::addRange(RangeSpan Range) {
   // emitted into and the subprogram was contained within. If these are the
   // same then extend our current range, otherwise add this as a new range.
   if (CURanges.empty() || !SameAsPrevCU ||
-      (&CURanges.back().End->getSection() !=
-       &Range.End->getSection())) {
+      (&CURanges.back().End->getSection() != &Range.End->getSection())) {
     // Before a new range is added, always terminate the prior line table.
     if (PrevCU)
       DD->terminateLineTable(PrevCU);
@@ -434,8 +434,8 @@ void DwarfCompileUnit::initStmtList() {
   // left in the skeleton CU and so not included.
   // The line table entries are not always emitted in assembly, so it
   // is not okay to use line_table_start here.
-      addSectionLabel(getUnitDie(), dwarf::DW_AT_stmt_list, LineTableStartSym,
-                      TLOF.getDwarfLineSection()->getBeginSymbol());
+  addSectionLabel(getUnitDie(), dwarf::DW_AT_stmt_list, LineTableStartSym,
+                  TLOF.getDwarfLineSection()->getBeginSymbol());
 }
 
 void DwarfCompileUnit::applyStmtList(DIE &D) {
@@ -550,7 +550,7 @@ DIE &DwarfCompileUnit::updateSubprogramScopeDIE(const DISubprogram *SP) {
         DIEDwarfExpression DwarfExpr(*Asm, *this, *Loc);
         DIExpressionCursor Cursor({});
         DwarfExpr.addWasmLocation(FrameBase.Location.WasmLoc.Kind,
-            FrameBase.Location.WasmLoc.Index);
+                                  FrameBase.Location.WasmLoc.Index);
         DwarfExpr.addExpression(std::move(Cursor));
         addBlock(*SPDie, dwarf::DW_AT_frame_base, DwarfExpr.finalize());
       }
@@ -1243,12 +1243,9 @@ DwarfCompileUnit::getDwarf5OrGNULocationAtom(dwarf::LocationAtom Loc) const {
   }
 }
 
-DIE &DwarfCompileUnit::constructCallSiteEntryDIE(DIE &ScopeDIE,
-                                                 const DISubprogram *CalleeSP,
-                                                 bool IsTail,
-                                                 const MCSymbol *PCAddr,
-                                                 const MCSymbol *CallAddr,
-                                                 unsigned CallReg) {
+DIE &DwarfCompileUnit::constructCallSiteEntryDIE(
+    DIE &ScopeDIE, const DISubprogram *CalleeSP, bool IsTail,
+    const MCSymbol *PCAddr, const MCSymbol *CallAddr, unsigned CallReg) {
   // Insert a call site entry DIE within ScopeDIE.
   DIE &CallSiteDIE = createAndAddDIE(getDwarf5OrGNUTag(dwarf::DW_TAG_call_site),
                                      ScopeDIE, nullptr);
@@ -1447,8 +1444,8 @@ void DwarfCompileUnit::createAbstractEntity(const DINode *Node,
                                            nullptr /* IA */);
     DU->addScopeVariable(Scope, cast<DbgVariable>(Entity.get()));
   } else if (isa<const DILabel>(Node)) {
-    Entity = std::make_unique<DbgLabel>(
-                        cast<const DILabel>(Node), nullptr /* IA */);
+    Entity =
+        std::make_unique<DbgLabel>(cast<const DILabel>(Node), nullptr /* IA */);
     DU->addScopeLabel(Scope, cast<DbgLabel>(Entity.get()));
   }
 }
@@ -1460,9 +1457,9 @@ void DwarfCompileUnit::emitHeader(bool UseOffsets) {
     Asm->OutStreamer->emitLabel(LabelBegin);
   }
 
-  dwarf::UnitType UT = Skeleton ? dwarf::DW_UT_split_compile
-                                : DD->useSplitDwarf() ? dwarf::DW_UT_skeleton
-                                                      : dwarf::DW_UT_compile;
+  dwarf::UnitType UT = Skeleton              ? dwarf::DW_UT_split_compile
+                       : DD->useSplitDwarf() ? dwarf::DW_UT_skeleton
+                                             : dwarf::DW_UT_compile;
   DwarfUnit::emitCommonHeader(UseOffsets, UT);
   if (DD->getDwarfVersion() >= 5 && UT != dwarf::DW_UT_compile)
     Asm->emitInt64(getDWOId());
@@ -1645,7 +1642,8 @@ bool DwarfCompileUnit::isDwoUnit() const {
   return DD->useSplitDwarf() && Skeleton;
 }
 
-void DwarfCompileUnit::finishNonUnitTypeDIE(DIE& D, const DICompositeType *CTy) {
+void DwarfCompileUnit::finishNonUnitTypeDIE(DIE &D,
+                                            const DICompositeType *CTy) {
   constructTypeDIE(D, CTy);
 }
 
@@ -1675,11 +1673,12 @@ void DwarfCompileUnit::createBaseTypeDIEs() {
   // child list.
   for (auto &Btr : reverse(ExprRefedBaseTypes)) {
     DIE &Die = getUnitDie().addChildFront(
-      DIE::get(DIEValueAllocator, dwarf::DW_TAG_base_type));
+        DIE::get(DIEValueAllocator, dwarf::DW_TAG_base_type));
     SmallString<32> Str;
     addString(Die, dwarf::DW_AT_name,
-              Twine(dwarf::AttributeEncodingString(Btr.Encoding) +
-                    "_" + Twine(Btr.BitSize)).toStringRef(Str));
+              Twine(dwarf::AttributeEncodingString(Btr.Encoding) + "_" +
+                    Twine(Btr.BitSize))
+                  .toStringRef(Str));
     addUInt(Die, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1, Btr.Encoding);
     // Round up to smallest number of bytes that contains this number of bits.
     addUInt(Die, dwarf::DW_AT_byte_size, std::nullopt,
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
index 71f65f44c0f1a5f..096a572742b3640 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfCompileUnit.h
@@ -114,7 +114,7 @@ class DwarfCompileUnit final : public DwarfUnit {
     return DU->getAbstractEntities();
   }
 
-  void finishNonUnitTypeDIE(DIE& D, const DICompositeType *CTy) override;
+  void finishNonUnitTypeDIE(DIE &D, const DICompositeType *CTy) override;
 
   /// Add info for Wasm-global-based relocation.
   void addWasmRelocBaseGlobal(DIELoc *Loc, StringRef GlobalName,
@@ -128,9 +128,7 @@ class DwarfCompileUnit final : public DwarfUnit {
   bool hasRangeLists() const { return HasRangeLists; }
   unsigned getUniqueID() const { return UniqueID; }
 
-  DwarfCompileUnit *getSkeleton() const {
-    return Skeleton;
-  }
+  DwarfCompileUnit *getSkeleton() const { return Skeleton; }
 
   bool includeMinimalInlineScopes() const;
 
@@ -149,8 +147,8 @@ class DwarfCompileUnit final : public DwarfUnit {
   };
 
   struct BaseTypeRef {
-    BaseTypeRef(unsigned BitSize, dwarf::TypeKind Encoding) :
-      BitSize(BitSize), Encoding(Encoding) {}
+    BaseTypeRef(unsigned BitSize, dwarf::TypeKind Encoding)
+        : BitSize(BitSize), Encoding(Encoding) {}
     unsigned BitSize;
     dwarf::TypeKind Encoding;
     DIE *Die = nullptr;
@@ -159,9 +157,8 @@ class DwarfCompileUnit final : public DwarfUnit {
   std::vector<BaseTypeRef> ExprRefedBaseTypes;
 
   /// Get or create global variable DIE.
-  DIE *
-  getOrCreateGlobalVariableDIE(const DIGlobalVariable *GV,
-                               ArrayRef<GlobalExpr> GlobalExprs);
+  DIE *getOrCreateGlobalVariableDIE(const DIGlobalVariable *GV,
+                                    ArrayRef<GlobalExpr> GlobalExprs);
 
   DIE *getOrCreateCommonBlock(const DICommonBlock *CB,
                               ArrayRef<GlobalExpr> GlobalExprs);
@@ -288,9 +285,9 @@ class DwarfCompileUnit final : public DwarfUnit {
 
   unsigned getHeaderSize() const override {
     // DWARF v5 added the DWO ID to the header for split/skeleton units.
-    unsigned DWOIdSize =
-        DD->getDwarfVersion() >= 5 && DD->useSplitDwarf() ? sizeof(uint64_t)
-                                                          : 0;
+    unsigned DWOIdSize = DD->getDwarfVersion() >= 5 && DD->useSplitDwarf()
+                             ? sizeof(uint64_t)
+                             : 0;
     return DwarfUnit::getHeaderSize() + DWOIdSize;
   }
   unsigned getLength() {
@@ -308,9 +305,7 @@ class DwarfCompileUnit final : public DwarfUnit {
     return LabelBegin;
   }
 
-  MCSymbol *getMacroLabelBegin() const {
-    return MacroLabelBegin;
-  }
+  MCSymbol *getMacroLabelBegin() const { return MacroLabelBegin; }
 
   /// Add a new global name to the compile unit.
   void addGlobalName(StringRef Name, const DIE &Die,
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
index ff59b7829435139..79d2d512d8c540e 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.cpp
@@ -102,13 +102,12 @@ static cl::opt<AccelTableKind> AccelTables(
                clEnumValN(AccelTableKind::Dwarf, "Dwarf", "DWARF")),
     cl::init(AccelTableKind::Default));
 
-static cl::opt<DefaultOnOff>
-DwarfInlinedStrings("dwarf-inlined-strings", cl::Hidden,
-                 cl::desc("Use inlined strings rather than string section."),
-                 cl::values(clEnumVal(Default, "Default for platform"),
-                            clEnumVal(Enable, "Enabled"),
-                            clEnumVal(Disable, "Disabled")),
-                 cl::init(Default));
+static cl::opt<DefaultOnOff> DwarfInlinedStrings(
+    "dwarf-inlined-strings", cl::Hidden,
+    cl::desc("Use inlined strings rather than string section."),
+    cl::values(clEnumVal(Default, "Default for platform"),
+               clEnumVal(Enable, "Enabled"), clEnumVal(Disable, "Disabled")),
+    cl::init(Default));
 
 static cl::opt<bool>
     NoDwarfRangesSection("no-dwarf-ranges-section", cl::Hidden,
@@ -227,9 +226,7 @@ void DebugLocDwarfExpression::commitTemporaryBuffer() {
   TmpBuf->Comments.clear();
 }
 
-const DIType *DbgVariable::getType() const {
-  return getVariable()->getType();
-}
+const DIType *DbgVariable::getType() const { return getVariable()->getType(); }
 
 /// Get .debug_loc entry for the instruction range starting at MI.
 static DbgValueLoc getDebugLocValue(const MachineInstr *MI) {
@@ -275,10 +272,9 @@ ArrayRef<DbgVariable::FrameIndexExpr> DbgVariable::getFrameIndexExprs() const {
   if (FrameIndexExprs.size() == 1)
     return FrameIndexExprs;
 
-  assert(llvm::all_of(FrameIndexExprs,
-                      [](const FrameIndexExpr &A) {
-                        return A.Expr->isFragment();
-                      }) &&
+  assert(llvm::all_of(
+             FrameIndexExprs,
+             [](const FrameIndexExpr &A) { return A.Expr->isFragment(); }) &&
          "multiple FI expressions without DW_OP_LLVM_fragment");
   llvm::sort(FrameIndexExprs,
              [](const FrameIndexExpr &A, const FrameIndexExpr &B) -> bool {
@@ -293,7 +289,8 @@ void DbgVariable::addMMIEntry(const DbgVariable &V) {
   assert(DebugLocListIndex == ~0U && !ValueLoc.get() && "not an MMI entry");
   assert(V.DebugLocListIndex == ~0U && !V.ValueLoc.get() && "not an MMI entry");
   assert(V.getVariable() == getVariable() && "conflicting variable");
-  assert(V.getInlinedAt() == getInlinedAt() && "conflicting inlined-at location");
+  assert(V.getInlinedAt() == getInlinedAt() &&
+         "conflicting inlined-at location");
 
   assert(!FrameIndexExprs.empty() && "Expected an MMI entry");
   assert(!V.FrameIndexExprs.empty() && "Expected an MMI entry");
@@ -385,8 +382,9 @@ DwarfDebug::DwarfDebug(AsmPrinter *A)
     UseAllLinkageNames = DwarfLinkageNames == AllLinkageNames;
 
   unsigned DwarfVersionNumber = Asm->TM.Options.MCOptions.DwarfVersion;
-  unsigned DwarfVersion = DwarfVersionNumber ? DwarfVersionNumber
-                                    : MMI->getModule()->getDwarfVersion();
+  unsigned DwarfVersion = DwarfVersionNumber
+                              ? DwarfVersionNumber
+                              : MMI->getModule()->getDwarfVersion();
   // Use dwarf 4 by default if nothing is requested. For NVPTX, use dwarf 2.
   DwarfVersion =
       TT.isNVPTX() ? 2 : (DwarfVersion ? DwarfVersion : dwarf::DWARF_VERSION);
@@ -447,7 +445,8 @@ DwarfDebug::DwarfDebug(AsmPrinter *A)
   UseDebugMacroSection =
       DwarfVersion >= 5 || (UseGNUDebugMacro && !useSplitDwarf());
   if (DwarfOpConvert == Default)
-    EnableOpConvert = !((tuneForGDB() && useSplitDwarf()) || (tuneForLLDB() && !TT.isOSBinFormatMachO()));
+    EnableOpConvert = !((tuneForGDB() && useSplitDwarf()) ||
+                        (tuneForLLDB() && !TT.isOSBinFormatMachO()));
   else
     EnableOpConvert = (DwarfOpConvert == Enable);
 
@@ -565,7 +564,8 @@ void DwarfDebug::constructAbstractSubprogramScopeDIE(DwarfCompileUnit &SrcCU,
 
   // Find the subprogram's DwarfCompileUnit in the SPMap in case the subprogram
   // was inlined from another compile unit.
-  if (useSplitDwarf() && !shareAcrossDWOCUs() && !SP->getUnit()->getSplitDebugInlining())
+  if (useSplitDwarf() && !shareAcrossDWOCUs() &&
+      !SP->getUnit()->getSplitDebugInlining())
     // Avoid building the original CU if it won't be used
     SrcCU.constructAbstractSubprogramScopeDIE(Scope);
   else {
@@ -865,14 +865,16 @@ static void collectCallSiteParameters(const MachineInstr *CallMI,
     assert(std::next(Suc) == BundleEnd &&
            "More than one instruction in call delay slot");
     // Try to interpret value loaded by instruction.
-    if (!interpretNextInstr(&*Suc, ForwardedRegWorklist, Params, ClobberedRegUnits))
+    if (!interpretNextInstr(&*Suc, ForwardedRegWorklist, Params,
+                            ClobberedRegUnits))
       return;
   }
 
   // Search for a loading value in forwarding registers.
   for (; I != MBB->rend(); ++I) {
     // Try to interpret values loaded by instruction.
-    if (!interpretNextInstr(&*I, ForwardedRegWorklist, Params, ClobberedRegUnits))
+    if (!interpretNextInstr(&*I, ForwardedRegWorklist, Params,
+                            ClobberedRegUnits))
       return;
   }
 
@@ -1093,8 +1095,7 @@ DwarfDebug::getOrCreateDwarfCompileUnit(const DICompileUnit *DIUnit) {
   if (auto *CU = CUMap.lookup(DIUnit))
     return *CU;
 
-  if (useSplitDwarf() &&
-      !shareAcrossDWOCUs() &&
+  if (useSplitDwarf() && !shareAcrossDWOCUs() &&
       (!DIUnit->getSplitDebugInlining() ||
        DIUnit->getEmissionKind() == DICompileUnit::FullDebug) &&
       !CUMap.empty()) {
@@ -1187,7 +1188,6 @@ void DwarfDebug::beginModule(Module *M) {
     (useSplitDwarf() ? SkeletonHolder : InfoHolder)
         .setStringOffsetsStartSym(Asm->createTempSymbol("str_offsets_base"));
 
-
   // Create the symbols that designates the start of the DWARF v5 range list
   // and locations list tables. They are located past the table headers.
   if (getDwarfVersion() >= 5) {
@@ -1398,7 +1398,7 @@ void DwarfDebug::finalizeModuleInfo() {
                             TLOF.getDwarfMacinfoSection()->getBeginSymbol());
       }
     }
-    }
+  }
 
   // Emit all frontend-produced Skeleton CUs, i.e., Clang modules.
   for (auto *CUNode : MMI->getModule()->debug_compile_units())
@@ -1470,7 +1470,7 @@ void DwarfDebug::endModule() {
   emitDebugRanges();
 
   if (useSplitDwarf())
-  // Emit info into a debug macinfo.dwo section.
+    // Emit info into a debug macinfo.dwo section.
     emitDebugMacinfoDWO();
   else
     // Emit info into a debug macinfo/macro section.
@@ -1512,8 +1512,8 @@ void DwarfDebug::endModule() {
   // FIXME: AbstractVariables.clear();
 }
 
-void DwarfDebug::ensureAbstractEntityIsCreatedIfScoped(DwarfCompileUnit &CU,
-    const DINode *Node, const MDNode *ScopeNode) {
+void DwarfDebug::ensureAbstractEntityIsCreatedIfScoped(
+    DwarfCompileUnit &CU, const DINode *Node, const MDNode *ScopeNode) {
   if (CU.getExistingAbstractEntity(Node))
     return;
 
@@ -1559,9 +1559,10 @@ void DwarfDebug::collectVariableInfoFromMFTable(
       continue;
     }
 
-    ensureAbstractEntityIsCreatedIfScoped(TheCU, Var.first, Scope->getScopeNode());
+    ensureAbstractEntityIsCreatedIfScoped(TheCU, Var.first,
+                                          Scope->getScopeNode());
     auto RegVar = std::make_unique<DbgVariable>(
-                    cast<DILocalVariable>(Var.first), Var.second);
+        cast<DILocalVariable>(Var.first), Var.second);
     if (VI.inStackSlot())
       RegVar->initializeMMI(VI.Expr, VI.getStackSlot());
     else
@@ -1684,8 +1685,7 @@ static bool validThroughout(LexicalScopes &LScopes,
 // [4-)     [(@g, fragment 0, 96)]
 bool DwarfDebug::buildLocationList(SmallVectorImpl<DebugLocEntry> &DebugLoc,
                                    const DbgValueHistoryMap::Entries &Entries) {
-  using OpenRange =
-      std::pair<DbgValueHistoryMap::EntryIndex, DbgValueLoc>;
+  using OpenRange = std::pair<DbgValueHistoryMap::EntryIndex, DbgValueLoc>;
   SmallVector<OpenRange, 4> OpenRanges;
   bool isSafeForSingleLocation = true;
   const MachineInstr *StartDebugMI = nullptr;
@@ -1711,8 +1711,7 @@ bool DwarfDebug::buildLocationList(SmallVectorImpl<DebugLocEntry> &DebugLoc,
       EndLabel = Asm->MBBSectionRanges[EndMBB.getSectionIDNum()].EndLabel;
       if (EI->isClobber())
         EndMI = EI->getInstr();
-    }
-    else if (std::next(EI)->isClobber())
+    } else if (std::next(EI)->isClobber())
       EndLabel = getLabelAfterInsn(std::next(EI)->getInstr());
     else
       EndLabel = getLabelBeforeInsn(std::next(EI)->getInstr());
@@ -1849,17 +1848,15 @@ DbgEntity *DwarfDebug::createConcreteEntity(DwarfCompileUnit &TheCU,
                                             const MCSymbol *Sym) {
   ensureAbstractEntityIsCreatedIfScoped(TheCU, Node, Scope.getScopeNode());
   if (isa<const DILocalVariable>(Node)) {
-    ConcreteEntities.push_back(
-        std::make_unique<DbgVariable>(cast<const DILocalVariable>(Node),
-                                       Location));
-    InfoHolder.addScopeVariable(&Scope,
-        cast<DbgVariable>(ConcreteEntities.back().get()));
+    ConcreteEntities.push_back(std::make_unique<DbgVariable>(
+        cast<const DILocalVariable>(Node), Location));
+    InfoHolder.addScopeVariable(
+        &Scope, cast<DbgVariable>(ConcreteEntities.back().get()));
   } else if (isa<const DILabel>(Node)) {
     ConcreteEntities.push_back(
-        std::make_unique<DbgLabel>(cast<const DILabel>(Node),
-                                    Location, Sym));
+        std::make_unique<DbgLabel>(cast<const DILabel>(Node), Location, Sym));
     InfoHolder.addScopeLabel(&Scope,
-        cast<DbgLabel>(ConcreteEntities.back().get()));
+                             cast<DbgLabel>(ConcreteEntities.back().get()));
   }
   return ConcreteEntities.back().get();
 }
@@ -1895,8 +1892,8 @@ void DwarfDebug::collectEntityInfo(DwarfCompileUnit &TheCU,
       continue;
 
     Processed.insert(IV);
-    DbgVariable *RegVar = cast<DbgVariable>(createConcreteEntity(TheCU,
-                                            *Scope, LocalVar, IV.second));
+    DbgVariable *RegVar = cast<DbgVariable>(
+        createConcreteEntity(TheCU, *Scope, LocalVar, IV.second));
 
     const MachineInstr *MInsn = HistoryMapEntries.front().getInstr();
     assert(MInsn->isDebugValue() && "History must begin with debug value");
@@ -2213,7 +2210,8 @@ void DwarfDebug::beginFunctionImpl(const MachineFunction *MF) {
   CurFn = MF;
 
   auto *SP = MF->getFunction().getSubprogram();
-  assert(LScopes.empty() || SP == LScopes.getCurrentFunctionScope()->getScopeNode());
+  assert(LScopes.empty() ||
+         SP == LScopes.getCurrentFunctionScope()->getScopeNode());
   if (SP->getUnit()->getEmissionKind() == DICompileUnit::NoDebug)
     return;
 
@@ -2263,7 +2261,8 @@ void DwarfDebug::skippedNonDebugFunction() {
 void DwarfDebug::endFunctionImpl(const MachineFunction *MF) {
   const DISubprogram *SP = MF->getFunction().getSubprogram();
 
-  assert(CurFn == MF &&
+  assert(
+      CurFn == MF &&
       "endFunction should be called with the same function as beginFunction");
 
   // Set DwarfDwarfCompileUnitID in MCContext to default value.
@@ -2611,15 +2610,16 @@ void DwarfDebug::emitDebugLocEntry(ByteStreamer &Streamer,
     Offset++;
     for (unsigned I = 0; I < Op.getDescription().Op.size(); ++I) {
       if (Op.getDescription().Op[I] == Encoding::BaseTypeRef) {
-        unsigned Length =
-          Streamer.emitDIERef(*CU->ExprRefedBaseTypes[Op.getRawOperand(I)].Die);
+        unsigned Length = Streamer.emitDIERef(
+            *CU->ExprRefedBaseTypes[Op.getRawOperand(I)].Die);
         // Make sure comments stay aligned.
         for (unsigned J = 0; J < Length; ++J)
           if (Comment != End)
             Comment++;
       } else {
         for (uint64_t J = Offset; J < Op.getOperandEndOffset(I); ++J)
-          Streamer.emitInt8(Data.getData()[J], Comment != End ? *(Comment++) : "");
+          Streamer.emitInt8(Data.getData()[J],
+                            Comment != End ? *(Comment++) : "");
       }
       Offset = Op.getOperandEndOffset(I);
     }
@@ -2724,8 +2724,7 @@ void DwarfDebug::emitDebugLocValue(const AsmPrinter &AP, const DIBasicType *BT,
 
 void DebugLocEntry::finalize(const AsmPrinter &AP,
                              DebugLocStream::ListBuilder &List,
-                             const DIBasicType *BT,
-                             DwarfCompileUnit &TheCU) {
+                             const DIBasicType *BT, DwarfCompileUnit &TheCU) {
   assert(!Values.empty() &&
          "location list entries without values are redundant");
   assert(Begin != End && "unexpected location list entry with empty range");
@@ -2735,9 +2734,8 @@ void DebugLocEntry::finalize(const AsmPrinter &AP,
   const DbgValueLoc &Value = Values[0];
   if (Value.isFragment()) {
     // Emit all fragments that belong to the same variable and range.
-    assert(llvm::all_of(Values, [](DbgValueLoc P) {
-          return P.isFragment();
-        }) && "all values are expected to be fragments");
+    assert(llvm::all_of(Values, [](DbgValueLoc P) { return P.isFragment(); }) &&
+           "all values are expected to be fragments");
     assert(llvm::is_sorted(Values) && "fragments are expected to be sorted");
 
     for (const auto &Fragment : Values)
@@ -2758,7 +2756,8 @@ void DwarfDebug::emitDebugLocEntryLocation(const DebugLocStream::Entry &Entry,
   Asm->OutStreamer->AddComment("Loc expr size");
   if (getDwarfVersion() >= 5)
     Asm->emitULEB128(DebugLocs.getBytes(Entry).size());
-  else if (DebugLocs.getBytes(Entry).size() <= std::numeric_limits<uint16_t>::max())
+  else if (DebugLocs.getBytes(Entry).size() <=
+           std::numeric_limits<uint16_t>::max())
     Asm->emitInt16(DebugLocs.getBytes(Entry).size());
   else {
     // The entry is too big to fit into 16 bit, drop it as there is nothing we
@@ -2810,13 +2809,12 @@ static MCSymbol *emitLoclistsTableHeader(AsmPrinter *Asm,
 }
 
 template <typename Ranges, typename PayloadEmitter>
-static void emitRangeList(
-    DwarfDebug &DD, AsmPrinter *Asm, MCSymbol *Sym, const Ranges &R,
-    const DwarfCompileUnit &CU, unsigned BaseAddressx, unsigned OffsetPair,
-    unsigned StartxLength, unsigned EndOfList,
-    StringRef (*StringifyEnum)(unsigned),
-    bool ShouldUseBaseAddress,
-    PayloadEmitter EmitPayload) {
+static void
+emitRangeList(DwarfDebug &DD, AsmPrinter *Asm, MCSymbol *Sym, const Ranges &R,
+              const DwarfCompileUnit &CU, unsigned BaseAddressx,
+              unsigned OffsetPair, unsigned StartxLength, unsigned EndOfList,
+              StringRef (*StringifyEnum)(unsigned), bool ShouldUseBaseAddress,
+              PayloadEmitter EmitPayload) {
 
   auto Size = Asm->MAI->getCodePointerSize();
   bool UseDwarf5 = DD.getDwarfVersion() >= 5;
@@ -2826,7 +2824,8 @@ static void emitRangeList(
 
   // Gather all the ranges that apply to the same section so they can share
   // a base address entry.
-  MapVector<const MCSection *, std::vector<decltype(&*R.begin())>> SectionRanges;
+  MapVector<const MCSection *, std::vector<decltype(&*R.begin())>>
+      SectionRanges;
 
   for (const auto &Range : R)
     SectionRanges[&Range.Begin->getSection()].push_back(&Range);
@@ -2906,7 +2905,8 @@ static void emitRangeList(
 }
 
 // Handles emission of both debug_loclist / debug_loclist.dwo
-static void emitLocList(DwarfDebug &DD, AsmPrinter *Asm, const DebugLocStream::List &List) {
+static void emitLocList(DwarfDebug &DD, AsmPrinter *Asm,
+                        const DebugLocStream::List &List) {
   emitRangeList(DD, Asm, List.Label, DD.getDebugLocs().getEntries(List),
                 *List.CU, dwarf::DW_LLE_base_addressx,
                 dwarf::DW_LLE_offset_pair, dwarf::DW_LLE_startx_length,
@@ -2936,17 +2936,15 @@ void DwarfDebug::emitDebugLocImpl(MCSection *Sec) {
 
 // Emit locations into the .debug_loc/.debug_loclists section.
 void DwarfDebug::emitDebugLoc() {
-  emitDebugLocImpl(
-      getDwarfVersion() >= 5
-          ? Asm->getObjFileLowering().getDwarfLoclistsSection()
-          : Asm->getObjFileLowering().getDwarfLocSection());
+  emitDebugLocImpl(getDwarfVersion() >= 5
+                       ? Asm->getObjFileLowering().getDwarfLoclistsSection()
+                       : Asm->getObjFileLowering().getDwarfLocSection());
 }
 
 // Emit locations into the .debug_loc.dwo/.debug_loclists.dwo section.
 void DwarfDebug::emitDebugLocDWO() {
   if (getDwarfVersion() >= 5) {
-    emitDebugLocImpl(
-        Asm->getObjFileLowering().getDwarfLoclistsDWOSection());
+    emitDebugLocImpl(Asm->getObjFileLowering().getDwarfLoclistsDWOSection());
 
     return;
   }
@@ -3147,16 +3145,16 @@ void DwarfDebug::emitDebugARanges() {
 /// Emit a single range list. We handle both DWARF v5 and earlier.
 static void emitRangeList(DwarfDebug &DD, AsmPrinter *Asm,
                           const RangeSpanList &List) {
-  emitRangeList(DD, Asm, List.Label, List.Ranges, *List.CU,
-                dwarf::DW_RLE_base_addressx, dwarf::DW_RLE_offset_pair,
-                dwarf::DW_RLE_startx_length, dwarf::DW_RLE_end_of_list,
-                llvm::dwarf::RangeListEncodingString,
-                List.CU->getCUNode()->getRangesBaseAddress() ||
-                    DD.getDwarfVersion() >= 5,
-                [](auto) {});
+  emitRangeList(
+      DD, Asm, List.Label, List.Ranges, *List.CU, dwarf::DW_RLE_base_addressx,
+      dwarf::DW_RLE_offset_pair, dwarf::DW_RLE_startx_length,
+      dwarf::DW_RLE_end_of_list, llvm::dwarf::RangeListEncodingString,
+      List.CU->getCUNode()->getRangesBaseAddress() || DD.getDwarfVersion() >= 5,
+      [](auto) {});
 }
 
-void DwarfDebug::emitDebugRangesImpl(const DwarfFile &Holder, MCSection *Section) {
+void DwarfDebug::emitDebugRangesImpl(const DwarfFile &Holder,
+                                     MCSection *Section) {
   if (Holder.getRangeLists().empty())
     return;
 
@@ -3459,7 +3457,7 @@ void DwarfDebug::addDwarfTypeUnitType(DwarfCompileUnit &CU,
   AddrPool.resetUsedFlag();
 
   auto OwnedUnit = std::make_unique<DwarfTypeUnit>(CU, Asm, this, &InfoHolder,
-                                                    getDwoLineTable(CU));
+                                                   getDwoLineTable(CU));
   DwarfTypeUnit &NewTU = *OwnedUnit;
   DIE &UnitDie = NewTU.getUnitDie();
   TypeUnitsUnderConstruction.emplace_back(std::move(OwnedUnit), CTy);
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
index dfbf0d1b7d8987e..d971ab29d691691 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfDebug.h
@@ -64,10 +64,7 @@ class Module;
 /// DbgVariable and DbgLabel.
 class DbgEntity {
 public:
-  enum DbgEntityKind {
-    DbgVariableKind,
-    DbgLabelKind
-  };
+  enum DbgEntityKind { DbgVariableKind, DbgLabelKind };
 
 private:
   const DINode *Entity;
@@ -298,11 +295,12 @@ class DbgVariable : public DbgEntity {
 ///
 /// Labels are collected from \c DBG_LABEL instructions.
 class DbgLabel : public DbgEntity {
-  const MCSymbol *Sym;                  /// Symbol before DBG_LABEL instruction.
+  const MCSymbol *Sym; /// Symbol before DBG_LABEL instruction.
 
 public:
   /// We need MCSymbol information to generate DW_AT_low_pc.
-  DbgLabel(const DILabel *L, const DILocation *IA, const MCSymbol *Sym = nullptr)
+  DbgLabel(const DILabel *L, const DILocation *IA,
+           const MCSymbol *Sym = nullptr)
       : DbgEntity(L, IA, DbgLabelKind), Sym(Sym) {}
 
   /// Accessors.
@@ -314,9 +312,7 @@ class DbgLabel : public DbgEntity {
   /// @}
 
   /// Translate tag to proper Dwarf tag.
-  dwarf::Tag getTag() const {
-    return dwarf::DW_TAG_label;
-  }
+  dwarf::Tag getTag() const { return dwarf::DW_TAG_label; }
 
   static bool classof(const DbgEntity *N) {
     return N->getDbgEntityID() == DbgLabelKind;
@@ -330,8 +326,7 @@ class DbgCallSiteParam {
   DbgValueLoc Value; ///< Corresponding location for the parameter value at
                      ///< the call site.
 public:
-  DbgCallSiteParam(unsigned Reg, DbgValueLoc Val)
-      : Register(Reg), Value(Val) {
+  DbgCallSiteParam(unsigned Reg, DbgValueLoc Val) : Register(Reg), Value(Val) {
     assert(Reg && "Parameter register cannot be undef");
   }
 
@@ -438,7 +433,7 @@ class DwarfDebug : public DebugHandlerBase {
   /// temp symbols inside DWARF sections.
   bool UseSectionsAsReferences = false;
 
-  ///Allow emission of the .debug_loc section.
+  /// Allow emission of the .debug_loc section.
   bool UseLocSection = true;
 
   /// Generate DWARF v4 type units.
@@ -531,14 +526,14 @@ class DwarfDebug : public DebugHandlerBase {
                                              const DINode *Node,
                                              const MDNode *Scope);
 
-  DbgEntity *createConcreteEntity(DwarfCompileUnit &TheCU,
-                                  LexicalScope &Scope,
+  DbgEntity *createConcreteEntity(DwarfCompileUnit &TheCU, LexicalScope &Scope,
                                   const DINode *Node,
                                   const DILocation *Location,
                                   const MCSymbol *Sym = nullptr);
 
   /// Construct a DIE for this abstract scope.
-  void constructAbstractSubprogramScopeDIE(DwarfCompileUnit &SrcCU, LexicalScope *Scope);
+  void constructAbstractSubprogramScopeDIE(DwarfCompileUnit &SrcCU,
+                                           LexicalScope *Scope);
 
   /// Construct DIEs for call site entries describing the calls in \p MF.
   void constructCallSiteEntryDIEs(const DISubprogram &SP, DwarfCompileUnit &CU,
@@ -775,9 +770,7 @@ class DwarfDebug : public DebugHandlerBase {
   }
 
   /// Returns whether to use sections as labels rather than temp symbols.
-  bool useSectionsAsReferences() const {
-    return UseSectionsAsReferences;
-  }
+  bool useSectionsAsReferences() const { return UseSectionsAsReferences; }
 
   /// Returns whether .debug_loc section should be emitted.
   bool useLocSection() const { return UseLocSection; }
@@ -808,13 +801,9 @@ class DwarfDebug : public DebugHandlerBase {
     return UseSegmentedStringOffsetsTable;
   }
 
-  bool emitDebugEntryValues() const {
-    return EmitDebugEntryValues;
-  }
+  bool emitDebugEntryValues() const { return EmitDebugEntryValues; }
 
-  bool useOpConvert() const {
-    return EnableOpConvert;
-  }
+  bool useOpConvert() const { return EnableOpConvert; }
 
   bool shareAcrossDWOCUs() const;
 
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfExpression.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfExpression.cpp
index a74d43897d45b30..44b9785a65c869f 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfExpression.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfExpression.cpp
@@ -707,7 +707,7 @@ void DwarfExpression::emitLegacySExt(unsigned FromBits) {
 void DwarfExpression::emitLegacyZExt(unsigned FromBits) {
   // Heuristic to decide the most efficient encoding.
   // A ULEB can encode 7 1-bits per byte.
-  if (FromBits / 7 < 1+1+1+1+1) {
+  if (FromBits / 7 < 1 + 1 + 1 + 1 + 1) {
     // (X & (1 << FromBits - 1))
     emitOp(dwarf::DW_OP_constu);
     emitUnsigned((1ULL << FromBits) - 1);
@@ -729,7 +729,7 @@ void DwarfExpression::emitLegacyZExt(unsigned FromBits) {
 
 void DwarfExpression::addWasmLocation(unsigned Index, uint64_t Offset) {
   emitOp(dwarf::DW_OP_WASM_location);
-  emitUnsigned(Index == 4/*TI_LOCAL_INDIRECT*/ ? 0/*TI_LOCAL*/ : Index);
+  emitUnsigned(Index == 4 /*TI_LOCAL_INDIRECT*/ ? 0 /*TI_LOCAL*/ : Index);
   emitUnsigned(Offset);
   if (Index == 4 /*TI_LOCAL_INDIRECT*/) {
     assert(LocationKind == Unknown);
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfFile.h b/llvm/lib/CodeGen/AsmPrinter/DwarfFile.h
index 464f4f048016df3..73c250fa42f6a05 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfFile.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfFile.h
@@ -159,9 +159,7 @@ class DwarfFile {
     return ScopeVariables;
   }
 
-  DenseMap<LexicalScope *, LabelList> &getScopeLabels() {
-    return ScopeLabels;
-  }
+  DenseMap<LexicalScope *, LabelList> &getScopeLabels() { return ScopeLabels; }
 
   DenseMap<const DILocalScope *, DIE *> &getAbstractScopeDIEs() {
     return AbstractLocalScopeDIEs;
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp
index d30f0ef7af348af..ffc62c43bb8737e 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.cpp
@@ -42,7 +42,7 @@ DIEDwarfExpression::DIEDwarfExpression(const AsmPrinter &AP,
                                        DwarfCompileUnit &CU, DIELoc &DIE)
     : DwarfExpression(AP.getDwarfVersion(), CU), AP(AP), OutDIE(DIE) {}
 
-void DIEDwarfExpression::emitOp(uint8_t Op, const char* Comment) {
+void DIEDwarfExpression::emitOp(uint8_t Op, const char *Comment) {
   CU.addUInt(getActiveDIE(), dwarf::DW_FORM_data1, Op);
 }
 
@@ -88,8 +88,7 @@ DwarfTypeUnit::DwarfTypeUnit(DwarfCompileUnit &CU, AsmPrinter *A,
                              DwarfDebug *DW, DwarfFile *DWU,
                              MCDwarfDwoLineTable *SplitLineTable)
     : DwarfUnit(dwarf::DW_TAG_type_unit, CU.getCUNode(), A, DW, DWU), CU(CU),
-      SplitLineTable(SplitLineTable) {
-}
+      SplitLineTable(SplitLineTable) {}
 
 DwarfUnit::~DwarfUnit() {
   for (DIEBlock *B : DIEBlocks)
@@ -374,7 +373,7 @@ void DwarfUnit::addDIEEntry(DIE &Die, dwarf::Attribute Attribute,
   if (!EntryCU)
     EntryCU = getUnitDie().getUnit();
   assert(EntryCU == CU || !DD->useSplitDwarf() || DD->shareAcrossDWOCUs() ||
-         !static_cast<const DwarfUnit*>(CU)->isDwoUnit());
+         !static_cast<const DwarfUnit *>(CU)->isDwoUnit());
   addAttribute(Die, Attribute,
                EntryCU == CU ? dwarf::DW_FORM_ref4 : dwarf::DW_FORM_ref_addr,
                Entry);
@@ -780,10 +779,10 @@ void DwarfUnit::constructTypeDIE(DIE &Buffer, const DIDerivedType *DTy) {
   }
 
   // Add size if non-zero (derived types might be zero-sized.)
-  if (Size && Tag != dwarf::DW_TAG_pointer_type
-           && Tag != dwarf::DW_TAG_ptr_to_member_type
-           && Tag != dwarf::DW_TAG_reference_type
-           && Tag != dwarf::DW_TAG_rvalue_reference_type)
+  if (Size && Tag != dwarf::DW_TAG_pointer_type &&
+      Tag != dwarf::DW_TAG_ptr_to_member_type &&
+      Tag != dwarf::DW_TAG_reference_type &&
+      Tag != dwarf::DW_TAG_rvalue_reference_type)
     addUInt(Buffer, dwarf::DW_AT_byte_size, std::nullopt, Size);
 
   if (Tag == dwarf::DW_TAG_ptr_to_member_type)
@@ -808,7 +807,7 @@ void DwarfUnit::constructSubprogramArguments(DIE &Buffer, DITypeRefArray Args) {
   for (unsigned i = 1, N = Args.size(); i < N; ++i) {
     const DIType *Ty = Args[i];
     if (!Ty) {
-      assert(i == N-1 && "Unspecified parameter must be the last argument");
+      assert(i == N - 1 && "Unspecified parameter must be the last argument");
       createAndAddDIE(dwarf::DW_TAG_unspecified_parameters, Buffer);
     } else {
       DIE &Arg = createAndAddDIE(dwarf::DW_TAG_formal_parameter, Buffer);
@@ -926,7 +925,7 @@ void DwarfUnit::constructTypeDIE(DIE &Buffer, const DICompositeType *CTy) {
           // DW_TAG_variant.
           DIE &Variant = createAndAddDIE(dwarf::DW_TAG_variant, Buffer);
           if (const ConstantInt *CI =
-              dyn_cast_or_null<ConstantInt>(DDTy->getDiscriminantValue())) {
+                  dyn_cast_or_null<ConstantInt>(DDTy->getDiscriminantValue())) {
             if (DD->isUnsignedDIType(Discriminator->getBaseType()))
               addUInt(Variant, dwarf::DW_AT_discr_value, std::nullopt,
                       CI->getZExtValue());
@@ -1239,8 +1238,8 @@ void DwarfUnit::applySubprogramAttributes(const DISubprogram *SP, DIE &SPDie,
                                           bool SkipSPAttributes) {
   // If -fdebug-info-for-profiling is enabled, need to emit the subprogram
   // and its source location.
-  bool SkipSPSourceLocation = SkipSPAttributes &&
-                              !CUNode->getDebugInfoForProfiling();
+  bool SkipSPSourceLocation =
+      SkipSPAttributes && !CUNode->getDebugInfoForProfiling();
   if (!SkipSPSourceLocation)
     if (applySubprogramDefinitionAttributes(SP, SPDie, SkipSPAttributes))
       return;
@@ -1436,9 +1435,9 @@ DIE *DwarfUnit::getIndexTyDie() {
   StringRef Name = "__ARRAY_SIZE_TYPE__";
   addString(*IndexTyDie, dwarf::DW_AT_name, Name);
   addUInt(*IndexTyDie, dwarf::DW_AT_byte_size, std::nullopt, sizeof(int64_t));
-  addUInt(*IndexTyDie, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
-          dwarf::getArrayIndexTypeEncoding(
-              (dwarf::SourceLanguage)getLanguage()));
+  addUInt(
+      *IndexTyDie, dwarf::DW_AT_encoding, dwarf::DW_FORM_data1,
+      dwarf::getArrayIndexTypeEncoding((dwarf::SourceLanguage)getLanguage()));
   DD->addAccelType(*CUNode, Name, *IndexTyDie, /*Flags*/ 0);
   return IndexTyDie;
 }
@@ -1557,8 +1556,9 @@ void DwarfUnit::constructEnumTypeDIE(DIE &Buffer, const DICompositeType *CTy) {
   }
 
   auto *Context = CTy->getScope();
-  bool IndexEnumerators = !Context || isa<DICompileUnit>(Context) || isa<DIFile>(Context) ||
-      isa<DINamespace>(Context) || isa<DICommonBlock>(Context);
+  bool IndexEnumerators = !Context || isa<DICompileUnit>(Context) ||
+                          isa<DIFile>(Context) || isa<DINamespace>(Context) ||
+                          isa<DICommonBlock>(Context);
   DINodeArray Elements = CTy->getElements();
 
   // Add enumerators to enumeration type.
@@ -1691,8 +1691,8 @@ DIE &DwarfUnit::constructMemberDIE(DIE &Buffer, const DIDerivedType *DT) {
   // Objective-C properties.
   if (DINode *PNode = DT->getObjCProperty())
     if (DIE *PDie = getDIE(PNode))
-      addAttribute(MemberDie, dwarf::DW_AT_APPLE_property,
-                   dwarf::DW_FORM_ref4, DIEEntry(*PDie));
+      addAttribute(MemberDie, dwarf::DW_AT_APPLE_property, dwarf::DW_FORM_ref4,
+                   DIEEntry(*PDie));
 
   if (DT->isArtificial())
     addFlag(MemberDie, dwarf::DW_AT_artificial);
@@ -1778,9 +1778,9 @@ void DwarfUnit::emitCommonHeader(bool UseOffsets, dwarf::UnitType UT) {
 }
 
 void DwarfTypeUnit::emitHeader(bool UseOffsets) {
-  DwarfUnit::emitCommonHeader(UseOffsets,
-                              DD->useSplitDwarf() ? dwarf::DW_UT_split_type
-                                                  : dwarf::DW_UT_type);
+  DwarfUnit::emitCommonHeader(UseOffsets, DD->useSplitDwarf()
+                                              ? dwarf::DW_UT_split_type
+                                              : dwarf::DW_UT_type);
   Asm->OutStreamer->AddComment("Type Signature");
   Asm->OutStreamer->emitIntValue(TypeSignature, sizeof(TypeSignature));
   Asm->OutStreamer->AddComment("Type DIE Offset");
@@ -1842,7 +1842,7 @@ void DwarfUnit::addRnglistsBase() {
                   TLOF.getDwarfRnglistsSection()->getBeginSymbol());
 }
 
-void DwarfTypeUnit::finishNonUnitTypeDIE(DIE& D, const DICompositeType *CTy) {
+void DwarfTypeUnit::finishNonUnitTypeDIE(DIE &D, const DICompositeType *CTy) {
   DD->getAddressPool().resetUsedFlag(true);
 }
 
diff --git a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h
index 8f17e94c2d1c331..10072cf3a43ce47 100644
--- a/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h
+++ b/llvm/lib/CodeGen/AsmPrinter/DwarfUnit.h
@@ -68,10 +68,11 @@ class DwarfUnit : public DIEUnit {
   /// corresponds to the MDNode mapped with the subprogram DIE.
   DenseMap<DIE *, const DINode *> ContainingTypeMap;
 
-  DwarfUnit(dwarf::Tag, const DICompileUnit *Node, AsmPrinter *A, DwarfDebug *DW,
-            DwarfFile *DWU);
+  DwarfUnit(dwarf::Tag, const DICompileUnit *Node, AsmPrinter *A,
+            DwarfDebug *DW, DwarfFile *DWU);
 
-  bool applySubprogramDefinitionAttributes(const DISubprogram *SP, DIE &SPDie, bool Minimal);
+  bool applySubprogramDefinitionAttributes(const DISubprogram *SP, DIE &SPDie,
+                                           bool Minimal);
 
   bool isShareableAcrossCUs(const DINode *D) const;
 
@@ -93,7 +94,7 @@ class DwarfUnit : public DIEUnit {
 
 public:
   // Accessors.
-  AsmPrinter* getAsmPrinter() const { return Asm; }
+  AsmPrinter *getAsmPrinter() const { return Asm; }
   MCSymbol *getEndLabel() const { return EndLabel; }
   uint16_t getLanguage() const { return CUNode->getSourceLanguage(); }
   const DICompileUnit *getCUNode() const { return CUNode; }
@@ -341,7 +342,7 @@ class DwarfUnit : public DIEUnit {
   /// Set D as anonymous type for index which can be reused later.
   void setIndexTyDie(DIE *D) { IndexTyDie = D; }
 
-  virtual void finishNonUnitTypeDIE(DIE& D, const DICompositeType *CTy) = 0;
+  virtual void finishNonUnitTypeDIE(DIE &D, const DICompositeType *CTy) = 0;
 
   /// If this is a named finished type then include it in the list of types for
   /// the accelerator tables.
@@ -364,7 +365,7 @@ class DwarfTypeUnit final : public DwarfUnit {
   bool UsedLineTable = false;
 
   unsigned getOrCreateSourceID(const DIFile *File) override;
-  void finishNonUnitTypeDIE(DIE& D, const DICompositeType *CTy) override;
+  void finishNonUnitTypeDIE(DIE &D, const DICompositeType *CTy) override;
   bool isDwoUnit() const override;
 
 public:
@@ -386,5 +387,5 @@ class DwarfTypeUnit final : public DwarfUnit {
                      const DIScope *Context) override;
   DwarfCompileUnit &getCU() override { return CU; }
 };
-} // end llvm namespace
+} // namespace llvm
 #endif
diff --git a/llvm/lib/CodeGen/AsmPrinter/EHStreamer.cpp b/llvm/lib/CodeGen/AsmPrinter/EHStreamer.cpp
index eef6b1d93f36e89..8055f1550018a4b 100644
--- a/llvm/lib/CodeGen/AsmPrinter/EHStreamer.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/EHStreamer.cpp
@@ -130,7 +130,7 @@ void EHStreamer::computeActionsTable(
         SizeActionEntry = SizeTypeID + getSLEB128Size(NextAction);
         SizeSiteActions += SizeActionEntry;
 
-        ActionEntry Action = { ValueForTypeID, NextAction, PrevAction };
+        ActionEntry Action = {ValueForTypeID, NextAction, PrevAction};
         Actions.push_back(Action);
         PrevAction = Actions.size() - 1;
       }
@@ -162,10 +162,12 @@ bool EHStreamer::callToNoUnwindFunction(const MachineInstr *MI) {
   bool SawFunc = false;
 
   for (const MachineOperand &MO : MI->operands()) {
-    if (!MO.isGlobal()) continue;
+    if (!MO.isGlobal())
+      continue;
 
     const Function *F = dyn_cast<Function>(MO.getGlobal());
-    if (!F) continue;
+    if (!F)
+      continue;
 
     if (SawFunc) {
       // Be conservative. If we have more than one function operand for this
@@ -202,7 +204,7 @@ void EHStreamer::computePadMap(
       if (!BeginLabel->isDefined() || !EndLabel->isDefined())
         continue;
       assert(!PadMap.count(BeginLabel) && "Duplicate landing pad labels!");
-      PadRange P = { i, j };
+      PadRange P = {i, j};
       PadMap[BeginLabel] = P;
     }
   }
@@ -307,12 +309,8 @@ void EHStreamer::computeCallSiteTable(
         PreviousIsInvoke = false;
       } else {
         // This try-range is for an invoke.
-        CallSiteEntry Site = {
-          BeginLabel,
-          LastLabel,
-          LandingPad,
-          FirstActions[P.PadIndex]
-        };
+        CallSiteEntry Site = {BeginLabel, LastLabel, LandingPad,
+                              FirstActions[P.PadIndex]};
 
         // Try to merge with the previous call-site. SJLJ doesn't do this
         if (PreviousIsInvoke && !IsSJLJ) {
@@ -421,8 +419,8 @@ MCSymbol *EHStreamer::emitExceptionTable() {
   bool IsWasm = Asm->MAI->getExceptionHandlingType() == ExceptionHandling::Wasm;
   bool HasLEB128Directives = Asm->MAI->hasLEB128Directives();
   unsigned CallSiteEncoding =
-      IsSJLJ ? static_cast<unsigned>(dwarf::DW_EH_PE_udata4) :
-               Asm->getObjFileLowering().getCallSiteEncoding();
+      IsSJLJ ? static_cast<unsigned>(dwarf::DW_EH_PE_udata4)
+             : Asm->getObjFileLowering().getCallSiteEncoding();
   bool HaveTTData = !TypeInfos.empty() || !FilterIds.empty();
 
   // Type infos.
@@ -473,9 +471,8 @@ MCSymbol *EHStreamer::emitExceptionTable() {
   Asm->emitAlignment(Align(4));
 
   // Emit the LSDA.
-  MCSymbol *GCCETSym =
-    Asm->OutContext.getOrCreateSymbol(Twine("GCC_except_table")+
-                                      Twine(Asm->getFunctionNumber()));
+  MCSymbol *GCCETSym = Asm->OutContext.getOrCreateSymbol(
+      Twine("GCC_except_table") + Twine(Asm->getFunctionNumber()));
   Asm->OutStreamer->emitLabel(GCCETSym);
   MCSymbol *CstEndLabel = Asm->createTempSymbol(
       CallSiteRanges.size() > 1 ? "action_table_base" : "cst_end");
@@ -594,14 +591,16 @@ MCSymbol *EHStreamer::emitExceptionTable() {
     EmitTypeTableRefAndCallSiteTableEndRef();
 
     unsigned idx = 0;
-    for (SmallVectorImpl<CallSiteEntry>::const_iterator
-         I = CallSites.begin(), E = CallSites.end(); I != E; ++I, ++idx) {
+    for (SmallVectorImpl<CallSiteEntry>::const_iterator I = CallSites.begin(),
+                                                        E = CallSites.end();
+         I != E; ++I, ++idx) {
       const CallSiteEntry &S = *I;
 
       // Index of the call site entry.
       if (VerboseAsm) {
         Asm->OutStreamer->AddComment(">> Call Site " + Twine(idx) + " <<");
-        Asm->OutStreamer->AddComment("  On exception at call site "+Twine(idx));
+        Asm->OutStreamer->AddComment("  On exception at call site " +
+                                     Twine(idx));
       }
       Asm->emitULEB128(idx);
 
@@ -767,7 +766,8 @@ MCSymbol *EHStreamer::emitExceptionTable() {
   for (const ActionEntry &Action : Actions) {
     if (VerboseAsm) {
       // Emit comments that decode the action table.
-      Asm->OutStreamer->AddComment(">> Action Record " + Twine(++Entry) + " <<");
+      Asm->OutStreamer->AddComment(">> Action Record " + Twine(++Entry) +
+                                   " <<");
     }
 
     // Type Filter
@@ -836,8 +836,9 @@ void EHStreamer::emitTypeInfos(unsigned TTypeEncoding, MCSymbol *TTBaseLabel) {
     Asm->OutStreamer->addBlankLine();
     Entry = 0;
   }
-  for (std::vector<unsigned>::const_iterator
-         I = FilterIds.begin(), E = FilterIds.end(); I < E; ++I) {
+  for (std::vector<unsigned>::const_iterator I = FilterIds.begin(),
+                                             E = FilterIds.end();
+       I < E; ++I) {
     unsigned TypeID = *I;
     if (VerboseAsm) {
       --Entry;
diff --git a/llvm/lib/CodeGen/AsmPrinter/WinException.cpp b/llvm/lib/CodeGen/AsmPrinter/WinException.cpp
index 6d6432b61f2d7dd..36b5177ebaeff3b 100644
--- a/llvm/lib/CodeGen/AsmPrinter/WinException.cpp
+++ b/llvm/lib/CodeGen/AsmPrinter/WinException.cpp
@@ -90,8 +90,8 @@ void WinException::beginFunction(const MachineFunction *MF) {
                                PerEncoding != dwarf::DW_EH_PE_omit && PerFn);
 
   unsigned LSDAEncoding = TLOF.getLSDAEncoding();
-  shouldEmitLSDA = shouldEmitPersonality &&
-    LSDAEncoding != dwarf::DW_EH_PE_omit;
+  shouldEmitLSDA =
+      shouldEmitPersonality && LSDAEncoding != dwarf::DW_EH_PE_omit;
 
   // If we're not using CFI, we don't want the CFI or the personality, but we
   // might want EH tables if we had EH pads.
@@ -187,8 +187,7 @@ static MCSymbol *getMCSymbolForMBB(AsmPrinter *Asm,
                                FuncLinkageName + "@4HA");
 }
 
-void WinException::beginFunclet(const MachineBasicBlock &MBB,
-                                MCSymbol *Sym) {
+void WinException::beginFunclet(const MachineBasicBlock &MBB, MCSymbol *Sym) {
   CurrentFuncletEntry = &MBB;
 
   const Function &F = Asm->MF->getFunction();
@@ -266,7 +265,8 @@ void WinException::endFuncletImpl() {
 
       // If this is a C++ catch funclet (or the parent function),
       // emit a reference to the LSDA for the parent function.
-      StringRef FuncLinkageName = GlobalValue::dropLLVMManglingEscape(F.getName());
+      StringRef FuncLinkageName =
+          GlobalValue::dropLLVMManglingEscape(F.getName());
       MCSymbol *FuncInfoXData = Asm->OutContext.getOrCreateSymbol(
           Twine("$cppxdata$", FuncLinkageName));
       Asm->OutStreamer->emitValue(create32bitRef(FuncInfoXData), 4);
@@ -304,9 +304,10 @@ void WinException::endFuncletImpl() {
 const MCExpr *WinException::create32bitRef(const MCSymbol *Value) {
   if (!Value)
     return MCConstantExpr::create(0, Asm->OutContext);
-  return MCSymbolRefExpr::create(Value, useImageRel32
-                                            ? MCSymbolRefExpr::VK_COFF_IMGREL32
-                                            : MCSymbolRefExpr::VK_None,
+  return MCSymbolRefExpr::create(Value,
+                                 useImageRel32
+                                     ? MCSymbolRefExpr::VK_COFF_IMGREL32
+                                     : MCSymbolRefExpr::VK_None,
                                  Asm->OutContext);
 }
 
@@ -349,17 +350,17 @@ int WinException::getFrameIndexOffset(int FrameIndex,
     StackOffset Offset =
         TFI.getFrameIndexReferencePreferSP(*Asm->MF, FrameIndex, UnusedReg,
                                            /*IgnoreSPUpdates*/ true);
-    assert(UnusedReg ==
-           Asm->MF->getSubtarget()
-               .getTargetLowering()
-               ->getStackPointerRegisterToSaveRestore());
+    assert(UnusedReg == Asm->MF->getSubtarget()
+                            .getTargetLowering()
+                            ->getStackPointerRegisterToSaveRestore());
     return Offset.getFixed();
   }
 
   // For 32-bit, offsets should be relative to the end of the EH registration
   // node. For 64-bit, it's relative to SP at the end of the prologue.
   assert(FuncInfo.EHRegNodeEndOffset != INT_MAX);
-  StackOffset Offset = TFI.getFrameIndexReference(*Asm->MF, FrameIndex, UnusedReg);
+  StackOffset Offset =
+      TFI.getFrameIndexReference(*Asm->MF, FrameIndex, UnusedReg);
   Offset += StackOffset::getFixed(FuncInfo.EHRegNodeEndOffset);
   assert(!Offset.getScalable() &&
          "Frame offsets with a scalable component are not supported");
@@ -654,8 +655,9 @@ void WinException::emitSEHActionsForRange(const WinEHFuncInfo &FuncInfo,
     OS.emitValue(getLabel(BeginLabel), 4);
     AddComment("LabelEnd");
     OS.emitValue(getLabelPlusOne(EndLabel), 4);
-    AddComment(UME.IsFinally ? "FinallyFunclet" : UME.Filter ? "FilterFunction"
-                                                             : "CatchAll");
+    AddComment(UME.IsFinally ? "FinallyFunclet"
+               : UME.Filter  ? "FilterFunction"
+                             : "CatchAll");
     OS.emitValue(FilterOrFinally, 4);
     AddComment(UME.IsFinally ? "Null" : "ExceptionHandler");
     OS.emitValue(ExceptOrNull, 4);
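For reviewers unfamiliar with clang-format's aligned nested-`?:` style that the `emitSEHActionsForRange` hunk above applies: the behavior is unchanged, only the layout. A minimal stand-alone sketch (the function name and parameters here are hypothetical, not the AsmPrinter API) of that chained conditional:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical mirror of the SEH action labeling: a right-associative
// chained conditional selects one of three comment strings. The hunk in
// the patch only reflows this construct into clang-format's aligned
// nested-?: style.
inline const char *actionLabel(bool IsFinally, bool HasFilter) {
  return IsFinally   ? "FinallyFunclet"
         : HasFilter ? "FilterFunction"
                     : "CatchAll";
}
```

The conditional operator is right-associative, so the un-parenthesized chain reads top to bottom exactly as the aligned layout suggests.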
diff --git a/llvm/lib/CodeGen/AsmPrinter/WinException.h b/llvm/lib/CodeGen/AsmPrinter/WinException.h
index 638589adf0ddcba..957a12f299ca41d 100644
--- a/llvm/lib/CodeGen/AsmPrinter/WinException.h
+++ b/llvm/lib/CodeGen/AsmPrinter/WinException.h
@@ -92,6 +92,7 @@ class LLVM_LIBRARY_VISIBILITY WinException : public EHStreamer {
   int getFrameIndexOffset(int FrameIndex, const WinEHFuncInfo &FuncInfo);
 
   void endFuncletImpl();
+
 public:
   //===--------------------------------------------------------------------===//
   // Main entry points.
@@ -115,7 +116,6 @@ class LLVM_LIBRARY_VISIBILITY WinException : public EHStreamer {
   void beginFunclet(const MachineBasicBlock &MBB, MCSymbol *Sym) override;
   void endFunclet() override;
 };
-}
+} // namespace llvm
 
 #endif
-
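The first WinException.cpp hunk reflows the `shouldEmitLSDA` assignment without changing its logic. As a sketch (model code, not the AsmPrinter API; only the DWARF constant value 0xff for `DW_EH_PE_omit` is taken as given), the condition is:

```cpp
#include <cassert>

// Model of the reflowed assignment in beginFunction: an LSDA is emitted
// only when a personality routine is emitted and the target's LSDA
// encoding is not DW_EH_PE_omit (0xff in DWARF).
constexpr unsigned DwEhPeOmit = 0xff;

inline bool shouldEmitLSDAFor(bool ShouldEmitPersonality,
                              unsigned LSDAEncoding) {
  return ShouldEmitPersonality && LSDAEncoding != DwEhPeOmit;
}
```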
diff --git a/llvm/lib/CodeGen/AtomicExpandPass.cpp b/llvm/lib/CodeGen/AtomicExpandPass.cpp
index ccf3e9ec6492105..70770cfdfe2314b 100644
--- a/llvm/lib/CodeGen/AtomicExpandPass.cpp
+++ b/llvm/lib/CodeGen/AtomicExpandPass.cpp
@@ -400,9 +400,8 @@ AtomicExpand::convertAtomicXchgToIntegerType(AtomicRMWInst *RMWI) {
                       ? Builder.CreatePtrToInt(Val, NewTy)
                       : Builder.CreateBitCast(Val, NewTy);
 
-  auto *NewRMWI =
-      Builder.CreateAtomicRMW(AtomicRMWInst::Xchg, Addr, NewVal,
-                              RMWI->getAlign(), RMWI->getOrdering());
+  auto *NewRMWI = Builder.CreateAtomicRMW(
+      AtomicRMWInst::Xchg, Addr, NewVal, RMWI->getAlign(), RMWI->getOrdering());
   NewRMWI->setVolatile(RMWI->isVolatile());
   LLVM_DEBUG(dbgs() << "Replaced " << *RMWI << " with " << *NewRMWI << "\n");
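The `convertAtomicXchgToIntegerType` hunk above only reflows the `CreateAtomicRMW` call, but the transformation it belongs to can be sketched in standard C++ (this models the idea only; it is not the LLVM API): a pointer-typed exchange is lowered to an exchange on a pointer-sized integer, converting in with `ptrtoint` and back out with `inttoptr`.

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Sketch of the xchg-to-integer conversion: perform the atomic exchange
// on a uintptr_t slot and reinterpret the payload on the way in and out,
// mirroring ptrtoint/inttoptr around an integer atomicrmw xchg.
inline void *atomicXchgPtr(std::atomic<std::uintptr_t> &Slot, void *NewVal) {
  std::uintptr_t Old = Slot.exchange(
      reinterpret_cast<std::uintptr_t>(NewVal), std::memory_order_seq_cst);
  return reinterpret_cast<void *>(Old);
}
```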
 
diff --git a/llvm/lib/CodeGen/BasicBlockSections.cpp b/llvm/lib/CodeGen/BasicBlockSections.cpp
index de7c17082fa4bb9..3fead61d6f024c6 100644
--- a/llvm/lib/CodeGen/BasicBlockSections.cpp
+++ b/llvm/lib/CodeGen/BasicBlockSections.cpp
@@ -321,8 +321,7 @@ bool BasicBlockSections::runOnMachineFunction(MachineFunction &MF) {
   // clusters of basic blocks using basic block ids. Source drift can
   // invalidate these groupings leading to sub-optimal code generation with
   // regards to performance.
-  if (BBSectionsType == BasicBlockSection::List &&
-      hasInstrProfHashMismatch(MF))
+  if (BBSectionsType == BasicBlockSection::List && hasInstrProfHashMismatch(MF))
     return true;
   // Renumber blocks before sorting them. This is useful for accessing the
   // original layout positions and finding the original fallthroughs.
diff --git a/llvm/lib/CodeGen/BasicTargetTransformInfo.cpp b/llvm/lib/CodeGen/BasicTargetTransformInfo.cpp
index 57cefae2066a9cd..bb894671ba5b82c 100644
--- a/llvm/lib/CodeGen/BasicTargetTransformInfo.cpp
+++ b/llvm/lib/CodeGen/BasicTargetTransformInfo.cpp
@@ -25,9 +25,9 @@ using namespace llvm;
 // This flag is used by the template base class for BasicTTIImpl, and here to
 // provide a definition.
 cl::opt<unsigned>
-llvm::PartialUnrollingThreshold("partial-unrolling-threshold", cl::init(0),
-                                cl::desc("Threshold for partial unrolling"),
-                                cl::Hidden);
+    llvm::PartialUnrollingThreshold("partial-unrolling-threshold", cl::init(0),
+                                    cl::desc("Threshold for partial unrolling"),
+                                    cl::Hidden);
 
 BasicTTIImpl::BasicTTIImpl(const TargetMachine *TM, const Function &F)
     : BaseT(TM, F.getParent()->getDataLayout()), ST(TM->getSubtargetImpl(F)),
diff --git a/llvm/lib/CodeGen/BranchFolding.cpp b/llvm/lib/CodeGen/BranchFolding.cpp
index 3830f25debaf3cb..ef0b9a736242bcc 100644
--- a/llvm/lib/CodeGen/BranchFolding.cpp
+++ b/llvm/lib/CodeGen/BranchFolding.cpp
@@ -66,50 +66,51 @@ using namespace llvm;
 
 STATISTIC(NumDeadBlocks, "Number of dead blocks removed");
 STATISTIC(NumBranchOpts, "Number of branches optimized");
-STATISTIC(NumTailMerge , "Number of block tails merged");
-STATISTIC(NumHoist     , "Number of times common instructions are hoisted");
-STATISTIC(NumTailCalls,  "Number of tail calls optimized");
+STATISTIC(NumTailMerge, "Number of block tails merged");
+STATISTIC(NumHoist, "Number of times common instructions are hoisted");
+STATISTIC(NumTailCalls, "Number of tail calls optimized");
 
 static cl::opt<cl::boolOrDefault> FlagEnableTailMerge("enable-tail-merge",
-                              cl::init(cl::BOU_UNSET), cl::Hidden);
+                                                      cl::init(cl::BOU_UNSET),
+                                                      cl::Hidden);
 
 // Throttle for huge numbers of predecessors (compile speed problems)
-static cl::opt<unsigned>
-TailMergeThreshold("tail-merge-threshold",
-          cl::desc("Max number of predecessors to consider tail merging"),
-          cl::init(150), cl::Hidden);
+static cl::opt<unsigned> TailMergeThreshold(
+    "tail-merge-threshold",
+    cl::desc("Max number of predecessors to consider tail merging"),
+    cl::init(150), cl::Hidden);
 
 // Heuristic for tail merging (and, inversely, tail duplication).
 // TODO: This should be replaced with a target query.
-static cl::opt<unsigned>
-TailMergeSize("tail-merge-size",
-              cl::desc("Min number of instructions to consider tail merging"),
-              cl::init(3), cl::Hidden);
+static cl::opt<unsigned> TailMergeSize(
+    "tail-merge-size",
+    cl::desc("Min number of instructions to consider tail merging"),
+    cl::init(3), cl::Hidden);
 
 namespace {
 
-  /// BranchFolderPass - Wrap branch folder in a machine function pass.
-  class BranchFolderPass : public MachineFunctionPass {
-  public:
-    static char ID;
+/// BranchFolderPass - Wrap branch folder in a machine function pass.
+class BranchFolderPass : public MachineFunctionPass {
+public:
+  static char ID;
 
-    explicit BranchFolderPass(): MachineFunctionPass(ID) {}
+  explicit BranchFolderPass() : MachineFunctionPass(ID) {}
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addRequired<MachineBlockFrequencyInfo>();
-      AU.addRequired<MachineBranchProbabilityInfo>();
-      AU.addRequired<ProfileSummaryInfoWrapperPass>();
-      AU.addRequired<TargetPassConfig>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<MachineBlockFrequencyInfo>();
+    AU.addRequired<MachineBranchProbabilityInfo>();
+    AU.addRequired<ProfileSummaryInfoWrapperPass>();
+    AU.addRequired<TargetPassConfig>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties().set(
-          MachineFunctionProperties::Property::NoPHIs);
-    }
-  };
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::NoPHIs);
+  }
+};
 
 } // end anonymous namespace
 
@@ -117,8 +118,8 @@ char BranchFolderPass::ID = 0;
 
 char &llvm::BranchFolderPassID = BranchFolderPass::ID;
 
-INITIALIZE_PASS(BranchFolderPass, DEBUG_TYPE,
-                "Control Flow Optimizer", false, false)
+INITIALIZE_PASS(BranchFolderPass, DEBUG_TYPE, "Control Flow Optimizer", false,
+                false)
 
 bool BranchFolderPass::runOnMachineFunction(MachineFunction &MF) {
   if (skipFunction(MF.getFunction()))
@@ -129,8 +130,7 @@ bool BranchFolderPass::runOnMachineFunction(MachineFunction &MF) {
   // HW that requires structurized CFG.
   bool EnableTailMerge = !MF.getTarget().requiresStructuredCFG() &&
                          PassConfig->getEnableTailMerge();
-  MBFIWrapper MBBFreqInfo(
-      getAnalysis<MachineBlockFrequencyInfo>());
+  MBFIWrapper MBBFreqInfo(getAnalysis<MachineBlockFrequencyInfo>());
   BranchFolder Folder(EnableTailMerge, /*CommonHoist=*/true, MBBFreqInfo,
                       getAnalysis<MachineBranchProbabilityInfo>(),
                       &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI());
@@ -150,8 +150,12 @@ BranchFolder::BranchFolder(bool DefaultEnableTailMerge, bool CommonHoist,
   case cl::BOU_UNSET:
     EnableTailMerge = DefaultEnableTailMerge;
     break;
-  case cl::BOU_TRUE: EnableTailMerge = true; break;
-  case cl::BOU_FALSE: EnableTailMerge = false; break;
+  case cl::BOU_TRUE:
+    EnableTailMerge = true;
+    break;
+  case cl::BOU_FALSE:
+    EnableTailMerge = false;
+    break;
   }
 }
 
@@ -162,7 +166,7 @@ void BranchFolder::RemoveDeadBlock(MachineBasicBlock *MBB) {
   MachineFunction *MF = MBB->getParent();
   // drop all successors.
   while (!MBB->succ_empty())
-    MBB->removeSuccessor(MBB->succ_end()-1);
+    MBB->removeSuccessor(MBB->succ_end() - 1);
 
   // Avoid matching if this pointer gets reused.
   TriedMerging.erase(MBB);
@@ -183,7 +187,8 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
                                     const TargetInstrInfo *tii,
                                     const TargetRegisterInfo *tri,
                                     MachineLoopInfo *mli, bool AfterPlacement) {
-  if (!tii) return false;
+  if (!tii)
+    return false;
 
   TriedMerging.clear();
 
@@ -205,7 +210,7 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
 
   bool MadeChangeThisIteration = true;
   while (MadeChangeThisIteration) {
-    MadeChangeThisIteration    = TailMergeBlocks(MF);
+    MadeChangeThisIteration = TailMergeBlocks(MF);
     // No need to clean up if tail merging does not change anything after the
     // block placement.
     if (!AfterBlockPlacement || MadeChangeThisIteration)
@@ -226,7 +231,8 @@ bool BranchFolder::OptimizeFunction(MachineFunction &MF,
   for (const MachineBasicBlock &BB : MF) {
     for (const MachineInstr &I : BB)
       for (const MachineOperand &Op : I.operands()) {
-        if (!Op.isJTI()) continue;
+        if (!Op.isJTI())
+          continue;
 
         // Remember that this JT is live.
         JTIsLive.set(Op.getIndex());
@@ -471,12 +477,12 @@ static void FixTail(MachineBasicBlock *CurMBB, MachineBasicBlock *SuccBB,
       }
     }
   }
-  TII->insertBranch(*CurMBB, SuccBB, nullptr,
-                    SmallVector<MachineOperand, 0>(), dl);
+  TII->insertBranch(*CurMBB, SuccBB, nullptr, SmallVector<MachineOperand, 0>(),
+                    dl);
 }
 
-bool
-BranchFolder::MergePotentialsElt::operator<(const MergePotentialsElt &o) const {
+bool BranchFolder::MergePotentialsElt::operator<(
+    const MergePotentialsElt &o) const {
   if (getHash() < o.getHash())
     return true;
   if (getHash() > o.getHash())
@@ -485,8 +491,8 @@ BranchFolder::MergePotentialsElt::operator<(const MergePotentialsElt &o) const {
     return true;
   if (getBlock()->getNumber() > o.getBlock()->getNumber())
     return false;
-  // _GLIBCXX_DEBUG checks strict weak ordering, which involves comparing
-  // an object with itself.
+    // _GLIBCXX_DEBUG checks strict weak ordering, which involves comparing
+    // an object with itself.
 #ifndef _GLIBCXX_DEBUG
   llvm_unreachable("Predecessor appears twice");
 #else
@@ -507,7 +513,8 @@ static unsigned CountTerminators(MachineBasicBlock *MBB,
       break;
     }
     --I;
-    if (!I->isTerminator()) break;
+    if (!I->isTerminator())
+      break;
     ++NumTerms;
   }
   return NumTerms;
@@ -532,24 +539,23 @@ static bool blockEndsInUnreachable(const MachineBasicBlock *MBB) {
 /// MinCommonTailLength  Minimum size of tail block to be merged.
 /// CommonTailLen   Out parameter to record the size of the shared tail between
 ///                 MBB1 and MBB2
-/// I1, I2          Iterator references that will be changed to point to the first
+/// I1, I2          Iterator references that will be changed to point to the
+///                 first
 ///                 instruction in the common tail shared by MBB1,MBB2
-/// SuccBB          A common successor of MBB1, MBB2 which are in a canonical form
+/// SuccBB          A common successor of MBB1, MBB2 which are in a canonical
+///                 form
 ///                 relative to SuccBB
 /// PredBB          The layout predecessor of SuccBB, if any.
 /// EHScopeMembership  map from block to EH scope #.
 /// AfterPlacement  True if we are merging blocks after layout. Stricter
 ///                 thresholds apply to prevent undoing tail-duplication.
-static bool
-ProfitableToMerge(MachineBasicBlock *MBB1, MachineBasicBlock *MBB2,
-                  unsigned MinCommonTailLength, unsigned &CommonTailLen,
-                  MachineBasicBlock::iterator &I1,
-                  MachineBasicBlock::iterator &I2, MachineBasicBlock *SuccBB,
-                  MachineBasicBlock *PredBB,
-                  DenseMap<const MachineBasicBlock *, int> &EHScopeMembership,
-                  bool AfterPlacement,
-                  MBFIWrapper &MBBFreqInfo,
-                  ProfileSummaryInfo *PSI) {
+static bool ProfitableToMerge(
+    MachineBasicBlock *MBB1, MachineBasicBlock *MBB2,
+    unsigned MinCommonTailLength, unsigned &CommonTailLen,
+    MachineBasicBlock::iterator &I1, MachineBasicBlock::iterator &I2,
+    MachineBasicBlock *SuccBB, MachineBasicBlock *PredBB,
+    DenseMap<const MachineBasicBlock *, int> &EHScopeMembership,
+    bool AfterPlacement, MBFIWrapper &MBBFreqInfo, ProfileSummaryInfo *PSI) {
   // It is never profitable to tail-merge blocks from two different EH scopes.
   if (!EHScopeMembership.empty()) {
     auto EHScope1 = EHScopeMembership.find(MBB1);
@@ -596,8 +602,8 @@ ProfitableToMerge(MachineBasicBlock *MBB1, MachineBasicBlock *MBB2,
   // are unlikely to become a fallthrough target after machine block placement.
   // Tail merging these blocks is unlikely to create additional unconditional
   // branches, and will reduce the size of this cold code.
-  if (FullBlockTail1 && FullBlockTail2 &&
-      blockEndsInUnreachable(MBB1) && blockEndsInUnreachable(MBB2))
+  if (FullBlockTail1 && FullBlockTail2 && blockEndsInUnreachable(MBB1) &&
+      blockEndsInUnreachable(MBB2))
     return true;
 
   // If one of the blocks can be completely merged and happens to be in
@@ -633,8 +639,7 @@ ProfitableToMerge(MachineBasicBlock *MBB1, MachineBasicBlock *MBB2,
   unsigned EffectiveTailLen = CommonTailLen;
   if (SuccBB && MBB1 != PredBB && MBB2 != PredBB &&
       (MBB1->succ_size() == 1 || !AfterPlacement) &&
-      !MBB1->back().isBarrier() &&
-      !MBB2->back().isBarrier())
+      !MBB1->back().isBarrier() && !MBB2->back().isBarrier())
     ++EffectiveTailLen;
 
   // Check if the common tail is long enough to be worthwhile.
@@ -646,10 +651,9 @@ ProfitableToMerge(MachineBasicBlock *MBB1, MachineBasicBlock *MBB2,
   // branch instruction, which is likely to be smaller than the 2
   // instructions that would be deleted in the merge.
   MachineFunction *MF = MBB1->getParent();
-  bool OptForSize =
-      MF->getFunction().hasOptSize() ||
-      (llvm::shouldOptimizeForSize(MBB1, PSI, &MBBFreqInfo) &&
-       llvm::shouldOptimizeForSize(MBB2, PSI, &MBBFreqInfo));
+  bool OptForSize = MF->getFunction().hasOptSize() ||
+                    (llvm::shouldOptimizeForSize(MBB1, PSI, &MBBFreqInfo) &&
+                     llvm::shouldOptimizeForSize(MBB2, PSI, &MBBFreqInfo));
   return EffectiveTailLen >= 2 && OptForSize &&
          (FullBlockTail1 || FullBlockTail2);
 }
@@ -668,10 +672,8 @@ unsigned BranchFolder::ComputeSameTails(unsigned CurHash,
     for (MPIterator I = std::prev(CurMPIter); I->getHash() == CurHash; --I) {
       unsigned CommonTailLen;
       if (ProfitableToMerge(CurMPIter->getBlock(), I->getBlock(),
-                            MinCommonTailLength,
-                            CommonTailLen, TrialBBI1, TrialBBI2,
-                            SuccBB, PredBB,
-                            EHScopeMembership,
+                            MinCommonTailLength, CommonTailLen, TrialBBI1,
+                            TrialBBI2, SuccBB, PredBB, EHScopeMembership,
                             AfterBlockPlacement, MBBFreqInfo, PSI)) {
         if (CommonTailLen > maxCommonTailLength) {
           SameTails.clear();
@@ -679,8 +681,7 @@ unsigned BranchFolder::ComputeSameTails(unsigned CurHash,
           HighestMPIter = CurMPIter;
           SameTails.push_back(SameTailElt(CurMPIter, TrialBBI1));
         }
-        if (HighestMPIter == CurMPIter &&
-            CommonTailLen == maxCommonTailLength)
+        if (HighestMPIter == CurMPIter && CommonTailLen == maxCommonTailLength)
           SameTails.push_back(SameTailElt(I, TrialBBI2));
       }
       if (I == B)
@@ -732,7 +733,7 @@ bool BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
   }
 
   MachineBasicBlock::iterator BBI =
-    SameTails[commonTailIndex].getTailStartPos();
+      SameTails[commonTailIndex].getTailStartPos();
   MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
 
   LLVM_DEBUG(dbgs() << "\nSplitting " << printMBBReference(*MBB) << ", size "
@@ -741,8 +742,9 @@ bool BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
   // If the split block unconditionally falls-thru to SuccBB, it will be
   // merged. In control flow terms it should then take SuccBB's name. e.g. If
   // SuccBB is an inner loop, the common tail is still part of the inner loop.
-  const BasicBlock *BB = (SuccBB && MBB->succ_size() == 1) ?
-    SuccBB->getBasicBlock() : MBB->getBasicBlock();
+  const BasicBlock *BB = (SuccBB && MBB->succ_size() == 1)
+                             ? SuccBB->getBasicBlock()
+                             : MBB->getBasicBlock();
   MachineBasicBlock *newMBB = SplitMBBAt(*MBB, BBI, BB);
   if (!newMBB) {
     LLVM_DEBUG(dbgs() << "... failed!");
@@ -759,9 +761,8 @@ bool BranchFolder::CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
   return true;
 }
 
-static void
-mergeOperations(MachineBasicBlock::iterator MBBIStartPos,
-                MachineBasicBlock &MBBCommon) {
+static void mergeOperations(MachineBasicBlock::iterator MBBIStartPos,
+                            MachineBasicBlock &MBBCommon) {
   MachineBasicBlock *MBB = MBBIStartPos->getParent();
   // Note CommonTailLen does not necessarily matches the size of
   // the common BB nor all its instructions because of debug
@@ -813,13 +814,13 @@ void BranchFolder::mergeCommonTails(unsigned commonTailIndex) {
   MachineBasicBlock *MBB = SameTails[commonTailIndex].getBlock();
 
   std::vector<MachineBasicBlock::iterator> NextCommonInsts(SameTails.size());
-  for (unsigned int i = 0 ; i != SameTails.size() ; ++i) {
+  for (unsigned int i = 0; i != SameTails.size(); ++i) {
     if (i != commonTailIndex) {
       NextCommonInsts[i] = SameTails[i].getTailStartPos();
       mergeOperations(SameTails[i].getTailStartPos(), *MBB);
     } else {
       assert(SameTails[i].getTailStartPos() == MBB->begin() &&
-          "MBB is not a common tail only block");
+             "MBB is not a common tail only block");
     }
   }
 
@@ -827,17 +828,17 @@ void BranchFolder::mergeCommonTails(unsigned commonTailIndex) {
     if (!countsAsInstruction(MI))
       continue;
     DebugLoc DL = MI.getDebugLoc();
-    for (unsigned int i = 0 ; i < NextCommonInsts.size() ; i++) {
+    for (unsigned int i = 0; i < NextCommonInsts.size(); i++) {
       if (i == commonTailIndex)
         continue;
 
       auto &Pos = NextCommonInsts[i];
       assert(Pos != SameTails[i].getBlock()->end() &&
-          "Reached BB end within common tail");
+             "Reached BB end within common tail");
       while (!countsAsInstruction(*Pos)) {
         ++Pos;
         assert(Pos != SameTails[i].getBlock()->end() &&
-            "Reached BB end within common tail");
+               "Reached BB end within common tail");
       }
       assert(MI.isIdenticalTo(*Pos) && "Expected matching MIIs!");
       DL = DILocation::getMergedLocation(DL, Pos->getDebugLoc());
@@ -917,9 +918,8 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
 
     // Build SameTails, identifying the set of blocks with this hash code
     // and with the maximum number of instructions in common.
-    unsigned maxCommonTailLength = ComputeSameTails(CurHash,
-                                                    MinCommonTailLength,
-                                                    SuccBB, PredBB);
+    unsigned maxCommonTailLength =
+        ComputeSameTails(CurHash, MinCommonTailLength, SuccBB, PredBB);
 
     // If we didn't find any pair that has at least MinCommonTailLength
     // instructions in common, remove all blocks with this hash code and retry.
@@ -969,8 +969,8 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
          !SameTails[commonTailIndex].tailIsWholeBlock())) {
       // None of the blocks consist entirely of the common tail.
       // Split a block so that one does.
-      if (!CreateCommonTailOnlyBlock(PredBB, SuccBB,
-                                     maxCommonTailLength, commonTailIndex)) {
+      if (!CreateCommonTailOnlyBlock(PredBB, SuccBB, maxCommonTailLength,
+                                     commonTailIndex)) {
         RemoveBlocksWithHash(CurHash, SuccBB, PredBB);
         continue;
       }
@@ -989,7 +989,7 @@ bool BranchFolder::TryTailMergeBlocks(MachineBasicBlock *SuccBB,
     // Traversal must be forwards so erases work.
     LLVM_DEBUG(dbgs() << "\nUsing common tail in " << printMBBReference(*MBB)
                       << " for ");
-    for (unsigned int i=0, e = SameTails.size(); i != e; ++i) {
+    for (unsigned int i = 0, e = SameTails.size(); i != e; ++i) {
       if (commonTailIndex == i)
         continue;
       LLVM_DEBUG(dbgs() << printMBBReference(*SameTails[i].getBlock())
@@ -1053,7 +1053,8 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
 
   for (MachineFunction::iterator I = std::next(MF.begin()), E = MF.end();
        I != E; ++I) {
-    if (I->pred_size() < 2) continue;
+    if (I->pred_size() < 2)
+      continue;
     SmallPtrSet<MachineBasicBlock *, 8> UniquePreds;
     MachineBasicBlock *IBB = &*I;
     MachineBasicBlock *PredBB = &*std::prev(I);
@@ -1126,8 +1127,8 @@ bool BranchFolder::TailMergeBlocks(MachineFunction &MF) {
           TII->removeBranch(*PBB);
           if (!Cond.empty())
             // reinsert conditional branch only, for now
-            TII->insertBranch(*PBB, (TBB == IBB) ? FBB : TBB, nullptr,
-                              NewCond, dl);
+            TII->insertBranch(*PBB, (TBB == IBB) ? FBB : TBB, nullptr, NewCond,
+                              dl);
         }
 
         MergePotentials.push_back(MergePotentialsElt(HashEndOfMBB(*PBB), PBB));
@@ -1258,8 +1259,10 @@ static bool IsBetterFallthrough(MachineBasicBlock *MBB1,
 
   // If there is a clear successor ordering we make sure that one block
   // will fall through to the next
-  if (MBB1->isSuccessor(MBB2)) return true;
-  if (MBB2->isSuccessor(MBB1)) return false;
+  if (MBB1->isSuccessor(MBB2))
+    return true;
+  if (MBB2->isSuccessor(MBB1))
+    return false;
 
   return MBB2I->isCall() && !MBB1I->isCall();
 }
@@ -1353,7 +1356,8 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       SameEHScope) {
     salvageDebugInfoFromEmptyBlock(TII, *MBB);
     // Dead block?  Leave for cleanup later.
-    if (MBB->pred_empty()) return MadeChange;
+    if (MBB->pred_empty())
+      return MadeChange;
 
     if (FallThrough == MF.end()) {
       // TODO: Simplify preds to not branch here if possible!
@@ -1366,7 +1370,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       // Rewrite all predecessors of the old block to go to the fallthrough
       // instead.
       while (!MBB->pred_empty()) {
-        MachineBasicBlock *Pred = *(MBB->pred_end()-1);
+        MachineBasicBlock *Pred = *(MBB->pred_end() - 1);
         Pred->ReplaceUsesOfBlockWith(MBB, &*FallThrough);
       }
       // If MBB was the target of a jump table, update jump tables to go to the
@@ -1409,8 +1413,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
     // This has to check PrevBB->succ_size() because EH edges are ignored by
     // analyzeBranch.
     if (PriorCond.empty() && !PriorTBB && MBB->pred_size() == 1 &&
-        PrevBB.succ_size() == 1 &&
-        !MBB->hasAddressTaken() && !MBB->isEHPad()) {
+        PrevBB.succ_size() == 1 && !MBB->hasAddressTaken() && !MBB->isEHPad()) {
       LLVM_DEBUG(dbgs() << "\nMerging into block: " << PrevBB
                         << "From MBB: " << *MBB);
       // Remove redundant DBG_VALUEs first.
@@ -1420,12 +1423,13 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
         MachineBasicBlock::iterator MBBIter = MBB->begin();
         // Check if DBG_VALUE at the end of PrevBB is identical to the
         // DBG_VALUE at the beginning of MBB.
-        while (PrevBBIter != PrevBB.begin() && MBBIter != MBB->end()
-               && PrevBBIter->isDebugInstr() && MBBIter->isDebugInstr()) {
+        while (PrevBBIter != PrevBB.begin() && MBBIter != MBB->end() &&
+               PrevBBIter->isDebugInstr() && MBBIter->isDebugInstr()) {
           if (!MBBIter->isIdenticalTo(*PrevBBIter))
             break;
           MachineInstr &DuplicateDbg = *MBBIter;
-          ++MBBIter; -- PrevBBIter;
+          ++MBBIter;
+          --PrevBBIter;
           DuplicateDbg.eraseFromParent();
         }
       }
@@ -1490,8 +1494,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       // last block in the function, we'd just keep swapping the two blocks for
       // last.  Only do the swap if one is clearly better to fall through than
       // the other.
-      if (FallThrough == --MF.end() &&
-          !IsBetterFallthrough(PriorTBB, MBB))
+      if (FallThrough == --MF.end() && !IsBetterFallthrough(PriorTBB, MBB))
         DoTransform = false;
 
       if (DoTransform) {
@@ -1575,9 +1578,8 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
 
     // If this branch is the only thing in its block, see if we can forward
     // other blocks across it.
-    if (CurTBB && CurCond.empty() && !CurFBB &&
-        IsBranchOnlyBlock(MBB) && CurTBB != MBB &&
-        !MBB->hasAddressTaken() && !MBB->isEHPad()) {
+    if (CurTBB && CurCond.empty() && !CurFBB && IsBranchOnlyBlock(MBB) &&
+        CurTBB != MBB && !MBB->hasAddressTaken() && !MBB->isEHPad()) {
       DebugLoc dl = getBranchDebugLoc(*MBB);
       // This block may contain just an unconditional branch.  Because there can
       // be 'non-branch terminators' in the block, try removing the branch and
@@ -1605,8 +1607,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
           if (!PredHasNoFallThrough && PrevBB.isSuccessor(MBB) &&
               PriorTBB != MBB && PriorFBB != MBB) {
             if (!PriorTBB) {
-              assert(PriorCond.empty() && !PriorFBB &&
-                     "Bad branch analysis");
+              assert(PriorCond.empty() && !PriorFBB && "Bad branch analysis");
               PriorTBB = MBB;
             } else {
               assert(!PriorFBB && "Machine CFG out of date!");
@@ -1621,7 +1622,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
           size_t PI = 0;
           bool DidChange = false;
           bool HasBranchToSelf = false;
-          while(PI != MBB->pred_size()) {
+          while (PI != MBB->pred_size()) {
             MachineBasicBlock *PMBB = *(MBB->pred_begin() + PI);
             if (PMBB == MBB) {
               // If this block has an uncond branch to itself, leave it.
@@ -1654,7 +1655,8 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
           if (DidChange) {
             ++NumBranchOpts;
             MadeChange = true;
-            if (!HasBranchToSelf) return MadeChange;
+            if (!HasBranchToSelf)
+              return MadeChange;
           }
         }
       }
@@ -1745,8 +1747,7 @@ bool BranchFolder::OptimizeBlock(MachineBasicBlock *MBB) {
       // possible and not remove the "!FallThrough()->isEHPad" condition below.
       MachineBasicBlock *PrevTBB = nullptr, *PrevFBB = nullptr;
       SmallVector<MachineOperand, 4> PrevCond;
-      if (FallThrough != MF.end() &&
-          !FallThrough->isEHPad() &&
+      if (FallThrough != MF.end() && !FallThrough->isEHPad() &&
           !TII->analyzeBranch(PrevBB, PrevTBB, PrevFBB, PrevCond, true) &&
           PrevBB.isSuccessor(&*FallThrough)) {
         MBB->moveAfter(&MF.back());
@@ -1799,12 +1800,11 @@ static void addRegAndItsAliases(Register Reg, const TargetRegisterInfo *TRI,
 /// the preferred location. This function also gathers uses and defs of the
 /// instructions from the insertion point to the end of the block. The data is
 /// used by HoistCommonCodeInSuccs to ensure safety.
-static
-MachineBasicBlock::iterator findHoistingInsertPosAndDeps(MachineBasicBlock *MBB,
-                                                  const TargetInstrInfo *TII,
-                                                  const TargetRegisterInfo *TRI,
-                                                  SmallSet<Register, 4> &Uses,
-                                                  SmallSet<Register, 4> &Defs) {
+static MachineBasicBlock::iterator
+findHoistingInsertPosAndDeps(MachineBasicBlock *MBB, const TargetInstrInfo *TII,
+                             const TargetRegisterInfo *TRI,
+                             SmallSet<Register, 4> &Uses,
+                             SmallSet<Register, 4> &Defs) {
   MachineBasicBlock::iterator Loc = MBB->getFirstTerminator();
   if (!TII->isUnpredicatedTerminator(*Loc))
     return MBB->end();
@@ -1902,7 +1902,8 @@ bool BranchFolder::HoistCommonCodeInSuccs(MachineBasicBlock *MBB) {
   if (TII->analyzeBranch(*MBB, TBB, FBB, Cond, true) || !TBB || Cond.empty())
     return false;
 
-  if (!FBB) FBB = findFalseBlock(MBB, TBB);
+  if (!FBB)
+    FBB = findFalseBlock(MBB, TBB);
   if (!FBB)
     // Malformed bcc? True and false blocks are the same?
     return false;
@@ -1917,7 +1918,7 @@ bool BranchFolder::HoistCommonCodeInSuccs(MachineBasicBlock *MBB) {
   // point to the end of the block.
   SmallSet<Register, 4> Uses, Defs;
   MachineBasicBlock::iterator Loc =
-    findHoistingInsertPosAndDeps(MBB, TII, TRI, Uses, Defs);
+      findHoistingInsertPosAndDeps(MBB, TII, TRI, Uses, Defs);
   if (Loc == MBB->end())
     return false;
 
diff --git a/llvm/lib/CodeGen/BranchFolding.h b/llvm/lib/CodeGen/BranchFolding.h
index 63b2ef04b21ba0f..0cdd3bddd0019d7 100644
--- a/llvm/lib/CodeGen/BranchFolding.h
+++ b/llvm/lib/CodeGen/BranchFolding.h
@@ -28,172 +28,160 @@ class ProfileSummaryInfo;
 class TargetInstrInfo;
 class TargetRegisterInfo;
 
-  class LLVM_LIBRARY_VISIBILITY BranchFolder {
+class LLVM_LIBRARY_VISIBILITY BranchFolder {
+public:
+  explicit BranchFolder(bool DefaultEnableTailMerge, bool CommonHoist,
+                        MBFIWrapper &FreqInfo,
+                        const MachineBranchProbabilityInfo &ProbInfo,
+                        ProfileSummaryInfo *PSI,
+                        // Min tail length to merge. Defaults to commandline
+                        // flag. Ignored for optsize.
+                        unsigned MinTailLength = 0);
+
+  /// Perform branch folding, tail merging, and other CFG optimizations on the
+  /// given function.  Block placement changes the layout and may create new
+  /// tail merging opportunities.
+  bool OptimizeFunction(MachineFunction &MF, const TargetInstrInfo *tii,
+                        const TargetRegisterInfo *tri,
+                        MachineLoopInfo *mli = nullptr,
+                        bool AfterPlacement = false);
+
+private:
+  class MergePotentialsElt {
+    unsigned Hash;
+    MachineBasicBlock *Block;
+
   public:
-    explicit BranchFolder(bool DefaultEnableTailMerge, bool CommonHoist,
-                          MBFIWrapper &FreqInfo,
-                          const MachineBranchProbabilityInfo &ProbInfo,
-                          ProfileSummaryInfo *PSI,
-                          // Min tail length to merge. Defaults to commandline
-                          // flag. Ignored for optsize.
-                          unsigned MinTailLength = 0);
-
-    /// Perhaps branch folding, tail merging and other CFG optimizations on the
-    /// given function.  Block placement changes the layout and may create new
-    /// tail merging opportunities.
-    bool OptimizeFunction(MachineFunction &MF, const TargetInstrInfo *tii,
-                          const TargetRegisterInfo *tri,
-                          MachineLoopInfo *mli = nullptr,
-                          bool AfterPlacement = false);
-
-  private:
-    class MergePotentialsElt {
-      unsigned Hash;
-      MachineBasicBlock *Block;
-
-    public:
-      MergePotentialsElt(unsigned h, MachineBasicBlock *b)
-        : Hash(h), Block(b) {}
-
-      unsigned getHash() const { return Hash; }
-      MachineBasicBlock *getBlock() const { return Block; }
-
-      void setBlock(MachineBasicBlock *MBB) {
-        Block = MBB;
-      }
-
-      bool operator<(const MergePotentialsElt &) const;
-    };
-
-    using MPIterator = std::vector<MergePotentialsElt>::iterator;
-
-    std::vector<MergePotentialsElt> MergePotentials;
-    SmallPtrSet<const MachineBasicBlock*, 2> TriedMerging;
-    DenseMap<const MachineBasicBlock *, int> EHScopeMembership;
-
-    class SameTailElt {
-      MPIterator MPIter;
-      MachineBasicBlock::iterator TailStartPos;
-
-    public:
-      SameTailElt(MPIterator mp, MachineBasicBlock::iterator tsp)
+    MergePotentialsElt(unsigned h, MachineBasicBlock *b) : Hash(h), Block(b) {}
+
+    unsigned getHash() const { return Hash; }
+    MachineBasicBlock *getBlock() const { return Block; }
+
+    void setBlock(MachineBasicBlock *MBB) { Block = MBB; }
+
+    bool operator<(const MergePotentialsElt &) const;
+  };
+
+  using MPIterator = std::vector<MergePotentialsElt>::iterator;
+
+  std::vector<MergePotentialsElt> MergePotentials;
+  SmallPtrSet<const MachineBasicBlock *, 2> TriedMerging;
+  DenseMap<const MachineBasicBlock *, int> EHScopeMembership;
+
+  class SameTailElt {
+    MPIterator MPIter;
+    MachineBasicBlock::iterator TailStartPos;
+
+  public:
+    SameTailElt(MPIterator mp, MachineBasicBlock::iterator tsp)
         : MPIter(mp), TailStartPos(tsp) {}
 
-      MPIterator getMPIter() const {
-        return MPIter;
-      }
-
-      MergePotentialsElt &getMergePotentialsElt() const {
-        return *getMPIter();
-      }
-
-      MachineBasicBlock::iterator getTailStartPos() const {
-        return TailStartPos;
-      }
-
-      unsigned getHash() const {
-        return getMergePotentialsElt().getHash();
-      }
-
-      MachineBasicBlock *getBlock() const {
-        return getMergePotentialsElt().getBlock();
-      }
-
-      bool tailIsWholeBlock() const {
-        return TailStartPos == getBlock()->begin();
-      }
-
-      void setBlock(MachineBasicBlock *MBB) {
-        getMergePotentialsElt().setBlock(MBB);
-      }
-
-      void setTailStartPos(MachineBasicBlock::iterator Pos) {
-        TailStartPos = Pos;
-      }
-    };
-    std::vector<SameTailElt> SameTails;
-
-    bool AfterBlockPlacement = false;
-    bool EnableTailMerge = false;
-    bool EnableHoistCommonCode = false;
-    bool UpdateLiveIns = false;
-    unsigned MinCommonTailLength;
-    const TargetInstrInfo *TII = nullptr;
-    const MachineRegisterInfo *MRI = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    MachineLoopInfo *MLI = nullptr;
-    LivePhysRegs LiveRegs;
-
-  private:
-    MBFIWrapper &MBBFreqInfo;
-    const MachineBranchProbabilityInfo &MBPI;
-    ProfileSummaryInfo *PSI;
-
-    bool TailMergeBlocks(MachineFunction &MF);
-    bool TryTailMergeBlocks(MachineBasicBlock* SuccBB,
-                       MachineBasicBlock* PredBB,
-                       unsigned MinCommonTailLength);
-    void setCommonTailEdgeWeights(MachineBasicBlock &TailMBB);
-
-    /// Delete the instruction OldInst and everything after it, replacing it
-    /// with an unconditional branch to NewDest.
-    void replaceTailWithBranchTo(MachineBasicBlock::iterator OldInst,
-                                 MachineBasicBlock &NewDest);
-
-    /// Given a machine basic block and an iterator into it, split the MBB so
-    /// that the part before the iterator falls into the part starting at the
-    /// iterator.  This returns the new MBB.
-    MachineBasicBlock *SplitMBBAt(MachineBasicBlock &CurMBB,
-                                  MachineBasicBlock::iterator BBI1,
-                                  const BasicBlock *BB);
-
-    /// Look through all the blocks in MergePotentials that have hash CurHash
-    /// (guaranteed to match the last element).  Build the vector SameTails of
-    /// all those that have the (same) largest number of instructions in common
-    /// of any pair of these blocks.  SameTails entries contain an iterator into
-    /// MergePotentials (from which the MachineBasicBlock can be found) and a
-    /// MachineBasicBlock::iterator into that MBB indicating the instruction
-    /// where the matching code sequence begins.  Order of elements in SameTails
-    /// is the reverse of the order in which those blocks appear in
-    /// MergePotentials (where they are not necessarily consecutive).
-    unsigned ComputeSameTails(unsigned CurHash, unsigned minCommonTailLength,
-                              MachineBasicBlock *SuccBB,
-                              MachineBasicBlock *PredBB);
-
-    /// Remove all blocks with hash CurHash from MergePotentials, restoring
-    /// branches at ends of blocks as appropriate.
-    void RemoveBlocksWithHash(unsigned CurHash, MachineBasicBlock* SuccBB,
-                                                MachineBasicBlock* PredBB);
-
-    /// None of the blocks to be tail-merged consist only of the common tail.
-    /// Create a block that does by splitting one.
-    bool CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
-                                   MachineBasicBlock *SuccBB,
-                                   unsigned maxCommonTailLength,
-                                   unsigned &commonTailIndex);
-
-    /// Create merged DebugLocs of identical instructions across SameTails and
-    /// assign it to the instruction in common tail; merge MMOs and undef flags.
-    void mergeCommonTails(unsigned commonTailIndex);
-
-    bool OptimizeBranches(MachineFunction &MF);
-
-    /// Analyze and optimize control flow related to the specified block. This
-    /// is never called on the entry block.
-    bool OptimizeBlock(MachineBasicBlock *MBB);
-
-    /// Remove the specified dead machine basic block from the function,
-    /// updating the CFG.
-    void RemoveDeadBlock(MachineBasicBlock *MBB);
-
-    /// Hoist common instruction sequences at the start of basic blocks to their
-    /// common predecessor.
-    bool HoistCommonCode(MachineFunction &MF);
-
-    /// If the successors of MBB has common instruction sequence at the start of
-    /// the function, move the instructions before MBB terminator if it's legal.
-    bool HoistCommonCodeInSuccs(MachineBasicBlock *MBB);
+    MPIterator getMPIter() const { return MPIter; }
+
+    MergePotentialsElt &getMergePotentialsElt() const { return *getMPIter(); }
+
+    MachineBasicBlock::iterator getTailStartPos() const { return TailStartPos; }
+
+    unsigned getHash() const { return getMergePotentialsElt().getHash(); }
+
+    MachineBasicBlock *getBlock() const {
+      return getMergePotentialsElt().getBlock();
+    }
+
+    bool tailIsWholeBlock() const {
+      return TailStartPos == getBlock()->begin();
+    }
+
+    void setBlock(MachineBasicBlock *MBB) {
+      getMergePotentialsElt().setBlock(MBB);
+    }
+
+    void setTailStartPos(MachineBasicBlock::iterator Pos) {
+      TailStartPos = Pos;
+    }
   };
+  std::vector<SameTailElt> SameTails;
+
+  bool AfterBlockPlacement = false;
+  bool EnableTailMerge = false;
+  bool EnableHoistCommonCode = false;
+  bool UpdateLiveIns = false;
+  unsigned MinCommonTailLength;
+  const TargetInstrInfo *TII = nullptr;
+  const MachineRegisterInfo *MRI = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  MachineLoopInfo *MLI = nullptr;
+  LivePhysRegs LiveRegs;
+
+private:
+  MBFIWrapper &MBBFreqInfo;
+  const MachineBranchProbabilityInfo &MBPI;
+  ProfileSummaryInfo *PSI;
+
+  bool TailMergeBlocks(MachineFunction &MF);
+  bool TryTailMergeBlocks(MachineBasicBlock *SuccBB, MachineBasicBlock *PredBB,
+                          unsigned MinCommonTailLength);
+  void setCommonTailEdgeWeights(MachineBasicBlock &TailMBB);
+
+  /// Delete the instruction OldInst and everything after it, replacing it
+  /// with an unconditional branch to NewDest.
+  void replaceTailWithBranchTo(MachineBasicBlock::iterator OldInst,
+                               MachineBasicBlock &NewDest);
+
+  /// Given a machine basic block and an iterator into it, split the MBB so
+  /// that the part before the iterator falls into the part starting at the
+  /// iterator.  This returns the new MBB.
+  MachineBasicBlock *SplitMBBAt(MachineBasicBlock &CurMBB,
+                                MachineBasicBlock::iterator BBI1,
+                                const BasicBlock *BB);
+
+  /// Look through all the blocks in MergePotentials that have hash CurHash
+  /// (guaranteed to match the last element).  Build the vector SameTails of
+  /// all those that have the (same) largest number of instructions in common
+  /// of any pair of these blocks.  SameTails entries contain an iterator into
+  /// MergePotentials (from which the MachineBasicBlock can be found) and a
+  /// MachineBasicBlock::iterator into that MBB indicating the instruction
+  /// where the matching code sequence begins.  Order of elements in SameTails
+  /// is the reverse of the order in which those blocks appear in
+  /// MergePotentials (where they are not necessarily consecutive).
+  unsigned ComputeSameTails(unsigned CurHash, unsigned minCommonTailLength,
+                            MachineBasicBlock *SuccBB,
+                            MachineBasicBlock *PredBB);
+
+  /// Remove all blocks with hash CurHash from MergePotentials, restoring
+  /// branches at ends of blocks as appropriate.
+  void RemoveBlocksWithHash(unsigned CurHash, MachineBasicBlock *SuccBB,
+                            MachineBasicBlock *PredBB);
+
+  /// None of the blocks to be tail-merged consist only of the common tail.
+  /// Create a block that does by splitting one.
+  bool CreateCommonTailOnlyBlock(MachineBasicBlock *&PredBB,
+                                 MachineBasicBlock *SuccBB,
+                                 unsigned maxCommonTailLength,
+                                 unsigned &commonTailIndex);
+
+  /// Create merged DebugLocs of identical instructions across SameTails and
+  /// assign it to the instruction in common tail; merge MMOs and undef flags.
+  void mergeCommonTails(unsigned commonTailIndex);
+
+  bool OptimizeBranches(MachineFunction &MF);
+
+  /// Analyze and optimize control flow related to the specified block. This
+  /// is never called on the entry block.
+  bool OptimizeBlock(MachineBasicBlock *MBB);
+
+  /// Remove the specified dead machine basic block from the function,
+  /// updating the CFG.
+  void RemoveDeadBlock(MachineBasicBlock *MBB);
+
+  /// Hoist common instruction sequences at the start of basic blocks to their
+  /// common predecessor.
+  bool HoistCommonCode(MachineFunction &MF);
+
+  /// If the successors of MBB have a common instruction sequence at the start,
+  /// move those instructions before the MBB terminator if it is legal.
+  bool HoistCommonCodeInSuccs(MachineBasicBlock *MBB);
+};
 
 } // end namespace llvm
 
diff --git a/llvm/lib/CodeGen/BranchRelaxation.cpp b/llvm/lib/CodeGen/BranchRelaxation.cpp
index f50eb5e1730a328..898738aacc7e22f 100644
--- a/llvm/lib/CodeGen/BranchRelaxation.cpp
+++ b/llvm/lib/CodeGen/BranchRelaxation.cpp
@@ -72,8 +72,8 @@ class BranchRelaxation : public MachineFunctionPass {
       if (Alignment <= ParentAlign)
         return alignTo(PO, Alignment);
 
-      // The alignment of this MBB is larger than the function's alignment, so we
-      // can't tell whether or not it will insert nops. Assume that it will.
+      // The alignment of this MBB is larger than the function's alignment, so
+      // we can't tell whether or not it will insert nops. Assume that it will.
       return alignTo(PO, Alignment) + Alignment.value() - ParentAlign.value();
     }
   };
@@ -103,7 +103,8 @@ class BranchRelaxation : public MachineFunctionPass {
   MachineBasicBlock *splitBlockBeforeInstr(MachineInstr &MI,
                                            MachineBasicBlock *DestBB);
   void adjustBlockOffsets(MachineBasicBlock &Start);
-  bool isBlockInRange(const MachineInstr &MI, const MachineBasicBlock &BB) const;
+  bool isBlockInRange(const MachineInstr &MI,
+                      const MachineBasicBlock &BB) const;
 
   bool fixupConditionalBranch(MachineInstr &MI);
   bool fixupUnconditionalBranch(MachineInstr &MI);
@@ -199,7 +200,8 @@ void BranchRelaxation::scanFunction() {
 }
 
 /// computeBlockSize - Compute the size for MBB.
-uint64_t BranchRelaxation::computeBlockSize(const MachineBasicBlock &MBB) const {
+uint64_t
+BranchRelaxation::computeBlockSize(const MachineBasicBlock &MBB) const {
   uint64_t Size = 0;
   for (const MachineInstr &MI : MBB)
     Size += TII->getInstSizeInBytes(MI);
@@ -328,8 +330,8 @@ BranchRelaxation::splitBlockBeforeInstr(MachineInstr &MI,
 
 /// isBlockInRange - Returns true if the distance between specific MI and
 /// specific BB can fit in MI's displacement field.
-bool BranchRelaxation::isBlockInRange(
-  const MachineInstr &MI, const MachineBasicBlock &DestBB) const {
+bool BranchRelaxation::isBlockInRange(const MachineInstr &MI,
+                                      const MachineBasicBlock &DestBB) const {
   int64_t BrOffset = getInstrOffset(MI);
   int64_t DestOffset = BlockInfo[DestBB.getNumber()].Offset;
 
@@ -369,7 +371,7 @@ bool BranchRelaxation::fixupConditionalBranch(MachineInstr &MI) {
   };
   auto insertBranch = [&](MachineBasicBlock *MBB, MachineBasicBlock *TBB,
                           MachineBasicBlock *FBB,
-                          SmallVectorImpl<MachineOperand>& Cond) {
+                          SmallVectorImpl<MachineOperand> &Cond) {
     unsigned &BBSize = BlockInfo[MBB->getNumber()].Size;
     int NewBrSize = 0;
     TII->insertBranch(*MBB, TBB, FBB, Cond, DL, &NewBrSize);
@@ -479,8 +481,8 @@ bool BranchRelaxation::fixupConditionalBranch(MachineInstr &MI) {
       NewBB->addSuccessor(FBB);
     }
 
-    // We now have an appropriate fall-through block in place (either naturally or
-    // just created), so we can use the inverted the condition.
+    // We now have an appropriate fall-through block in place (either naturally
+    // or just created), so we can use the inverted condition.
     MachineBasicBlock &NextBB = *std::next(MachineFunction::iterator(MBB));
 
     LLVM_DEBUG(dbgs() << "  Insert B to " << printMBBReference(*TBB)
@@ -577,8 +579,8 @@ bool BranchRelaxation::fixupUnconditionalBranch(MachineInstr &MI) {
   // Create the optional restore block and, initially, place it at the end of
   // function. That block will be placed later if it's used; otherwise, it will
   // be erased.
-  MachineBasicBlock *RestoreBB = createNewBlockAfter(MF->back(),
-                                                     DestBB->getBasicBlock());
+  MachineBasicBlock *RestoreBB =
+      createNewBlockAfter(MF->back(), DestBB->getBasicBlock());
   std::prev(RestoreBB->getIterator())
       ->setIsEndSection(RestoreBB->isEndSection());
   RestoreBB->setIsEndSection(false);
diff --git a/llvm/lib/CodeGen/BreakFalseDeps.cpp b/llvm/lib/CodeGen/BreakFalseDeps.cpp
index 618e41894b29bc7..4da30b00b044593 100644
--- a/llvm/lib/CodeGen/BreakFalseDeps.cpp
+++ b/llvm/lib/CodeGen/BreakFalseDeps.cpp
@@ -1,4 +1,5 @@
-//==- llvm/CodeGen/BreakFalseDeps.cpp - Break False Dependency Fix -*- C++ -*==//
+//==- llvm/CodeGen/BreakFalseDeps.cpp - Break False Dependency Fix -*- C++
+//-*==//
 //
 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
 // See https://llvm.org/LICENSE.txt for license information.
@@ -65,7 +66,7 @@ class BreakFalseDeps : public MachineFunctionPass {
 
   MachineFunctionProperties getRequiredProperties() const override {
     return MachineFunctionProperties().set(
-      MachineFunctionProperties::Property::NoVRegs);
+        MachineFunctionProperties::Property::NoVRegs);
   }
 
 private:
@@ -82,7 +83,7 @@ class BreakFalseDeps : public MachineFunctionPass {
   /// Returns true if it was able to find a true dependency, thus not requiring
   /// a dependency breaking instruction regardless of clearance.
   bool pickBestRegisterForUndef(MachineInstr *MI, unsigned OpIdx,
-    unsigned Pref);
+                                unsigned Pref);
 
   /// Return true if it makes sense to break dependence on a partial
   /// def or undef use.
@@ -101,14 +102,15 @@ class BreakFalseDeps : public MachineFunctionPass {
 #define DEBUG_TYPE "break-false-deps"
 
 char BreakFalseDeps::ID = 0;
-INITIALIZE_PASS_BEGIN(BreakFalseDeps, DEBUG_TYPE, "BreakFalseDeps", false, false)
+INITIALIZE_PASS_BEGIN(BreakFalseDeps, DEBUG_TYPE, "BreakFalseDeps", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(ReachingDefAnalysis)
 INITIALIZE_PASS_END(BreakFalseDeps, DEBUG_TYPE, "BreakFalseDeps", false, false)
 
 FunctionPass *llvm::createBreakFalseDeps() { return new BreakFalseDeps(); }
 
 bool BreakFalseDeps::pickBestRegisterForUndef(MachineInstr *MI, unsigned OpIdx,
-  unsigned Pref) {
+                                              unsigned Pref) {
 
   // We can't change tied operands.
   if (MI->isRegTiedToDefOperand(OpIdx))
@@ -135,7 +137,7 @@ bool BreakFalseDeps::pickBestRegisterForUndef(MachineInstr *MI, unsigned OpIdx,
 
   // Get the undef operand's register class
   const TargetRegisterClass *OpRC =
-    TII->getRegClass(MI->getDesc(), OpIdx, TRI, *MF);
+      TII->getRegClass(MI->getDesc(), OpIdx, TRI, *MF);
   assert(OpRC && "Not a valid register class");
 
   // If the instruction has a true dependency, we can hide the false dependency
@@ -215,8 +217,8 @@ void BreakFalseDeps::processDefs(MachineInstr *MI) {
     return;
 
   for (unsigned i = 0,
-    e = MI->isVariadic() ? MI->getNumOperands() : MCID.getNumDefs();
-    i != e; ++i) {
+                e = MI->isVariadic() ? MI->getNumOperands() : MCID.getNumDefs();
+       i != e; ++i) {
     MachineOperand &MO = MI->getOperand(i);
     if (!MO.isReg() || !MO.getReg())
       continue;
diff --git a/llvm/lib/CodeGen/CFIFixup.cpp b/llvm/lib/CodeGen/CFIFixup.cpp
index 837dbd77d07361a..bd1ea05e463da14 100644
--- a/llvm/lib/CodeGen/CFIFixup.cpp
+++ b/llvm/lib/CodeGen/CFIFixup.cpp
@@ -111,7 +111,8 @@ bool CFIFixup::runOnMachineFunction(MachineFunction &MF) {
     bool HasFrameOnEntry : 1;
     bool HasFrameOnExit : 1;
   };
-  SmallVector<BlockFlags, 32> BlockInfo(NumBlocks, {false, false, false, false});
+  SmallVector<BlockFlags, 32> BlockInfo(NumBlocks,
+                                        {false, false, false, false});
   BlockInfo[0].Reachable = true;
   BlockInfo[0].StrongNoFrameOnEntry = true;
 
diff --git a/llvm/lib/CodeGen/CFIInstrInserter.cpp b/llvm/lib/CodeGen/CFIInstrInserter.cpp
index 6a024287f00286f..7ad388aed41e3ed 100644
--- a/llvm/lib/CodeGen/CFIInstrInserter.cpp
+++ b/llvm/lib/CodeGen/CFIInstrInserter.cpp
@@ -28,14 +28,14 @@
 #include "llvm/MC/MCDwarf.h"
 using namespace llvm;
 
-static cl::opt<bool> VerifyCFI("verify-cfiinstrs",
-    cl::desc("Verify Call Frame Information instructions"),
-    cl::init(false),
-    cl::Hidden);
+static cl::opt<bool>
+    VerifyCFI("verify-cfiinstrs",
+              cl::desc("Verify Call Frame Information instructions"),
+              cl::init(false), cl::Hidden);
 
 namespace {
 class CFIInstrInserter : public MachineFunctionPass {
- public:
+public:
   static char ID;
 
   CFIInstrInserter() : MachineFunctionPass(ID) {
@@ -64,7 +64,7 @@ class CFIInstrInserter : public MachineFunctionPass {
     return insertedCFI;
   }
 
- private:
+private:
   struct MBBCFAInfo {
     MachineBasicBlock *MBB;
     /// Value of cfa offset valid at basic block entry.
@@ -132,7 +132,7 @@ class CFIInstrInserter : public MachineFunctionPass {
   /// outgoing offset and register of the MBB.
   unsigned verify(MachineFunction &MF);
 };
-}  // namespace
+} // namespace
 
 char CFIInstrInserter::ID = 0;
 INITIALIZE_PASS(CFIInstrInserter, "cfi-instr-inserter",
@@ -303,7 +303,8 @@ bool CFIInstrInserter::insertCFIInstrs(MachineFunction &MF) {
   BitVector SetDifference;
   for (MachineBasicBlock &MBB : MF) {
     // Skip the first MBB in a function
-    if (MBB.getNumber() == MF.front().getNumber()) continue;
+    if (MBB.getNumber() == MF.front().getNumber())
+      continue;
 
     const MBBCFAInfo &MBBInfo = MBBVector[MBB.getNumber()];
     auto MBBI = MBBInfo.MBB->begin();
diff --git a/llvm/lib/CodeGen/CMakeLists.txt b/llvm/lib/CodeGen/CMakeLists.txt
index 389c70d04f17ba3..051e607f0ae07aa 100644
--- a/llvm/lib/CodeGen/CMakeLists.txt
+++ b/llvm/lib/CodeGen/CMakeLists.txt
@@ -1,294 +1,167 @@
 if (DEFINED LLVM_HAVE_TF_AOT OR LLVM_HAVE_TFLITE)
-  include(TensorFlowCompile)
-  set(LLVM_RAEVICT_MODEL_PATH_DEFAULT "models/regalloc-eviction")
+include(TensorFlowCompile) set(LLVM_RAEVICT_MODEL_PATH_DEFAULT
+                               "models/regalloc-eviction")
 
-  set(LLVM_RAEVICT_MODEL_CURRENT_URL "<UNSPECIFIED>" CACHE STRING "URL to download the LLVM register allocator eviction model")
+    set(LLVM_RAEVICT_MODEL_CURRENT_URL
+        "<UNSPECIFIED>" CACHE STRING
+        "URL to download the LLVM register allocator eviction model")
 
-  if (DEFINED LLVM_HAVE_TF_AOT)
-    tf_find_and_compile(
-      ${LLVM_RAEVICT_MODEL_PATH}
-      ${LLVM_RAEVICT_MODEL_CURRENT_URL}
-      ${LLVM_RAEVICT_MODEL_PATH_DEFAULT}
-      "../Analysis/models/gen-regalloc-eviction-test-model.py"
-      serve
-      action
-      RegAllocEvictModel
-      llvm::RegAllocEvictModel
-    )
-  endif()
+        if (DEFINED LLVM_HAVE_TF_AOT) tf_find_and_compile(
+            ${LLVM_RAEVICT_MODEL_PATH} ${LLVM_RAEVICT_MODEL_CURRENT_URL} $ {
+              LLVM_RAEVICT_MODEL_PATH_DEFAULT
+            } "../Analysis/models/gen-regalloc-eviction-test-model.py" serve
+                action RegAllocEvictModel llvm::RegAllocEvictModel) endif()
 
-  if (LLVM_HAVE_TFLITE)
-    list(APPEND MLLinkDeps ${tensorflow_c_api} ${tensorflow_fx})
-  endif()
-endif()
+            if (LLVM_HAVE_TFLITE) list(APPEND MLLinkDeps ${tensorflow_c_api} ${
+                tensorflow_fx}) endif() endif()
 
-# This provides the implementation of MVT and LLT.
-# Be careful to append deps on this, since Targets' tablegens depend on this.
-add_llvm_component_library(LLVMCodeGenTypes
-  LowLevelType.cpp
-  PARTIAL_SOURCES_INTENDED
+#This provides the implementation of MVT and LLT.
+#Be careful to append deps on this, since Targets' tablegens depend on this.
+                add_llvm_component_library(
+                    LLVMCodeGenTypes LowLevelType.cpp PARTIAL_SOURCES_INTENDED
 
-  DEPENDS
-  vt_gen
+                        DEPENDS vt_gen
 
-  LINK_COMPONENTS
-  Support
-  )
+                            LINK_COMPONENTS Support)
 
-add_llvm_component_library(LLVMCodeGen
-  AggressiveAntiDepBreaker.cpp
-  AllocationOrder.cpp
-  Analysis.cpp
-  AssignmentTrackingAnalysis.cpp
-  AtomicExpandPass.cpp
-  BasicTargetTransformInfo.cpp
-  BranchFolding.cpp
-  BranchRelaxation.cpp
-  BreakFalseDeps.cpp
-  BasicBlockSections.cpp
-  BasicBlockSectionsProfileReader.cpp
-  CalcSpillWeights.cpp
-  CallBrPrepare.cpp
-  CallingConvLower.cpp
-  CFGuardLongjmp.cpp
-  CFIFixup.cpp
-  CFIInstrInserter.cpp
-  CodeGen.cpp
-  CodeGenCommonISel.cpp
-  CodeGenPassBuilder.cpp
-  CodeGenPrepare.cpp
-  CommandFlags.cpp
-  ComplexDeinterleavingPass.cpp
-  CriticalAntiDepBreaker.cpp
-  DeadMachineInstructionElim.cpp
-  DetectDeadLanes.cpp
-  DFAPacketizer.cpp
-  DwarfEHPrepare.cpp
-  EarlyIfConversion.cpp
-  EdgeBundles.cpp
-  EHContGuardCatchret.cpp
-  ExecutionDomainFix.cpp
-  ExpandLargeDivRem.cpp
-  ExpandLargeFpConvert.cpp
-  ExpandMemCmp.cpp
-  ExpandPostRAPseudos.cpp
-  ExpandReductions.cpp
-  ExpandVectorPredication.cpp
-  FaultMaps.cpp
-  FEntryInserter.cpp
-  FinalizeISel.cpp
-  FixupStatepointCallerSaved.cpp
-  FuncletLayout.cpp
-  GCMetadata.cpp
-  GCMetadataPrinter.cpp
-  GCRootLowering.cpp
-  GlobalMerge.cpp
-  HardwareLoops.cpp
-  IfConversion.cpp
-  ImplicitNullChecks.cpp
-  IndirectBrExpandPass.cpp
-  InlineSpiller.cpp
-  InterferenceCache.cpp
-  InterleavedAccessPass.cpp
-  InterleavedLoadCombinePass.cpp
-  IntrinsicLowering.cpp
-  JMCInstrumenter.cpp
-  KCFI.cpp
-  LatencyPriorityQueue.cpp
-  LazyMachineBlockFrequencyInfo.cpp
-  LexicalScopes.cpp
-  LiveDebugVariables.cpp
-  LiveIntervals.cpp
-  LiveInterval.cpp
-  LiveIntervalUnion.cpp
-  LivePhysRegs.cpp
-  LiveRangeCalc.cpp
-  LiveIntervalCalc.cpp
-  LiveRangeEdit.cpp
-  LiveRangeShrink.cpp
-  LiveRegMatrix.cpp
-  LiveRegUnits.cpp
-  LiveStacks.cpp
-  LiveVariables.cpp
-  LLVMTargetMachine.cpp
-  LocalStackSlotAllocation.cpp
-  LoopTraversal.cpp
-  LowLevelTypeUtils.cpp
-  LowerEmuTLS.cpp
-  MachineBasicBlock.cpp
-  MachineBlockFrequencyInfo.cpp
-  MachineBlockPlacement.cpp
-  MachineBranchProbabilityInfo.cpp
-  MachineCFGPrinter.cpp
-  MachineCombiner.cpp
-  MachineCopyPropagation.cpp
-  MachineCSE.cpp
-  MachineCheckDebugify.cpp
-  MachineCycleAnalysis.cpp
-  MachineDebugify.cpp
-  MachineDominanceFrontier.cpp
-  MachineDominators.cpp
-  MachineFrameInfo.cpp
-  MachineFunction.cpp
-  MachineFunctionPass.cpp
-  MachineFunctionPrinterPass.cpp
-  MachineFunctionSplitter.cpp
-  MachineInstrBundle.cpp
-  MachineInstr.cpp
-  MachineLateInstrsCleanup.cpp
-  MachineLICM.cpp
-  MachineLoopInfo.cpp
-  MachineLoopUtils.cpp
-  MachineModuleInfo.cpp
-  MachineModuleInfoImpls.cpp
-  MachineModuleSlotTracker.cpp
-  MachineOperand.cpp
-  MachineOptimizationRemarkEmitter.cpp
-  MachineOutliner.cpp
-  MachinePassManager.cpp
-  MachinePipeliner.cpp
-  MachinePostDominators.cpp
-  MachineRegionInfo.cpp
-  MachineRegisterInfo.cpp
-  MachineScheduler.cpp
-  MachineSink.cpp
-  MachineSizeOpts.cpp
-  MachineSSAContext.cpp
-  MachineSSAUpdater.cpp
-  MachineStripDebug.cpp
-  MachineTraceMetrics.cpp
-  MachineUniformityAnalysis.cpp
-  MachineVerifier.cpp
-  MIRFSDiscriminator.cpp
-  MIRSampleProfile.cpp
-  MIRYamlMapping.cpp
-  MLRegAllocEvictAdvisor.cpp
-  MLRegAllocPriorityAdvisor.cpp
-  ModuloSchedule.cpp
-  MultiHazardRecognizer.cpp
-  PatchableFunction.cpp
-  MBFIWrapper.cpp
-  MIRPrinter.cpp
-  MIRPrintingPass.cpp
-  MacroFusion.cpp
-  NonRelocatableStringpool.cpp
-  OptimizePHIs.cpp
-  ParallelCG.cpp
-  PeepholeOptimizer.cpp
-  PHIElimination.cpp
-  PHIEliminationUtils.cpp
-  PostRAHazardRecognizer.cpp
-  PostRASchedulerList.cpp
-  PreISelIntrinsicLowering.cpp
-  ProcessImplicitDefs.cpp
-  PrologEpilogInserter.cpp
-  PseudoProbeInserter.cpp
-  PseudoSourceValue.cpp
-  RDFGraph.cpp
-  RDFLiveness.cpp
-  RDFRegisters.cpp
-  ReachingDefAnalysis.cpp
-  RegAllocBase.cpp
-  RegAllocBasic.cpp
-  RegAllocEvictionAdvisor.cpp
-  RegAllocFast.cpp
-  RegAllocGreedy.cpp
-  RegAllocPBQP.cpp
-  RegAllocPriorityAdvisor.cpp
-  RegAllocScore.cpp
-  RegisterClassInfo.cpp
-  RegisterCoalescer.cpp
-  RegisterPressure.cpp
-  RegisterScavenging.cpp
-  GCEmptyBasicBlocks.cpp
-  RemoveRedundantDebugValues.cpp
-  RenameIndependentSubregs.cpp
-  MachineStableHash.cpp
-  MIRVRegNamerUtils.cpp
-  MIRNamerPass.cpp
-  MIRCanonicalizerPass.cpp
-  RegisterUsageInfo.cpp
-  RegUsageInfoCollector.cpp
-  RegUsageInfoPropagate.cpp
-  ReplaceWithVeclib.cpp
-  ResetMachineFunctionPass.cpp
-  RegisterBank.cpp
-  RegisterBankInfo.cpp
-  SafeStack.cpp
-  SafeStackLayout.cpp
-  SanitizerBinaryMetadata.cpp
-  ScheduleDAG.cpp
-  ScheduleDAGInstrs.cpp
-  ScheduleDAGPrinter.cpp
-  ScoreboardHazardRecognizer.cpp
-  SelectOptimize.cpp
-  ShadowStackGCLowering.cpp
-  ShrinkWrap.cpp
-  SjLjEHPrepare.cpp
-  SlotIndexes.cpp
-  SpillPlacement.cpp
-  SplitKit.cpp
-  StackColoring.cpp
-  StackFrameLayoutAnalysisPass.cpp
-  StackMapLivenessAnalysis.cpp
-  StackMaps.cpp
-  StackProtector.cpp
-  StackSlotColoring.cpp
-  SwiftErrorValueTracking.cpp
-  SwitchLoweringUtils.cpp
-  TailDuplication.cpp
-  TailDuplicator.cpp
-  TargetFrameLoweringImpl.cpp
-  TargetInstrInfo.cpp
-  TargetLoweringBase.cpp
-  TargetLoweringObjectFileImpl.cpp
-  TargetOptionsImpl.cpp
-  TargetPassConfig.cpp
-  TargetRegisterInfo.cpp
-  TargetSchedule.cpp
-  TargetSubtargetInfo.cpp
-  TwoAddressInstructionPass.cpp
-  TypePromotion.cpp
-  UnreachableBlockElim.cpp
-  ValueTypes.cpp
-  VLIWMachineScheduler.cpp
-  VirtRegMap.cpp
-  WasmEHPrepare.cpp
-  WinEHPrepare.cpp
-  XRayInstrumentation.cpp
-  ${GeneratedMLSources}
+add_llvm_component_library(LLVMCodeGen
+  AggressiveAntiDepBreaker.cpp
+  AllocationOrder.cpp
+  Analysis.cpp
+  AssignmentTrackingAnalysis.cpp
+  AtomicExpandPass.cpp
+  BasicTargetTransformInfo.cpp
+  BranchFolding.cpp
+  BranchRelaxation.cpp
+  BreakFalseDeps.cpp
+  BasicBlockSections.cpp
+  BasicBlockSectionsProfileReader.cpp
+  CalcSpillWeights.cpp
+  CallBrPrepare.cpp
+  CallingConvLower.cpp
+  CFGuardLongjmp.cpp
+  CFIFixup.cpp
+  CFIInstrInserter.cpp
+  CodeGen.cpp
+  CodeGenCommonISel.cpp
+  CodeGenPassBuilder.cpp
+  CodeGenPrepare.cpp
+  CommandFlags.cpp
+  ComplexDeinterleavingPass.cpp
+  CriticalAntiDepBreaker.cpp
+  DeadMachineInstructionElim.cpp
+  DetectDeadLanes.cpp
+  DFAPacketizer.cpp
+  DwarfEHPrepare.cpp
+  EarlyIfConversion.cpp
+  EdgeBundles.cpp
+  EHContGuardCatchret.cpp
+  ExecutionDomainFix.cpp
+  ExpandLargeDivRem.cpp
+  ExpandLargeFpConvert.cpp
+  ExpandMemCmp.cpp
+  ExpandPostRAPseudos.cpp
+  ExpandReductions.cpp
+  ExpandVectorPredication.cpp
+  FaultMaps.cpp
+  FEntryInserter.cpp
+  FinalizeISel.cpp
+  FixupStatepointCallerSaved.cpp
+  FuncletLayout.cpp
+  GCMetadata.cpp
+  GCMetadataPrinter.cpp
+  GCRootLowering.cpp
+  GlobalMerge.cpp
+  HardwareLoops.cpp
+  IfConversion.cpp
+  ImplicitNullChecks.cpp
+  IndirectBrExpandPass.cpp
+  InlineSpiller.cpp
+  InterferenceCache.cpp
+  InterleavedAccessPass.cpp
+  InterleavedLoadCombinePass.cpp
+  IntrinsicLowering.cpp
+  JMCInstrumenter.cpp
+  KCFI.cpp
+  LatencyPriorityQueue.cpp
+  LazyMachineBlockFrequencyInfo.cpp
+  LexicalScopes.cpp
+  LiveDebugVariables.cpp
+  LiveIntervals.cpp
+  LiveInterval.cpp
+  LiveIntervalUnion.cpp
+  LivePhysRegs.cpp
+  LiveRangeCalc.cpp
+  LiveIntervalCalc.cpp
+  LiveRangeEdit.cpp
+  LiveRangeShrink.cpp
+  LiveRegMatrix.cpp
+  LiveRegUnits.cpp
+  LiveStacks.cpp
+  LiveVariables.cpp
+  LLVMTargetMachine.cpp
+  LocalStackSlotAllocation.cpp
+  LoopTraversal.cpp
+  LowLevelTypeUtils.cpp
+  LowerEmuTLS.cpp
+  MachineBasicBlock.cpp
+  MachineBlockFrequencyInfo.cpp
+  MachineBlockPlacement.cpp
+  MachineBranchProbabilityInfo.cpp
+  MachineCFGPrinter.cpp
+  MachineCombiner.cpp
+  MachineCopyPropagation.cpp
+  MachineCSE.cpp
+  MachineCheckDebugify.cpp
+  MachineCycleAnalysis.cpp
+  MachineDebugify.cpp
+  MachineDominanceFrontier.cpp
+  MachineDominators.cpp
+  MachineFrameInfo.cpp
+  MachineFunction.cpp
+  MachineFunctionPass.cpp
+  MachineFunctionPrinterPass.cpp
+  MachineFunctionSplitter.cpp
+  MachineInstrBundle.cpp
+  MachineInstr.cpp
+  MachineLateInstrsCleanup.cpp
+  MachineLICM.cpp
+  MachineLoopInfo.cpp
+  MachineLoopUtils.cpp
+  MachineModuleInfo.cpp
+  MachineModuleInfoImpls.cpp
+  MachineModuleSlotTracker.cpp
+  MachineOperand.cpp
+  MachineOptimizationRemarkEmitter.cpp
+  MachineOutliner.cpp
+  MachinePassManager.cpp
+  MachinePipeliner.cpp
+  MachinePostDominators.cpp
+  MachineRegionInfo.cpp
+  MachineRegisterInfo.cpp
+  MachineScheduler.cpp
+  MachineSink.cpp
+  MachineSizeOpts.cpp
+  MachineSSAContext.cpp
+  MachineSSAUpdater.cpp
+  MachineStripDebug.cpp
+  MachineTraceMetrics.cpp
+  MachineUniformityAnalysis.cpp
+  MachineVerifier.cpp
+  MIRFSDiscriminator.cpp
+  MIRSampleProfile.cpp
+  MIRYamlMapping.cpp
+  MLRegAllocEvictAdvisor.cpp
+  MLRegAllocPriorityAdvisor.cpp
+  ModuloSchedule.cpp
+  MultiHazardRecognizer.cpp
+  PatchableFunction.cpp
+  MBFIWrapper.cpp
+  MIRPrinter.cpp
+  MIRPrintingPass.cpp
+  MacroFusion.cpp
+  NonRelocatableStringpool.cpp
+  OptimizePHIs.cpp
+  ParallelCG.cpp
+  PeepholeOptimizer.cpp
+  PHIElimination.cpp
+  PHIEliminationUtils.cpp
+  PostRAHazardRecognizer.cpp
+  PostRASchedulerList.cpp
+  PreISelIntrinsicLowering.cpp
+  ProcessImplicitDefs.cpp
+  PrologEpilogInserter.cpp
+  PseudoProbeInserter.cpp
+  PseudoSourceValue.cpp
+  RDFGraph.cpp
+  RDFLiveness.cpp
+  RDFRegisters.cpp
+  ReachingDefAnalysis.cpp
+  RegAllocBase.cpp
+  RegAllocBasic.cpp
+  RegAllocEvictionAdvisor.cpp
+  RegAllocFast.cpp
+  RegAllocGreedy.cpp
+  RegAllocPBQP.cpp
+  RegAllocPriorityAdvisor.cpp
+  RegAllocScore.cpp
+  RegisterClassInfo.cpp
+  RegisterCoalescer.cpp
+  RegisterPressure.cpp
+  RegisterScavenging.cpp
+  GCEmptyBasicBlocks.cpp
+  RemoveRedundantDebugValues.cpp
+  RenameIndependentSubregs.cpp
+  MachineStableHash.cpp
+  MIRVRegNamerUtils.cpp
+  MIRNamerPass.cpp
+  MIRCanonicalizerPass.cpp
+  RegisterUsageInfo.cpp
+  RegUsageInfoCollector.cpp
+  RegUsageInfoPropagate.cpp
+  ReplaceWithVeclib.cpp
+  ResetMachineFunctionPass.cpp
+  RegisterBank.cpp
+  RegisterBankInfo.cpp
+  SafeStack.cpp
+  SafeStackLayout.cpp
+  SanitizerBinaryMetadata.cpp
+  ScheduleDAG.cpp
+  ScheduleDAGInstrs.cpp
+  ScheduleDAGPrinter.cpp
+  ScoreboardHazardRecognizer.cpp
+  SelectOptimize.cpp
+  ShadowStackGCLowering.cpp
+  ShrinkWrap.cpp
+  SjLjEHPrepare.cpp
+  SlotIndexes.cpp
+  SpillPlacement.cpp
+  SplitKit.cpp
+  StackColoring.cpp
+  StackFrameLayoutAnalysisPass.cpp
+  StackMapLivenessAnalysis.cpp
+  StackMaps.cpp
+  StackProtector.cpp
+  StackSlotColoring.cpp
+  SwiftErrorValueTracking.cpp
+  SwitchLoweringUtils.cpp
+  TailDuplication.cpp
+  TailDuplicator.cpp
+  TargetFrameLoweringImpl.cpp
+  TargetInstrInfo.cpp
+  TargetLoweringBase.cpp
+  TargetLoweringObjectFileImpl.cpp
+  TargetOptionsImpl.cpp
+  TargetPassConfig.cpp
+  TargetRegisterInfo.cpp
+  TargetSchedule.cpp
+  TargetSubtargetInfo.cpp
+  TwoAddressInstructionPass.cpp
+  TypePromotion.cpp
+  UnreachableBlockElim.cpp
+  ValueTypes.cpp
+  VLIWMachineScheduler.cpp
+  VirtRegMap.cpp
+  WasmEHPrepare.cpp
+  WinEHPrepare.cpp
+  XRayInstrumentation.cpp
+  ${GeneratedMLSources}
 
-  LiveDebugValues/LiveDebugValues.cpp
-  LiveDebugValues/VarLocBasedImpl.cpp
-  LiveDebugValues/InstrRefBasedImpl.cpp
+  LiveDebugValues/LiveDebugValues.cpp
+  LiveDebugValues/VarLocBasedImpl.cpp
+  LiveDebugValues/InstrRefBasedImpl.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/PBQP
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/PBQP
 
-  LINK_LIBS ${LLVM_PTHREAD_LIB} ${MLLinkDeps}
+  LINK_LIBS ${LLVM_PTHREAD_LIB} ${MLLinkDeps}
 
-  DEPENDS
-  intrinsics_gen
-  ${MLDeps}
+  DEPENDS
+  intrinsics_gen
+  ${MLDeps}
 
-  LINK_COMPONENTS
-  Analysis
-  BitReader
-  BitWriter
-  CodeGenTypes
-  Core
-  MC
-  ObjCARC
-  ProfileData
-  Scalar
-  Support
-  Target
-  TargetParser
-  TransformUtils
-  )
+  LINK_COMPONENTS
+  Analysis
+  BitReader
+  BitWriter
+  CodeGenTypes
+  Core
+  MC
+  ObjCARC
+  ProfileData
+  Scalar
+  Support
+  Target
+  TargetParser
+  TransformUtils
+  )
 
-add_subdirectory(SelectionDAG)
-add_subdirectory(AsmPrinter)
-add_subdirectory(MIRParser)
-add_subdirectory(GlobalISel)
+add_subdirectory(SelectionDAG)
+add_subdirectory(AsmPrinter)
+add_subdirectory(MIRParser)
+add_subdirectory(GlobalISel)
diff --git a/llvm/lib/CodeGen/CalcSpillWeights.cpp b/llvm/lib/CodeGen/CalcSpillWeights.cpp
index 6e98e2384ef975f..46ed012045793e8 100644
--- a/llvm/lib/CodeGen/CalcSpillWeights.cpp
+++ b/llvm/lib/CodeGen/CalcSpillWeights.cpp
@@ -131,11 +131,11 @@ bool VirtRegAuxInfo::isRematerializable(const LiveInterval &LI,
 bool VirtRegAuxInfo::isLiveAtStatepointVarArg(LiveInterval &LI) {
   return any_of(VRM.getRegInfo().reg_operands(LI.reg()),
                 [](MachineOperand &MO) {
-    MachineInstr *MI = MO.getParent();
-    if (MI->getOpcode() != TargetOpcode::STATEPOINT)
-      return false;
-    return StatepointOpers(MI).getVarIdx() <= MO.getOperandNo();
-  });
+                  MachineInstr *MI = MO.getParent();
+                  if (MI->getOpcode() != TargetOpcode::STATEPOINT)
+                    return false;
+                  return StatepointOpers(MI).getVarIdx() <= MO.getOperandNo();
+                });
 }
 
 void VirtRegAuxInfo::calculateSpillWeightAndHint(LiveInterval &LI) {
diff --git a/llvm/lib/CodeGen/CallingConvLower.cpp b/llvm/lib/CodeGen/CallingConvLower.cpp
index b7152587a9fa059..a9037315aca46b1 100644
--- a/llvm/lib/CodeGen/CallingConvLower.cpp
+++ b/llvm/lib/CodeGen/CallingConvLower.cpp
@@ -36,7 +36,7 @@ CCState::CCState(CallingConv::ID CC, bool IsVarArg, MachineFunction &MF,
   StackSize = 0;
 
   clearByValRegsInfo();
-  UsedRegs.resize((TRI.getNumRegs()+31)/32);
+  UsedRegs.resize((TRI.getNumRegs() + 31) / 32);
 }
 
 /// Allocate space on the stack large enough to pass an argument by value.
@@ -46,7 +46,7 @@ void CCState::HandleByVal(unsigned ValNo, MVT ValVT, MVT LocVT,
                           CCValAssign::LocInfo LocInfo, int MinSize,
                           Align MinAlign, ISD::ArgFlagsTy ArgFlags) {
   Align Alignment = ArgFlags.getNonZeroByValAlign();
-  unsigned Size  = ArgFlags.getByValSize();
+  unsigned Size = ArgFlags.getByValSize();
   if (MinSize > (int)Size)
     Size = MinSize;
   if (MinAlign > Alignment)
@@ -81,9 +81,8 @@ bool CCState::IsShadowAllocatedReg(MCRegister Reg) const {
 
 /// Analyze an array of argument values,
 /// incorporating info about the formals into this state.
-void
-CCState::AnalyzeFormalArguments(const SmallVectorImpl<ISD::InputArg> &Ins,
-                                CCAssignFn Fn) {
+void CCState::AnalyzeFormalArguments(const SmallVectorImpl<ISD::InputArg> &Ins,
+                                     CCAssignFn Fn) {
   unsigned NumArgs = Ins.size();
 
   for (unsigned i = 0; i != NumArgs; ++i) {
@@ -131,8 +130,8 @@ void CCState::AnalyzeCallOperands(const SmallVectorImpl<ISD::OutputArg> &Outs,
     ISD::ArgFlagsTy ArgFlags = Outs[i].Flags;
     if (Fn(i, ArgVT, ArgVT, CCValAssign::Full, ArgFlags, *this)) {
 #ifndef NDEBUG
-      dbgs() << "Call operand #" << i << " has unhandled type "
-             << ArgVT << '\n';
+      dbgs() << "Call operand #" << i << " has unhandled type " << ArgVT
+             << '\n';
 #endif
       llvm_unreachable(nullptr);
     }
@@ -149,8 +148,8 @@ void CCState::AnalyzeCallOperands(SmallVectorImpl<MVT> &ArgVTs,
     ISD::ArgFlagsTy ArgFlags = Flags[i];
     if (Fn(i, ArgVT, ArgVT, CCValAssign::Full, ArgFlags, *this)) {
 #ifndef NDEBUG
-      dbgs() << "Call operand #" << i << " has unhandled type "
-             << ArgVT << '\n';
+      dbgs() << "Call operand #" << i << " has unhandled type " << ArgVT
+             << '\n';
 #endif
       llvm_unreachable(nullptr);
     }
@@ -166,8 +165,7 @@ void CCState::AnalyzeCallResult(const SmallVectorImpl<ISD::InputArg> &Ins,
     ISD::ArgFlagsTy Flags = Ins[i].Flags;
     if (Fn(i, VT, VT, CCValAssign::Full, Flags, *this)) {
 #ifndef NDEBUG
-      dbgs() << "Call result #" << i << " has unhandled type "
-             << VT << '\n';
+      dbgs() << "Call result #" << i << " has unhandled type " << VT << '\n';
 #endif
       llvm_unreachable(nullptr);
     }
@@ -178,8 +176,7 @@ void CCState::AnalyzeCallResult(const SmallVectorImpl<ISD::InputArg> &Ins,
 void CCState::AnalyzeCallResult(MVT VT, CCAssignFn Fn) {
   if (Fn(0, VT, VT, CCValAssign::Full, ISD::ArgFlagsTy(), *this)) {
 #ifndef NDEBUG
-    dbgs() << "Call result has unhandled type "
-           << VT << '\n';
+    dbgs() << "Call result has unhandled type " << VT << '\n';
 #endif
     llvm_unreachable(nullptr);
   }
diff --git a/llvm/lib/CodeGen/CodeGenCommonISel.cpp b/llvm/lib/CodeGen/CodeGenCommonISel.cpp
index 577c5dbc8e2da8e..f2f21f7a6c7dea1 100644
--- a/llvm/lib/CodeGen/CodeGenCommonISel.cpp
+++ b/llvm/lib/CodeGen/CodeGenCommonISel.cpp
@@ -25,8 +25,7 @@ using namespace llvm;
 
 /// Add a successor MBB to ParentMBB< creating a new MachineBB for BB if SuccMBB
 /// is 0.
-MachineBasicBlock *
-StackProtectorDescriptor::addSuccessorMBB(
+MachineBasicBlock *StackProtectorDescriptor::addSuccessorMBB(
     const BasicBlock *BB, MachineBasicBlock *ParentMBB, bool IsLikely,
     MachineBasicBlock *SuccMBB) {
   // If SuccBB has not been created yet, create it.
@@ -98,8 +97,8 @@ static bool MIIsInTerminatorSequence(const MachineInstr &MI) {
   // Grab the copy source...
   MachineInstr::const_mop_iterator OPI2 = OPI;
   ++OPI2;
-  assert(OPI2 != MI.operands_end()
-         && "Should have a copy implying we should have 2 arguments.");
+  assert(OPI2 != MI.operands_end() &&
+         "Should have a copy implying we should have 2 arguments.");
 
   // Make sure that the copy dest is not a vreg when the copy source is a
   // physical register.
@@ -158,7 +157,7 @@ llvm::findSplitPointForStackProtector(MachineBasicBlock *BB,
       --Previous;
       if (Previous->isCall())
         return SplitPoint;
-    } while(Previous->getOpcode() != TII.getCallFrameSetupOpcode());
+    } while (Previous->getOpcode() != TII.getCallFrameSetupOpcode());
 
     return Previous;
   }
@@ -215,8 +214,8 @@ static MachineOperand *getSalvageOpsForCopy(const MachineRegisterInfo &MRI,
 }
 
 static MachineOperand *getSalvageOpsForTrunc(const MachineRegisterInfo &MRI,
-                                            MachineInstr &Trunc,
-                                            SmallVectorImpl<uint64_t> &Ops) {
+                                             MachineInstr &Trunc,
+                                             SmallVectorImpl<uint64_t> &Ops) {
   assert(Trunc.getOpcode() == TargetOpcode::G_TRUNC && "Must be a G_TRUNC");
 
   const auto FromLLT = MRI.getType(Trunc.getOperand(1).getReg());
diff --git a/llvm/lib/CodeGen/CodeGenPrepare.cpp b/llvm/lib/CodeGen/CodeGenPrepare.cpp
index 7fc62f28873ffac..127e08558ddca76 100644
--- a/llvm/lib/CodeGen/CodeGenPrepare.cpp
+++ b/llvm/lib/CodeGen/CodeGenPrepare.cpp
@@ -2198,8 +2198,7 @@ static bool OptimizeExtractBits(BinaryOperator *ShiftI, ConstantInt *CI,
 ///     %ctz = phi i64 [ 64, %entry ], [ %z, %cond.false ]
 ///
 /// If the transform is performed, return true and set ModifiedDT to true.
-static bool despeculateCountZeros(IntrinsicInst *CountZeros,
-                                  LoopInfo &LI,
+static bool despeculateCountZeros(IntrinsicInst *CountZeros, LoopInfo &LI,
                                   const TargetLowering *TLI,
                                   const DataLayout *DL, ModifyDT &ModifiedDT,
                                   SmallSet<BasicBlock *, 32> &FreshBBs,
@@ -4720,9 +4719,9 @@ bool AddressingModeMatcher::matchOperationAddr(User *AddrInst, unsigned Opcode,
     // Try to match an integer constant second to increase its chance of ending
     // up in `BaseOffs`, resp. decrease its chance of ending up in `BaseReg`.
     int First = 0, Second = 1;
-    if (isa<ConstantInt>(AddrInst->getOperand(First))
-      && !isa<ConstantInt>(AddrInst->getOperand(Second)))
-        std::swap(First, Second);
+    if (isa<ConstantInt>(AddrInst->getOperand(First)) &&
+        !isa<ConstantInt>(AddrInst->getOperand(Second)))
+      std::swap(First, Second);
     AddrMode.InBounds = false;
     if (matchAddr(AddrInst->getOperand(First), Depth + 1) &&
         matchAddr(AddrInst->getOperand(Second), Depth + 1))
@@ -4806,32 +4805,32 @@ bool AddressingModeMatcher::matchOperationAddr(User *AddrInst, unsigned Opcode,
     if (VariableOperand == -1) {
       AddrMode.BaseOffs += ConstantOffset;
       if (matchAddr(AddrInst->getOperand(0), Depth + 1)) {
-          if (!cast<GEPOperator>(AddrInst)->isInBounds())
-            AddrMode.InBounds = false;
-          return true;
+        if (!cast<GEPOperator>(AddrInst)->isInBounds())
+          AddrMode.InBounds = false;
+        return true;
       }
       AddrMode.BaseOffs -= ConstantOffset;
 
       if (EnableGEPOffsetSplit && isa<GetElementPtrInst>(AddrInst) &&
           TLI.shouldConsiderGEPOffsetSplit() && Depth == 0 &&
           ConstantOffset > 0) {
-          // Record GEPs with non-zero offsets as candidates for splitting in
-          // the event that the offset cannot fit into the r+i addressing mode.
-          // Simple and common case that only one GEP is used in calculating the
-          // address for the memory access.
-          Value *Base = AddrInst->getOperand(0);
-          auto *BaseI = dyn_cast<Instruction>(Base);
-          auto *GEP = cast<GetElementPtrInst>(AddrInst);
-          if (isa<Argument>(Base) || isa<GlobalValue>(Base) ||
-              (BaseI && !isa<CastInst>(BaseI) &&
-               !isa<GetElementPtrInst>(BaseI))) {
-            // Make sure the parent block allows inserting non-PHI instructions
-            // before the terminator.
-            BasicBlock *Parent = BaseI ? BaseI->getParent()
-                                       : &GEP->getFunction()->getEntryBlock();
-            if (!Parent->getTerminator()->isEHPad())
+        // Record GEPs with non-zero offsets as candidates for splitting in
+        // the event that the offset cannot fit into the r+i addressing mode.
+        // Simple and common case that only one GEP is used in calculating the
+        // address for the memory access.
+        Value *Base = AddrInst->getOperand(0);
+        auto *BaseI = dyn_cast<Instruction>(Base);
+        auto *GEP = cast<GetElementPtrInst>(AddrInst);
+        if (isa<Argument>(Base) || isa<GlobalValue>(Base) ||
+            (BaseI && !isa<CastInst>(BaseI) &&
+             !isa<GetElementPtrInst>(BaseI))) {
+          // Make sure the parent block allows inserting non-PHI instructions
+          // before the terminator.
+          BasicBlock *Parent =
+              BaseI ? BaseI->getParent() : &GEP->getFunction()->getEntryBlock();
+          if (!Parent->getTerminator()->isEHPad())
             LargeOffsetGEP = std::make_pair(GEP, ConstantOffset);
-          }
+        }
       }
 
       return false;
@@ -5137,7 +5136,6 @@ static bool FindAllMemoryUses(
                            PSI, BFI, SeenInsts);
 }
 
-
 /// Return true if Val is already known to be live at the use site that we're
 /// folding it into. If so, there is no cost to include it in the addressing
 /// mode. KnownLive1 and KnownLive2 are two values that we know are live at the
@@ -6155,8 +6153,8 @@ bool CodeGenPrepare::splitLargeGEPOffsets() {
           if (isa<PHINode>(BaseI))
             NewBaseInsertPt = NewBaseInsertBB->getFirstInsertionPt();
           else if (InvokeInst *Invoke = dyn_cast<InvokeInst>(BaseI)) {
-            NewBaseInsertBB =
-                SplitEdge(NewBaseInsertBB, Invoke->getNormalDest(), DT.get(), LI);
+            NewBaseInsertBB = SplitEdge(NewBaseInsertBB,
+                                        Invoke->getNormalDest(), DT.get(), LI);
             NewBaseInsertPt = NewBaseInsertBB->getFirstInsertionPt();
           } else
             NewBaseInsertPt = std::next(BaseI->getIterator());
diff --git a/llvm/lib/CodeGen/CommandFlags.cpp b/llvm/lib/CodeGen/CommandFlags.cpp
index c34a52a6f2de908..aea6ce819cbf3b0 100644
--- a/llvm/lib/CodeGen/CommandFlags.cpp
+++ b/llvm/lib/CodeGen/CommandFlags.cpp
@@ -254,17 +254,17 @@ codegen::RegisterCodeGenFlags::RegisterCodeGenFlags() {
 
   // FIXME: Doesn't have way to specify separate input and output modes.
   static cl::opt<DenormalMode::DenormalModeKind> DenormalFPMath(
-    "denormal-fp-math",
-    cl::desc("Select which denormal numbers the code is permitted to require"),
-    cl::init(DenormalMode::IEEE),
-    DenormFlagEnumOptions);
+      "denormal-fp-math",
+      cl::desc(
+          "Select which denormal numbers the code is permitted to require"),
+      cl::init(DenormalMode::IEEE), DenormFlagEnumOptions);
   CGBINDOPT(DenormalFPMath);
 
   static cl::opt<DenormalMode::DenormalModeKind> DenormalFP32Math(
-    "denormal-fp-math-f32",
-    cl::desc("Select which denormal numbers the code is permitted to require for float"),
-    cl::init(DenormalMode::Invalid),
-    DenormFlagEnumOptions);
+      "denormal-fp-math-f32",
+      cl::desc("Select which denormal numbers the code is permitted to require "
+               "for float"),
+      cl::init(DenormalMode::Invalid), DenormFlagEnumOptions);
   CGBINDOPT(DenormalFP32Math);
 
   static cl::opt<bool> EnableHonorSignDependentRoundingFPMath(
@@ -477,7 +477,8 @@ codegen::RegisterCodeGenFlags::RegisterCodeGenFlags() {
 
   static cl::opt<bool> JMCInstrument(
       "enable-jmc-instrument",
-      cl::desc("Instrument functions with a call to __CheckForDebuggerJustMyCode"),
+      cl::desc(
+          "Instrument functions with a call to __CheckForDebuggerJustMyCode"),
       cl::init(false));
   CGBINDOPT(JMCInstrument);
 
@@ -699,9 +700,8 @@ void codegen::setFunctionAttributes(StringRef CPU, StringRef Features,
     // FIXME: Command line flag should expose separate input/output modes.
     DenormalMode::DenormalModeKind DenormKind = getDenormalFP32Math();
 
-    NewAttrs.addAttribute(
-      "denormal-fp-math-f32",
-      DenormalMode(DenormKind, DenormKind).str());
+    NewAttrs.addAttribute("denormal-fp-math-f32",
+                          DenormalMode(DenormKind, DenormKind).str());
   }
 
   if (TrapFuncNameView->getNumOccurrences() > 0)
diff --git a/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp b/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
index 106db7c51f2799e..999e176e2fd35a1 100644
--- a/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
+++ b/llvm/lib/CodeGen/CriticalAntiDepBreaker.cpp
@@ -79,8 +79,7 @@ void CriticalAntiDepBreaker::StartBlock(MachineBasicBlock *BB) {
   // callee-saved register that is not saved in the prolog.
   const MachineFrameInfo &MFI = MF.getFrameInfo();
   BitVector Pristine = MFI.getPristineRegs(MF);
-  for (const MCPhysReg *I = MF.getRegInfo().getCalleeSavedRegs(); *I;
-       ++I) {
+  for (const MCPhysReg *I = MF.getRegInfo().getCalleeSavedRegs(); *I; ++I) {
     unsigned Reg = *I;
     if (!IsReturnBlock && !Pristine.test(Reg))
       continue;
@@ -180,9 +179,11 @@ void CriticalAntiDepBreaker::PrescanInstruction(MachineInstr &MI) {
   // Classes and RegRefs.
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg()) continue;
+    if (!MO.isReg())
+      continue;
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
     const TargetRegisterClass *NewRC = nullptr;
 
     if (i < MI.getDesc().getNumOperands())
@@ -221,7 +222,8 @@ void CriticalAntiDepBreaker::PrescanInstruction(MachineInstr &MI) {
 
   for (unsigned I = 0, E = MI.getNumOperands(); I != E; ++I) {
     const MachineOperand &MO = MI.getOperand(I);
-    if (!MO.isReg()) continue;
+    if (!MO.isReg())
+      continue;
     Register Reg = MO.getReg();
     if (!Reg.isValid())
       continue;
@@ -276,10 +278,13 @@ void CriticalAntiDepBreaker::ScanInstruction(MachineInstr &MI, unsigned Count) {
         }
       }
 
-      if (!MO.isReg()) continue;
+      if (!MO.isReg())
+        continue;
       Register Reg = MO.getReg();
-      if (Reg == 0) continue;
-      if (!MO.isDef()) continue;
+      if (Reg == 0)
+        continue;
+      if (!MO.isDef())
+        continue;
 
       // Ignore two-addr defs.
       if (MI.isRegTiedToUseOperand(i))
@@ -306,10 +311,13 @@ void CriticalAntiDepBreaker::ScanInstruction(MachineInstr &MI, unsigned Count) {
   }
   for (unsigned i = 0, e = MI.getNumOperands(); i != e; ++i) {
     MachineOperand &MO = MI.getOperand(i);
-    if (!MO.isReg()) continue;
+    if (!MO.isReg())
+      continue;
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
-    if (!MO.isUse()) continue;
+    if (Reg == 0)
+      continue;
+    if (!MO.isUse())
+      continue;
 
     const TargetRegisterClass *NewRC = nullptr;
     if (i < MI.getDesc().getNumOperands())
@@ -347,11 +355,10 @@ void CriticalAntiDepBreaker::ScanInstruction(MachineInstr &MI, unsigned Count) {
 // RegRefs because the def is inserted by PrescanInstruction and not erased
 // during ScanInstruction. So checking for an instruction with definitions of
 // both NewReg and AntiDepReg covers it.
-bool
-CriticalAntiDepBreaker::isNewRegClobberedByRefs(RegRefIter RegRefBegin,
-                                                RegRefIter RegRefEnd,
-                                                unsigned NewReg) {
-  for (RegRefIter I = RegRefBegin; I != RegRefEnd; ++I ) {
+bool CriticalAntiDepBreaker::isNewRegClobberedByRefs(RegRefIter RegRefBegin,
+                                                     RegRefIter RegRefEnd,
+                                                     unsigned NewReg) {
+  for (RegRefIter I = RegRefBegin; I != RegRefEnd; ++I) {
     MachineOperand *RefOper = I->second;
 
     // Don't allow the instruction defining AntiDepReg to earlyclobber its
@@ -389,31 +396,32 @@ CriticalAntiDepBreaker::isNewRegClobberedByRefs(RegRefIter RegRefBegin,
   return false;
 }
 
-unsigned CriticalAntiDepBreaker::
-findSuitableFreeRegister(RegRefIter RegRefBegin,
-                         RegRefIter RegRefEnd,
-                         unsigned AntiDepReg,
-                         unsigned LastNewReg,
-                         const TargetRegisterClass *RC,
-                         SmallVectorImpl<unsigned> &Forbid) {
+unsigned CriticalAntiDepBreaker::findSuitableFreeRegister(
+    RegRefIter RegRefBegin, RegRefIter RegRefEnd, unsigned AntiDepReg,
+    unsigned LastNewReg, const TargetRegisterClass *RC,
+    SmallVectorImpl<unsigned> &Forbid) {
   ArrayRef<MCPhysReg> Order = RegClassInfo.getOrder(RC);
   for (unsigned NewReg : Order) {
     // Don't replace a register with itself.
-    if (NewReg == AntiDepReg) continue;
+    if (NewReg == AntiDepReg)
+      continue;
     // Don't replace a register with one that was recently used to repair
     // an anti-dependence with this AntiDepReg, because that would
     // re-introduce that anti-dependence.
-    if (NewReg == LastNewReg) continue;
+    if (NewReg == LastNewReg)
+      continue;
     // If any instructions that define AntiDepReg also define the NewReg, it's
     // not suitable.  For example, Instruction with multiple definitions can
     // result in this condition.
-    if (isNewRegClobberedByRefs(RegRefBegin, RegRefEnd, NewReg)) continue;
+    if (isNewRegClobberedByRefs(RegRefBegin, RegRefEnd, NewReg))
+      continue;
     // If NewReg is dead and NewReg's most recent def is not before
     // AntiDepReg's kill, it's safe to replace AntiDepReg with NewReg.
-    assert(((KillIndices[AntiDepReg] == ~0u) != (DefIndices[AntiDepReg] == ~0u))
-           && "Kill and Def maps aren't consistent for AntiDepReg!");
-    assert(((KillIndices[NewReg] == ~0u) != (DefIndices[NewReg] == ~0u))
-           && "Kill and Def maps aren't consistent for NewReg!");
+    assert(
+        ((KillIndices[AntiDepReg] == ~0u) != (DefIndices[AntiDepReg] == ~0u)) &&
+        "Kill and Def maps aren't consistent for AntiDepReg!");
+    assert(((KillIndices[NewReg] == ~0u) != (DefIndices[NewReg] == ~0u)) &&
+           "Kill and Def maps aren't consistent for NewReg!");
     if (KillIndices[NewReg] != ~0u ||
         Classes[NewReg] == reinterpret_cast<TargetRegisterClass *>(-1) ||
         KillIndices[AntiDepReg] > DefIndices[NewReg])
@@ -425,7 +433,8 @@ findSuitableFreeRegister(RegRefIter RegRefBegin,
         Forbidden = true;
         break;
       }
-    if (Forbidden) continue;
+    if (Forbidden)
+      continue;
     return NewReg;
   }
 
@@ -433,15 +442,14 @@ findSuitableFreeRegister(RegRefIter RegRefBegin,
   return 0;
 }
 
-unsigned CriticalAntiDepBreaker::
-BreakAntiDependencies(const std::vector<SUnit> &SUnits,
-                      MachineBasicBlock::iterator Begin,
-                      MachineBasicBlock::iterator End,
-                      unsigned InsertPosIndex,
-                      DbgValueVector &DbgValues) {
+unsigned CriticalAntiDepBreaker::BreakAntiDependencies(
+    const std::vector<SUnit> &SUnits, MachineBasicBlock::iterator Begin,
+    MachineBasicBlock::iterator End, unsigned InsertPosIndex,
+    DbgValueVector &DbgValues) {
   // The code below assumes that there is at least one instruction,
   // so just duck out immediately if the block is empty.
-  if (SUnits.empty()) return 0;
+  if (SUnits.empty())
+    return 0;
 
   // Keep a map of the MachineInstr*'s back to the SUnit representing them.
   // This is used for updating debug information.
@@ -610,9 +618,11 @@ BreakAntiDependencies(const std::vector<SUnit> &SUnits,
       // save a list of them so that we don't pick a new register
       // that overlaps any of them.
       for (const MachineOperand &MO : MI.operands()) {
-        if (!MO.isReg()) continue;
+        if (!MO.isReg())
+          continue;
         Register Reg = MO.getReg();
-        if (Reg == 0) continue;
+        if (Reg == 0)
+          continue;
         if (MO.isUse() && TRI->regsOverlap(AntiDepReg, Reg)) {
           AntiDepReg = 0;
           break;
@@ -624,8 +634,8 @@ BreakAntiDependencies(const std::vector<SUnit> &SUnits,
 
     // Determine AntiDepReg's register class, if it is live and is
     // consistently used within a single class.
-    const TargetRegisterClass *RC = AntiDepReg != 0 ? Classes[AntiDepReg]
-                                                    : nullptr;
+    const TargetRegisterClass *RC =
+        AntiDepReg != 0 ? Classes[AntiDepReg] : nullptr;
     assert((AntiDepReg == 0 || RC != nullptr) &&
            "Register should be live if it's causing an anti-dependence!");
     if (RC == reinterpret_cast<TargetRegisterClass *>(-1))
@@ -638,11 +648,10 @@ BreakAntiDependencies(const std::vector<SUnit> &SUnits,
     if (AntiDepReg != 0) {
       std::pair<std::multimap<unsigned, MachineOperand *>::iterator,
                 std::multimap<unsigned, MachineOperand *>::iterator>
-        Range = RegRefs.equal_range(AntiDepReg);
-      if (unsigned NewReg = findSuitableFreeRegister(Range.first, Range.second,
-                                                     AntiDepReg,
-                                                     LastNewReg[AntiDepReg],
-                                                     RC, ForbidRegs)) {
+          Range = RegRefs.equal_range(AntiDepReg);
+      if (unsigned NewReg = findSuitableFreeRegister(
+              Range.first, Range.second, AntiDepReg, LastNewReg[AntiDepReg], RC,
+              ForbidRegs)) {
         LLVM_DEBUG(dbgs() << "Breaking anti-dependence edge on "
                           << printReg(AntiDepReg, TRI) << " with "
                           << RegRefs.count(AntiDepReg) << " references"
@@ -651,15 +660,18 @@ BreakAntiDependencies(const std::vector<SUnit> &SUnits,
         // Update the references to the old register to refer to the new
         // register.
         for (std::multimap<unsigned, MachineOperand *>::iterator
-             Q = Range.first, QE = Range.second; Q != QE; ++Q) {
+                 Q = Range.first,
+                 QE = Range.second;
+             Q != QE; ++Q) {
           Q->second->setReg(NewReg);
           // If the SU for the instruction being updated has debug information
           // related to the anti-dependency register, make sure to update that
           // as well.
           const SUnit *SU = MISUnitMap[Q->second->getParent()];
-          if (!SU) continue;
-          UpdateDbgValues(DbgValues, Q->second->getParent(),
-                          AntiDepReg, NewReg);
+          if (!SU)
+            continue;
+          UpdateDbgValues(DbgValues, Q->second->getParent(), AntiDepReg,
+                          NewReg);
         }
 
         // We just went back in time and modified history; the
@@ -668,16 +680,15 @@ BreakAntiDependencies(const std::vector<SUnit> &SUnits,
         Classes[NewReg] = Classes[AntiDepReg];
         DefIndices[NewReg] = DefIndices[AntiDepReg];
         KillIndices[NewReg] = KillIndices[AntiDepReg];
-        assert(((KillIndices[NewReg] == ~0u) !=
-                (DefIndices[NewReg] == ~0u)) &&
-             "Kill and Def maps aren't consistent for NewReg!");
+        assert(((KillIndices[NewReg] == ~0u) != (DefIndices[NewReg] == ~0u)) &&
+               "Kill and Def maps aren't consistent for NewReg!");
 
         Classes[AntiDepReg] = nullptr;
         DefIndices[AntiDepReg] = KillIndices[AntiDepReg];
         KillIndices[AntiDepReg] = ~0u;
         assert(((KillIndices[AntiDepReg] == ~0u) !=
                 (DefIndices[AntiDepReg] == ~0u)) &&
-             "Kill and Def maps aren't consistent for AntiDepReg!");
+               "Kill and Def maps aren't consistent for AntiDepReg!");
 
         RegRefs.erase(AntiDepReg);
         LastNewReg[AntiDepReg] = NewReg;
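The `RegRefs` update in the hunk above uses `std::multimap::equal_range` to visit every recorded reference of `AntiDepReg` and point it at `NewReg`, then drops the old key. The same pattern reduced to plain STL (a sketch with a hypothetical `Operand` record, not LLVM's actual `MachineOperand` type):

```cpp
#include <map>
#include <vector>

// Hypothetical stand-in for MachineOperand: just a mutable register id.
struct Operand {
  unsigned Reg;
};

// Retarget every recorded reference of OldReg to NewReg, the way
// BreakAntiDependencies rewrites RegRefs after picking a free register.
// Returns the number of operands rewritten.
unsigned retargetRefs(std::multimap<unsigned, Operand *> &RegRefs,
                      unsigned OldReg, unsigned NewReg) {
  auto Range = RegRefs.equal_range(OldReg);
  unsigned N = 0;
  for (auto Q = Range.first, QE = Range.second; Q != QE; ++Q) {
    Q->second->Reg = NewReg; // Q->second->setReg(NewReg) in the real pass.
    ++N;
  }
  // The real pass then forgets the old register: RegRefs.erase(AntiDepReg).
  RegRefs.erase(OldReg);
  return N;
}
```

Note that the rewrite must happen through the stored pointers, not by re-keying the map: other state (kill/def indices, classes) is still indexed by the old register until the bookkeeping lines that follow.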
diff --git a/llvm/lib/CodeGen/CriticalAntiDepBreaker.h b/llvm/lib/CodeGen/CriticalAntiDepBreaker.h
index 640506b6e9ed49a..15f1a77c3dfda0a 100644
--- a/llvm/lib/CodeGen/CriticalAntiDepBreaker.h
+++ b/llvm/lib/CodeGen/CriticalAntiDepBreaker.h
@@ -34,78 +34,75 @@ class TargetRegisterClass;
 class TargetRegisterInfo;
 
 class LLVM_LIBRARY_VISIBILITY CriticalAntiDepBreaker : public AntiDepBreaker {
-    MachineFunction& MF;
-    MachineRegisterInfo &MRI;
-    const TargetInstrInfo *TII;
-    const TargetRegisterInfo *TRI;
-    const RegisterClassInfo &RegClassInfo;
-
-    /// The set of allocatable registers.
-    /// We'll be ignoring anti-dependencies on non-allocatable registers,
-    /// because they may not be safe to break.
-    const BitVector AllocatableSet;
-
-    /// For live regs that are only used in one register class in a
-    /// live range, the register class. If the register is not live, the
-    /// corresponding value is null. If the register is live but used in
-    /// multiple register classes, the corresponding value is -1 casted to a
-    /// pointer.
-    std::vector<const TargetRegisterClass *> Classes;
-
-    /// Map registers to all their references within a live range.
-    std::multimap<unsigned, MachineOperand *> RegRefs;
-
-    using RegRefIter =
-        std::multimap<unsigned, MachineOperand *>::const_iterator;
-
-    /// The index of the most recent kill (proceeding bottom-up),
-    /// or ~0u if the register is not live.
-    std::vector<unsigned> KillIndices;
-
-    /// The index of the most recent complete def (proceeding
-    /// bottom up), or ~0u if the register is live.
-    std::vector<unsigned> DefIndices;
-
-    /// A set of registers which are live and cannot be changed to
-    /// break anti-dependencies.
-    BitVector KeepRegs;
-
-  public:
-    CriticalAntiDepBreaker(MachineFunction& MFi, const RegisterClassInfo &RCI);
-    ~CriticalAntiDepBreaker() override;
-
-    /// Initialize anti-dep breaking for a new basic block.
-    void StartBlock(MachineBasicBlock *BB) override;
-
-    /// Identifiy anti-dependencies along the critical path
-    /// of the ScheduleDAG and break them by renaming registers.
-    unsigned BreakAntiDependencies(const std::vector<SUnit> &SUnits,
-                                   MachineBasicBlock::iterator Begin,
-                                   MachineBasicBlock::iterator End,
-                                   unsigned InsertPosIndex,
-                                   DbgValueVector &DbgValues) override;
-
-    /// Update liveness information to account for the current
-    /// instruction, which will not be scheduled.
-    void Observe(MachineInstr &MI, unsigned Count,
-                 unsigned InsertPosIndex) override;
-
-    /// Finish anti-dep breaking for a basic block.
-    void FinishBlock() override;
-
-  private:
-    void PrescanInstruction(MachineInstr &MI);
-    void ScanInstruction(MachineInstr &MI, unsigned Count);
-    bool isNewRegClobberedByRefs(RegRefIter RegRefBegin,
-                                 RegRefIter RegRefEnd,
-                                 unsigned NewReg);
-    unsigned findSuitableFreeRegister(RegRefIter RegRefBegin,
-                                      RegRefIter RegRefEnd,
-                                      unsigned AntiDepReg,
-                                      unsigned LastNewReg,
-                                      const TargetRegisterClass *RC,
-                                      SmallVectorImpl<unsigned> &Forbid);
-  };
+  MachineFunction &MF;
+  MachineRegisterInfo &MRI;
+  const TargetInstrInfo *TII;
+  const TargetRegisterInfo *TRI;
+  const RegisterClassInfo &RegClassInfo;
+
+  /// The set of allocatable registers.
+  /// We'll be ignoring anti-dependencies on non-allocatable registers,
+  /// because they may not be safe to break.
+  const BitVector AllocatableSet;
+
+  /// For live regs that are only used in one register class in a
+  /// live range, the register class. If the register is not live, the
+  /// corresponding value is null. If the register is live but used in
+  /// multiple register classes, the corresponding value is -1 cast to a
+  /// pointer.
+  std::vector<const TargetRegisterClass *> Classes;
+
+  /// Map registers to all their references within a live range.
+  std::multimap<unsigned, MachineOperand *> RegRefs;
+
+  using RegRefIter = std::multimap<unsigned, MachineOperand *>::const_iterator;
+
+  /// The index of the most recent kill (proceeding bottom-up),
+  /// or ~0u if the register is not live.
+  std::vector<unsigned> KillIndices;
+
+  /// The index of the most recent complete def (proceeding
+  /// bottom up), or ~0u if the register is live.
+  std::vector<unsigned> DefIndices;
+
+  /// A set of registers which are live and cannot be changed to
+  /// break anti-dependencies.
+  BitVector KeepRegs;
+
+public:
+  CriticalAntiDepBreaker(MachineFunction &MFi, const RegisterClassInfo &RCI);
+  ~CriticalAntiDepBreaker() override;
+
+  /// Initialize anti-dep breaking for a new basic block.
+  void StartBlock(MachineBasicBlock *BB) override;
+
+  /// Identify anti-dependencies along the critical path
+  /// of the ScheduleDAG and break them by renaming registers.
+  unsigned BreakAntiDependencies(const std::vector<SUnit> &SUnits,
+                                 MachineBasicBlock::iterator Begin,
+                                 MachineBasicBlock::iterator End,
+                                 unsigned InsertPosIndex,
+                                 DbgValueVector &DbgValues) override;
+
+  /// Update liveness information to account for the current
+  /// instruction, which will not be scheduled.
+  void Observe(MachineInstr &MI, unsigned Count,
+               unsigned InsertPosIndex) override;
+
+  /// Finish anti-dep breaking for a basic block.
+  void FinishBlock() override;
+
+private:
+  void PrescanInstruction(MachineInstr &MI);
+  void ScanInstruction(MachineInstr &MI, unsigned Count);
+  bool isNewRegClobberedByRefs(RegRefIter RegRefBegin, RegRefIter RegRefEnd,
+                               unsigned NewReg);
+  unsigned findSuitableFreeRegister(RegRefIter RegRefBegin,
+                                    RegRefIter RegRefEnd, unsigned AntiDepReg,
+                                    unsigned LastNewReg,
+                                    const TargetRegisterClass *RC,
+                                    SmallVectorImpl<unsigned> &Forbid);
+};
 
 } // end namespace llvm
 
diff --git a/llvm/lib/CodeGen/DFAPacketizer.cpp b/llvm/lib/CodeGen/DFAPacketizer.cpp
index 48bb4a07662e10f..16516eb970cefe5 100644
--- a/llvm/lib/CodeGen/DFAPacketizer.cpp
+++ b/llvm/lib/CodeGen/DFAPacketizer.cpp
@@ -45,8 +45,9 @@ using namespace llvm;
 
 #define DEBUG_TYPE "packets"
 
-static cl::opt<unsigned> InstrLimit("dfa-instr-limit", cl::Hidden,
-  cl::init(0), cl::desc("If present, stops packetizing after N instructions"));
+static cl::opt<unsigned>
+    InstrLimit("dfa-instr-limit", cl::Hidden, cl::init(0),
+               cl::desc("If present, stops packetizing after N instructions"));
 
 static unsigned InstrCount = 0;
 
@@ -97,8 +98,7 @@ unsigned DFAPacketizer::getUsedResources(unsigned InstIdx) {
 }
 
 DefaultVLIWScheduler::DefaultVLIWScheduler(MachineFunction &MF,
-                                           MachineLoopInfo &MLI,
-                                           AAResults *AA)
+                                           MachineLoopInfo &MLI, AAResults *AA)
     : ScheduleDAGInstrs(MF, &MLI), AA(AA) {
   CanHandleTerminators = true;
 }
@@ -268,8 +268,7 @@ bool VLIWPacketizerList::alias(const MachineMemOperand &Op1,
   return AAResult != AliasResult::NoAlias;
 }
 
-bool VLIWPacketizerList::alias(const MachineInstr &MI1,
-                               const MachineInstr &MI2,
+bool VLIWPacketizerList::alias(const MachineInstr &MI1, const MachineInstr &MI2,
                                bool UseTBAA) const {
   if (MI1.memoperands_empty() || MI2.memoperands_empty())
     return true;
@@ -283,6 +282,6 @@ bool VLIWPacketizerList::alias(const MachineInstr &MI1,
 
 // Add a DAG mutation object to the ordered list.
 void VLIWPacketizerList::addMutation(
-      std::unique_ptr<ScheduleDAGMutation> Mutation) {
+    std::unique_ptr<ScheduleDAGMutation> Mutation) {
   VLIWScheduler->addMutation(std::move(Mutation));
 }
diff --git a/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp b/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
index 6a7de3b241feeea..8485872b10ead9f 100644
--- a/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
+++ b/llvm/lib/CodeGen/DeadMachineInstructionElim.cpp
@@ -25,33 +25,33 @@ using namespace llvm;
 
 #define DEBUG_TYPE "dead-mi-elimination"
 
-STATISTIC(NumDeletes,          "Number of dead instructions deleted");
+STATISTIC(NumDeletes, "Number of dead instructions deleted");
 
 namespace {
-  class DeadMachineInstructionElim : public MachineFunctionPass {
-    bool runOnMachineFunction(MachineFunction &MF) override;
+class DeadMachineInstructionElim : public MachineFunctionPass {
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    const MachineRegisterInfo *MRI = nullptr;
-    const TargetInstrInfo *TII = nullptr;
-    LiveRegUnits LivePhysRegs;
+  const MachineRegisterInfo *MRI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  LiveRegUnits LivePhysRegs;
 
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    DeadMachineInstructionElim() : MachineFunctionPass(ID) {
-     initializeDeadMachineInstructionElimPass(*PassRegistry::getPassRegistry());
-    }
+public:
+  static char ID; // Pass identification, replacement for typeid
+  DeadMachineInstructionElim() : MachineFunctionPass(ID) {
+    initializeDeadMachineInstructionElimPass(*PassRegistry::getPassRegistry());
+  }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-  private:
-    bool isDead(const MachineInstr *MI) const;
+private:
+  bool isDead(const MachineInstr *MI) const;
 
-    bool eliminateDeadMI(MachineFunction &MF);
-  };
-}
+  bool eliminateDeadMI(MachineFunction &MF);
+};
+} // namespace
 char DeadMachineInstructionElim::ID = 0;
 char &llvm::DeadMachineInstructionElimID = DeadMachineInstructionElim::ID;
 
diff --git a/llvm/lib/CodeGen/DetectDeadLanes.cpp b/llvm/lib/CodeGen/DetectDeadLanes.cpp
index 86e9f3abe010d70..2fea9e6d79c90ec 100644
--- a/llvm/lib/CodeGen/DetectDeadLanes.cpp
+++ b/llvm/lib/CodeGen/DetectDeadLanes.cpp
@@ -64,8 +64,7 @@ static bool lowersToCopies(const MachineInstr &MI) {
   return false;
 }
 
-static bool isCrossCopy(const MachineRegisterInfo &MRI,
-                        const MachineInstr &MI,
+static bool isCrossCopy(const MachineRegisterInfo &MRI, const MachineInstr &MI,
                         const TargetRegisterClass *DstRC,
                         const MachineOperand &MO) {
   assert(lowersToCopies(MI));
@@ -85,7 +84,7 @@ static bool isCrossCopy(const MachineRegisterInfo &MRI,
     break;
   case TargetOpcode::REG_SEQUENCE: {
     unsigned OpNum = MO.getOperandNo();
-    DstSubIdx = MI.getOperand(OpNum+1).getImm();
+    DstSubIdx = MI.getOperand(OpNum + 1).getImm();
     break;
   }
   case TargetOpcode::EXTRACT_SUBREG: {
@@ -313,8 +312,8 @@ LaneBitmask DeadLaneDetector::determineInitialDefinedLanes(unsigned Reg) {
         }
         unsigned MOSubReg = MO.getSubReg();
         MODefinedLanes = MRI->getMaxLaneMaskForVReg(MOReg);
-        MODefinedLanes = TRI->reverseComposeSubRegIndexLaneMask(
-            MOSubReg, MODefinedLanes);
+        MODefinedLanes =
+            TRI->reverseComposeSubRegIndexLaneMask(MOSubReg, MODefinedLanes);
       }
 
       unsigned OpNum = MO.getOperandNo();
diff --git a/llvm/lib/CodeGen/EarlyIfConversion.cpp b/llvm/lib/CodeGen/EarlyIfConversion.cpp
index 61867d74bfa293c..599b8a64e000cc4 100644
--- a/llvm/lib/CodeGen/EarlyIfConversion.cpp
+++ b/llvm/lib/CodeGen/EarlyIfConversion.cpp
@@ -44,16 +44,16 @@ using namespace llvm;
 
 // Absolute maximum number of instructions allowed per speculated block.
 // This bypasses all other heuristics, so it should be set fairly high.
-static cl::opt<unsigned>
-BlockInstrLimit("early-ifcvt-limit", cl::init(30), cl::Hidden,
-  cl::desc("Maximum number of instructions per speculated block."));
+static cl::opt<unsigned> BlockInstrLimit(
+    "early-ifcvt-limit", cl::init(30), cl::Hidden,
+    cl::desc("Maximum number of instructions per speculated block."));
 
 // Stress testing mode - disable heuristics.
 static cl::opt<bool> Stress("stress-early-ifcvt", cl::Hidden,
-  cl::desc("Turn all knobs to 11"));
+                            cl::desc("Turn all knobs to 11"));
 
-STATISTIC(NumDiamondsSeen,  "Number of diamonds");
-STATISTIC(NumDiamondsConv,  "Number of diamonds converted");
+STATISTIC(NumDiamondsSeen, "Number of diamonds");
+STATISTIC(NumDiamondsConv, "Number of diamonds converted");
 STATISTIC(NumTrianglesSeen, "Number of triangles");
 STATISTIC(NumTrianglesConv, "Number of triangles converted");
 
@@ -125,7 +125,7 @@ class SSAIfConv {
 private:
   /// Instructions in Head that define values used by the conditional blocks.
   /// The hoisted instructions must be inserted after these instructions.
-  SmallPtrSet<MachineInstr*, 8> InsertAfter;
+  SmallPtrSet<MachineInstr *, 8> InsertAfter;
 
   /// Register units clobbered by the conditional blocks.
   BitVector ClobberedRegUnits;
@@ -187,7 +187,6 @@ class SSAIfConv {
 };
 } // end anonymous namespace
 
-
 /// canSpeculateInstrs - Returns true if all the instructions in MBB can safely
 /// be speculated. The terminators are not considered.
 ///
@@ -430,8 +429,6 @@ bool SSAIfConv::findInsertionPoint() {
   return false;
 }
 
-
-
 /// canConvertIf - analyze the sub-cfg rooted in MBB, and return true if it is
 /// a potential candidate for if-conversion. Fill out the internal state.
 ///
@@ -517,9 +514,9 @@ bool SSAIfConv::canConvertIf(MachineBasicBlock *MBB, bool Predicate) {
     PHIInfo &PI = PHIs.back();
     // Find PHI operands corresponding to TPred and FPred.
     for (unsigned i = 1; i != PI.PHI->getNumOperands(); i += 2) {
-      if (PI.PHI->getOperand(i+1).getMBB() == TPred)
+      if (PI.PHI->getOperand(i + 1).getMBB() == TPred)
         PI.TReg = PI.PHI->getOperand(i).getReg();
-      if (PI.PHI->getOperand(i+1).getMBB() == FPred)
+      if (PI.PHI->getOperand(i + 1).getMBB() == FPred)
         PI.FReg = PI.PHI->getOperand(i).getReg();
     }
     assert(Register::isVirtualRegister(PI.TReg) && "Bad PHI");
@@ -657,20 +654,20 @@ void SSAIfConv::rewritePHIOperands() {
     } else {
       Register PHIDst = PI.PHI->getOperand(0).getReg();
       DstReg = MRI->createVirtualRegister(MRI->getRegClass(PHIDst));
-      TII->insertSelect(*Head, FirstTerm, HeadDL,
-                         DstReg, Cond, PI.TReg, PI.FReg);
+      TII->insertSelect(*Head, FirstTerm, HeadDL, DstReg, Cond, PI.TReg,
+                        PI.FReg);
       LLVM_DEBUG(dbgs() << "          --> " << *std::prev(FirstTerm));
     }
 
     // Rewrite PHI operands TPred -> (DstReg, Head), remove FPred.
     for (unsigned i = PI.PHI->getNumOperands(); i != 1; i -= 2) {
-      MachineBasicBlock *MBB = PI.PHI->getOperand(i-1).getMBB();
+      MachineBasicBlock *MBB = PI.PHI->getOperand(i - 1).getMBB();
       if (MBB == getTPred()) {
-        PI.PHI->getOperand(i-1).setMBB(Head);
-        PI.PHI->getOperand(i-2).setReg(DstReg);
+        PI.PHI->getOperand(i - 1).setMBB(Head);
+        PI.PHI->getOperand(i - 2).setReg(DstReg);
       } else if (MBB == getFPred()) {
-        PI.PHI->removeOperand(i-1);
-        PI.PHI->removeOperand(i-2);
+        PI.PHI->removeOperand(i - 1);
+        PI.PHI->removeOperand(i - 2);
       }
     }
     LLVM_DEBUG(dbgs() << "          --> " << *PI.PHI);
@@ -739,8 +736,7 @@ void SSAIfConv::convertIf(SmallVectorImpl<MachineBasicBlock *> &RemovedBlocks,
     // Splice Tail onto the end of Head.
     LLVM_DEBUG(dbgs() << "Joining tail " << printMBBReference(*Tail)
                       << " into head " << printMBBReference(*Head) << '\n');
-    Head->splice(Head->end(), Tail,
-                     Tail->begin(), Tail->end());
+    Head->splice(Head->end(), Tail, Tail->begin(), Tail->end());
     Head->transferSuccessorsAndUpdatePHIs(Tail);
     RemovedBlocks.push_back(Tail);
     Tail->eraseFromParent();
@@ -778,7 +774,7 @@ class EarlyIfConverter : public MachineFunctionPass {
   StringRef getPassName() const override { return "Early If-Conversion"; }
 
 private:
-  bool tryConvertIf(MachineBasicBlock*);
+  bool tryConvertIf(MachineBasicBlock *);
   void invalidateTraces();
   bool shouldConvertIf();
 };
@@ -787,13 +783,13 @@ class EarlyIfConverter : public MachineFunctionPass {
 char EarlyIfConverter::ID = 0;
 char &llvm::EarlyIfConverterID = EarlyIfConverter::ID;
 
-INITIALIZE_PASS_BEGIN(EarlyIfConverter, DEBUG_TYPE,
-                      "Early If Converter", false, false)
+INITIALIZE_PASS_BEGIN(EarlyIfConverter, DEBUG_TYPE, "Early If Converter", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(MachineTraceMetrics)
-INITIALIZE_PASS_END(EarlyIfConverter, DEBUG_TYPE,
-                    "Early If Converter", false, false)
+INITIALIZE_PASS_END(EarlyIfConverter, DEBUG_TYPE, "Early If Converter", false,
+                    false)
 
 void EarlyIfConverter::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<MachineBranchProbabilityInfo>();
@@ -911,11 +907,11 @@ bool EarlyIfConverter::shouldConvertIf() {
   MachineTraceMetrics::Trace TBBTrace = MinInstr->getTrace(IfConv.getTPred());
   MachineTraceMetrics::Trace FBBTrace = MinInstr->getTrace(IfConv.getFPred());
   LLVM_DEBUG(dbgs() << "TBB: " << TBBTrace << "FBB: " << FBBTrace);
-  unsigned MinCrit = std::min(TBBTrace.getCriticalPath(),
-                              FBBTrace.getCriticalPath());
+  unsigned MinCrit =
+      std::min(TBBTrace.getCriticalPath(), FBBTrace.getCriticalPath());
 
   // Set a somewhat arbitrary limit on the critical path extension we accept.
-  unsigned CritLimit = SchedModel.MispredictPenalty/2;
+  unsigned CritLimit = SchedModel.MispredictPenalty / 2;
 
   MachineBasicBlock &MBB = *IfConv.Head;
   MachineOptimizationRemarkEmitter MORE(*MBB.getParent(), nullptr);
@@ -923,7 +919,7 @@ bool EarlyIfConverter::shouldConvertIf() {
   // If-conversion only makes sense when there is unexploited ILP. Compute the
   // maximum-ILP resource length of the trace after if-conversion. Compare it
   // to the shortest critical path.
-  SmallVector<const MachineBasicBlock*, 1> ExtraBlocks;
+  SmallVector<const MachineBasicBlock *, 1> ExtraBlocks;
   if (IfConv.TBB != IfConv.Tail)
     ExtraBlocks.push_back(IfConv.TBB);
   unsigned ResLength = FBBTrace.getResourceLength(ExtraBlocks);
@@ -1067,7 +1063,7 @@ bool EarlyIfConverter::tryConvertIf(MachineBasicBlock *MBB) {
   while (IfConv.canConvertIf(MBB) && shouldConvertIf()) {
     // If-convert MBB and update analyses.
     invalidateTraces();
-    SmallVector<MachineBasicBlock*, 4> RemovedBlocks;
+    SmallVector<MachineBasicBlock *, 4> RemovedBlocks;
     IfConv.convertIf(RemovedBlocks);
     Changed = true;
     updateDomTree(DomTree, IfConv, RemovedBlocks);
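The `shouldConvertIf` lines reformatted above encode the pass's core trade-off: the critical-path extension from speculating both sides must not exceed half the branch mispredict penalty. The arithmetic distilled into a hypothetical standalone function (the real pass reads these values from MachineTraceMetrics and also checks each PHI's depth):

```cpp
#include <algorithm>

// Accept if-conversion when the extra serial latency it adds stays within
// half the mispredict penalty it can save.  All values are in cycles.
bool profitableToConvert(unsigned TCrit, unsigned FCrit,
                         unsigned CombinedCrit, unsigned MispredictPenalty) {
  // Shortest critical path through either side before conversion.
  unsigned MinCrit = std::min(TCrit, FCrit);
  // Somewhat arbitrary budget: SchedModel.MispredictPenalty / 2.
  unsigned CritLimit = MispredictPenalty / 2;
  // Converting serializes both sides; only growth beyond the shorter
  // original path counts as cost.
  unsigned Extension = CombinedCrit > MinCrit ? CombinedCrit - MinCrit : 0;
  return Extension <= CritLimit;
}
```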
diff --git a/llvm/lib/CodeGen/EdgeBundles.cpp b/llvm/lib/CodeGen/EdgeBundles.cpp
index 3dd354e8ab7e802..2e27daf23376b6d 100644
--- a/llvm/lib/CodeGen/EdgeBundles.cpp
+++ b/llvm/lib/CodeGen/EdgeBundles.cpp
@@ -23,13 +23,13 @@
 using namespace llvm;
 
 static cl::opt<bool>
-ViewEdgeBundles("view-edge-bundles", cl::Hidden,
-                cl::desc("Pop up a window to show edge bundle graphs"));
+    ViewEdgeBundles("view-edge-bundles", cl::Hidden,
+                    cl::desc("Pop up a window to show edge bundle graphs"));
 
 char EdgeBundles::ID = 0;
 
 INITIALIZE_PASS(EdgeBundles, "edge-bundles", "Bundle Machine CFG Edges",
-                /* cfg = */true, /* is_analysis = */ true)
+                /* cfg = */ true, /* is_analysis = */ true)
 
 char &llvm::EdgeBundlesID = EdgeBundles::ID;
 
@@ -71,9 +71,8 @@ bool EdgeBundles::runOnMachineFunction(MachineFunction &mf) {
 namespace llvm {
 
 /// Specialize WriteGraph, the standard implementation won't work.
-template<>
-raw_ostream &WriteGraph<>(raw_ostream &O, const EdgeBundles &G,
-                          bool ShortNames,
+template <>
+raw_ostream &WriteGraph<>(raw_ostream &O, const EdgeBundles &G, bool ShortNames,
                           const Twine &Title) {
   const MachineFunction *MF = G.getMachineFunction();
 
@@ -96,6 +95,4 @@ raw_ostream &WriteGraph<>(raw_ostream &O, const EdgeBundles &G,
 } // end namespace llvm
 
 /// view - Visualize the annotated bipartite CFG with Graphviz.
-void EdgeBundles::view() const {
-  ViewGraph(*this, "EdgeBundles");
-}
+void EdgeBundles::view() const { ViewGraph(*this, "EdgeBundles"); }
diff --git a/llvm/lib/CodeGen/ExpandLargeFpConvert.cpp b/llvm/lib/CodeGen/ExpandLargeFpConvert.cpp
index ca8056a53139721..4f7f40343f49a3f 100644
--- a/llvm/lib/CodeGen/ExpandLargeFpConvert.cpp
+++ b/llvm/lib/CodeGen/ExpandLargeFpConvert.cpp
@@ -33,9 +33,9 @@ using namespace llvm;
 
 static cl::opt<unsigned>
     ExpandFpConvertBits("expand-fp-convert-bits", cl::Hidden,
-                     cl::init(llvm::IntegerType::MAX_INT_BITS),
-                     cl::desc("fp convert instructions on integers with "
-                              "more than <N> bits are expanded."));
+                        cl::init(llvm::IntegerType::MAX_INT_BITS),
+                        cl::desc("fp convert instructions on integers with "
+                                 "more than <N> bits are expanded."));
 
 /// Generate code to convert a fp number to integer, replacing FPToS(U)I with
 /// the generated code. This currently generates code similarly to compiler-rt's
@@ -62,8 +62,8 @@ static cl::opt<unsigned>
 ///   br i1 %cmp6.not, label %if.end12, label %if.then8
 ///
 /// if.then8:                                         ; preds = %if.end
-///   %cond11 = select i1 %tobool.not, i64 9223372036854775807, i64 -9223372036854775808
-///   br label %cleanup
+///   %cond11 = select i1 %tobool.not, i64 9223372036854775807, i64 -9223372036854775808
+///   br label %cleanup
 ///
 /// if.end12:                                         ; preds = %if.end
 ///   %cmp13 = icmp ult i64 %shr, 150
@@ -81,9 +81,10 @@ static cl::opt<unsigned>
 ///   %mul19 = mul nsw i64 %shl, %conv
 ///   br label %cleanup
 ///
-/// cleanup:                                          ; preds = %entry, %if.else, %if.then15, %if.then8
-///   %retval.0 = phi i64 [ %cond11, %if.then8 ], [ %mul, %if.then15 ], [ %mul19, %if.else ], [ 0, %entry ]
-///   ret i64 %retval.0
+/// cleanup:             ; preds = %entry, %if.else, %if.then15, %if.then8
+///   %retval.0 = phi i64 [ %cond11, %if.then8 ], [ %mul, %if.then15 ],
+///                       [ %mul19, %if.else ], [ 0, %entry ]
+///   ret i64 %retval.0
 /// }
 ///
 /// Replace fp to integer with generated code.
@@ -202,8 +203,7 @@ static void expandFPToI(Instruction *FPToI) {
   // if.else:
   Builder.SetInsertPoint(IfElse);
   Value *Sub15 = Builder.CreateAdd(
-      And2,
-      ConstantInt::getSigned(IntTy, -(ExponentBias + FPMantissaWidth)));
+      And2, ConstantInt::getSigned(IntTy, -(ExponentBias + FPMantissaWidth)));
   Value *Shl = Builder.CreateShl(Or, Sub15);
   Value *Mul16 = Builder.CreateMul(Shl, Sign);
   Builder.CreateBr(End);
@@ -265,13 +265,11 @@ static void expandFPToI(Instruction *FPToI) {
 ///   %or = or i64 %shr6, %conv11
 ///   br label %sw.epilog
 ///
-/// sw.epilog:                                        ; preds = %sw.default, %if.then4, %sw.bb
-///   %a.addr.0 = phi i64 [ %or, %sw.default ], [ %sub, %if.then4 ], [ %shl, %sw.bb ]
-///   %1 = lshr i64 %a.addr.0, 2
-///   %2 = and i64 %1, 1
-///   %or16 = or i64 %2, %a.addr.0
-///   %inc = add nsw i64 %or16, 1
-///   %3 = and i64 %inc, 67108864
+/// sw.epilog:                                        ; preds = %sw.default,
+/// %if.then4, %sw.bb
+///   %a.addr.0 = phi i64 [ %or, %sw.default ], [ %sub, %if.then4 ], [ %shl,
+///   %sw.bb ] %1 = lshr i64 %a.addr.0, 2 %2 = and i64 %1, 1 %or16 = or i64 %2,
+///   %a.addr.0 %inc = add nsw i64 %or16, 1 %3 = and i64 %inc, 67108864
 ///   %tobool.not = icmp eq i64 %3, 0
 ///   %spec.select.v = select i1 %tobool.not, i64 2, i64 3
 ///   %spec.select = ashr i64 %inc, %spec.select.v
@@ -284,7 +282,8 @@ static void expandFPToI(Instruction *FPToI) {
 ///   %shl25 = shl i64 %sub, %sh_prom24
 ///   br label %if.end26
 ///
-/// if.end26:                                         ; preds = %sw.epilog, %if.else
+/// if.end26:                                         ; preds = %sw.epilog,
+///                                                   ;         %if.else
 ///   %a.addr.1 = phi i64 [ %shl25, %if.else ], [ %spec.select, %sw.epilog ]
 ///   %e.0 = phi i32 [ %sub2, %if.else ], [ %spec.select56, %sw.epilog ]
 ///   %conv27 = trunc i64 %shr to i32
@@ -298,7 +297,8 @@ static void expandFPToI(Instruction *FPToI) {
 ///   %4 = bitcast i32 %or33 to float
 ///   br label %return
 ///
-/// return:                                           ; preds = %entry, %if.end26
+/// return:                                           ; preds = %entry,
+///                                                   ;         %if.end26
 ///   %retval.0 = phi float [ %4, %if.end26 ], [ 0.000000e+00, %entry ]
 ///   ret float %retval.0
 /// }
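The expansion sketched in those doc comments works on the raw IEEE-754 encoding: shift out the exponent field, mask the mantissa, then scale. A minimal host-side analogue for `float` (illustrative only; the pass emits IR performing this on arbitrary-width integers):

```cpp
#include <cstdint>
#include <cstring>

// Decompose an IEEE-754 single into sign, biased exponent and mantissa --
// the same fields the expanded fptosi code extracts with lshr/and.
struct FloatBits {
  uint32_t Sign;     // 1 bit
  uint32_t Exponent; // 8 bits, biased by 127
  uint32_t Mantissa; // 23 bits, implicit leading 1 for normals
};

FloatBits decodeFloat(float F) {
  uint32_t Bits;
  std::memcpy(&Bits, &F, sizeof(Bits)); // bit-exact, no aliasing UB
  return {Bits >> 31, (Bits >> 23) & 0xFF, Bits & 0x7FFFFF};
}
```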
diff --git a/llvm/lib/CodeGen/ExpandMemCmp.cpp b/llvm/lib/CodeGen/ExpandMemCmp.cpp
index 500f31bd8e89295..0a6d69995fd6260 100644
--- a/llvm/lib/CodeGen/ExpandMemCmp.cpp
+++ b/llvm/lib/CodeGen/ExpandMemCmp.cpp
@@ -59,7 +59,6 @@ static cl::opt<unsigned> MaxLoadsPerMemcmpOptSize(
 
 namespace {
 
-
 // This class provides helper functions to expand a memcmp library call into an
 // inline expansion.
 class MemCmpExpansion {
@@ -89,8 +88,7 @@ class MemCmpExpansion {
   // 1x1-byte load, which would be represented as [{16, 0}, {16, 16}, {1, 32}.
   struct LoadEntry {
     LoadEntry(unsigned LoadSize, uint64_t Offset)
-        : LoadSize(LoadSize), Offset(Offset) {
-    }
+        : LoadSize(LoadSize), Offset(Offset) {}
 
     // The size of the load for this block, in bytes.
     unsigned LoadSize;
@@ -629,7 +627,8 @@ Value *MemCmpExpansion::getMemCmpExpansion() {
     // calculate which source was larger. The calculation requires the
     // two loaded source values of each load compare block.
     // These will be saved in the phi nodes created by setupResultBlockPHINodes.
-    if (!IsUsedForZeroCmp) setupResultBlockPHINodes();
+    if (!IsUsedForZeroCmp)
+      setupResultBlockPHINodes();
 
     // Create the number of required load compare basic blocks.
     createLoadCmpBlocks();
@@ -759,15 +758,14 @@ static bool expandMemCmp(CallInst *CI, const TargetTransformInfo *TTI,
       IsBCmp || isOnlyUsedInZeroEqualityComparison(CI);
   bool OptForSize = CI->getFunction()->hasOptSize() ||
                     llvm::shouldOptimizeForSize(CI->getParent(), PSI, BFI);
-  auto Options = TTI->enableMemCmpExpansion(OptForSize,
-                                            IsUsedForZeroCmp);
-  if (!Options) return false;
+  auto Options = TTI->enableMemCmpExpansion(OptForSize, IsUsedForZeroCmp);
+  if (!Options)
+    return false;
 
   if (MemCmpEqZeroNumLoadsPerBlock.getNumOccurrences())
     Options.NumLoadsPerBlock = MemCmpEqZeroNumLoadsPerBlock;
 
-  if (OptForSize &&
-      MaxLoadsPerMemcmpOptSize.getNumOccurrences())
+  if (OptForSize && MaxLoadsPerMemcmpOptSize.getNumOccurrences())
     Options.MaxNumLoads = MaxLoadsPerMemcmpOptSize;
 
   if (!OptForSize && MaxLoadsPerMemcmp.getNumOccurrences())
@@ -801,13 +799,14 @@ class ExpandMemCmpPass : public FunctionPass {
   }
 
   bool runOnFunction(Function &F) override {
-    if (skipFunction(F)) return false;
+    if (skipFunction(F))
+      return false;
 
     auto *TPC = getAnalysisIfAvailable<TargetPassConfig>();
     if (!TPC) {
       return false;
     }
-    const TargetLowering* TL =
+    const TargetLowering *TL =
         TPC->getTM<TargetMachine>().getSubtargetImpl(F)->getTargetLowering();
 
     const TargetLibraryInfo *TLI =
@@ -815,9 +814,9 @@ class ExpandMemCmpPass : public FunctionPass {
     const TargetTransformInfo *TTI =
         &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
     auto *PSI = &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI();
-    auto *BFI = (PSI && PSI->hasProfileSummary()) ?
-           &getAnalysis<LazyBlockFrequencyInfoPass>().getBFI() :
-           nullptr;
+    auto *BFI = (PSI && PSI->hasProfileSummary())
+                    ? &getAnalysis<LazyBlockFrequencyInfoPass>().getBFI()
+                    : nullptr;
     DominatorTree *DT = nullptr;
     if (auto *DTWP = getAnalysisIfAvailable<DominatorTreeWrapperPass>())
       DT = &DTWP->getDomTree();
@@ -852,7 +851,7 @@ bool ExpandMemCmpPass::runOnBlock(BasicBlock &BB, const TargetLibraryInfo *TLI,
                                   const DataLayout &DL, ProfileSummaryInfo *PSI,
                                   BlockFrequencyInfo *BFI,
                                   DomTreeUpdater *DTU) {
-  for (Instruction& I : BB) {
+  for (Instruction &I : BB) {
     CallInst *CI = dyn_cast<CallInst>(&I);
     if (!CI) {
       continue;
@@ -876,7 +875,7 @@ ExpandMemCmpPass::runImpl(Function &F, const TargetLibraryInfo *TLI,
   if (DT)
     DTU.emplace(DT, DomTreeUpdater::UpdateStrategy::Lazy);
 
-  const DataLayout& DL = F.getParent()->getDataLayout();
+  const DataLayout &DL = F.getParent()->getDataLayout();
   bool MadeChanges = false;
   for (auto BBIt = F.begin(); BBIt != F.end();) {
     if (runOnBlock(*BBIt, TLI, TTI, TL, DL, PSI, BFI, DTU ? &*DTU : nullptr)) {
@@ -911,6 +910,4 @@ INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_END(ExpandMemCmpPass, "expandmemcmp",
                     "Expand memcmp() to load/stores", false, false)
 
-FunctionPass *llvm::createExpandMemCmpPass() {
-  return new ExpandMemCmpPass();
-}
+FunctionPass *llvm::createExpandMemCmpPass() { return new ExpandMemCmpPass(); }
diff --git a/llvm/lib/CodeGen/ExpandPostRAPseudos.cpp b/llvm/lib/CodeGen/ExpandPostRAPseudos.cpp
index 3a79f20f47322fa..70df8e2b6e546f1 100644
--- a/llvm/lib/CodeGen/ExpandPostRAPseudos.cpp
+++ b/llvm/lib/CodeGen/ExpandPostRAPseudos.cpp
@@ -43,7 +43,7 @@ struct ExpandPostRA : public MachineFunctionPass {
   }
 
   /// runOnMachineFunction - pass entry point
-  bool runOnMachineFunction(MachineFunction&) override;
+  bool runOnMachineFunction(MachineFunction &) override;
 
 private:
   bool LowerSubregToReg(MachineInstr *MI);
@@ -61,12 +61,12 @@ bool ExpandPostRA::LowerSubregToReg(MachineInstr *MI) {
   assert((MI->getOperand(0).isReg() && MI->getOperand(0).isDef()) &&
          MI->getOperand(1).isImm() &&
          (MI->getOperand(2).isReg() && MI->getOperand(2).isUse()) &&
-          MI->getOperand(3).isImm() && "Invalid subreg_to_reg");
+         MI->getOperand(3).isImm() && "Invalid subreg_to_reg");
 
   Register DstReg = MI->getOperand(0).getReg();
   Register InsReg = MI->getOperand(2).getReg();
   assert(!MI->getOperand(2).getSubReg() && "SubIdx on physreg?");
-  unsigned SubIdx  = MI->getOperand(3).getImm();
+  unsigned SubIdx = MI->getOperand(3).getImm();
 
   assert(SubIdx != 0 && "Invalid index for insert_subreg");
   Register DstSubReg = TRI->getSubReg(DstReg, SubIdx);
@@ -93,8 +93,8 @@ bool ExpandPostRA::LowerSubregToReg(MachineInstr *MI) {
     // We must leave %rax live.
     if (DstReg != InsReg) {
       MI->setDesc(TII->get(TargetOpcode::KILL));
-      MI->removeOperand(3);     // SubIdx
-      MI->removeOperand(1);     // Imm
+      MI->removeOperand(3); // SubIdx
+      MI->removeOperand(1); // Imm
       LLVM_DEBUG(dbgs() << "subreg: replace by: " << *MI);
       return true;
     }
diff --git a/llvm/lib/CodeGen/ExpandReductions.cpp b/llvm/lib/CodeGen/ExpandReductions.cpp
index 79b6dc9154b3fc1..0b4db28ed627e45 100644
--- a/llvm/lib/CodeGen/ExpandReductions.cpp
+++ b/llvm/lib/CodeGen/ExpandReductions.cpp
@@ -80,7 +80,8 @@ bool expandReductions(Function &F, const TargetTransformInfo *TTI) {
   for (auto &I : instructions(F)) {
     if (auto *II = dyn_cast<IntrinsicInst>(&I)) {
       switch (II->getIntrinsicID()) {
-      default: break;
+      default:
+        break;
       case Intrinsic::vector_reduce_fadd:
       case Intrinsic::vector_reduce_fmul:
       case Intrinsic::vector_reduce_add:
@@ -113,7 +114,8 @@ bool expandReductions(Function &F, const TargetTransformInfo *TTI) {
     IRBuilder<>::FastMathFlagGuard FMFGuard(Builder);
     Builder.setFastMathFlags(FMF);
     switch (ID) {
-    default: llvm_unreachable("Unexpected intrinsic!");
+    default:
+      llvm_unreachable("Unexpected intrinsic!");
     case Intrinsic::vector_reduce_fadd:
     case Intrinsic::vector_reduce_fmul: {
       // FMFs must be attached to the call, otherwise it's an ordered reduction
@@ -128,8 +130,8 @@ bool expandReductions(Function &F, const TargetTransformInfo *TTI) {
           continue;
 
         Rdx = getShuffleReduction(Builder, Vec, getOpcode(ID), RK);
-        Rdx = Builder.CreateBinOp((Instruction::BinaryOps)getOpcode(ID),
-                                  Acc, Rdx, "bin.rdx");
+        Rdx = Builder.CreateBinOp((Instruction::BinaryOps)getOpcode(ID), Acc,
+                                  Rdx, "bin.rdx");
       }
       break;
     }
@@ -207,7 +209,7 @@ class ExpandReductions : public FunctionPass {
   }
 
   bool runOnFunction(Function &F) override {
-    const auto *TTI =&getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
+    const auto *TTI = &getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
     return expandReductions(F, TTI);
   }
 
@@ -216,7 +218,7 @@ class ExpandReductions : public FunctionPass {
     AU.setPreservesCFG();
   }
 };
-}
+} // namespace
 
 char ExpandReductions::ID;
 INITIALIZE_PASS_BEGIN(ExpandReductions, "expand-reductions",
diff --git a/llvm/lib/CodeGen/ExpandVectorPredication.cpp b/llvm/lib/CodeGen/ExpandVectorPredication.cpp
index 9807be0bea39eec..194dee669ce1286 100644
--- a/llvm/lib/CodeGen/ExpandVectorPredication.cpp
+++ b/llvm/lib/CodeGen/ExpandVectorPredication.cpp
@@ -604,7 +604,7 @@ Value *CachingVPExpander::expandPredication(VPIntrinsic &VPI) {
   case Intrinsic::vp_fneg: {
     Value *NewNegOp = Builder.CreateFNeg(VPI.getOperand(0), VPI.getName());
     replaceOperation(*NewNegOp, VPI);
-    return NewNegOp;  
+    return NewNegOp;
   }
   case Intrinsic::vp_fabs:
     return expandPredicationToFPCall(Builder, VPI, Intrinsic::fabs);
diff --git a/llvm/lib/CodeGen/FEntryInserter.cpp b/llvm/lib/CodeGen/FEntryInserter.cpp
index 68304dd41db04a4..1ccedfde923f26d 100644
--- a/llvm/lib/CodeGen/FEntryInserter.cpp
+++ b/llvm/lib/CodeGen/FEntryInserter.cpp
@@ -29,7 +29,7 @@ struct FEntryInserter : public MachineFunctionPass {
 
   bool runOnMachineFunction(MachineFunction &F) override;
 };
-}
+} // namespace
 
 bool FEntryInserter::runOnMachineFunction(MachineFunction &MF) {
   const std::string FEntryName = std::string(
diff --git a/llvm/lib/CodeGen/FinalizeISel.cpp b/llvm/lib/CodeGen/FinalizeISel.cpp
index 329c9587e321228..76cdfd6f6dc2422 100644
--- a/llvm/lib/CodeGen/FinalizeISel.cpp
+++ b/llvm/lib/CodeGen/FinalizeISel.cpp
@@ -24,18 +24,18 @@ using namespace llvm;
 #define DEBUG_TYPE "finalize-isel"
 
 namespace {
-  class FinalizeISel : public MachineFunctionPass {
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    FinalizeISel() : MachineFunctionPass(ID) {}
+class FinalizeISel : public MachineFunctionPass {
+public:
+  static char ID; // Pass identification, replacement for typeid
+  FinalizeISel() : MachineFunctionPass(ID) {}
 
-  private:
-    bool runOnMachineFunction(MachineFunction &MF) override;
+private:
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-  };
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
+};
 } // end anonymous namespace
 
 char FinalizeISel::ID = 0;
@@ -51,7 +51,7 @@ bool FinalizeISel::runOnMachineFunction(MachineFunction &MF) {
   for (MachineFunction::iterator I = MF.begin(), E = MF.end(); I != E; ++I) {
     MachineBasicBlock *MBB = &*I;
     for (MachineBasicBlock::iterator MBBI = MBB->begin(), MBBE = MBB->end();
-         MBBI != MBBE; ) {
+         MBBI != MBBE;) {
       MachineInstr &MI = *MBBI++;
 
       // If MI is a pseudo, expand it.
diff --git a/llvm/lib/CodeGen/FuncletLayout.cpp b/llvm/lib/CodeGen/FuncletLayout.cpp
index f1222a88b054adf..5520b82d2bbbc9e 100644
--- a/llvm/lib/CodeGen/FuncletLayout.cpp
+++ b/llvm/lib/CodeGen/FuncletLayout.cpp
@@ -33,12 +33,12 @@ class FuncletLayout : public MachineFunctionPass {
         MachineFunctionProperties::Property::NoVRegs);
   }
 };
-}
+} // namespace
 
 char FuncletLayout::ID = 0;
 char &llvm::FuncletLayoutID = FuncletLayout::ID;
-INITIALIZE_PASS(FuncletLayout, DEBUG_TYPE,
-                "Contiguously Lay Out Funclets", false, false)
+INITIALIZE_PASS(FuncletLayout, DEBUG_TYPE, "Contiguously Lay Out Funclets",
+                false, false)
 
 bool FuncletLayout::runOnMachineFunction(MachineFunction &F) {
   // Even though this gets information from getEHScopeMembership(), this pass is
diff --git a/llvm/lib/CodeGen/GCMetadata.cpp b/llvm/lib/CodeGen/GCMetadata.cpp
index 4d27143c529823e..721879438622dc7 100644
--- a/llvm/lib/CodeGen/GCMetadata.cpp
+++ b/llvm/lib/CodeGen/GCMetadata.cpp
@@ -116,7 +116,8 @@ bool Printer::runOnFunction(Function &F) {
   for (GCFunctionInfo::iterator PI = FD->begin(), PE = FD->end(); PI != PE;
        ++PI) {
 
-    OS << "\t" << PI->Label->getName() << ": " << "post-call"
+    OS << "\t" << PI->Label->getName() << ": "
+       << "post-call"
        << ", live = {";
 
     ListSeparator LS(",");
diff --git a/llvm/lib/CodeGen/GCRootLowering.cpp b/llvm/lib/CodeGen/GCRootLowering.cpp
index c0ce37091933af9..86c44860f69f527 100644
--- a/llvm/lib/CodeGen/GCRootLowering.cpp
+++ b/llvm/lib/CodeGen/GCRootLowering.cpp
@@ -70,7 +70,7 @@ class GCMachineCodeAnalysis : public MachineFunctionPass {
 
   bool runOnMachineFunction(MachineFunction &MF) override;
 };
-}
+} // namespace
 
 // -----------------------------------------------------------------------------
 
@@ -197,11 +197,12 @@ bool LowerIntrinsics::DoLowering(Function &F, GCStrategy &S) {
 
       Function *F = CI->getCalledFunction();
       switch (F->getIntrinsicID()) {
-      default: break;
+      default:
+        break;
       case Intrinsic::gcwrite: {
         // Replace a write barrier with a simple store.
-        Value *St = new StoreInst(CI->getArgOperand(0),
-                                  CI->getArgOperand(2), CI);
+        Value *St =
+            new StoreInst(CI->getArgOperand(0), CI->getArgOperand(2), CI);
         CI->replaceAllUsesWith(St);
         CI->eraseFromParent();
         MadeChange = true;
diff --git a/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt b/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
index 46e6c6df5998e5b..33d94c75566085e 100644
--- a/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
+++ b/llvm/lib/CodeGen/GlobalISel/CMakeLists.txt
@@ -1,44 +1,18 @@
-add_llvm_component_library(LLVMGlobalISel
-  CSEInfo.cpp
-  GISelKnownBits.cpp
-  CSEMIRBuilder.cpp
-  CallLowering.cpp
-  GlobalISel.cpp
-  Combiner.cpp
-  CombinerHelper.cpp
-  GIMatchTableExecutor.cpp
-  GISelChangeObserver.cpp
-  IRTranslator.cpp
-  InlineAsmLowering.cpp
-  InstructionSelect.cpp
-  InstructionSelector.cpp
-  LegalityPredicates.cpp
-  LegalizeMutations.cpp
-  Legalizer.cpp
-  LegalizerHelper.cpp
-  LegalizerInfo.cpp
-  LegacyLegalizerInfo.cpp
-  LoadStoreOpt.cpp
-  Localizer.cpp
-  LostDebugLocObserver.cpp
-  MachineIRBuilder.cpp
-  RegBankSelect.cpp
-  Utils.cpp
+add_llvm_component_library(LLVMGlobalISel
+  CSEInfo.cpp GISelKnownBits.cpp CSEMIRBuilder.cpp CallLowering.cpp
+  GlobalISel.cpp Combiner.cpp CombinerHelper.cpp GIMatchTableExecutor.cpp
+  GISelChangeObserver.cpp IRTranslator.cpp InlineAsmLowering.cpp
+  InstructionSelect.cpp InstructionSelector.cpp LegalityPredicates.cpp
+  LegalizeMutations.cpp Legalizer.cpp LegalizerHelper.cpp LegalizerInfo.cpp
+  LegacyLegalizerInfo.cpp LoadStoreOpt.cpp Localizer.cpp
+  LostDebugLocObserver.cpp MachineIRBuilder.cpp RegBankSelect.cpp Utils.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/GlobalISel
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/GlobalISel
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  CodeGen
-  CodeGenTypes
-  Core
-  MC
-  SelectionDAG
-  Support
-  Target
-  TransformUtils
-  )
+  LINK_COMPONENTS
+  Analysis CodeGen CodeGenTypes Core MC SelectionDAG Support Target
+  TransformUtils)
diff --git a/llvm/lib/CodeGen/GlobalISel/CallLowering.cpp b/llvm/lib/CodeGen/GlobalISel/CallLowering.cpp
index 0b1f151135be9fc..861806e3cdec15a 100644
--- a/llvm/lib/CodeGen/GlobalISel/CallLowering.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CallLowering.cpp
@@ -121,7 +121,6 @@ bool CallLowering::lowerCall(MachineIRBuilder &MIRBuilder, const CallBase &CB,
     CanBeTailCalled = false;
   }
 
-
   // First step is to marshall all the function's parameters into the correct
   // physregs and memory locations. Gather the sequence of argument types that
   // we'll pass to the assigner function.
@@ -756,8 +755,7 @@ bool CallLowering::handleAssignments(ValueHandler &Handler,
       }
 
       if (VA.isMemLoc() && Flags.isByVal()) {
-        assert(Args[i].Regs.size() == 1 &&
-               "didn't expect split byval pointer");
+        assert(Args[i].Regs.size() == 1 && "didn't expect split byval pointer");
 
         if (Handler.isIncomingArgumentHandler()) {
           // We just need to copy the frame index value to the pointer.
@@ -797,7 +795,8 @@ bool CallLowering::handleAssignments(ValueHandler &Handler,
         continue;
       }
 
-      assert(!VA.needsCustom() && "custom loc should have been handled already");
+      assert(!VA.needsCustom() &&
+             "custom loc should have been handled already");
 
       if (i == 0 && !ThisReturnRegs.empty() &&
           Handler.isIncomingArgumentHandler() &&
@@ -1157,7 +1156,8 @@ Register CallLowering::ValueHandler::extendRegister(Register ValReg,
   }
 
   switch (VA.getLocInfo()) {
-  default: break;
+  default:
+    break;
   case CCValAssign::Full:
   case CCValAssign::BCvt:
     // FIXME: bitconverting between vector types may or may not be a
diff --git a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
index 9030efb9c07b6e3..ba6cb7dd015b221 100644
--- a/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/CombinerHelper.cpp
@@ -116,7 +116,7 @@ isBigEndian(const SmallDenseMap<int64_t, int64_t, 8> &MemOffset2Idx,
   if (Width < 2)
     return std::nullopt;
   bool BigEndian = true, LittleEndian = true;
-  for (unsigned MemOffset = 0; MemOffset < Width; ++ MemOffset) {
+  for (unsigned MemOffset = 0; MemOffset < Width; ++MemOffset) {
     auto MemOffsetAndIdx = MemOffset2Idx.find(MemOffset);
     if (MemOffsetAndIdx == MemOffset2Idx.end())
       return std::nullopt;
@@ -273,8 +273,8 @@ bool CombinerHelper::matchCombineConcatVectors(MachineInstr &MI, bool &IsUndef,
   }
   return true;
 }
-void CombinerHelper::applyCombineConcatVectors(
-    MachineInstr &MI, bool IsUndef, const ArrayRef<Register> Ops) {
+void CombinerHelper::applyCombineConcatVectors(MachineInstr &MI, bool IsUndef,
+                                               const ArrayRef<Register> Ops) {
   // We determined that the concat_vectors can be flatten.
   // Generate the flattened build_vector.
   Register DstReg = MI.getOperand(0).getReg();
@@ -579,10 +579,9 @@ bool CombinerHelper::matchCombineExtendingLoads(MachineInstr &MI,
   // and emit a variant of (extend (trunc X)) for the others according to the
   // relative type sizes. At the same time, pick an extend to use based on the
   // extend involved in the chosen type.
-  unsigned PreferredOpcode =
-      isa<GLoad>(&MI)
-          ? TargetOpcode::G_ANYEXT
-          : isa<GSExtLoad>(&MI) ? TargetOpcode::G_SEXT : TargetOpcode::G_ZEXT;
+  unsigned PreferredOpcode = isa<GLoad>(&MI)       ? TargetOpcode::G_ANYEXT
+                             : isa<GSExtLoad>(&MI) ? TargetOpcode::G_SEXT
+                                                   : TargetOpcode::G_ZEXT;
   Preferred = {LLT(), PreferredOpcode, nullptr};
   for (auto &UseMI : MRI.use_nodbg_instructions(LoadReg)) {
     if (UseMI.getOpcode() == TargetOpcode::G_SEXT ||
@@ -1081,7 +1080,8 @@ bool CombinerHelper::tryCombineIndexedLoadStore(MachineInstr &MI) {
   return false;
 }
 
-bool CombinerHelper::matchCombineIndexedLoadStore(MachineInstr &MI, IndexedLoadStoreMatchInfo &MatchInfo) {
+bool CombinerHelper::matchCombineIndexedLoadStore(
+    MachineInstr &MI, IndexedLoadStoreMatchInfo &MatchInfo) {
   unsigned Opcode = MI.getOpcode();
   if (Opcode != TargetOpcode::G_LOAD && Opcode != TargetOpcode::G_SEXTLOAD &&
       Opcode != TargetOpcode::G_ZEXTLOAD && Opcode != TargetOpcode::G_STORE)
@@ -1240,7 +1240,7 @@ void CombinerHelper::applyCombineDivRem(MachineInstr &MI,
   Builder.buildInstr(IsSigned ? TargetOpcode::G_SDIVREM
                               : TargetOpcode::G_UDIVREM,
                      {DestDivReg, DestRemReg},
-                     { FirstInst->getOperand(1), FirstInst->getOperand(2) });
+                     {FirstInst->getOperand(1), FirstInst->getOperand(2)});
   MI.eraseFromParent();
   OtherMI->eraseFromParent();
 }
@@ -1680,7 +1680,8 @@ bool CombinerHelper::matchCommuteShift(MachineInstr &MI, BuildFnTy &MatchInfo) {
 
   auto *SrcDef = MRI.getVRegDef(SrcReg);
   assert((SrcDef->getOpcode() == TargetOpcode::G_ADD ||
-          SrcDef->getOpcode() == TargetOpcode::G_OR) && "Unexpected op");
+          SrcDef->getOpcode() == TargetOpcode::G_OR) &&
+         "Unexpected op");
   LLT SrcTy = MRI.getType(SrcReg);
   MatchInfo = [=](MachineIRBuilder &B) {
     auto S1 = B.buildShl(SrcTy, X, ShiftReg);
@@ -2002,7 +2003,8 @@ bool CombinerHelper::matchCombineShiftToUnmerge(MachineInstr &MI,
                                                 unsigned &ShiftVal) {
   assert((MI.getOpcode() == TargetOpcode::G_SHL ||
           MI.getOpcode() == TargetOpcode::G_LSHR ||
-          MI.getOpcode() == TargetOpcode::G_ASHR) && "Expected a shift");
+          MI.getOpcode() == TargetOpcode::G_ASHR) &&
+         "Expected a shift");
 
   LLT Ty = MRI.getType(MI.getOperand(0).getReg());
   if (Ty.isVector()) // TODO:
@@ -2046,8 +2048,10 @@ void CombinerHelper::applyCombineShiftToUnmerge(MachineInstr &MI,
     //   dst = G_MERGE_VALUES (G_LSHR hi, C - 32), 0
 
     if (NarrowShiftAmt != 0) {
-      Narrowed = Builder.buildLShr(HalfTy, Narrowed,
-        Builder.buildConstant(HalfTy, NarrowShiftAmt)).getReg(0);
+      Narrowed = Builder
+                     .buildLShr(HalfTy, Narrowed,
+                                Builder.buildConstant(HalfTy, NarrowShiftAmt))
+                     .getReg(0);
     }
 
     auto Zero = Builder.buildConstant(HalfTy, 0);
@@ -2059,17 +2063,18 @@ void CombinerHelper::applyCombineShiftToUnmerge(MachineInstr &MI,
     //   lo, hi = G_UNMERGE_VALUES x
     //   dst = G_MERGE_VALUES 0, (G_SHL hi, C - 32)
     if (NarrowShiftAmt != 0) {
-      Narrowed = Builder.buildShl(HalfTy, Narrowed,
-        Builder.buildConstant(HalfTy, NarrowShiftAmt)).getReg(0);
+      Narrowed = Builder
+                     .buildShl(HalfTy, Narrowed,
+                               Builder.buildConstant(HalfTy, NarrowShiftAmt))
+                     .getReg(0);
     }
 
     auto Zero = Builder.buildConstant(HalfTy, 0);
     Builder.buildMergeLikeInstr(DstReg, {Zero, Narrowed});
   } else {
     assert(MI.getOpcode() == TargetOpcode::G_ASHR);
-    auto Hi = Builder.buildAShr(
-      HalfTy, Unmerge.getReg(1),
-      Builder.buildConstant(HalfTy, HalfSize - 1));
+    auto Hi = Builder.buildAShr(HalfTy, Unmerge.getReg(1),
+                                Builder.buildConstant(HalfTy, HalfSize - 1));
 
     if (ShiftVal == HalfSize) {
       // (G_ASHR i64:x, 32) ->
@@ -2082,9 +2087,9 @@ void CombinerHelper::applyCombineShiftToUnmerge(MachineInstr &MI,
       //   G_MERGE_VALUES %narrowed, %narrowed
       Builder.buildMergeLikeInstr(DstReg, {Hi, Hi});
     } else {
-      auto Lo = Builder.buildAShr(
-        HalfTy, Unmerge.getReg(1),
-        Builder.buildConstant(HalfTy, ShiftVal - HalfSize));
+      auto Lo =
+          Builder.buildAShr(HalfTy, Unmerge.getReg(1),
+                            Builder.buildConstant(HalfTy, ShiftVal - HalfSize));
 
       // (G_ASHR i64:x, C) ->, for C >= 32
       //   G_MERGE_VALUES (G_ASHR hi_32(x), C - 32), (G_ASHR hi_32(x), 31)
@@ -2717,7 +2722,8 @@ void CombinerHelper::replaceInstWithConstant(MachineInstr &MI, APInt C) {
   MI.eraseFromParent();
 }
 
-void CombinerHelper::replaceInstWithFConstant(MachineInstr &MI, ConstantFP *CFP) {
+void CombinerHelper::replaceInstWithFConstant(MachineInstr &MI,
+                                              ConstantFP *CFP) {
   assert(MI.getNumDefs() == 1 && "Expected only one def?");
   Builder.setInstr(MI);
   Builder.buildFConstant(MI.getOperand(0), CFP->getValueAPF());
@@ -2968,9 +2974,7 @@ bool CombinerHelper::matchOverlappingAnd(
   Register R;
   int64_t C1;
   int64_t C2;
-  if (!mi_match(
-          Dst, MRI,
-          m_GAnd(m_GAnd(m_Reg(R), m_ICst(C1)), m_ICst(C2))))
+  if (!mi_match(Dst, MRI, m_GAnd(m_GAnd(m_Reg(R), m_ICst(C1)), m_ICst(C2))))
     return false;
 
   MatchInfo = [=](MachineIRBuilder &B) {
@@ -3582,8 +3586,8 @@ CombinerHelper::findLoadOffsetsForLoadOrCombine(
   // pattern.
   assert(Loads.size() == RegsToVisit.size() &&
          "Expected to find a load for each register?");
-  assert(EarliestLoad != LatestLoad && EarliestLoad &&
-         LatestLoad && "Expected at least two loads?");
+  assert(EarliestLoad != LatestLoad && EarliestLoad && LatestLoad &&
+         "Expected at least two loads?");
 
   // Check if there are any stores, calls, etc. between any of the loads. If
   // there are, then we can't safely perform the combine.
@@ -3681,10 +3685,9 @@ bool CombinerHelper::matchLoadOrCombine(
   // load x[i+1] -> byte 0 ---> wide_load x[i]
   // load x[i+2] -> byte 1
   const unsigned NumLoadsInTy = WideMemSizeInBits / NarrowMemSizeInBits;
-  const unsigned ZeroByteOffset =
-      *IsBigEndian
-          ? bigEndianByteAt(NumLoadsInTy, 0)
-          : littleEndianByteAt(NumLoadsInTy, 0);
+  const unsigned ZeroByteOffset = *IsBigEndian
+                                      ? bigEndianByteAt(NumLoadsInTy, 0)
+                                      : littleEndianByteAt(NumLoadsInTy, 0);
   auto ZeroOffsetIdx = MemOffset2Idx.find(ZeroByteOffset);
   if (ZeroOffsetIdx == MemOffset2Idx.end() ||
       ZeroOffsetIdx->second != LowestIdx)
@@ -4002,7 +4005,8 @@ bool CombinerHelper::matchFunnelShiftToRotate(MachineInstr &MI) {
     return false;
   unsigned RotateOpc =
       Opc == TargetOpcode::G_FSHL ? TargetOpcode::G_ROTL : TargetOpcode::G_ROTR;
-  return isLegalOrBeforeLegalizer({RotateOpc, {MRI.getType(X), MRI.getType(Y)}});
+  return isLegalOrBeforeLegalizer(
+      {RotateOpc, {MRI.getType(X), MRI.getType(Y)}});
 }
 
 void CombinerHelper::applyFunnelShiftToRotate(MachineInstr &MI) {
@@ -4320,9 +4324,7 @@ bool CombinerHelper::matchBitfieldExtractFromShrAnd(
 
   // If the shift subsumes the mask, emit the 0 directly.
   if (0 == (SMask >> ShrAmt)) {
-    MatchInfo = [=](MachineIRBuilder &B) {
-      B.buildConstant(Dst, 0);
-    };
+    MatchInfo = [=](MachineIRBuilder &B) { B.buildConstant(Dst, 0); };
     return true;
   }
 
@@ -4588,7 +4590,8 @@ bool CombinerHelper::matchReassocCommBinOp(MachineInstr &MI,
   return false;
 }
 
-bool CombinerHelper::matchConstantFoldCastOp(MachineInstr &MI, APInt &MatchInfo) {
+bool CombinerHelper::matchConstantFoldCastOp(MachineInstr &MI,
+                                             APInt &MatchInfo) {
   LLT DstTy = MRI.getType(MI.getOperand(0).getReg());
   Register SrcOp = MI.getOperand(1).getReg();
 
@@ -4600,7 +4603,8 @@ bool CombinerHelper::matchConstantFoldCastOp(MachineInstr &MI, APInt &MatchInfo)
   return false;
 }
 
-bool CombinerHelper::matchConstantFoldBinOp(MachineInstr &MI, APInt &MatchInfo) {
+bool CombinerHelper::matchConstantFoldBinOp(MachineInstr &MI,
+                                            APInt &MatchInfo) {
   Register Op1 = MI.getOperand(1).getReg();
   Register Op2 = MI.getOperand(2).getReg();
   auto MaybeCst = ConstantFoldBinOp(MI.getOpcode(), Op1, Op2, MRI);
@@ -4610,7 +4614,8 @@ bool CombinerHelper::matchConstantFoldBinOp(MachineInstr &MI, APInt &MatchInfo)
   return true;
 }
 
-bool CombinerHelper::matchConstantFoldFPBinOp(MachineInstr &MI, ConstantFP* &MatchInfo) {
+bool CombinerHelper::matchConstantFoldFPBinOp(MachineInstr &MI,
+                                              ConstantFP *&MatchInfo) {
   Register Op1 = MI.getOperand(1).getReg();
   Register Op2 = MI.getOperand(2).getReg();
   auto MaybeCst = ConstantFoldFPBinOp(MI.getOpcode(), Op1, Op2, MRI);
@@ -5803,7 +5808,7 @@ bool CombinerHelper::matchSelectToLogical(MachineInstr &MI,
     return true;
   }
 
- // select Cond, T, 1 --> or (not Cond), T
+  // select Cond, T, 1 --> or (not Cond), T
   if (MaybeCstFalse && MaybeCstFalse->isOne()) {
     MatchInfo = [=](MachineIRBuilder &MIB) {
       MIB.buildOr(DstReg, MIB.buildNot(OpTy, Cond), TrueReg);
diff --git a/llvm/lib/CodeGen/GlobalISel/GISelChangeObserver.cpp b/llvm/lib/CodeGen/GlobalISel/GISelChangeObserver.cpp
index 59f4d60a41d80db..7d6a55bbd4e503b 100644
--- a/llvm/lib/CodeGen/GlobalISel/GISelChangeObserver.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/GISelChangeObserver.cpp
@@ -15,8 +15,8 @@
 
 using namespace llvm;
 
-void GISelChangeObserver::changingAllUsesOfReg(
-    const MachineRegisterInfo &MRI, Register Reg) {
+void GISelChangeObserver::changingAllUsesOfReg(const MachineRegisterInfo &MRI,
+                                               Register Reg) {
   for (auto &ChangingMI : MRI.use_instructions(Reg)) {
     changingInstr(ChangingMI);
     ChangingAllUsesOfReg.insert(&ChangingMI);
diff --git a/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp b/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
index 2c726aca3cdea54..32fd6d1b799fb75 100644
--- a/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/GISelKnownBits.cpp
@@ -185,7 +185,8 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
     break;
   case TargetOpcode::G_BUILD_VECTOR: {
     // Collect the known bits that are shared by every demanded vector element.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     for (unsigned i = 0, e = MI.getNumOperands() - 1; i < e; ++i) {
       if (!DemandedElts[i])
         continue;
@@ -354,19 +355,19 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
   }
   case TargetOpcode::G_UMIN: {
     KnownBits KnownRHS;
-    computeKnownBitsImpl(MI.getOperand(1).getReg(), Known,
-                         DemandedElts, Depth + 1);
-    computeKnownBitsImpl(MI.getOperand(2).getReg(), KnownRHS,
-                         DemandedElts, Depth + 1);
+    computeKnownBitsImpl(MI.getOperand(1).getReg(), Known, DemandedElts,
+                         Depth + 1);
+    computeKnownBitsImpl(MI.getOperand(2).getReg(), KnownRHS, DemandedElts,
+                         Depth + 1);
     Known = KnownBits::umin(Known, KnownRHS);
     break;
   }
   case TargetOpcode::G_UMAX: {
     KnownBits KnownRHS;
-    computeKnownBitsImpl(MI.getOperand(1).getReg(), Known,
-                         DemandedElts, Depth + 1);
-    computeKnownBitsImpl(MI.getOperand(2).getReg(), KnownRHS,
-                         DemandedElts, Depth + 1);
+    computeKnownBitsImpl(MI.getOperand(1).getReg(), Known, DemandedElts,
+                         Depth + 1);
+    computeKnownBitsImpl(MI.getOperand(2).getReg(), KnownRHS, DemandedElts,
+                         Depth + 1);
     Known = KnownBits::umax(Known, KnownRHS);
     break;
   }
@@ -530,8 +531,8 @@ void GISelKnownBits::computeKnownBitsImpl(Register R, KnownBits &Known,
   case TargetOpcode::G_CTPOP: {
     computeKnownBitsImpl(MI.getOperand(1).getReg(), Known2, DemandedElts,
                          Depth + 1);
-    // We can bound the space the count needs.  Also, bits known to be zero can't
-    // contribute to the population.
+    // We can bound the space the count needs.  Also, bits known to be zero
+    // can't contribute to the population.
     unsigned BitsPossiblySet = Known2.countMaxPopulation();
     unsigned LowBits = llvm::bit_width(BitsPossiblySet);
     Known.Zero.setBitsFrom(LowBits);
@@ -656,7 +657,8 @@ unsigned GISelKnownBits::computeNumSignBits(Register R,
     Register Src = MI.getOperand(1).getReg();
     unsigned SrcBits = MI.getOperand(2).getImm();
     unsigned InRegBits = TyBits - SrcBits + 1;
-    return std::max(computeNumSignBits(Src, DemandedElts, Depth + 1), InRegBits);
+    return std::max(computeNumSignBits(Src, DemandedElts, Depth + 1),
+                    InRegBits);
   }
   case TargetOpcode::G_SEXTLOAD: {
     // FIXME: We need an in-memory type representation.
@@ -732,7 +734,7 @@ unsigned GISelKnownBits::computeNumSignBits(Register R,
   case TargetOpcode::G_INTRINSIC_CONVERGENT_W_SIDE_EFFECTS:
   default: {
     unsigned NumBits =
-      TL.computeNumSignBitsForTargetInstr(*this, R, DemandedElts, MRI, Depth);
+        TL.computeNumSignBitsForTargetInstr(*this, R, DemandedElts, MRI, Depth);
     if (NumBits > 1)
       FirstAnswer = std::max(FirstAnswer, NumBits);
     break;
@@ -743,9 +745,9 @@ unsigned GISelKnownBits::computeNumSignBits(Register R,
   // use this information.
   KnownBits Known = getKnownBits(R, DemandedElts, Depth);
   APInt Mask;
-  if (Known.isNonNegative()) {        // sign bit is 0
+  if (Known.isNonNegative()) { // sign bit is 0
     Mask = Known.Zero;
-  } else if (Known.isNegative()) {  // sign bit is 1;
+  } else if (Known.isNegative()) { // sign bit is 1
     Mask = Known.One;
   } else {
     // Nothing known.
diff --git a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
index 41a0295ddbae260..beebfa9de65451f 100644
--- a/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/IRTranslator.cpp
@@ -101,14 +101,14 @@ static cl::opt<bool>
 char IRTranslator::ID = 0;
 
 INITIALIZE_PASS_BEGIN(IRTranslator, DEBUG_TYPE, "IRTranslator LLVM IR -> MI",
-                false, false)
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(TargetPassConfig)
 INITIALIZE_PASS_DEPENDENCY(GISelCSEAnalysisWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(BlockFrequencyInfoWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(StackProtector)
 INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
 INITIALIZE_PASS_END(IRTranslator, DEBUG_TYPE, "IRTranslator LLVM IR -> MI",
-                false, false)
+                    false, false)
 
 static void reportTranslationError(MachineFunction &MF,
                                    const TargetPassConfig &TPC,
@@ -167,7 +167,6 @@ class DILocationVerifier : public GISelChangeObserver {
 } // namespace
 #endif // ifndef NDEBUG
 
-
 void IRTranslator::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<StackProtector>();
   AU.addRequired<TargetPassConfig>();
@@ -210,8 +209,7 @@ ArrayRef<Register> IRTranslator::getOrCreateVRegs(const Value &Val) {
   auto *VRegs = VMap.getVRegs(Val);
   auto *Offsets = VMap.getOffsets(Val);
 
-  assert(Val.getType()->isSized() &&
-         "Don't know how to create an empty vreg");
+  assert(Val.getType()->isSized() && "Don't know how to create an empty vreg");
 
   SmallVector<LLT, 4> SplitTys;
   computeValueLLTs(*DL, *Val.getType(), SplitTys,
@@ -334,14 +332,14 @@ bool IRTranslator::translateCompare(const User &U,
   Register Op0 = getOrCreateVReg(*U.getOperand(0));
   Register Op1 = getOrCreateVReg(*U.getOperand(1));
   Register Res = getOrCreateVReg(U);
-  CmpInst::Predicate Pred =
-      CI ? CI->getPredicate() : static_cast<CmpInst::Predicate>(
-                                    cast<ConstantExpr>(U).getPredicate());
+  CmpInst::Predicate Pred = CI ? CI->getPredicate()
+                               : static_cast<CmpInst::Predicate>(
+                                     cast<ConstantExpr>(U).getPredicate());
   if (CmpInst::isIntPredicate(Pred))
     MIRBuilder.buildICmp(Pred, Res, Op0, Op1);
   else if (Pred == CmpInst::FCMP_FALSE)
-    MIRBuilder.buildCopy(
-        Res, getOrCreateVReg(*Constant::getNullValue(U.getType())));
+    MIRBuilder.buildCopy(Res,
+                         getOrCreateVReg(*Constant::getNullValue(U.getType())));
   else if (Pred == CmpInst::FCMP_TRUE)
     MIRBuilder.buildCopy(
         Res, getOrCreateVReg(*Constant::getAllOnesValue(U.getType())));
@@ -861,8 +859,8 @@ void IRTranslator::emitSwitchCase(SwitchCG::CaseBlock &CB,
     assert(CB.PredInfo.Pred == CmpInst::ICMP_SLE &&
            "Can only handle SLE ranges");
 
-    const APInt& Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
-    const APInt& High = cast<ConstantInt>(CB.CmpRHS)->getValue();
+    const APInt &Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
+    const APInt &High = cast<ConstantInt>(CB.CmpRHS)->getValue();
 
     Register CmpOpReg = getOrCreateVReg(*CB.CmpMHS);
     if (cast<ConstantInt>(CB.CmpLHS)->isMinValue(true)) {
@@ -897,16 +895,12 @@ void IRTranslator::emitSwitchCase(SwitchCG::CaseBlock &CB,
   MIB.setDebugLoc(OldDbgLoc);
 }
 
-bool IRTranslator::lowerJumpTableWorkItem(SwitchCG::SwitchWorkListItem W,
-                                          MachineBasicBlock *SwitchMBB,
-                                          MachineBasicBlock *CurMBB,
-                                          MachineBasicBlock *DefaultMBB,
-                                          MachineIRBuilder &MIB,
-                                          MachineFunction::iterator BBI,
-                                          BranchProbability UnhandledProbs,
-                                          SwitchCG::CaseClusterIt I,
-                                          MachineBasicBlock *Fallthrough,
-                                          bool FallthroughUnreachable) {
+bool IRTranslator::lowerJumpTableWorkItem(
+    SwitchCG::SwitchWorkListItem W, MachineBasicBlock *SwitchMBB,
+    MachineBasicBlock *CurMBB, MachineBasicBlock *DefaultMBB,
+    MachineIRBuilder &MIB, MachineFunction::iterator BBI,
+    BranchProbability UnhandledProbs, SwitchCG::CaseClusterIt I,
+    MachineBasicBlock *Fallthrough, bool FallthroughUnreachable) {
   using namespace SwitchCG;
   MachineFunction *CurMF = SwitchMBB->getParent();
   // FIXME: Optimize away range check based on pivot comparisons.
@@ -968,14 +962,11 @@ bool IRTranslator::lowerJumpTableWorkItem(SwitchCG::SwitchWorkListItem W,
   }
   return true;
 }
-bool IRTranslator::lowerSwitchRangeWorkItem(SwitchCG::CaseClusterIt I,
-                                            Value *Cond,
-                                            MachineBasicBlock *Fallthrough,
-                                            bool FallthroughUnreachable,
-                                            BranchProbability UnhandledProbs,
-                                            MachineBasicBlock *CurMBB,
-                                            MachineIRBuilder &MIB,
-                                            MachineBasicBlock *SwitchMBB) {
+bool IRTranslator::lowerSwitchRangeWorkItem(
+    SwitchCG::CaseClusterIt I, Value *Cond, MachineBasicBlock *Fallthrough,
+    bool FallthroughUnreachable, BranchProbability UnhandledProbs,
+    MachineBasicBlock *CurMBB, MachineIRBuilder &MIB,
+    MachineBasicBlock *SwitchMBB) {
   using namespace SwitchCG;
   const Value *RHS, *LHS, *MHS;
   CmpInst::Predicate Pred;
@@ -1553,8 +1544,10 @@ bool IRTranslator::translateGetElementPtr(const User &U,
       LLT IdxTy = MRI->getType(IdxReg);
       if (IdxTy != OffsetTy) {
         if (!IdxTy.isVector() && WantSplatVector) {
-          IdxReg = MIRBuilder.buildSplatVector(
-            OffsetTy.changeElementType(IdxTy), IdxReg).getReg(0);
+          IdxReg =
+              MIRBuilder
+                  .buildSplatVector(OffsetTy.changeElementType(IdxTy), IdxReg)
+                  .getReg(0);
         }
 
         IdxReg = MIRBuilder.buildSExtOrTrunc(OffsetTy, IdxReg).getReg(0);
@@ -1576,8 +1569,7 @@ bool IRTranslator::translateGetElementPtr(const User &U,
   }
 
   if (Offset != 0) {
-    auto OffsetMIB =
-        MIRBuilder.buildConstant(OffsetTy, Offset);
+    auto OffsetMIB = MIRBuilder.buildConstant(OffsetTy, Offset);
     MIRBuilder.buildPtrAdd(getOrCreateVReg(U), BaseReg, OffsetMIB.getReg(0));
     return true;
   }
@@ -1716,115 +1708,115 @@ bool IRTranslator::translateFixedPointIntrinsic(unsigned Op, const CallInst &CI,
   Register Src0 = getOrCreateVReg(*CI.getOperand(0));
   Register Src1 = getOrCreateVReg(*CI.getOperand(1));
   uint64_t Scale = cast<ConstantInt>(CI.getOperand(2))->getZExtValue();
-  MIRBuilder.buildInstr(Op, {Dst}, { Src0, Src1, Scale });
+  MIRBuilder.buildInstr(Op, {Dst}, {Src0, Src1, Scale});
   return true;
 }
 
 unsigned IRTranslator::getSimpleIntrinsicOpcode(Intrinsic::ID ID) {
   switch (ID) {
-    default:
-      break;
-    case Intrinsic::bswap:
-      return TargetOpcode::G_BSWAP;
-    case Intrinsic::bitreverse:
-      return TargetOpcode::G_BITREVERSE;
-    case Intrinsic::fshl:
-      return TargetOpcode::G_FSHL;
-    case Intrinsic::fshr:
-      return TargetOpcode::G_FSHR;
-    case Intrinsic::ceil:
-      return TargetOpcode::G_FCEIL;
-    case Intrinsic::cos:
-      return TargetOpcode::G_FCOS;
-    case Intrinsic::ctpop:
-      return TargetOpcode::G_CTPOP;
-    case Intrinsic::exp:
-      return TargetOpcode::G_FEXP;
-    case Intrinsic::exp2:
-      return TargetOpcode::G_FEXP2;
-    case Intrinsic::exp10:
-      return TargetOpcode::G_FEXP10;
-    case Intrinsic::fabs:
-      return TargetOpcode::G_FABS;
-    case Intrinsic::copysign:
-      return TargetOpcode::G_FCOPYSIGN;
-    case Intrinsic::minnum:
-      return TargetOpcode::G_FMINNUM;
-    case Intrinsic::maxnum:
-      return TargetOpcode::G_FMAXNUM;
-    case Intrinsic::minimum:
-      return TargetOpcode::G_FMINIMUM;
-    case Intrinsic::maximum:
-      return TargetOpcode::G_FMAXIMUM;
-    case Intrinsic::canonicalize:
-      return TargetOpcode::G_FCANONICALIZE;
-    case Intrinsic::floor:
-      return TargetOpcode::G_FFLOOR;
-    case Intrinsic::fma:
-      return TargetOpcode::G_FMA;
-    case Intrinsic::log:
-      return TargetOpcode::G_FLOG;
-    case Intrinsic::log2:
-      return TargetOpcode::G_FLOG2;
-    case Intrinsic::log10:
-      return TargetOpcode::G_FLOG10;
-    case Intrinsic::ldexp:
-      return TargetOpcode::G_FLDEXP;
-    case Intrinsic::nearbyint:
-      return TargetOpcode::G_FNEARBYINT;
-    case Intrinsic::pow:
-      return TargetOpcode::G_FPOW;
-    case Intrinsic::powi:
-      return TargetOpcode::G_FPOWI;
-    case Intrinsic::rint:
-      return TargetOpcode::G_FRINT;
-    case Intrinsic::round:
-      return TargetOpcode::G_INTRINSIC_ROUND;
-    case Intrinsic::roundeven:
-      return TargetOpcode::G_INTRINSIC_ROUNDEVEN;
-    case Intrinsic::sin:
-      return TargetOpcode::G_FSIN;
-    case Intrinsic::sqrt:
-      return TargetOpcode::G_FSQRT;
-    case Intrinsic::trunc:
-      return TargetOpcode::G_INTRINSIC_TRUNC;
-    case Intrinsic::readcyclecounter:
-      return TargetOpcode::G_READCYCLECOUNTER;
-    case Intrinsic::ptrmask:
-      return TargetOpcode::G_PTRMASK;
-    case Intrinsic::lrint:
-      return TargetOpcode::G_INTRINSIC_LRINT;
-    // FADD/FMUL require checking the FMF, so are handled elsewhere.
-    case Intrinsic::vector_reduce_fmin:
-      return TargetOpcode::G_VECREDUCE_FMIN;
-    case Intrinsic::vector_reduce_fmax:
-      return TargetOpcode::G_VECREDUCE_FMAX;
-    case Intrinsic::vector_reduce_fminimum:
-      return TargetOpcode::G_VECREDUCE_FMINIMUM;
-    case Intrinsic::vector_reduce_fmaximum:
-      return TargetOpcode::G_VECREDUCE_FMAXIMUM;
-    case Intrinsic::vector_reduce_add:
-      return TargetOpcode::G_VECREDUCE_ADD;
-    case Intrinsic::vector_reduce_mul:
-      return TargetOpcode::G_VECREDUCE_MUL;
-    case Intrinsic::vector_reduce_and:
-      return TargetOpcode::G_VECREDUCE_AND;
-    case Intrinsic::vector_reduce_or:
-      return TargetOpcode::G_VECREDUCE_OR;
-    case Intrinsic::vector_reduce_xor:
-      return TargetOpcode::G_VECREDUCE_XOR;
-    case Intrinsic::vector_reduce_smax:
-      return TargetOpcode::G_VECREDUCE_SMAX;
-    case Intrinsic::vector_reduce_smin:
-      return TargetOpcode::G_VECREDUCE_SMIN;
-    case Intrinsic::vector_reduce_umax:
-      return TargetOpcode::G_VECREDUCE_UMAX;
-    case Intrinsic::vector_reduce_umin:
-      return TargetOpcode::G_VECREDUCE_UMIN;
-    case Intrinsic::lround:
-      return TargetOpcode::G_LROUND;
-    case Intrinsic::llround:
-      return TargetOpcode::G_LLROUND;
+  default:
+    break;
+  case Intrinsic::bswap:
+    return TargetOpcode::G_BSWAP;
+  case Intrinsic::bitreverse:
+    return TargetOpcode::G_BITREVERSE;
+  case Intrinsic::fshl:
+    return TargetOpcode::G_FSHL;
+  case Intrinsic::fshr:
+    return TargetOpcode::G_FSHR;
+  case Intrinsic::ceil:
+    return TargetOpcode::G_FCEIL;
+  case Intrinsic::cos:
+    return TargetOpcode::G_FCOS;
+  case Intrinsic::ctpop:
+    return TargetOpcode::G_CTPOP;
+  case Intrinsic::exp:
+    return TargetOpcode::G_FEXP;
+  case Intrinsic::exp2:
+    return TargetOpcode::G_FEXP2;
+  case Intrinsic::exp10:
+    return TargetOpcode::G_FEXP10;
+  case Intrinsic::fabs:
+    return TargetOpcode::G_FABS;
+  case Intrinsic::copysign:
+    return TargetOpcode::G_FCOPYSIGN;
+  case Intrinsic::minnum:
+    return TargetOpcode::G_FMINNUM;
+  case Intrinsic::maxnum:
+    return TargetOpcode::G_FMAXNUM;
+  case Intrinsic::minimum:
+    return TargetOpcode::G_FMINIMUM;
+  case Intrinsic::maximum:
+    return TargetOpcode::G_FMAXIMUM;
+  case Intrinsic::canonicalize:
+    return TargetOpcode::G_FCANONICALIZE;
+  case Intrinsic::floor:
+    return TargetOpcode::G_FFLOOR;
+  case Intrinsic::fma:
+    return TargetOpcode::G_FMA;
+  case Intrinsic::log:
+    return TargetOpcode::G_FLOG;
+  case Intrinsic::log2:
+    return TargetOpcode::G_FLOG2;
+  case Intrinsic::log10:
+    return TargetOpcode::G_FLOG10;
+  case Intrinsic::ldexp:
+    return TargetOpcode::G_FLDEXP;
+  case Intrinsic::nearbyint:
+    return TargetOpcode::G_FNEARBYINT;
+  case Intrinsic::pow:
+    return TargetOpcode::G_FPOW;
+  case Intrinsic::powi:
+    return TargetOpcode::G_FPOWI;
+  case Intrinsic::rint:
+    return TargetOpcode::G_FRINT;
+  case Intrinsic::round:
+    return TargetOpcode::G_INTRINSIC_ROUND;
+  case Intrinsic::roundeven:
+    return TargetOpcode::G_INTRINSIC_ROUNDEVEN;
+  case Intrinsic::sin:
+    return TargetOpcode::G_FSIN;
+  case Intrinsic::sqrt:
+    return TargetOpcode::G_FSQRT;
+  case Intrinsic::trunc:
+    return TargetOpcode::G_INTRINSIC_TRUNC;
+  case Intrinsic::readcyclecounter:
+    return TargetOpcode::G_READCYCLECOUNTER;
+  case Intrinsic::ptrmask:
+    return TargetOpcode::G_PTRMASK;
+  case Intrinsic::lrint:
+    return TargetOpcode::G_INTRINSIC_LRINT;
+  // FADD/FMUL require checking the FMF, so are handled elsewhere.
+  case Intrinsic::vector_reduce_fmin:
+    return TargetOpcode::G_VECREDUCE_FMIN;
+  case Intrinsic::vector_reduce_fmax:
+    return TargetOpcode::G_VECREDUCE_FMAX;
+  case Intrinsic::vector_reduce_fminimum:
+    return TargetOpcode::G_VECREDUCE_FMINIMUM;
+  case Intrinsic::vector_reduce_fmaximum:
+    return TargetOpcode::G_VECREDUCE_FMAXIMUM;
+  case Intrinsic::vector_reduce_add:
+    return TargetOpcode::G_VECREDUCE_ADD;
+  case Intrinsic::vector_reduce_mul:
+    return TargetOpcode::G_VECREDUCE_MUL;
+  case Intrinsic::vector_reduce_and:
+    return TargetOpcode::G_VECREDUCE_AND;
+  case Intrinsic::vector_reduce_or:
+    return TargetOpcode::G_VECREDUCE_OR;
+  case Intrinsic::vector_reduce_xor:
+    return TargetOpcode::G_VECREDUCE_XOR;
+  case Intrinsic::vector_reduce_smax:
+    return TargetOpcode::G_VECREDUCE_SMAX;
+  case Intrinsic::vector_reduce_smin:
+    return TargetOpcode::G_VECREDUCE_SMIN;
+  case Intrinsic::vector_reduce_umax:
+    return TargetOpcode::G_VECREDUCE_UMAX;
+  case Intrinsic::vector_reduce_umin:
+    return TargetOpcode::G_VECREDUCE_UMIN;
+  case Intrinsic::lround:
+    return TargetOpcode::G_LROUND;
+  case Intrinsic::llround:
+    return TargetOpcode::G_LLROUND;
   }
   return Intrinsic::not_intrinsic;
 }
@@ -1874,7 +1866,7 @@ static unsigned getConstrainedOpcode(Intrinsic::ID ID) {
 }
 
 bool IRTranslator::translateConstrainedFPIntrinsic(
-  const ConstrainedFPIntrinsic &FPI, MachineIRBuilder &MIRBuilder) {
+    const ConstrainedFPIntrinsic &FPI, MachineIRBuilder &MIRBuilder) {
   fp::ExceptionBehavior EB = *FPI.getExceptionBehavior();
 
   unsigned Opcode = getConstrainedOpcode(FPI.getIntrinsicID());
@@ -2034,9 +2026,9 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
     const DbgLabelInst &DI = cast<DbgLabelInst>(CI);
     assert(DI.getLabel() && "Missing label");
 
-    assert(DI.getLabel()->isValidLocationForIntrinsic(
-               MIRBuilder.getDebugLoc()) &&
-           "Expected inlined-at fields to agree");
+    assert(
+        DI.getLabel()->isValidLocationForIntrinsic(MIRBuilder.getDebugLoc()) &&
+        "Expected inlined-at fields to agree");
 
     MIRBuilder.buildDbgLabel(DI.getLabel());
     return true;
@@ -2133,21 +2125,29 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
     // TODO: Preserve "int min is poison" arg in GMIR?
     return translateUnaryOp(TargetOpcode::G_ABS, CI, MIRBuilder);
   case Intrinsic::smul_fix:
-    return translateFixedPointIntrinsic(TargetOpcode::G_SMULFIX, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_SMULFIX, CI,
+                                        MIRBuilder);
   case Intrinsic::umul_fix:
-    return translateFixedPointIntrinsic(TargetOpcode::G_UMULFIX, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_UMULFIX, CI,
+                                        MIRBuilder);
   case Intrinsic::smul_fix_sat:
-    return translateFixedPointIntrinsic(TargetOpcode::G_SMULFIXSAT, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_SMULFIXSAT, CI,
+                                        MIRBuilder);
   case Intrinsic::umul_fix_sat:
-    return translateFixedPointIntrinsic(TargetOpcode::G_UMULFIXSAT, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_UMULFIXSAT, CI,
+                                        MIRBuilder);
   case Intrinsic::sdiv_fix:
-    return translateFixedPointIntrinsic(TargetOpcode::G_SDIVFIX, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_SDIVFIX, CI,
+                                        MIRBuilder);
   case Intrinsic::udiv_fix:
-    return translateFixedPointIntrinsic(TargetOpcode::G_UDIVFIX, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_UDIVFIX, CI,
+                                        MIRBuilder);
   case Intrinsic::sdiv_fix_sat:
-    return translateFixedPointIntrinsic(TargetOpcode::G_SDIVFIXSAT, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_SDIVFIXSAT, CI,
+                                        MIRBuilder);
   case Intrinsic::udiv_fix_sat:
-    return translateFixedPointIntrinsic(TargetOpcode::G_UDIVFIXSAT, CI, MIRBuilder);
+    return translateFixedPointIntrinsic(TargetOpcode::G_UDIVFIXSAT, CI,
+                                        MIRBuilder);
   case Intrinsic::fmuladd: {
     const TargetMachine &TM = MF->getTarget();
     const TargetLowering &TLI = *MF->getSubtarget().getTargetLowering();
@@ -2249,11 +2249,11 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
   case Intrinsic::ctlz: {
     ConstantInt *Cst = cast<ConstantInt>(CI.getArgOperand(1));
     bool isTrailing = ID == Intrinsic::cttz;
-    unsigned Opcode = isTrailing
-                          ? Cst->isZero() ? TargetOpcode::G_CTTZ
-                                          : TargetOpcode::G_CTTZ_ZERO_UNDEF
-                          : Cst->isZero() ? TargetOpcode::G_CTLZ
-                                          : TargetOpcode::G_CTLZ_ZERO_UNDEF;
+    unsigned Opcode = isTrailing      ? Cst->isZero()
+                                            ? TargetOpcode::G_CTTZ
+                                            : TargetOpcode::G_CTTZ_ZERO_UNDEF
+                      : Cst->isZero() ? TargetOpcode::G_CTLZ
+                                      : TargetOpcode::G_CTLZ_ZERO_UNDEF;
     MIRBuilder.buildInstr(Opcode, {getOrCreateVReg(CI)},
                           {getOrCreateVReg(*CI.getArgOperand(0))});
     return true;
@@ -2293,8 +2293,8 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
   case Intrinsic::write_register: {
     Value *Arg = CI.getArgOperand(0);
     MIRBuilder.buildInstr(TargetOpcode::G_WRITE_REGISTER)
-      .addMetadata(cast<MDNode>(cast<MetadataAsValue>(Arg)->getMetadata()))
-      .addUse(getOrCreateVReg(*CI.getArgOperand(1)));
+        .addMetadata(cast<MDNode>(cast<MetadataAsValue>(Arg)->getMetadata()))
+        .addUse(getOrCreateVReg(*CI.getArgOperand(1)));
     return true;
   }
   case Intrinsic::localescape: {
@@ -2404,12 +2404,11 @@ bool IRTranslator::translateKnownIntrinsic(const CallInst &CI, Intrinsic::ID ID,
 
     return true;
   }
-#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)  \
+#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)                         \
   case Intrinsic::INTRINSIC:
 #include "llvm/IR/ConstrainedOps.def"
     return translateConstrainedFPIntrinsic(cast<ConstrainedFPIntrinsic>(CI),
                                            MIRBuilder);
-
   }
   return false;
 }
@@ -2575,16 +2574,15 @@ bool IRTranslator::translateCall(const User &U, MachineIRBuilder &MIRBuilder) {
       MPI = MachinePointerInfo(Info.ptrVal, Info.offset);
     else if (Info.fallbackAddressSpace)
       MPI = MachinePointerInfo(*Info.fallbackAddressSpace);
-    MIB.addMemOperand(
-        MF->getMachineMemOperand(MPI, Info.flags, MemTy, Alignment, CI.getAAMetadata()));
+    MIB.addMemOperand(MF->getMachineMemOperand(MPI, Info.flags, MemTy,
+                                               Alignment, CI.getAAMetadata()));
   }
 
   return true;
 }
 
 bool IRTranslator::findUnwindDestinations(
-    const BasicBlock *EHPadBB,
-    BranchProbability Prob,
+    const BasicBlock *EHPadBB, BranchProbability Prob,
     SmallVectorImpl<std::pair<MachineBasicBlock *, BranchProbability>>
         &UnwindDests) {
   EHPersonality Personality = classifyEHPersonality(
@@ -2749,8 +2747,7 @@ bool IRTranslator::translateLandingPad(const User &U,
 
   // Add a label to mark the beginning of the landing pad.  Deletion of the
   // landing pad can thus be detected via the MachineModuleInfo.
-  MIRBuilder.buildInstr(TargetOpcode::EH_LABEL)
-    .addSym(MF->addLandingPad(&MBB));
+  MIRBuilder.buildInstr(TargetOpcode::EH_LABEL).addSym(MF->addLandingPad(&MBB));
 
   // If the unwinder does not preserve all registers, ensure that the
   // function marks the clobbered registers as used.
@@ -2855,7 +2852,8 @@ bool IRTranslator::translateVAArg(const User &U, MachineIRBuilder &MIRBuilder) {
   return true;
 }
 
-bool IRTranslator::translateUnreachable(const User &U, MachineIRBuilder &MIRBuilder) {
+bool IRTranslator::translateUnreachable(const User &U,
+                                        MachineIRBuilder &MIRBuilder) {
   if (!MF->getTarget().Options.TrapUnreachable)
     return true;
 
@@ -2865,7 +2863,7 @@ bool IRTranslator::translateUnreachable(const User &U, MachineIRBuilder &MIRBuil
     const BasicBlock &BB = *UI.getParent();
     if (&UI != &BB.front()) {
       BasicBlock::const_iterator PredI =
-        std::prev(BasicBlock::const_iterator(UI));
+          std::prev(BasicBlock::const_iterator(UI));
       if (const CallInst *Call = dyn_cast<CallInst>(&*PredI)) {
         if (Call->doesNotReturn())
           return true;
@@ -3049,8 +3047,7 @@ bool IRTranslator::translateAtomicRMW(const User &U,
   return true;
 }
 
-bool IRTranslator::translateFence(const User &U,
-                                  MachineIRBuilder &MIRBuilder) {
+bool IRTranslator::translateFence(const User &U, MachineIRBuilder &MIRBuilder) {
   const FenceInst &Fence = cast<FenceInst>(U);
   MIRBuilder.buildFence(static_cast<unsigned>(Fence.getOrdering()),
                         Fence.getSyncScopeID());
@@ -3163,7 +3160,7 @@ bool IRTranslator::translate(const Constant &C, Register Reg) {
     }
     EntryBuilder->buildBuildVector(Reg, Ops);
   } else if (auto CE = dyn_cast<ConstantExpr>(&C)) {
-    switch(CE->getOpcode()) {
+    switch (CE->getOpcode()) {
 #define HANDLE_INST(NUM, OPCODE, CLASS)                                        \
   case Instruction::OPCODE:                                                    \
     return translate##OPCODE(*CE, *EntryBuilder.get());
@@ -3508,8 +3505,6 @@ bool IRTranslator::runOnMachineFunction(MachineFunction &CurMF) {
   SL = std::make_unique<GISelSwitchLowering>(this, FuncInfo);
   SL->init(TLI, TM, *DL);
 
-
-
   assert(PendingPHIs.empty() && "stale PHIs");
 
   // Targets which want to use big endian can enable it using
@@ -3538,7 +3533,7 @@ bool IRTranslator::runOnMachineFunction(MachineFunction &CurMF) {
   bool HasMustTailInVarArgFn = false;
 
   // Create all blocks, in IR order, to preserve the layout.
-  for (const BasicBlock &BB: F) {
+  for (const BasicBlock &BB : F) {
     auto *&MBB = BBToMBB[&BB];
 
     MBB = MF->CreateMachineBasicBlock(&BB);
@@ -3566,7 +3561,7 @@ bool IRTranslator::runOnMachineFunction(MachineFunction &CurMF) {
 
   // Lower the actual args into this basic block.
   SmallVector<ArrayRef<Register>, 8> VRegArgs;
-  for (const Argument &Arg: F.args()) {
+  for (const Argument &Arg : F.args()) {
     if (DL->getTypeStoreSize(Arg.getType()).isZero())
       continue; // Don't handle zero sized types.
     ArrayRef<Register> VRegs = getOrCreateVRegs(Arg);
diff --git a/llvm/lib/CodeGen/GlobalISel/LegacyLegalizerInfo.cpp b/llvm/lib/CodeGen/GlobalISel/LegacyLegalizerInfo.cpp
index 45b403bdd07658e..ea86ceee4842356 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegacyLegalizerInfo.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegacyLegalizerInfo.cpp
@@ -244,7 +244,8 @@ LegacyLegalizerInfo::decreaseToSmallerTypesAndIncreaseToSmallest(
 }
 
 LegacyLegalizerInfo::SizeAndAction
-LegacyLegalizerInfo::findAction(const SizeAndActionsVec &Vec, const uint32_t Size) {
+LegacyLegalizerInfo::findAction(const SizeAndActionsVec &Vec,
+                                const uint32_t Size) {
   assert(Size >= 1);
   // Find the last element in Vec that has a bitsize equal to or smaller than
   // the requested bit size.
@@ -311,11 +312,10 @@ LegacyLegalizerInfo::findScalarLegalAction(const InstrAspect &Aspect) const {
     return {NotFound, LLT()};
   }
   const SmallVector<SizeAndActionsVec, 1> &Actions =
-      Aspect.Type.isPointer()
-          ? AddrSpace2PointerActions[OpcodeIdx]
-                .find(Aspect.Type.getAddressSpace())
-                ->second
-          : ScalarActions[OpcodeIdx];
+      Aspect.Type.isPointer() ? AddrSpace2PointerActions[OpcodeIdx]
+                                    .find(Aspect.Type.getAddressSpace())
+                                    ->second
+                              : ScalarActions[OpcodeIdx];
   if (Aspect.Idx >= Actions.size())
     return {NotFound, LLT()};
   const SizeAndActionsVec &Vec = Actions[Aspect.Idx];
@@ -368,7 +368,6 @@ unsigned LegacyLegalizerInfo::getOpcodeIdxForOpcode(unsigned Opcode) const {
   return Opcode - FirstOp;
 }
 
-
 LegacyLegalizeActionStep
 LegacyLegalizerInfo::getAction(const LegalityQuery &Query) const {
   for (unsigned i = 0; i < Query.Types.size(); ++i) {
@@ -383,4 +382,3 @@ LegacyLegalizerInfo::getAction(const LegalityQuery &Query) const {
   LLVM_DEBUG(dbgs() << ".. (legacy) Legal\n");
   return {Legal, 0, LLT{}};
 }
-
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp b/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
index 2c77ed8b0600888..71bacc0b0d4dc90 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalityPredicates.cpp
@@ -123,7 +123,7 @@ LegalityPredicate LegalityPredicates::smallerThan(unsigned TypeIdx0,
 }
 
 LegalityPredicate LegalityPredicates::largerThan(unsigned TypeIdx0,
-                                                  unsigned TypeIdx1) {
+                                                 unsigned TypeIdx1) {
   return [=](const LegalityQuery &Query) {
     return Query.Types[TypeIdx0].getSizeInBits() >
            Query.Types[TypeIdx1].getSizeInBits();
diff --git a/llvm/lib/CodeGen/GlobalISel/Legalizer.cpp b/llvm/lib/CodeGen/GlobalISel/Legalizer.cpp
index aecbe0b7604c074..bfca479ddf3e1d5 100644
--- a/llvm/lib/CodeGen/GlobalISel/Legalizer.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/Legalizer.cpp
@@ -81,7 +81,7 @@ INITIALIZE_PASS_END(Legalizer, DEBUG_TYPE,
                     "Legalize the Machine IR a function's Machine IR", false,
                     false)
 
-Legalizer::Legalizer() : MachineFunctionPass(ID) { }
+Legalizer::Legalizer() : MachineFunctionPass(ID) {}
 
 void Legalizer::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<TargetPassConfig>();
@@ -93,8 +93,7 @@ void Legalizer::getAnalysisUsage(AnalysisUsage &AU) const {
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
-void Legalizer::init(MachineFunction &MF) {
-}
+void Legalizer::init(MachineFunction &MF) {}
 
 static bool isArtifact(const MachineInstr &MI) {
   switch (MI.getOpcode()) {
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
index cfb95955d1f888b..b6226746d44c0e6 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerHelper.cpp
@@ -51,8 +51,8 @@ using namespace MIPatternMatch;
 ///
 /// Returns -1 in the first element of the pair if the breakdown is not
 /// satisfiable.
-static std::pair<int, int>
-getNarrowTypeBreakDown(LLT OrigTy, LLT NarrowTy, LLT &LeftoverTy) {
+static std::pair<int, int> getNarrowTypeBreakDown(LLT OrigTy, LLT NarrowTy,
+                                                  LLT &LeftoverTy) {
   assert(!LeftoverTy.isValid() && "this is an out argument");
 
   unsigned Size = OrigTy.getSizeInBits();
@@ -163,8 +163,8 @@ void LegalizerHelper::extractParts(Register Reg, LLT Ty, int NumParts,
   MIRBuilder.buildUnmerge(VRegs, Reg);
 }
 
-bool LegalizerHelper::extractParts(Register Reg, LLT RegTy,
-                                   LLT MainTy, LLT &LeftoverTy,
+bool LegalizerHelper::extractParts(Register Reg, LLT RegTy, LLT MainTy,
+                                   LLT &LeftoverTy,
                                    SmallVectorImpl<Register> &VRegs,
                                    SmallVectorImpl<Register> &LeftoverRegs) {
   assert(!LeftoverTy.isValid() && "this is an out argument");
@@ -250,10 +250,8 @@ void LegalizerHelper::extractVectorParts(Register Reg, unsigned NumElts,
   }
 }
 
-void LegalizerHelper::insertParts(Register DstReg,
-                                  LLT ResultTy, LLT PartTy,
-                                  ArrayRef<Register> PartRegs,
-                                  LLT LeftoverTy,
+void LegalizerHelper::insertParts(Register DstReg, LLT ResultTy, LLT PartTy,
+                                  ArrayRef<Register> PartRegs, LLT LeftoverTy,
                                   ArrayRef<Register> LeftoverRegs) {
   if (!LeftoverTy.isValid()) {
     assert(LeftoverRegs.empty());
@@ -368,7 +366,7 @@ LLT LegalizerHelper::buildLCMMergePieces(LLT DstTy, LLT NarrowTy, LLT GCDTy,
 
       // Shift the sign bit of the low register through the high register.
       auto ShiftAmt =
-        MIRBuilder.buildConstant(LLT::scalar(64), GCDTy.getSizeInBits() - 1);
+          MIRBuilder.buildConstant(LLT::scalar(64), GCDTy.getSizeInBits() - 1);
       PadReg = MIRBuilder.buildAShr(GCDTy, VRegs.back(), ShiftAmt).getReg(0);
     }
   }
@@ -853,11 +851,13 @@ LegalizerHelper::libcall(MachineInstr &MI, LostDebugLocObserver &LocObserver) {
   }
   case TargetOpcode::G_FPEXT:
   case TargetOpcode::G_FPTRUNC: {
-    Type *FromTy = getFloatTypeForLLT(Ctx,  MRI.getType(MI.getOperand(1).getReg()));
-    Type *ToTy = getFloatTypeForLLT(Ctx, MRI.getType(MI.getOperand(0).getReg()));
+    Type *FromTy =
+        getFloatTypeForLLT(Ctx, MRI.getType(MI.getOperand(1).getReg()));
+    Type *ToTy =
+        getFloatTypeForLLT(Ctx, MRI.getType(MI.getOperand(0).getReg()));
     if (!FromTy || !ToTy)
       return UnableToLegalize;
-    LegalizeResult Status = conversionLibcall(MI, MIRBuilder, ToTy, FromTy );
+    LegalizeResult Status = conversionLibcall(MI, MIRBuilder, ToTy, FromTy);
     if (Status != Legalized)
       return Status;
     break;
@@ -974,13 +974,12 @@ LegalizerHelper::LegalizeResult LegalizerHelper::narrowScalar(MachineInstr &MI,
     if (LeftoverBits != 0) {
       LeftoverTy = LLT::scalar(LeftoverBits);
       auto K = MIRBuilder.buildConstant(
-        LeftoverTy,
-        Val.lshr(NumParts * NarrowSize).trunc(LeftoverBits));
+          LeftoverTy, Val.lshr(NumParts * NarrowSize).trunc(LeftoverBits));
       LeftoverRegs.push_back(K.getReg(0));
     }
 
-    insertParts(MI.getOperand(0).getReg(),
-                Ty, NarrowTy, PartRegs, LeftoverTy, LeftoverRegs);
+    insertParts(MI.getOperand(0).getReg(), Ty, NarrowTy, PartRegs, LeftoverTy,
+                LeftoverRegs);
 
     MI.eraseFromParent();
     return Legalized;
@@ -1540,8 +1539,9 @@ LegalizerHelper::widenScalarMergeValues(MachineInstr &MI, unsigned TypeIdx,
 
       auto ZextInput = MIRBuilder.buildZExt(WideTy, SrcReg);
 
-      Register NextResult = I + 1 == NumOps && WideTy == DstTy ? DstReg :
-        MRI.createGenericVirtualRegister(WideTy);
+      Register NextResult = I + 1 == NumOps && WideTy == DstTy
+                                ? DstReg
+                                : MRI.createGenericVirtualRegister(WideTy);
 
       auto ShiftAmt = MIRBuilder.buildConstant(WideTy, Offset);
       auto Shl = MIRBuilder.buildShl(WideTy, ZextInput, ShiftAmt);
@@ -1786,8 +1786,7 @@ LegalizerHelper::widenScalarExtract(MachineInstr &MI, unsigned TypeIdx,
 
     if (Offset == 0) {
       // Avoid a shift in the degenerate case.
-      MIRBuilder.buildTrunc(DstReg,
-                            MIRBuilder.buildAnyExtOrTrunc(WideTy, Src));
+      MIRBuilder.buildTrunc(DstReg, MIRBuilder.buildAnyExtOrTrunc(WideTy, Src));
       MI.eraseFromParent();
       return Legalized;
     }
@@ -1799,8 +1798,8 @@ LegalizerHelper::widenScalarExtract(MachineInstr &MI, unsigned TypeIdx,
       ShiftTy = WideTy;
     }
 
-    auto LShr = MIRBuilder.buildLShr(
-      ShiftTy, Src, MIRBuilder.buildConstant(ShiftTy, Offset));
+    auto LShr = MIRBuilder.buildLShr(ShiftTy, Src,
+                                     MIRBuilder.buildConstant(ShiftTy, Offset));
     MIRBuilder.buildTrunc(DstReg, LShr);
     MI.eraseFromParent();
     return Legalized;
@@ -2127,8 +2126,8 @@ LegalizerHelper::widenScalar(MachineInstr &MI, unsigned TypeIdx, LLT WideTy) {
       // the top of the original type.
       auto TopBit =
           APInt::getOneBitSet(WideTy.getSizeInBits(), CurTy.getSizeInBits());
-      MIBSrc = MIRBuilder.buildOr(
-        WideTy, MIBSrc, MIRBuilder.buildConstant(WideTy, TopBit));
+      MIBSrc = MIRBuilder.buildOr(WideTy, MIBSrc,
+                                  MIRBuilder.buildConstant(WideTy, TopBit));
       // Now we know the operand is non-zero, use the more relaxed opcode.
       NewOpc = TargetOpcode::G_CTTZ_ZERO_UNDEF;
     }
@@ -2274,8 +2273,9 @@ LegalizerHelper::widenScalar(MachineInstr &MI, unsigned TypeIdx, LLT WideTy) {
     Observer.changingInstr(MI);
 
     if (TypeIdx == 0) {
-      unsigned CvtOp = MI.getOpcode() == TargetOpcode::G_ASHR ?
-        TargetOpcode::G_SEXT : TargetOpcode::G_ZEXT;
+      unsigned CvtOp = MI.getOpcode() == TargetOpcode::G_ASHR
+                           ? TargetOpcode::G_SEXT
+                           : TargetOpcode::G_ZEXT;
 
       widenScalarSrc(MI, WideTy, 1, CvtOp);
       widenScalarDst(MI, WideTy);
@@ -2374,8 +2374,8 @@ LegalizerHelper::widenScalar(MachineInstr &MI, unsigned TypeIdx, LLT WideTy) {
 
     Observer.changingInstr(MI);
 
-    unsigned ExtType = Ty.getScalarSizeInBits() == 1 ?
-      TargetOpcode::G_ZEXT : TargetOpcode::G_ANYEXT;
+    unsigned ExtType = Ty.getScalarSizeInBits() == 1 ? TargetOpcode::G_ZEXT
+                                                     : TargetOpcode::G_ANYEXT;
     widenScalarSrc(MI, WideTy, 0, ExtType);
 
     Observer.changedInstr(MI);
@@ -2785,8 +2785,9 @@ static Register getBitcastWiderVectorElementOffset(MachineIRBuilder &B,
   auto OffsetMask = B.buildConstant(
       IdxTy, ~(APInt::getAllOnes(IdxTy.getSizeInBits()) << Log2EltRatio));
   auto OffsetIdx = B.buildAnd(IdxTy, Idx, OffsetMask);
-  return B.buildShl(IdxTy, OffsetIdx,
-                    B.buildConstant(IdxTy, Log2_32(OldEltSize))).getReg(0);
+  return B
+      .buildShl(IdxTy, OffsetIdx, B.buildConstant(IdxTy, Log2_32(OldEltSize)))
+      .getReg(0);
 }
 
 /// Perform a G_EXTRACT_VECTOR_ELT in a different sized vector element. If this
@@ -2839,7 +2840,8 @@ LegalizerHelper::bitcastExtractVectorElt(MachineInstr &MI, unsigned TypeIdx,
     for (unsigned I = 0; I < NewEltsPerOldElt; ++I) {
       auto IdxOffset = MIRBuilder.buildConstant(IdxTy, I);
       auto TmpIdx = MIRBuilder.buildAdd(IdxTy, NewBaseIdx, IdxOffset);
-      auto Elt = MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec, TmpIdx);
+      auto Elt =
+          MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec, TmpIdx);
       NewOps[I] = Elt.getReg(0);
     }
 
@@ -2880,13 +2882,14 @@ LegalizerHelper::bitcastExtractVectorElt(MachineInstr &MI, unsigned TypeIdx,
 
     Register WideElt = CastVec;
     if (CastTy.isVector()) {
-      WideElt = MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec,
-                                                     ScaledIdx).getReg(0);
+      WideElt =
+          MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec, ScaledIdx)
+              .getReg(0);
     }
 
     // Compute the bit offset into the register of the target element.
     Register OffsetBits = getBitcastWiderVectorElementOffset(
-      MIRBuilder, Idx, NewEltSize, OldEltSize);
+        MIRBuilder, Idx, NewEltSize, OldEltSize);
 
     // Shift the wide element to get the target element.
     auto ExtractedBits = MIRBuilder.buildLShr(NewEltTy, WideElt, OffsetBits);
@@ -2902,18 +2905,17 @@ LegalizerHelper::bitcastExtractVectorElt(MachineInstr &MI, unsigned TypeIdx,
 /// TargetReg, while preserving other bits in \p TargetReg.
 ///
 /// (InsertReg << Offset) | (TargetReg & ~(-1 >> InsertReg.size()) << Offset)
-static Register buildBitFieldInsert(MachineIRBuilder &B,
-                                    Register TargetReg, Register InsertReg,
-                                    Register OffsetBits) {
+static Register buildBitFieldInsert(MachineIRBuilder &B, Register TargetReg,
+                                    Register InsertReg, Register OffsetBits) {
   LLT TargetTy = B.getMRI()->getType(TargetReg);
   LLT InsertTy = B.getMRI()->getType(InsertReg);
   auto ZextVal = B.buildZExt(TargetTy, InsertReg);
   auto ShiftedInsertVal = B.buildShl(TargetTy, ZextVal, OffsetBits);
 
   // Produce a bitmask of the value to insert
-  auto EltMask = B.buildConstant(
-    TargetTy, APInt::getLowBitsSet(TargetTy.getSizeInBits(),
-                                   InsertTy.getSizeInBits()));
+  auto EltMask =
+      B.buildConstant(TargetTy, APInt::getLowBitsSet(TargetTy.getSizeInBits(),
+                                                     InsertTy.getSizeInBits()));
   // Shift it into position
   auto ShiftedMask = B.buildShl(TargetTy, EltMask, OffsetBits);
   auto InvShiftedMask = B.buildNot(TargetTy, ShiftedMask);
@@ -2968,19 +2970,22 @@ LegalizerHelper::bitcastInsertVectorElt(MachineInstr &MI, unsigned TypeIdx,
 
     Register ExtractedElt = CastVec;
     if (CastTy.isVector()) {
-      ExtractedElt = MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec,
-                                                          ScaledIdx).getReg(0);
+      ExtractedElt =
+          MIRBuilder.buildExtractVectorElement(NewEltTy, CastVec, ScaledIdx)
+              .getReg(0);
     }
 
     // Compute the bit offset into the register of the target element.
     Register OffsetBits = getBitcastWiderVectorElementOffset(
-      MIRBuilder, Idx, NewEltSize, OldEltSize);
+        MIRBuilder, Idx, NewEltSize, OldEltSize);
 
-    Register InsertedElt = buildBitFieldInsert(MIRBuilder, ExtractedElt,
-                                               Val, OffsetBits);
+    Register InsertedElt =
+        buildBitFieldInsert(MIRBuilder, ExtractedElt, Val, OffsetBits);
     if (CastTy.isVector()) {
-      InsertedElt = MIRBuilder.buildInsertVectorElement(
-        CastTy, CastVec, InsertedElt, ScaledIdx).getReg(0);
+      InsertedElt =
+          MIRBuilder
+              .buildInsertVectorElement(CastTy, CastVec, InsertedElt, ScaledIdx)
+              .getReg(0);
     }
 
     MIRBuilder.buildBitcast(Dst, InsertedElt);
@@ -3212,15 +3217,14 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerStore(GStore &StoreMI) {
 
   // Generate the PtrAdd and truncating stores.
   LLT PtrTy = MRI.getType(PtrReg);
-  auto OffsetCst = MIRBuilder.buildConstant(
-    LLT::scalar(PtrTy.getSizeInBits()), LargeSplitSize / 8);
-  auto SmallPtr =
-    MIRBuilder.buildPtrAdd(PtrTy, PtrReg, OffsetCst);
+  auto OffsetCst = MIRBuilder.buildConstant(LLT::scalar(PtrTy.getSizeInBits()),
+                                            LargeSplitSize / 8);
+  auto SmallPtr = MIRBuilder.buildPtrAdd(PtrTy, PtrReg, OffsetCst);
 
   MachineMemOperand *LargeMMO =
-    MF.getMachineMemOperand(&MMO, 0, LargeSplitSize / 8);
+      MF.getMachineMemOperand(&MMO, 0, LargeSplitSize / 8);
   MachineMemOperand *SmallMMO =
-    MF.getMachineMemOperand(&MMO, LargeSplitSize / 8, SmallSplitSize / 8);
+      MF.getMachineMemOperand(&MMO, LargeSplitSize / 8, SmallSplitSize / 8);
   MIRBuilder.buildStore(ExtVal, PtrReg, *LargeMMO);
   MIRBuilder.buildStore(SmallVal, SmallPtr, *SmallMMO);
   StoreMI.eraseFromParent();
@@ -3299,16 +3303,16 @@ LegalizerHelper::bitcast(MachineInstr &MI, unsigned TypeIdx, LLT CastTy) {
 
 // Legalize an instruction by changing the opcode in place.
 void LegalizerHelper::changeOpcode(MachineInstr &MI, unsigned NewOpcode) {
-    Observer.changingInstr(MI);
-    MI.setDesc(MIRBuilder.getTII().get(NewOpcode));
-    Observer.changedInstr(MI);
+  Observer.changingInstr(MI);
+  MI.setDesc(MIRBuilder.getTII().get(NewOpcode));
+  Observer.changedInstr(MI);
 }
 
 LegalizerHelper::LegalizeResult
 LegalizerHelper::lower(MachineInstr &MI, unsigned TypeIdx, LLT LowerHintTy) {
   using namespace TargetOpcode;
 
-  switch(MI.getOpcode()) {
+  switch (MI.getOpcode()) {
   default:
     return UnableToLegalize;
   case TargetOpcode::G_FCONSTANT:
@@ -3532,7 +3536,8 @@ LegalizerHelper::lower(MachineInstr &MI, unsigned TypeIdx, LLT LowerHintTy) {
     LLT DstTy = MRI.getType(DstReg);
     Register TmpRes = MRI.createGenericVirtualRegister(DstTy);
 
-    auto MIBSz = MIRBuilder.buildConstant(DstTy, DstTy.getScalarSizeInBits() - SizeInBits);
+    auto MIBSz = MIRBuilder.buildConstant(DstTy, DstTy.getScalarSizeInBits() -
+                                                     SizeInBits);
     MIRBuilder.buildShl(TmpRes, SrcReg, MIBSz->getOperand(0));
     MIRBuilder.buildAShr(DstReg, TmpRes, MIBSz->getOperand(0));
     MI.eraseFromParent();
@@ -3609,7 +3614,7 @@ LegalizerHelper::lower(MachineInstr &MI, unsigned TypeIdx, LLT LowerHintTy) {
   case G_SEXT:
   case G_ANYEXT:
     return lowerEXT(MI);
-  GISEL_VECREDUCE_CASES_NONSEQ
+    GISEL_VECREDUCE_CASES_NONSEQ
     return lowerVectorReduction(MI);
   }
 }
@@ -4119,7 +4124,8 @@ LegalizerHelper::reduceLoadStoreWidth(GLoadStore &LdStMI, unsigned TypeIdx,
   LLT LeftoverTy;
   SmallVector<Register, 8> NarrowRegs, NarrowLeftoverRegs;
   if (IsLoad) {
-    std::tie(NumParts, NumLeftover) = getNarrowTypeBreakDown(ValTy, NarrowTy, LeftoverTy);
+    std::tie(NumParts, NumLeftover) =
+        getNarrowTypeBreakDown(ValTy, NarrowTy, LeftoverTy);
   } else {
     if (extractParts(ValReg, ValTy, NarrowTy, LeftoverTy, NarrowRegs,
                      NarrowLeftoverRegs)) {
@@ -4178,8 +4184,8 @@ LegalizerHelper::reduceLoadStoreWidth(GLoadStore &LdStMI, unsigned TypeIdx,
     splitTypePieces(LeftoverTy, NarrowLeftoverRegs, NumLeftover, HandledOffset);
 
   if (IsLoad) {
-    insertParts(ValReg, ValTy, NarrowTy, NarrowRegs,
-                LeftoverTy, NarrowLeftoverRegs);
+    insertParts(ValReg, ValTy, NarrowTy, NarrowRegs, LeftoverTy,
+                NarrowLeftoverRegs);
   }
 
   LdStMI.eraseFromParent();
@@ -4329,7 +4335,7 @@ LegalizerHelper::fewerElementsVector(MachineInstr &MI, unsigned TypeIdx,
     return reduceLoadStoreWidth(cast<GLoadStore>(MI), TypeIdx, NarrowTy);
   case G_SEXT_INREG:
     return fewerElementsVectorMultiEltType(GMI, NumElts, {2 /*imm*/});
-  GISEL_VECREDUCE_CASES_NONSEQ
+    GISEL_VECREDUCE_CASES_NONSEQ
     return fewerElementsVectorReductions(MI, TypeIdx, NarrowTy);
   case G_SHUFFLE_VECTOR:
     return fewerElementsVectorShuffle(MI, TypeIdx, NarrowTy);
@@ -4574,7 +4580,7 @@ LegalizerHelper::tryNarrowPow2Reduction(MachineInstr &MI, Register SrcReg,
   // one NarrowTy size value left.
   while (SplitSrcs.size() > 1) {
     SmallVector<Register> PartialRdxs;
-    for (unsigned Idx = 0; Idx < SplitSrcs.size()-1; Idx += 2) {
+    for (unsigned Idx = 0; Idx < SplitSrcs.size() - 1; Idx += 2) {
       Register LHS = SplitSrcs[Idx];
       Register RHS = SplitSrcs[Idx + 1];
       // Create the intermediate vector op.
@@ -4591,9 +4597,8 @@ LegalizerHelper::tryNarrowPow2Reduction(MachineInstr &MI, Register SrcReg,
   return Legalized;
 }
 
-LegalizerHelper::LegalizeResult
-LegalizerHelper::narrowScalarShiftByConstant(MachineInstr &MI, const APInt &Amt,
-                                             const LLT HalfTy, const LLT AmtTy) {
+LegalizerHelper::LegalizeResult LegalizerHelper::narrowScalarShiftByConstant(
+    MachineInstr &MI, const APInt &Amt, const LLT HalfTy, const LLT AmtTy) {
 
   Register InL = MRI.createGenericVirtualRegister(HalfTy);
   Register InH = MRI.createGenericVirtualRegister(HalfTy);
@@ -4763,13 +4768,13 @@ LegalizerHelper::narrowScalarShift(MachineInstr &MI, unsigned TypeIdx,
     // Long: ShAmt >= NewBitSize
     MachineInstrBuilder HiL;
     if (MI.getOpcode() == TargetOpcode::G_LSHR) {
-      HiL = MIRBuilder.buildConstant(HalfTy, 0);            // Hi part is zero.
+      HiL = MIRBuilder.buildConstant(HalfTy, 0); // Hi part is zero.
     } else {
       auto ShiftAmt = MIRBuilder.buildConstant(ShiftAmtTy, NewBitSize - 1);
-      HiL = MIRBuilder.buildAShr(HalfTy, InH, ShiftAmt);    // Sign of Hi part.
+      HiL = MIRBuilder.buildAShr(HalfTy, InH, ShiftAmt); // Sign of Hi part.
     }
     auto LoL = MIRBuilder.buildInstr(MI.getOpcode(), {HalfTy},
-                                     {InH, AmtExcess});     // Lo from Hi part.
+                                     {InH, AmtExcess}); // Lo from Hi part.
 
     auto Lo = MIRBuilder.buildSelect(
         HalfTy, IsZero, InL, MIRBuilder.buildSelect(HalfTy, IsShort, LoS, LoL));
@@ -5428,7 +5433,7 @@ LegalizerHelper::narrowScalarInsert(MachineInstr &MI, unsigned TypeIdx,
       InsertOffset = OpStart - DstStart;
       ExtractOffset = 0;
       SegSize =
-        std::min(NarrowSize - InsertOffset, OpStart + OpSize - DstStart);
+          std::min(NarrowSize - InsertOffset, OpStart + OpSize - DstStart);
     }
 
     Register SegReg = OpReg;
@@ -5479,19 +5484,18 @@ LegalizerHelper::narrowScalarBasic(MachineInstr &MI, unsigned TypeIdx,
 
   for (unsigned I = 0, E = Src1Regs.size(); I != E; ++I) {
     auto Inst = MIRBuilder.buildInstr(MI.getOpcode(), {NarrowTy},
-                                        {Src0Regs[I], Src1Regs[I]});
+                                      {Src0Regs[I], Src1Regs[I]});
     DstRegs.push_back(Inst.getReg(0));
   }
 
   for (unsigned I = 0, E = Src1LeftoverRegs.size(); I != E; ++I) {
-    auto Inst = MIRBuilder.buildInstr(
-      MI.getOpcode(),
-      {LeftoverTy}, {Src0LeftoverRegs[I], Src1LeftoverRegs[I]});
+    auto Inst =
+        MIRBuilder.buildInstr(MI.getOpcode(), {LeftoverTy},
+                              {Src0LeftoverRegs[I], Src1LeftoverRegs[I]});
     DstLeftoverRegs.push_back(Inst.getReg(0));
   }
 
-  insertParts(DstReg, DstTy, NarrowTy, DstRegs,
-              LeftoverTy, DstLeftoverRegs);
+  insertParts(DstReg, DstTy, NarrowTy, DstRegs, LeftoverTy, DstLeftoverRegs);
 
   MI.eraseFromParent();
   return Legalized;
@@ -5511,7 +5515,8 @@ LegalizerHelper::narrowScalarExt(MachineInstr &MI, unsigned TypeIdx,
 
   SmallVector<Register, 8> Parts;
   LLT GCDTy = extractGCDType(Parts, DstTy, NarrowTy, SrcReg);
-  LLT LCMTy = buildLCMMergePieces(DstTy, NarrowTy, GCDTy, Parts, MI.getOpcode());
+  LLT LCMTy =
+      buildLCMMergePieces(DstTy, NarrowTy, GCDTy, Parts, MI.getOpcode());
   buildWidenedRemergeToDst(DstReg, LCMTy, Parts);
 
   MI.eraseFromParent();
@@ -5546,19 +5551,18 @@ LegalizerHelper::narrowScalarSelect(MachineInstr &MI, unsigned TypeIdx,
     llvm_unreachable("inconsistent extractParts result");
 
   for (unsigned I = 0, E = Src1Regs.size(); I != E; ++I) {
-    auto Select = MIRBuilder.buildSelect(NarrowTy,
-                                         CondReg, Src1Regs[I], Src2Regs[I]);
+    auto Select =
+        MIRBuilder.buildSelect(NarrowTy, CondReg, Src1Regs[I], Src2Regs[I]);
     DstRegs.push_back(Select.getReg(0));
   }
 
   for (unsigned I = 0, E = Src1LeftoverRegs.size(); I != E; ++I) {
     auto Select = MIRBuilder.buildSelect(
-      LeftoverTy, CondReg, Src1LeftoverRegs[I], Src2LeftoverRegs[I]);
+        LeftoverTy, CondReg, Src1LeftoverRegs[I], Src2LeftoverRegs[I]);
     DstLeftoverRegs.push_back(Select.getReg(0));
   }
 
-  insertParts(DstReg, DstTy, NarrowTy, DstRegs,
-              LeftoverTy, DstLeftoverRegs);
+  insertParts(DstReg, DstTy, NarrowTy, DstRegs, LeftoverTy, DstLeftoverRegs);
 
   MI.eraseFromParent();
   return Legalized;
@@ -5582,9 +5586,8 @@ LegalizerHelper::narrowScalarCTLZ(MachineInstr &MI, unsigned TypeIdx,
     auto C_0 = B.buildConstant(NarrowTy, 0);
     auto HiIsZero = B.buildICmp(CmpInst::ICMP_EQ, LLT::scalar(1),
                                 UnmergeSrc.getReg(1), C_0);
-    auto LoCTLZ = IsUndef ?
-      B.buildCTLZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(0)) :
-      B.buildCTLZ(DstTy, UnmergeSrc.getReg(0));
+    auto LoCTLZ = IsUndef ? B.buildCTLZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(0))
+                          : B.buildCTLZ(DstTy, UnmergeSrc.getReg(0));
     auto C_NarrowSize = B.buildConstant(DstTy, NarrowSize);
     auto HiIsZeroCTLZ = B.buildAdd(DstTy, LoCTLZ, C_NarrowSize);
     auto HiCTLZ = B.buildCTLZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(1));
@@ -5615,9 +5618,8 @@ LegalizerHelper::narrowScalarCTTZ(MachineInstr &MI, unsigned TypeIdx,
     auto C_0 = B.buildConstant(NarrowTy, 0);
     auto LoIsZero = B.buildICmp(CmpInst::ICMP_EQ, LLT::scalar(1),
                                 UnmergeSrc.getReg(0), C_0);
-    auto HiCTTZ = IsUndef ?
-      B.buildCTTZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(1)) :
-      B.buildCTTZ(DstTy, UnmergeSrc.getReg(1));
+    auto HiCTTZ = IsUndef ? B.buildCTTZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(1))
+                          : B.buildCTTZ(DstTy, UnmergeSrc.getReg(1));
     auto C_NarrowSize = B.buildConstant(DstTy, NarrowSize);
     auto LoIsZeroCTTZ = B.buildAdd(DstTy, HiCTTZ, C_NarrowSize);
     auto LoCTTZ = B.buildCTTZ_ZERO_UNDEF(DstTy, UnmergeSrc.getReg(0));
@@ -5819,7 +5821,7 @@ LegalizerHelper::lowerBitCount(MachineInstr &MI) {
     auto C_B8Mask4HiTo0 = B.buildConstant(Ty, B8Mask4HiTo0);
     auto B8Count = B.buildAnd(Ty, B8CountDirty4Hi, C_B8Mask4HiTo0);
 
-    assert(Size<=128 && "Scalar size is too large for CTPOP lower algorithm");
+    assert(Size <= 128 && "Scalar size is too large for CTPOP lower algorithm");
     // 8 bits can hold CTPOP result of 128 bit int or smaller. Mul with this
     // bitmask will set 8 msb in ResTmp to sum of all B8Counts in 8 bit blocks.
     auto MulMask = B.buildConstant(Ty, APInt::getSplat(Size, APInt(8, 0x01)));
@@ -6274,8 +6276,8 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerFPTOSI(MachineInstr &MI) {
   auto AndExpMask = MIRBuilder.buildAnd(SrcTy, Src, ExponentMask);
   auto ExponentBits = MIRBuilder.buildLShr(SrcTy, AndExpMask, ExponentLoBit);
 
-  auto SignMask = MIRBuilder.buildConstant(SrcTy,
-                                           APInt::getSignMask(SrcEltBits));
+  auto SignMask =
+      MIRBuilder.buildConstant(SrcTy, APInt::getSignMask(SrcEltBits));
   auto AndSignMask = MIRBuilder.buildAnd(SrcTy, Src, SignMask);
   auto SignLowBit = MIRBuilder.buildConstant(SrcTy, SrcEltBits - 1);
   auto Sign = MIRBuilder.buildAShr(SrcTy, AndSignMask, SignLowBit);
@@ -6297,8 +6299,8 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerFPTOSI(MachineInstr &MI) {
   auto Srl = MIRBuilder.buildLShr(DstTy, R, ExponentSub);
 
   const LLT S1 = LLT::scalar(1);
-  auto CmpGt = MIRBuilder.buildICmp(CmpInst::ICMP_SGT,
-                                    S1, Exponent, ExponentLoBit);
+  auto CmpGt =
+      MIRBuilder.buildICmp(CmpInst::ICMP_SGT, S1, Exponent, ExponentLoBit);
 
   R = MIRBuilder.buildSelect(DstTy, CmpGt, Shl, Srl);
 
@@ -6307,8 +6309,8 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerFPTOSI(MachineInstr &MI) {
 
   auto ZeroSrcTy = MIRBuilder.buildConstant(SrcTy, 0);
 
-  auto ExponentLt0 = MIRBuilder.buildICmp(CmpInst::ICMP_SLT,
-                                          S1, Exponent, ZeroSrcTy);
+  auto ExponentLt0 =
+      MIRBuilder.buildICmp(CmpInst::ICMP_SLT, S1, Exponent, ZeroSrcTy);
 
   auto ZeroDstTy = MIRBuilder.buildConstant(DstTy, 0);
   MIRBuilder.buildSelect(Dst, ExponentLt0, ZeroDstTy, Ret);
@@ -6352,13 +6354,13 @@ LegalizerHelper::lowerFPTRUNC_F64_TO_F16(MachineInstr &MI) {
   // Subtract the fp64 exponent bias (1023) to get the real exponent and
   // add the f16 bias (15) to get the biased exponent for the f16 format.
   E = MIRBuilder.buildAdd(
-    S32, E, MIRBuilder.buildConstant(S32, -ExpBiasf64 + ExpBiasf16));
+      S32, E, MIRBuilder.buildConstant(S32, -ExpBiasf64 + ExpBiasf16));
 
   auto M = MIRBuilder.buildLShr(S32, UH, MIRBuilder.buildConstant(S32, 8));
   M = MIRBuilder.buildAnd(S32, M, MIRBuilder.buildConstant(S32, 0xffe));
 
-  auto MaskedSig = MIRBuilder.buildAnd(S32, UH,
-                                       MIRBuilder.buildConstant(S32, 0x1ff));
+  auto MaskedSig =
+      MIRBuilder.buildAnd(S32, UH, MIRBuilder.buildConstant(S32, 0x1ff));
   MaskedSig = MIRBuilder.buildOr(S32, MaskedSig, U);
 
   auto Zero = MIRBuilder.buildConstant(S32, 0);
@@ -6384,14 +6386,14 @@ LegalizerHelper::lowerFPTRUNC_F64_TO_F16(MachineInstr &MI) {
   auto B = MIRBuilder.buildSMax(S32, OneSubExp, Zero);
   B = MIRBuilder.buildSMin(S32, B, MIRBuilder.buildConstant(S32, 13));
 
-  auto SigSetHigh = MIRBuilder.buildOr(S32, M,
-                                       MIRBuilder.buildConstant(S32, 0x1000));
+  auto SigSetHigh =
+      MIRBuilder.buildOr(S32, M, MIRBuilder.buildConstant(S32, 0x1000));
 
   auto D = MIRBuilder.buildLShr(S32, SigSetHigh, B);
   auto D0 = MIRBuilder.buildShl(S32, D, B);
 
-  auto D0_NE_SigSetHigh = MIRBuilder.buildICmp(CmpInst::ICMP_NE, S1,
-                                             D0, SigSetHigh);
+  auto D0_NE_SigSetHigh =
+      MIRBuilder.buildICmp(CmpInst::ICMP_NE, S1, D0, SigSetHigh);
   auto D1 = MIRBuilder.buildZExt(S32, D0_NE_SigSetHigh);
   D = MIRBuilder.buildOr(S32, D, D1);
 
@@ -6412,13 +6414,13 @@ LegalizerHelper::lowerFPTRUNC_F64_TO_F16(MachineInstr &MI) {
   V1 = MIRBuilder.buildOr(S32, V0, V1);
   V = MIRBuilder.buildAdd(S32, V, V1);
 
-  auto CmpEGt30 = MIRBuilder.buildICmp(CmpInst::ICMP_SGT,  S1,
-                                       E, MIRBuilder.buildConstant(S32, 30));
+  auto CmpEGt30 = MIRBuilder.buildICmp(CmpInst::ICMP_SGT, S1, E,
+                                       MIRBuilder.buildConstant(S32, 30));
   V = MIRBuilder.buildSelect(S32, CmpEGt30,
                              MIRBuilder.buildConstant(S32, 0x7c00), V);
 
-  auto CmpEGt1039 = MIRBuilder.buildICmp(CmpInst::ICMP_EQ, S1,
-                                         E, MIRBuilder.buildConstant(S32, 1039));
+  auto CmpEGt1039 = MIRBuilder.buildICmp(CmpInst::ICMP_EQ, S1, E,
+                                         MIRBuilder.buildConstant(S32, 1039));
   V = MIRBuilder.buildSelect(S32, CmpEGt1039, I, V);
 
   // Extract the sign bit.
@@ -6491,11 +6493,11 @@ LegalizerHelper::lowerFCopySign(MachineInstr &MI) {
   const int Src0Size = Src0Ty.getScalarSizeInBits();
   const int Src1Size = Src1Ty.getScalarSizeInBits();
 
-  auto SignBitMask = MIRBuilder.buildConstant(
-    Src0Ty, APInt::getSignMask(Src0Size));
+  auto SignBitMask =
+      MIRBuilder.buildConstant(Src0Ty, APInt::getSignMask(Src0Size));
 
   auto NotSignBitMask = MIRBuilder.buildConstant(
-    Src0Ty, APInt::getLowBitsSet(Src0Size, Src0Size - 1));
+      Src0Ty, APInt::getLowBitsSet(Src0Size, Src0Size - 1));
 
   Register And0 = MIRBuilder.buildAnd(Src0Ty, Src0, NotSignBitMask).getReg(0);
   Register And1;
@@ -6525,8 +6527,9 @@ LegalizerHelper::lowerFCopySign(MachineInstr &MI) {
 
 LegalizerHelper::LegalizeResult
 LegalizerHelper::lowerFMinNumMaxNum(MachineInstr &MI) {
-  unsigned NewOp = MI.getOpcode() == TargetOpcode::G_FMINNUM ?
-    TargetOpcode::G_FMINNUM_IEEE : TargetOpcode::G_FMAXNUM_IEEE;
+  unsigned NewOp = MI.getOpcode() == TargetOpcode::G_FMINNUM
+                       ? TargetOpcode::G_FMINNUM_IEEE
+                       : TargetOpcode::G_FMAXNUM_IEEE;
 
   auto [Dst, Src0, Src1] = MI.getFirst3Regs();
   LLT Ty = MRI.getType(Dst);
@@ -6558,8 +6561,8 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerFMad(MachineInstr &MI) {
   LLT Ty = MRI.getType(DstReg);
   unsigned Flags = MI.getFlags();
 
-  auto Mul = MIRBuilder.buildFMul(Ty, MI.getOperand(1), MI.getOperand(2),
-                                  Flags);
+  auto Mul =
+      MIRBuilder.buildFMul(Ty, MI.getOperand(1), MI.getOperand(2), Flags);
   MIRBuilder.buildFAdd(DstReg, Mul, MI.getOperand(3), Flags);
   MI.eraseFromParent();
   return Legalized;
@@ -6587,8 +6590,8 @@ LegalizerHelper::lowerIntrinsicRound(MachineInstr &MI) {
   auto Half = MIRBuilder.buildFConstant(Ty, 0.5);
   auto SignOne = MIRBuilder.buildFCopysign(Ty, One, X);
 
-  auto Cmp = MIRBuilder.buildFCmp(CmpInst::FCMP_OGE, CondTy, AbsDiff, Half,
-                                  Flags);
+  auto Cmp =
+      MIRBuilder.buildFCmp(CmpInst::FCMP_OGE, CondTy, AbsDiff, Half, Flags);
   auto Sel = MIRBuilder.buildSelect(Ty, Cmp, SignOne, Zero, Flags);
 
   MIRBuilder.buildFAdd(DstReg, T, Sel, Flags);
@@ -6610,10 +6613,10 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerFFloor(MachineInstr &MI) {
   auto Trunc = MIRBuilder.buildIntrinsicTrunc(Ty, SrcReg, Flags);
   auto Zero = MIRBuilder.buildFConstant(Ty, 0.0);
 
-  auto Lt0 = MIRBuilder.buildFCmp(CmpInst::FCMP_OLT, CondTy,
-                                  SrcReg, Zero, Flags);
-  auto NeTrunc = MIRBuilder.buildFCmp(CmpInst::FCMP_ONE, CondTy,
-                                      SrcReg, Trunc, Flags);
+  auto Lt0 =
+      MIRBuilder.buildFCmp(CmpInst::FCMP_OLT, CondTy, SrcReg, Zero, Flags);
+  auto NeTrunc =
+      MIRBuilder.buildFCmp(CmpInst::FCMP_ONE, CondTy, SrcReg, Trunc, Flags);
   auto And = MIRBuilder.buildAnd(CondTy, Lt0, NeTrunc);
   auto AddVal = MIRBuilder.buildSITOFP(Ty, And);
 
@@ -6637,8 +6640,9 @@ LegalizerHelper::lowerMergeValues(MachineInstr &MI) {
     Register SrcReg = MI.getOperand(I).getReg();
     auto ZextInput = MIRBuilder.buildZExt(WideTy, SrcReg);
 
-    Register NextResult = I + 1 == NumOps && WideTy == DstTy ? DstReg :
-      MRI.createGenericVirtualRegister(WideTy);
+    Register NextResult = I + 1 == NumOps && WideTy == DstTy
+                              ? DstReg
+                              : MRI.createGenericVirtualRegister(WideTy);
 
     auto ShiftAmt = MIRBuilder.buildConstant(WideTy, Offset);
     auto Shl = MIRBuilder.buildShl(WideTy, ZextInput, ShiftAmt);
@@ -6648,7 +6652,7 @@ LegalizerHelper::lowerMergeValues(MachineInstr &MI) {
 
   if (DstTy.isPointer()) {
     if (MIRBuilder.getDataLayout().isNonIntegralAddressSpace(
-          DstTy.getAddressSpace())) {
+            DstTy.getAddressSpace())) {
       LLVM_DEBUG(dbgs() << "Not casting nonintegral address space\n");
       return UnableToLegalize;
     }
@@ -7189,8 +7193,7 @@ LegalizerHelper::lowerAddSubSatToAddoSubo(MachineInstr &MI) {
   return Legalized;
 }
 
-LegalizerHelper::LegalizeResult
-LegalizerHelper::lowerShlSat(MachineInstr &MI) {
+LegalizerHelper::LegalizeResult LegalizerHelper::lowerShlSat(MachineInstr &MI) {
   assert((MI.getOpcode() == TargetOpcode::G_SSHLSAT ||
           MI.getOpcode() == TargetOpcode::G_USHLSAT) &&
          "Expected shlsat opcode!");
@@ -7306,7 +7309,7 @@ LegalizerHelper::lowerReadWriteRegister(MachineInstr &MI) {
   Register ValReg = MI.getOperand(ValRegIndex).getReg();
   const LLT Ty = MRI.getType(ValReg);
   const MDString *RegStr = cast<MDString>(
-    cast<MDNode>(MI.getOperand(NameOpIdx).getMetadata())->getOperand(0));
+      cast<MDNode>(MI.getOperand(NameOpIdx).getMetadata())->getOperand(0));
 
   Register PhysReg = TLI.getRegisterByName(RegStr->getString().data(), Ty, MF);
   if (!PhysReg.isValid())
@@ -7544,8 +7547,8 @@ LegalizerHelper::LegalizeResult LegalizerHelper::lowerSelect(MachineInstr &MI) {
       MaskElt = MIRBuilder.buildSExtInReg(MaskTy, MaskElt, 1).getReg(0);
 
     // Continue the sign extension (or truncate) to match the data type.
-    MaskElt = MIRBuilder.buildSExtOrTrunc(DstTy.getElementType(),
-                                          MaskElt).getReg(0);
+    MaskElt =
+        MIRBuilder.buildSExtOrTrunc(DstTy.getElementType(), MaskElt).getReg(0);
 
     // Generate a vector splat idiom.
     auto ShufSplat = MIRBuilder.buildShuffleSplat(DstTy, MaskElt);
diff --git a/llvm/lib/CodeGen/GlobalISel/LegalizerInfo.cpp b/llvm/lib/CodeGen/GlobalISel/LegalizerInfo.cpp
index 1f2e481c63e0b90..daf9f3f1ca40a7f 100644
--- a/llvm/lib/CodeGen/GlobalISel/LegalizerInfo.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LegalizerInfo.cpp
@@ -110,8 +110,7 @@ static bool hasNoSimpleLoops(const LegalizeRule &Rule, const LegalityQuery &Q,
 }
 
 // Make sure the returned mutation makes sense for the match type.
-static bool mutationIsSane(const LegalizeRule &Rule,
-                           const LegalityQuery &Q,
+static bool mutationIsSane(const LegalizeRule &Rule, const LegalityQuery &Q,
                            std::pair<unsigned, LLT> Mutation) {
   // If the user wants a custom mutation, then we can't really say much about
   // it. Return true, and trust that they're doing the right thing.
@@ -129,8 +128,8 @@ static bool mutationIsSane(const LegalizeRule &Rule,
     [[fallthrough]];
   case MoreElements: {
     // MoreElements can go from scalar to vector.
-    const ElementCount OldElts = OldTy.isVector() ?
-      OldTy.getElementCount() : ElementCount::getFixed(1);
+    const ElementCount OldElts =
+        OldTy.isVector() ? OldTy.getElementCount() : ElementCount::getFixed(1);
     if (NewTy.isVector()) {
       if (Rule.getAction() == FewerElements) {
         // Make sure the element count really decreased.
@@ -159,7 +158,7 @@ static bool mutationIsSane(const LegalizeRule &Rule,
         return false;
     }
 
-    if (Rule.getAction() == NarrowScalar)  {
+    if (Rule.getAction() == NarrowScalar) {
       // Make sure the size really decreased.
       if (NewTy.getScalarSizeInBits() >= OldTy.getScalarSizeInBits())
         return false;
@@ -288,7 +287,8 @@ LegalizerInfo::getActionDefinitions(unsigned Opcode) const {
 LegalizeRuleSet &LegalizerInfo::getActionDefinitionsBuilder(unsigned Opcode) {
   unsigned OpcodeIdx = getActionDefinitionsIdx(Opcode);
   auto &Result = RulesForOpcode[OpcodeIdx];
-  assert(!Result.isAliasedByAnother() && "Modifying this opcode will modify aliases");
+  assert(!Result.isAliasedByAnother() &&
+         "Modifying this opcode will modify aliases");
   return Result;
 }
 
@@ -315,8 +315,7 @@ void LegalizerInfo::aliasActionDefinitions(unsigned OpcodeTo,
   RulesForOpcode[OpcodeFromIdx].aliasTo(OpcodeTo);
 }
 
-LegalizeActionStep
-LegalizerInfo::getAction(const LegalityQuery &Query) const {
+LegalizeActionStep LegalizerInfo::getAction(const LegalityQuery &Query) const {
   LegalizeActionStep Step = getActionDefinitions(Query.Opcode).apply(Query);
   if (Step.Action != LegalizeAction::UseLegacyRules) {
     return Step;
diff --git a/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp b/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp
index 246aa88b09acf61..c7488c4a75e58b4 100644
--- a/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/LoadStoreOpt.cpp
@@ -156,7 +156,6 @@ bool GISelAddressing::aliasIsKnownForLoadStore(const MachineInstr &MI1,
   if (!Base0Def || !Base1Def)
     return false; // Couldn't tell anything.
 
-
   if (Base0Def->getOpcode() != Base1Def->getOpcode())
     return false;
 
@@ -214,8 +213,8 @@ bool GISelAddressing::instMayAlias(const MachineInstr &MI,
 
       uint64_t Size = MemoryLocation::getSizeOrUnknown(
           LS->getMMO().getMemoryType().getSizeInBytes());
-      return {LS->isVolatile(),       LS->isAtomic(),          BaseReg,
-              Offset /*base offset*/, Size, &LS->getMMO()};
+      return {LS->isVolatile(),       LS->isAtomic(), BaseReg,
+              Offset /*base offset*/, Size,           &LS->getMMO()};
     }
     // FIXME: support recognizing lifetime instructions.
     // Default.
@@ -401,7 +400,7 @@ bool LoadStoreOpt::doSingleStoreMerge(SmallVectorImpl<GStore *> &Stores) {
   WideReg = Builder.buildConstant(WideValueTy, WideConst).getReg(0);
   auto NewStore =
       Builder.buildStore(WideReg, FirstStore->getPointerReg(), *WideMMO);
-  (void) NewStore;
+  (void)NewStore;
   LLVM_DEBUG(dbgs() << "Merged " << Stores.size()
                     << " stores into merged store: " << *NewStore);
   LLVM_DEBUG(for (auto *MI : Stores) dbgs() << "  " << *MI;);
@@ -811,8 +810,8 @@ bool LoadStoreOpt::mergeTruncStore(GStore &StoreMI,
   auto &C = LastStore.getMF()->getFunction().getContext();
   // Check that a store of the wide type is both allowed and fast on the target
   unsigned Fast = 0;
-  bool Allowed = TLI->allowsMemoryAccess(
-      C, DL, WideStoreTy, LowestIdxStore->getMMO(), &Fast);
+  bool Allowed = TLI->allowsMemoryAccess(C, DL, WideStoreTy,
+                                         LowestIdxStore->getMMO(), &Fast);
   if (!Allowed || !Fast)
     return false;
 
@@ -902,7 +901,7 @@ bool LoadStoreOpt::mergeTruncStoresBlock(MachineBasicBlock &BB) {
 
 bool LoadStoreOpt::mergeFunctionStores(MachineFunction &MF) {
   bool Changed = false;
-  for (auto &BB : MF){
+  for (auto &BB : MF) {
     Changed |= mergeBlockStores(BB);
     Changed |= mergeTruncStoresBlock(BB);
   }
diff --git a/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp b/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
index 90358e7ca220edd..c4240668989b2ce 100644
--- a/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/MachineIRBuilder.cpp
@@ -97,7 +97,7 @@ MachineInstrBuilder MachineIRBuilder::buildConstDbgValue(const Constant &C,
       "Expected inlined-at fields to agree");
   auto MIB = buildInstrNoInsert(TargetOpcode::DBG_VALUE);
 
-  auto *NumericConstant = [&] () -> const Constant* {
+  auto *NumericConstant = [&]() -> const Constant * {
     if (const auto *CE = dyn_cast<ConstantExpr>(&C))
       if (CE->getOpcode() == Instruction::IntToPtr)
         return CE->getOperand(0);
@@ -201,7 +201,8 @@ MachineInstrBuilder MachineIRBuilder::buildPtrAdd(const DstOp &Res,
                                                   const SrcOp &Op1) {
   assert(Res.getLLTTy(*getMRI()).getScalarType().isPointer() &&
          Res.getLLTTy(*getMRI()) == Op0.getLLTTy(*getMRI()) && "type mismatch");
-  assert(Op1.getLLTTy(*getMRI()).getScalarType().isScalar() && "invalid offset type");
+  assert(Op1.getLLTTy(*getMRI()).getScalarType().isScalar() &&
+         "invalid offset type");
 
   return buildInstr(TargetOpcode::G_PTR_ADD, {Res}, {Op0, Op1});
 }
@@ -210,7 +211,7 @@ std::optional<MachineInstrBuilder>
 MachineIRBuilder::materializePtrAdd(Register &Res, Register Op0,
                                     const LLT ValueTy, uint64_t Value) {
   assert(Res == 0 && "Res is a result argument");
-  assert(ValueTy.isScalar()  && "invalid offset type");
+  assert(ValueTy.isScalar() && "invalid offset type");
 
   if (Value == 0) {
     Res = Op0;
@@ -291,8 +292,7 @@ MachineInstrBuilder MachineIRBuilder::buildBrIndirect(Register Tgt) {
   return buildInstr(TargetOpcode::G_BRINDIRECT).addUse(Tgt);
 }
 
-MachineInstrBuilder MachineIRBuilder::buildBrJT(Register TablePtr,
-                                                unsigned JTI,
+MachineInstrBuilder MachineIRBuilder::buildBrJT(Register TablePtr, unsigned JTI,
                                                 Register IndexReg) {
   assert(getMRI()->getType(TablePtr).isPointer() &&
          "Table reg must be a pointer");
@@ -316,8 +316,8 @@ MachineInstrBuilder MachineIRBuilder::buildConstant(const DstOp &Res,
 
   if (Ty.isVector()) {
     auto Const = buildInstr(TargetOpcode::G_CONSTANT)
-    .addDef(getMRI()->createGenericVirtualRegister(EltTy))
-    .addCImm(&Val);
+                     .addDef(getMRI()->createGenericVirtualRegister(EltTy))
+                     .addCImm(&Val);
     return buildSplatVector(Res, Const);
   }
 
@@ -341,16 +341,16 @@ MachineInstrBuilder MachineIRBuilder::buildFConstant(const DstOp &Res,
   LLT Ty = Res.getLLTTy(*getMRI());
   LLT EltTy = Ty.getScalarType();
 
-  assert(APFloat::getSizeInBits(Val.getValueAPF().getSemantics())
-         == EltTy.getSizeInBits() &&
+  assert(APFloat::getSizeInBits(Val.getValueAPF().getSemantics()) ==
+             EltTy.getSizeInBits() &&
          "creating fconstant with the wrong size");
 
   assert(!Ty.isPointer() && "invalid operand type");
 
   if (Ty.isVector()) {
     auto Const = buildInstr(TargetOpcode::G_FCONSTANT)
-    .addDef(getMRI()->createGenericVirtualRegister(EltTy))
-    .addFPImm(&Val);
+                     .addDef(getMRI()->createGenericVirtualRegister(EltTy))
+                     .addFPImm(&Val);
 
     return buildSplatVector(Res, Const);
   }
@@ -372,8 +372,8 @@ MachineInstrBuilder MachineIRBuilder::buildFConstant(const DstOp &Res,
                                                      double Val) {
   LLT DstTy = Res.getLLTTy(*getMRI());
   auto &Ctx = getMF().getFunction().getContext();
-  auto *CFP =
-      ConstantFP::get(Ctx, getAPFloatFromSize(Val, DstTy.getScalarSizeInBits()));
+  auto *CFP = ConstantFP::get(
+      Ctx, getAPFloatFromSize(Val, DstTy.getScalarSizeInBits()));
   return buildFConstant(Res, *CFP);
 }
 
@@ -422,9 +422,10 @@ MachineInstrBuilder MachineIRBuilder::buildLoadInstr(unsigned Opcode,
   return MIB;
 }
 
-MachineInstrBuilder MachineIRBuilder::buildLoadFromOffset(
-  const DstOp &Dst, const SrcOp &BasePtr,
-  MachineMemOperand &BaseMMO, int64_t Offset) {
+MachineInstrBuilder
+MachineIRBuilder::buildLoadFromOffset(const DstOp &Dst, const SrcOp &BasePtr,
+                                      MachineMemOperand &BaseMMO,
+                                      int64_t Offset) {
   LLT LoadTy = Dst.getLLTTy(*getMRI());
   MachineMemOperand *OffsetMMO =
       getMF().getMachineMemOperand(&BaseMMO, Offset, LoadTy);
@@ -494,9 +495,9 @@ unsigned MachineIRBuilder::getBoolExtOp(bool IsVec, bool IsFP) const {
 }
 
 MachineInstrBuilder MachineIRBuilder::buildBoolExt(const DstOp &Res,
-                                                   const SrcOp &Op,
-                                                   bool IsFP) {
-  unsigned ExtOp = getBoolExtOp(getMRI()->getType(Op.getReg()).isVector(), IsFP);
+                                                   const SrcOp &Op, bool IsFP) {
+  unsigned ExtOp =
+      getBoolExtOp(getMRI()->getType(Op.getReg()).isVector(), IsFP);
   return buildInstr(ExtOp, Res, Op);
 }
 
@@ -663,9 +664,9 @@ MachineInstrBuilder MachineIRBuilder::buildUnmerge(ArrayRef<LLT> Res,
   return buildInstr(TargetOpcode::G_UNMERGE_VALUES, TmpVec, Op);
 }
 
-MachineInstrBuilder MachineIRBuilder::buildUnmerge(LLT Res,
-                                                   const SrcOp &Op) {
-  unsigned NumReg = Op.getLLTTy(*getMRI()).getSizeInBits() / Res.getSizeInBits();
+MachineInstrBuilder MachineIRBuilder::buildUnmerge(LLT Res, const SrcOp &Op) {
+  unsigned NumReg =
+      Op.getLLTTy(*getMRI()).getSizeInBits() / Res.getSizeInBits();
   SmallVector<DstOp, 8> TmpVec(NumReg, Res);
   return buildInstr(TargetOpcode::G_UNMERGE_VALUES, TmpVec, Op);
 }
@@ -923,10 +924,11 @@ MachineIRBuilder::buildAtomicCmpXchg(Register OldValRes, Register Addr,
       .addMemOperand(&MMO);
 }
 
-MachineInstrBuilder MachineIRBuilder::buildAtomicRMW(
-  unsigned Opcode, const DstOp &OldValRes,
-  const SrcOp &Addr, const SrcOp &Val,
-  MachineMemOperand &MMO) {
+MachineInstrBuilder MachineIRBuilder::buildAtomicRMW(unsigned Opcode,
+                                                     const DstOp &OldValRes,
+                                                     const SrcOp &Addr,
+                                                     const SrcOp &Val,
+                                                     MachineMemOperand &MMO) {
 
 #ifndef NDEBUG
   LLT OldValResTy = OldValRes.getLLTTy(*getMRI());
@@ -1016,16 +1018,15 @@ MachineIRBuilder::buildAtomicRMWUmin(Register OldValRes, Register Addr,
 }
 
 MachineInstrBuilder
-MachineIRBuilder::buildAtomicRMWFAdd(
-  const DstOp &OldValRes, const SrcOp &Addr, const SrcOp &Val,
-  MachineMemOperand &MMO) {
+MachineIRBuilder::buildAtomicRMWFAdd(const DstOp &OldValRes, const SrcOp &Addr,
+                                     const SrcOp &Val, MachineMemOperand &MMO) {
   return buildAtomicRMW(TargetOpcode::G_ATOMICRMW_FADD, OldValRes, Addr, Val,
                         MMO);
 }
 
 MachineInstrBuilder
-MachineIRBuilder::buildAtomicRMWFSub(const DstOp &OldValRes, const SrcOp &Addr, const SrcOp &Val,
-                                     MachineMemOperand &MMO) {
+MachineIRBuilder::buildAtomicRMWFSub(const DstOp &OldValRes, const SrcOp &Addr,
+                                     const SrcOp &Val, MachineMemOperand &MMO) {
   return buildAtomicRMW(TargetOpcode::G_ATOMICRMW_FSUB, OldValRes, Addr, Val,
                         MMO);
 }
@@ -1044,11 +1045,9 @@ MachineIRBuilder::buildAtomicRMWFMin(const DstOp &OldValRes, const SrcOp &Addr,
                         MMO);
 }
 
-MachineInstrBuilder
-MachineIRBuilder::buildFence(unsigned Ordering, unsigned Scope) {
-  return buildInstr(TargetOpcode::G_FENCE)
-    .addImm(Ordering)
-    .addImm(Scope);
+MachineInstrBuilder MachineIRBuilder::buildFence(unsigned Ordering,
+                                                 unsigned Scope) {
+  return buildInstr(TargetOpcode::G_FENCE).addImm(Ordering).addImm(Scope);
 }
 
 MachineInstrBuilder
@@ -1176,7 +1175,8 @@ MachineIRBuilder::buildInstr(unsigned Opc, ArrayRef<DstOp> DstOps,
     assert(DstOps.size() == 1 && "Invalid Dst");
     assert(SrcOps.size() == 1 && "Invalid Srcs");
     assert(DstOps[0].getLLTTy(*getMRI()).getSizeInBits() ==
-           SrcOps[0].getLLTTy(*getMRI()).getSizeInBits() && "invalid bitcast");
+               SrcOps[0].getLLTTy(*getMRI()).getSizeInBits() &&
+           "invalid bitcast");
     break;
   }
   case TargetOpcode::COPY:
diff --git a/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp b/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp
index d201342cd61dbc8..b1446eff66430e8 100644
--- a/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/RegBankSelect.cpp
@@ -158,10 +158,11 @@ bool RegBankSelect::repairReg(
 
     // Build the instruction used to repair, then clone it at the right
     // places. Avoiding buildCopy bypasses the check that Src and Dst have the
-    // same types because the type is a placeholder when this function is called.
+    // same types because the type is a placeholder when this function is
+    // called.
     MI = MIRBuilder.buildInstrNoInsert(TargetOpcode::COPY)
-      .addDef(Dst)
-      .addUse(Src);
+             .addDef(Dst)
+             .addUse(Src);
     LLVM_DEBUG(dbgs() << "Copy: " << printReg(Src) << ':'
                       << printRegClassOrBank(Src, *MRI, TRI)
                       << " to: " << printReg(Dst) << ':'
@@ -169,7 +170,8 @@ bool RegBankSelect::repairReg(
   } else {
     // TODO: Support with G_IMPLICIT_DEF + G_INSERT sequence or G_EXTRACT
     // sequence.
-    assert(ValMapping.partsAllUniform() && "irregular breakdowns not supported");
+    assert(ValMapping.partsAllUniform() &&
+           "irregular breakdowns not supported");
 
     LLT RegTy = MRI->getType(MO.getReg());
     if (MO.isDef()) {
@@ -191,8 +193,7 @@ bool RegBankSelect::repairReg(
         MergeOp = TargetOpcode::G_MERGE_VALUES;
 
       auto MergeBuilder =
-        MIRBuilder.buildInstrNoInsert(MergeOp)
-        .addDef(MO.getReg());
+          MIRBuilder.buildInstrNoInsert(MergeOp).addDef(MO.getReg());
 
       for (Register SrcReg : NewVRegs)
         MergeBuilder.addUse(SrcReg);
@@ -200,7 +201,7 @@ bool RegBankSelect::repairReg(
       MI = MergeBuilder;
     } else {
       MachineInstrBuilder UnMergeBuilder =
-        MIRBuilder.buildInstrNoInsert(TargetOpcode::G_UNMERGE_VALUES);
+          MIRBuilder.buildInstrNoInsert(TargetOpcode::G_UNMERGE_VALUES);
       for (Register DefReg : NewVRegs)
         UnMergeBuilder.addDef(DefReg);
 
@@ -680,7 +681,7 @@ bool RegBankSelect::assignRegisterBanks(MachineFunction &MF) {
   // Walk the function and assign register banks to all operands.
   // Use a RPOT to make sure all registers are assigned before we choose
   // the best mapping of the current instruction.
-  ReversePostOrderTraversal<MachineFunction*> RPOT(&MF);
+  ReversePostOrderTraversal<MachineFunction *> RPOT(&MF);
   for (MachineBasicBlock *MBB : RPOT) {
     // Set a sensible insertion point so that subsequent calls to
     // MIRBuilder.
diff --git a/llvm/lib/CodeGen/GlobalISel/Utils.cpp b/llvm/lib/CodeGen/GlobalISel/Utils.cpp
index acc7b8098d1f0d8..441afaa15ada508 100644
--- a/llvm/lib/CodeGen/GlobalISel/Utils.cpp
+++ b/llvm/lib/CodeGen/GlobalISel/Utils.cpp
@@ -250,8 +250,7 @@ static void reportGISelDiagnostic(DiagnosticSeverity Severity,
                                   const TargetPassConfig &TPC,
                                   MachineOptimizationRemarkEmitter &MORE,
                                   MachineOptimizationRemarkMissed &R) {
-  bool IsFatal = Severity == DS_Error &&
-                 TPC.isGlobalISelAbortEnabled();
+  bool IsFatal = Severity == DS_Error && TPC.isGlobalISelAbortEnabled();
   // Print the function name explicitly if we don't have a debug location (which
   // makes the diagnostic less useful) or if we're going to emit a raw error.
   if (!R.getLocation().isValid() || IsFatal)
@@ -280,8 +279,8 @@ void llvm::reportGISelFailure(MachineFunction &MF, const TargetPassConfig &TPC,
                               MachineOptimizationRemarkEmitter &MORE,
                               const char *PassName, StringRef Msg,
                               const MachineInstr &MI) {
-  MachineOptimizationRemarkMissed R(PassName, "GISelFailure: ",
-                                    MI.getDebugLoc(), MI.getParent());
+  MachineOptimizationRemarkMissed R(
+      PassName, "GISelFailure: ", MI.getDebugLoc(), MI.getParent());
   R << Msg;
   // Printing MI is expensive;  only do it if expensive remarks are enabled.
   if (TPC.isGlobalISelAbortEnabled() || MORE.allowExtraAnalysis(PassName))
@@ -434,8 +433,8 @@ std::optional<FPValueAndVReg> llvm::getFConstantVRegValWithLookThrough(
                         Reg->VReg};
 }
 
-const ConstantFP *
-llvm::getConstantFPVRegVal(Register VReg, const MachineRegisterInfo &MRI) {
+const ConstantFP *llvm::getConstantFPVRegVal(Register VReg,
+                                             const MachineRegisterInfo &MRI) {
   MachineInstr *MI = MRI.getVRegDef(VReg);
   if (TargetOpcode::G_FCONSTANT != MI->getOpcode())
     return nullptr;
@@ -643,7 +642,7 @@ bool llvm::isKnownNeverNaN(Register Val, const MachineRegisterInfo &MRI,
   if (!DefMI)
     return false;
 
-  const TargetMachine& TM = DefMI->getMF()->getTarget();
+  const TargetMachine &TM = DefMI->getMF()->getTarget();
   if (DefMI->getFlag(MachineInstr::FmNoNans) || TM.Options.NoNaNsFPMath)
     return true;
 
@@ -742,7 +741,8 @@ Register llvm::getFunctionLiveInPhysReg(MachineFunction &MF,
     MachineInstr *Def = MRI.getVRegDef(LiveIn);
     if (Def) {
       // FIXME: Should the verifier check this is in the entry block?
-      assert(Def->getParent() == &EntryMBB && "live-in copy not in entry block");
+      assert(Def->getParent() == &EntryMBB &&
+             "live-in copy not in entry block");
       return LiveIn;
     }
 
@@ -757,7 +757,7 @@ Register llvm::getFunctionLiveInPhysReg(MachineFunction &MF,
   }
 
   BuildMI(EntryMBB, EntryMBB.begin(), DL, TII.get(TargetOpcode::COPY), LiveIn)
-    .addReg(PhysReg);
+      .addReg(PhysReg);
   if (!EntryMBB.isLiveIn(PhysReg))
     EntryMBB.addLiveIn(PhysReg);
   return LiveIn;
diff --git a/llvm/lib/CodeGen/GlobalMerge.cpp b/llvm/lib/CodeGen/GlobalMerge.cpp
index f259cbc1d788e21..c71fb127c7c56c7 100644
--- a/llvm/lib/CodeGen/GlobalMerge.cpp
+++ b/llvm/lib/CodeGen/GlobalMerge.cpp
@@ -104,19 +104,19 @@ using namespace llvm;
 #define DEBUG_TYPE "global-merge"
 
 // FIXME: This is only useful as a last-resort way to disable the pass.
-static cl::opt<bool>
-EnableGlobalMerge("enable-global-merge", cl::Hidden,
-                  cl::desc("Enable the global merge pass"),
-                  cl::init(true));
+static cl::opt<bool> EnableGlobalMerge("enable-global-merge", cl::Hidden,
+                                       cl::desc("Enable the global merge pass"),
+                                       cl::init(true));
 
 static cl::opt<unsigned>
-GlobalMergeMaxOffset("global-merge-max-offset", cl::Hidden,
-                     cl::desc("Set maximum offset for global merge pass"),
-                     cl::init(0));
+    GlobalMergeMaxOffset("global-merge-max-offset", cl::Hidden,
+                         cl::desc("Set maximum offset for global merge pass"),
+                         cl::init(0));
 
-static cl::opt<bool> GlobalMergeGroupByUse(
-    "global-merge-group-by-use", cl::Hidden,
-    cl::desc("Improve global merge pass to look at uses"), cl::init(true));
+static cl::opt<bool>
+    GlobalMergeGroupByUse("global-merge-group-by-use", cl::Hidden,
+                          cl::desc("Improve global merge pass to look at uses"),
+                          cl::init(true));
 
 static cl::opt<bool> GlobalMergeIgnoreSingleUse(
     "global-merge-ignore-single-use", cl::Hidden,
@@ -124,93 +124,92 @@ static cl::opt<bool> GlobalMergeIgnoreSingleUse(
     cl::init(true));
 
 static cl::opt<bool>
-EnableGlobalMergeOnConst("global-merge-on-const", cl::Hidden,
-                         cl::desc("Enable global merge pass on constants"),
-                         cl::init(false));
+    EnableGlobalMergeOnConst("global-merge-on-const", cl::Hidden,
+                             cl::desc("Enable global merge pass on constants"),
+                             cl::init(false));
 
 // FIXME: this could be a transitional option, and we probably need to remove
 // it if only we are sure this optimization could always benefit all targets.
-static cl::opt<cl::boolOrDefault>
-EnableGlobalMergeOnExternal("global-merge-on-external", cl::Hidden,
-     cl::desc("Enable global merge pass on external linkage"));
+static cl::opt<cl::boolOrDefault> EnableGlobalMergeOnExternal(
+    "global-merge-on-external", cl::Hidden,
+    cl::desc("Enable global merge pass on external linkage"));
 
 STATISTIC(NumMerged, "Number of globals merged");
 
 namespace {
 
-  class GlobalMerge : public FunctionPass {
-    const TargetMachine *TM = nullptr;
+class GlobalMerge : public FunctionPass {
+  const TargetMachine *TM = nullptr;
 
-    // FIXME: Infer the maximum possible offset depending on the actual users
-    // (these max offsets are different for the users inside Thumb or ARM
-    // functions), see the code that passes in the offset in the ARM backend
-    // for more information.
-    unsigned MaxOffset;
+  // FIXME: Infer the maximum possible offset depending on the actual users
+  // (these max offsets are different for the users inside Thumb or ARM
+  // functions), see the code that passes in the offset in the ARM backend
+  // for more information.
+  unsigned MaxOffset;
 
-    /// Whether we should try to optimize for size only.
-    /// Currently, this applies a dead simple heuristic: only consider globals
-    /// used in minsize functions for merging.
-    /// FIXME: This could learn about optsize, and be used in the cost model.
-    bool OnlyOptimizeForSize = false;
+  /// Whether we should try to optimize for size only.
+  /// Currently, this applies a dead simple heuristic: only consider globals
+  /// used in minsize functions for merging.
+  /// FIXME: This could learn about optsize, and be used in the cost model.
+  bool OnlyOptimizeForSize = false;
 
-    /// Whether we should merge global variables that have external linkage.
-    bool MergeExternalGlobals = false;
+  /// Whether we should merge global variables that have external linkage.
+  bool MergeExternalGlobals = false;
 
-    bool IsMachO = false;
+  bool IsMachO = false;
 
-    bool doMerge(SmallVectorImpl<GlobalVariable*> &Globals,
-                 Module &M, bool isConst, unsigned AddrSpace) const;
+  bool doMerge(SmallVectorImpl<GlobalVariable *> &Globals, Module &M,
+               bool isConst, unsigned AddrSpace) const;
 
-    /// Merge everything in \p Globals for which the corresponding bit
-    /// in \p GlobalSet is set.
-    bool doMerge(const SmallVectorImpl<GlobalVariable *> &Globals,
-                 const BitVector &GlobalSet, Module &M, bool isConst,
-                 unsigned AddrSpace) const;
+  /// Merge everything in \p Globals for which the corresponding bit
+  /// in \p GlobalSet is set.
+  bool doMerge(const SmallVectorImpl<GlobalVariable *> &Globals,
+               const BitVector &GlobalSet, Module &M, bool isConst,
+               unsigned AddrSpace) const;
 
-    /// Check if the given variable has been identified as must keep
-    /// \pre setMustKeepGlobalVariables must have been called on the Module that
-    ///      contains GV
-    bool isMustKeepGlobalVariable(const GlobalVariable *GV) const {
-      return MustKeepGlobalVariables.count(GV);
-    }
+  /// Check if the given variable has been identified as must-keep.
+  /// \pre setMustKeepGlobalVariables must have been called on the Module that
+  ///      contains GV
+  bool isMustKeepGlobalVariable(const GlobalVariable *GV) const {
+    return MustKeepGlobalVariables.count(GV);
+  }
 
-    /// Collect every variables marked as "used" or used in a landing pad
-    /// instruction for this Module.
-    void setMustKeepGlobalVariables(Module &M);
+  /// Collect every variable marked as "used" or used in a landing pad
+  /// instruction for this Module.
+  void setMustKeepGlobalVariables(Module &M);
 
-    /// Collect every variables marked as "used"
-    void collectUsedGlobalVariables(Module &M, StringRef Name);
+  /// Collect every variable marked as "used".
+  void collectUsedGlobalVariables(Module &M, StringRef Name);
 
-    /// Keep track of the GlobalVariable that must not be merged away
-    SmallSetVector<const GlobalVariable *, 16> MustKeepGlobalVariables;
+  /// Keep track of the GlobalVariable that must not be merged away
+  SmallSetVector<const GlobalVariable *, 16> MustKeepGlobalVariables;
 
-  public:
-    static char ID;             // Pass identification, replacement for typeid.
+public:
+  static char ID; // Pass identification, replacement for typeid.
 
-    explicit GlobalMerge()
-        : FunctionPass(ID), MaxOffset(GlobalMergeMaxOffset) {
-      initializeGlobalMergePass(*PassRegistry::getPassRegistry());
-    }
+  explicit GlobalMerge() : FunctionPass(ID), MaxOffset(GlobalMergeMaxOffset) {
+    initializeGlobalMergePass(*PassRegistry::getPassRegistry());
+  }
 
-    explicit GlobalMerge(const TargetMachine *TM, unsigned MaximalOffset,
-                         bool OnlyOptimizeForSize, bool MergeExternalGlobals)
-        : FunctionPass(ID), TM(TM), MaxOffset(MaximalOffset),
-          OnlyOptimizeForSize(OnlyOptimizeForSize),
-          MergeExternalGlobals(MergeExternalGlobals) {
-      initializeGlobalMergePass(*PassRegistry::getPassRegistry());
-    }
+  explicit GlobalMerge(const TargetMachine *TM, unsigned MaximalOffset,
+                       bool OnlyOptimizeForSize, bool MergeExternalGlobals)
+      : FunctionPass(ID), TM(TM), MaxOffset(MaximalOffset),
+        OnlyOptimizeForSize(OnlyOptimizeForSize),
+        MergeExternalGlobals(MergeExternalGlobals) {
+    initializeGlobalMergePass(*PassRegistry::getPassRegistry());
+  }
 
-    bool doInitialization(Module &M) override;
-    bool runOnFunction(Function &F) override;
-    bool doFinalization(Module &M) override;
+  bool doInitialization(Module &M) override;
+  bool runOnFunction(Function &F) override;
+  bool doFinalization(Module &M) override;
 
-    StringRef getPassName() const override { return "Merge internal globals"; }
+  StringRef getPassName() const override { return "Merge internal globals"; }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      FunctionPass::getAnalysisUsage(AU);
-    }
-  };
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    FunctionPass::getAnalysisUsage(AU);
+  }
+};
 
 } // end anonymous namespace
 
@@ -218,8 +217,8 @@ char GlobalMerge::ID = 0;
 
 INITIALIZE_PASS(GlobalMerge, DEBUG_TYPE, "Merge global variables", false, false)
 
-bool GlobalMerge::doMerge(SmallVectorImpl<GlobalVariable*> &Globals,
-                          Module &M, bool isConst, unsigned AddrSpace) const {
+bool GlobalMerge::doMerge(SmallVectorImpl<GlobalVariable *> &Globals, Module &M,
+                          bool isConst, unsigned AddrSpace) const {
   auto &DL = M.getDataLayout();
   // FIXME: Find better heuristics
   llvm::stable_sort(
@@ -452,8 +451,8 @@ bool GlobalMerge::doMerge(const SmallVectorImpl<GlobalVariable *> &Globals,
   while (i != -1) {
     ssize_t j = 0;
     uint64_t MergedSize = 0;
-    std::vector<Type*> Tys;
-    std::vector<Constant*> Inits;
+    std::vector<Type *> Tys;
+    std::vector<Constant *> Inits;
     std::vector<unsigned> StructIdxs;
 
     bool HasExternal = false;
@@ -509,10 +508,9 @@ bool GlobalMerge::doMerge(const SmallVectorImpl<GlobalVariable *> &Globals,
     // of the first variable merged as the suffix of global symbol
     // name.  This avoids a link-time naming conflict for the
     // _MergedGlobals symbols.
-    Twine MergedName =
-        (IsMachO && HasExternal)
-            ? "_MergedGlobals_" + FirstExternalName
-            : "_MergedGlobals";
+    Twine MergedName = (IsMachO && HasExternal)
+                           ? "_MergedGlobals_" + FirstExternalName
+                           : "_MergedGlobals";
     auto MergedLinkage = IsMachO ? Linkage : GlobalValue::PrivateLinkage;
     auto *MergedGV = new GlobalVariable(
         M, MergedTy, isConst, MergedLinkage, MergedInit, MergedName, nullptr,
@@ -567,14 +565,15 @@ bool GlobalMerge::doMerge(const SmallVectorImpl<GlobalVariable *> &Globals,
 void GlobalMerge::collectUsedGlobalVariables(Module &M, StringRef Name) {
   // Extract global variables from llvm.used array
   const GlobalVariable *GV = M.getGlobalVariable(Name);
-  if (!GV || !GV->hasInitializer()) return;
+  if (!GV || !GV->hasInitializer())
+    return;
 
   // Should be an array of 'i8*'.
   const ConstantArray *InitList = cast<ConstantArray>(GV->getInitializer());
 
   for (unsigned i = 0, e = InitList->getNumOperands(); i != e; ++i)
-    if (const GlobalVariable *G =
-        dyn_cast<GlobalVariable>(InitList->getOperand(i)->stripPointerCasts()))
+    if (const GlobalVariable *G = dyn_cast<GlobalVariable>(
+            InitList->getOperand(i)->stripPointerCasts()))
       MustKeepGlobalVariables.insert(G);
 }
 
@@ -593,7 +592,8 @@ void GlobalMerge::setMustKeepGlobalVariables(Module &M) {
         if (const GlobalVariable *GV =
                 dyn_cast<GlobalVariable>(U->stripPointerCasts()))
           MustKeepGlobalVariables.insert(GV);
-        else if (const ConstantArray *CA = dyn_cast<ConstantArray>(U->stripPointerCasts())) {
+        else if (const ConstantArray *CA =
+                     dyn_cast<ConstantArray>(U->stripPointerCasts())) {
           for (const Use &Elt : CA->operands()) {
             if (const GlobalVariable *GV =
                     dyn_cast<GlobalVariable>(Elt->stripPointerCasts()))
@@ -618,10 +618,10 @@ bool GlobalMerge::doInitialization(Module &M) {
   setMustKeepGlobalVariables(M);
 
   LLVM_DEBUG({
-      dbgs() << "Number of GV that must be kept:  " <<
-                MustKeepGlobalVariables.size() << "\n";
-      for (const GlobalVariable *KeptGV : MustKeepGlobalVariables)
-        dbgs() << "Kept: " << *KeptGV << "\n";
+    dbgs() << "Number of GV that must be kept:  "
+           << MustKeepGlobalVariables.size() << "\n";
+    for (const GlobalVariable *KeptGV : MustKeepGlobalVariables)
+      dbgs() << "Kept: " << *KeptGV << "\n";
   });
   // Grab all non-const globals.
   for (auto &GV : M.globals()) {
@@ -644,8 +644,7 @@ bool GlobalMerge::doInitialization(Module &M) {
     StringRef Section = GV.getSection();
 
     // Ignore all 'special' globals.
-    if (GV.getName().startswith("llvm.") ||
-        GV.getName().startswith(".llvm."))
+    if (GV.getName().startswith("llvm.") || GV.getName().startswith(".llvm."))
       continue;
 
     // Ignore all "required" globals:
@@ -662,8 +661,7 @@ bool GlobalMerge::doInitialization(Module &M) {
 
     Type *Ty = GV.getValueType();
     if (DL.getTypeAllocSize(Ty) < MaxOffset) {
-      if (TM &&
-          TargetLoweringObjectFile::getKindForGlobal(&GV, *TM).isBSS())
+      if (TM && TargetLoweringObjectFile::getKindForGlobal(&GV, *TM).isBSS())
         BSSGlobals[{AddressSpace, Section}].push_back(&GV);
       else if (GV.isConstant())
         ConstGlobals[{AddressSpace, Section}].push_back(&GV);
@@ -688,9 +686,7 @@ bool GlobalMerge::doInitialization(Module &M) {
   return Changed;
 }
 
-bool GlobalMerge::runOnFunction(Function &F) {
-  return false;
-}
+bool GlobalMerge::runOnFunction(Function &F) { return false; }
 
 bool GlobalMerge::doFinalization(Module &M) {
   MustKeepGlobalVariables.clear();
@@ -700,7 +696,8 @@ bool GlobalMerge::doFinalization(Module &M) {
 Pass *llvm::createGlobalMergePass(const TargetMachine *TM, unsigned Offset,
                                   bool OnlyOptimizeForSize,
                                   bool MergeExternalByDefault) {
-  bool MergeExternal = (EnableGlobalMergeOnExternal == cl::BOU_UNSET) ?
-    MergeExternalByDefault : (EnableGlobalMergeOnExternal == cl::BOU_TRUE);
+  bool MergeExternal = (EnableGlobalMergeOnExternal == cl::BOU_UNSET)
+                           ? MergeExternalByDefault
+                           : (EnableGlobalMergeOnExternal == cl::BOU_TRUE);
   return new GlobalMerge(TM, Offset, OnlyOptimizeForSize, MergeExternal);
 }
diff --git a/llvm/lib/CodeGen/HardwareLoops.cpp b/llvm/lib/CodeGen/HardwareLoops.cpp
index e7b14d700a44a12..c1e8a352663c8e8 100644
--- a/llvm/lib/CodeGen/HardwareLoops.cpp
+++ b/llvm/lib/CodeGen/HardwareLoops.cpp
@@ -49,37 +49,35 @@
 
 using namespace llvm;
 
-static cl::opt<bool>
-ForceHardwareLoops("force-hardware-loops", cl::Hidden, cl::init(false),
-                   cl::desc("Force hardware loops intrinsics to be inserted"));
+static cl::opt<bool> ForceHardwareLoops(
+    "force-hardware-loops", cl::Hidden, cl::init(false),
+    cl::desc("Force hardware loops intrinsics to be inserted"));
 
-static cl::opt<bool>
-ForceHardwareLoopPHI(
-  "force-hardware-loop-phi", cl::Hidden, cl::init(false),
-  cl::desc("Force hardware loop counter to be updated through a phi"));
+static cl::opt<bool> ForceHardwareLoopPHI(
+    "force-hardware-loop-phi", cl::Hidden, cl::init(false),
+    cl::desc("Force hardware loop counter to be updated through a phi"));
 
 static cl::opt<bool>
-ForceNestedLoop("force-nested-hardware-loop", cl::Hidden, cl::init(false),
-                cl::desc("Force allowance of nested hardware loops"));
+    ForceNestedLoop("force-nested-hardware-loop", cl::Hidden, cl::init(false),
+                    cl::desc("Force allowance of nested hardware loops"));
 
 static cl::opt<unsigned>
-LoopDecrement("hardware-loop-decrement", cl::Hidden, cl::init(1),
-            cl::desc("Set the loop decrement value"));
+    LoopDecrement("hardware-loop-decrement", cl::Hidden, cl::init(1),
+                  cl::desc("Set the loop decrement value"));
 
 static cl::opt<unsigned>
-CounterBitWidth("hardware-loop-counter-bitwidth", cl::Hidden, cl::init(32),
-                cl::desc("Set the loop counter bitwidth"));
+    CounterBitWidth("hardware-loop-counter-bitwidth", cl::Hidden, cl::init(32),
+                    cl::desc("Set the loop counter bitwidth"));
 
 static cl::opt<bool>
-ForceGuardLoopEntry(
-  "force-hardware-loop-guard", cl::Hidden, cl::init(false),
-  cl::desc("Force generation of loop guard intrinsic"));
+    ForceGuardLoopEntry("force-hardware-loop-guard", cl::Hidden,
+                        cl::init(false),
+                        cl::desc("Force generation of loop guard intrinsic"));
 
 STATISTIC(NumHWLoops, "Number of loops converted to hardware loops");
 
 #ifndef NDEBUG
-static void debugHWLoopFailure(const StringRef DebugMsg,
-    Instruction *I) {
+static void debugHWLoopFailure(const StringRef DebugMsg, Instruction *I) {
   dbgs() << "HWLoops: " << DebugMsg;
   if (I)
     dbgs() << ' ' << *I;
@@ -109,124 +107,122 @@ createHWLoopAnalysis(StringRef RemarkName, Loop *L, Instruction *I) {
 
 namespace {
 
-  void reportHWLoopFailure(const StringRef Msg, const StringRef ORETag,
-      OptimizationRemarkEmitter *ORE, Loop *TheLoop, Instruction *I = nullptr) {
-    LLVM_DEBUG(debugHWLoopFailure(Msg, I));
-    ORE->emit(createHWLoopAnalysis(ORETag, TheLoop, I) << Msg);
-  }
+void reportHWLoopFailure(const StringRef Msg, const StringRef ORETag,
+                         OptimizationRemarkEmitter *ORE, Loop *TheLoop,
+                         Instruction *I = nullptr) {
+  LLVM_DEBUG(debugHWLoopFailure(Msg, I));
+  ORE->emit(createHWLoopAnalysis(ORETag, TheLoop, I) << Msg);
+}
 
-  using TTI = TargetTransformInfo;
-
-  class HardwareLoopsLegacy : public FunctionPass {
-  public:
-    static char ID;
-
-    HardwareLoopsLegacy() : FunctionPass(ID) {
-      initializeHardwareLoopsLegacyPass(*PassRegistry::getPassRegistry());
-    }
-
-    bool runOnFunction(Function &F) override;
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addRequired<LoopInfoWrapperPass>();
-      AU.addPreserved<LoopInfoWrapperPass>();
-      AU.addRequired<DominatorTreeWrapperPass>();
-      AU.addPreserved<DominatorTreeWrapperPass>();
-      AU.addRequired<ScalarEvolutionWrapperPass>();
-      AU.addPreserved<ScalarEvolutionWrapperPass>();
-      AU.addRequired<AssumptionCacheTracker>();
-      AU.addRequired<TargetTransformInfoWrapperPass>();
-      AU.addRequired<OptimizationRemarkEmitterWrapperPass>();
-      AU.addPreserved<BranchProbabilityInfoWrapperPass>();
-    }
-  };
+using TTI = TargetTransformInfo;
 
-  class HardwareLoopsImpl {
-  public:
-    HardwareLoopsImpl(ScalarEvolution &SE, LoopInfo &LI, bool PreserveLCSSA,
-                      DominatorTree &DT, const DataLayout &DL,
-                      const TargetTransformInfo &TTI, TargetLibraryInfo *TLI,
-                      AssumptionCache &AC, OptimizationRemarkEmitter *ORE,
-                      HardwareLoopOptions &Opts)
-      : SE(SE), LI(LI), PreserveLCSSA(PreserveLCSSA), DT(DT), DL(DL), TTI(TTI),
-        TLI(TLI), AC(AC), ORE(ORE), Opts(Opts) { }
-
-    bool run(Function &F);
-
-  private:
-    // Try to convert the given Loop into a hardware loop.
-    bool TryConvertLoop(Loop *L, LLVMContext &Ctx);
-
-    // Given that the target believes the loop to be profitable, try to
-    // convert it.
-    bool TryConvertLoop(HardwareLoopInfo &HWLoopInfo);
-
-    ScalarEvolution &SE;
-    LoopInfo &LI;
-    bool PreserveLCSSA;
-    DominatorTree &DT;
-    const DataLayout &DL;
-    const TargetTransformInfo &TTI;
-    TargetLibraryInfo *TLI = nullptr;
-    AssumptionCache &AC;
-    OptimizationRemarkEmitter *ORE;
-    HardwareLoopOptions &Opts;
-    bool MadeChange = false;
-  };
+class HardwareLoopsLegacy : public FunctionPass {
+public:
+  static char ID;
 
-  class HardwareLoop {
-    // Expand the trip count scev into a value that we can use.
-    Value *InitLoopCount();
-
-    // Insert the set_loop_iteration intrinsic.
-    Value *InsertIterationSetup(Value *LoopCountInit);
-
-    // Insert the loop_decrement intrinsic.
-    void InsertLoopDec();
-
-    // Insert the loop_decrement_reg intrinsic.
-    Instruction *InsertLoopRegDec(Value *EltsRem);
-
-    // If the target requires the counter value to be updated in the loop,
-    // insert a phi to hold the value. The intended purpose is for use by
-    // loop_decrement_reg.
-    PHINode *InsertPHICounter(Value *NumElts, Value *EltsRem);
-
-    // Create a new cmp, that checks the returned value of loop_decrement*,
-    // and update the exit branch to use it.
-    void UpdateBranch(Value *EltsRem);
-
-  public:
-    HardwareLoop(HardwareLoopInfo &Info, ScalarEvolution &SE,
-                 const DataLayout &DL,
-                 OptimizationRemarkEmitter *ORE,
-                 HardwareLoopOptions &Opts) :
-      SE(SE), DL(DL), ORE(ORE), Opts(Opts), L(Info.L), M(L->getHeader()->getModule()),
-      ExitCount(Info.ExitCount),
-      CountType(Info.CountType),
-      ExitBranch(Info.ExitBranch),
-      LoopDecrement(Info.LoopDecrement),
-      UsePHICounter(Info.CounterInReg),
-      UseLoopGuard(Info.PerformEntryTest) { }
-
-    void Create();
-
-  private:
-    ScalarEvolution &SE;
-    const DataLayout &DL;
-    OptimizationRemarkEmitter *ORE = nullptr;
-    HardwareLoopOptions &Opts;
-    Loop *L                 = nullptr;
-    Module *M               = nullptr;
-    const SCEV *ExitCount   = nullptr;
-    Type *CountType         = nullptr;
-    BranchInst *ExitBranch  = nullptr;
-    Value *LoopDecrement    = nullptr;
-    bool UsePHICounter      = false;
-    bool UseLoopGuard       = false;
-    BasicBlock *BeginBB     = nullptr;
-  };
-}
+  HardwareLoopsLegacy() : FunctionPass(ID) {
+    initializeHardwareLoopsLegacyPass(*PassRegistry::getPassRegistry());
+  }
+
+  bool runOnFunction(Function &F) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<LoopInfoWrapperPass>();
+    AU.addPreserved<LoopInfoWrapperPass>();
+    AU.addRequired<DominatorTreeWrapperPass>();
+    AU.addPreserved<DominatorTreeWrapperPass>();
+    AU.addRequired<ScalarEvolutionWrapperPass>();
+    AU.addPreserved<ScalarEvolutionWrapperPass>();
+    AU.addRequired<AssumptionCacheTracker>();
+    AU.addRequired<TargetTransformInfoWrapperPass>();
+    AU.addRequired<OptimizationRemarkEmitterWrapperPass>();
+    AU.addPreserved<BranchProbabilityInfoWrapperPass>();
+  }
+};
+
+class HardwareLoopsImpl {
+public:
+  HardwareLoopsImpl(ScalarEvolution &SE, LoopInfo &LI, bool PreserveLCSSA,
+                    DominatorTree &DT, const DataLayout &DL,
+                    const TargetTransformInfo &TTI, TargetLibraryInfo *TLI,
+                    AssumptionCache &AC, OptimizationRemarkEmitter *ORE,
+                    HardwareLoopOptions &Opts)
+      : SE(SE), LI(LI), PreserveLCSSA(PreserveLCSSA), DT(DT), DL(DL), TTI(TTI),
+        TLI(TLI), AC(AC), ORE(ORE), Opts(Opts) {}
+
+  bool run(Function &F);
+
+private:
+  // Try to convert the given Loop into a hardware loop.
+  bool TryConvertLoop(Loop *L, LLVMContext &Ctx);
+
+  // Given that the target believes the loop to be profitable, try to
+  // convert it.
+  bool TryConvertLoop(HardwareLoopInfo &HWLoopInfo);
+
+  ScalarEvolution &SE;
+  LoopInfo &LI;
+  bool PreserveLCSSA;
+  DominatorTree &DT;
+  const DataLayout &DL;
+  const TargetTransformInfo &TTI;
+  TargetLibraryInfo *TLI = nullptr;
+  AssumptionCache &AC;
+  OptimizationRemarkEmitter *ORE;
+  HardwareLoopOptions &Opts;
+  bool MadeChange = false;
+};
+
+class HardwareLoop {
+  // Expand the trip count SCEV into a value that we can use.
+  Value *InitLoopCount();
+
+  // Insert the set_loop_iteration intrinsic.
+  Value *InsertIterationSetup(Value *LoopCountInit);
+
+  // Insert the loop_decrement intrinsic.
+  void InsertLoopDec();
+
+  // Insert the loop_decrement_reg intrinsic.
+  Instruction *InsertLoopRegDec(Value *EltsRem);
+
+  // If the target requires the counter value to be updated in the loop,
+  // insert a phi to hold the value. The intended purpose is for use by
+  // loop_decrement_reg.
+  PHINode *InsertPHICounter(Value *NumElts, Value *EltsRem);
+
+  // Create a new cmp that checks the returned value of loop_decrement*,
+  // and update the exit branch to use it.
+  void UpdateBranch(Value *EltsRem);
+
+public:
+  HardwareLoop(HardwareLoopInfo &Info, ScalarEvolution &SE,
+               const DataLayout &DL, OptimizationRemarkEmitter *ORE,
+               HardwareLoopOptions &Opts)
+      : SE(SE), DL(DL), ORE(ORE), Opts(Opts), L(Info.L),
+        M(L->getHeader()->getModule()), ExitCount(Info.ExitCount),
+        CountType(Info.CountType), ExitBranch(Info.ExitBranch),
+        LoopDecrement(Info.LoopDecrement), UsePHICounter(Info.CounterInReg),
+        UseLoopGuard(Info.PerformEntryTest) {}
+
+  void Create();
+
+private:
+  ScalarEvolution &SE;
+  const DataLayout &DL;
+  OptimizationRemarkEmitter *ORE = nullptr;
+  HardwareLoopOptions &Opts;
+  Loop *L = nullptr;
+  Module *M = nullptr;
+  const SCEV *ExitCount = nullptr;
+  Type *CountType = nullptr;
+  BranchInst *ExitBranch = nullptr;
+  Value *LoopDecrement = nullptr;
+  bool UsePHICounter = false;
+  bool UseLoopGuard = false;
+  BasicBlock *BeginBB = nullptr;
+};
+} // namespace
 
 char HardwareLoopsLegacy::ID = 0;
 
@@ -334,7 +330,7 @@ bool HardwareLoopsImpl::TryConvertLoop(Loop *L, LLVMContext &Ctx) {
 
   if (Opts.Decrement.has_value())
     HWLoopInfo.LoopDecrement =
-      ConstantInt::get(HWLoopInfo.CountType, Opts.Decrement.value());
+        ConstantInt::get(HWLoopInfo.CountType, Opts.Decrement.value());
 
   MadeChange |= TryConvertLoop(HWLoopInfo);
   return MadeChange && (!HWLoopInfo.IsNestingLegal && !Opts.ForceNested);
@@ -446,8 +442,7 @@ Value *HardwareLoop::InitLoopCount() {
   // loop counter and tests that is not zero?
 
   SCEVExpander SCEVE(SE, DL, "loopcnt");
-  if (!ExitCount->getType()->isPointerTy() &&
-      ExitCount->getType() != CountType)
+  if (!ExitCount->getType()->isPointerTy() && ExitCount->getType() != CountType)
     ExitCount = SE.getZeroExtendExpr(ExitCount, CountType);
 
   ExitCount = SE.getAddExpr(ExitCount, SE.getOne(CountType));
@@ -471,19 +466,18 @@ Value *HardwareLoop::InitLoopCount() {
     // If it's not safe to create a while loop then don't force it and create a
     // do-while loop instead
     if (!SCEVE.isSafeToExpandAt(ExitCount, Predecessor->getTerminator()))
-        UseLoopGuard = false;
+      UseLoopGuard = false;
     else
-        BB = Predecessor;
+      BB = Predecessor;
   }
 
   if (!SCEVE.isSafeToExpandAt(ExitCount, BB->getTerminator())) {
-    LLVM_DEBUG(dbgs() << "- Bailing, unsafe to expand ExitCount "
-               << *ExitCount << "\n");
+    LLVM_DEBUG(dbgs() << "- Bailing, unsafe to expand ExitCount " << *ExitCount
+                      << "\n");
     return nullptr;
   }
 
-  Value *Count = SCEVE.expandCodeFor(ExitCount, CountType,
-                                     BB->getTerminator());
+  Value *Count = SCEVE.expandCodeFor(ExitCount, CountType, BB->getTerminator());
 
   // FIXME: We've expanded Count where we hope to insert the counter setting
   // intrinsic. But, in the case of the 'test and set' form, we may fallback to
@@ -501,7 +495,7 @@ Value *HardwareLoop::InitLoopCount() {
   return Count;
 }
 
-Value* HardwareLoop::InsertIterationSetup(Value *LoopCountInit) {
+Value *HardwareLoop::InsertIterationSetup(Value *LoopCountInit) {
   IRBuilder<> Builder(BeginBB->getTerminator());
   Type *Ty = LoopCountInit->getType();
   bool UsePhi = UsePHICounter || Opts.ForcePhi;
@@ -536,10 +530,9 @@ Value* HardwareLoop::InsertIterationSetup(Value *LoopCountInit) {
 void HardwareLoop::InsertLoopDec() {
   IRBuilder<> CondBuilder(ExitBranch);
 
-  Function *DecFunc =
-    Intrinsic::getDeclaration(M, Intrinsic::loop_decrement,
-                              LoopDecrement->getType());
-  Value *Ops[] = { LoopDecrement };
+  Function *DecFunc = Intrinsic::getDeclaration(M, Intrinsic::loop_decrement,
+                                                LoopDecrement->getType());
+  Value *Ops[] = {LoopDecrement};
   Value *NewCond = CondBuilder.CreateCall(DecFunc, Ops);
   Value *OldCond = ExitBranch->getCondition();
   ExitBranch->setCondition(NewCond);
@@ -555,20 +548,19 @@ void HardwareLoop::InsertLoopDec() {
   LLVM_DEBUG(dbgs() << "HWLoops: Inserted loop dec: " << *NewCond << "\n");
 }
 
-Instruction* HardwareLoop::InsertLoopRegDec(Value *EltsRem) {
+Instruction *HardwareLoop::InsertLoopRegDec(Value *EltsRem) {
   IRBuilder<> CondBuilder(ExitBranch);
 
-  Function *DecFunc =
-      Intrinsic::getDeclaration(M, Intrinsic::loop_decrement_reg,
-                                { EltsRem->getType() });
-  Value *Ops[] = { EltsRem, LoopDecrement };
+  Function *DecFunc = Intrinsic::getDeclaration(
+      M, Intrinsic::loop_decrement_reg, {EltsRem->getType()});
+  Value *Ops[] = {EltsRem, LoopDecrement};
   Value *Call = CondBuilder.CreateCall(DecFunc, Ops);
 
   LLVM_DEBUG(dbgs() << "HWLoops: Inserted loop dec: " << *Call << "\n");
   return cast<Instruction>(Call);
 }
 
-PHINode* HardwareLoop::InsertPHICounter(Value *NumElts, Value *EltsRem) {
+PHINode *HardwareLoop::InsertPHICounter(Value *NumElts, Value *EltsRem) {
   BasicBlock *Preheader = L->getLoopPreheader();
   BasicBlock *Header = L->getHeader();
   BasicBlock *Latch = ExitBranch->getParent();
@@ -582,8 +574,8 @@ PHINode* HardwareLoop::InsertPHICounter(Value *NumElts, Value *EltsRem) {
 
 void HardwareLoop::UpdateBranch(Value *EltsRem) {
   IRBuilder<> CondBuilder(ExitBranch);
-  Value *NewCond =
-    CondBuilder.CreateICmpNE(EltsRem, ConstantInt::get(EltsRem->getType(), 0));
+  Value *NewCond = CondBuilder.CreateICmpNE(
+      EltsRem, ConstantInt::get(EltsRem->getType(), 0));
   Value *OldCond = ExitBranch->getCondition();
   ExitBranch->setCondition(NewCond);
 
@@ -596,11 +588,15 @@ void HardwareLoop::UpdateBranch(Value *EltsRem) {
   RecursivelyDeleteTriviallyDeadInstructions(OldCond);
 }
 
-INITIALIZE_PASS_BEGIN(HardwareLoopsLegacy, DEBUG_TYPE, HW_LOOPS_NAME, false, false)
+INITIALIZE_PASS_BEGIN(HardwareLoopsLegacy, DEBUG_TYPE, HW_LOOPS_NAME, false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(LoopInfoWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(OptimizationRemarkEmitterWrapperPass)
-INITIALIZE_PASS_END(HardwareLoopsLegacy, DEBUG_TYPE, HW_LOOPS_NAME, false, false)
+INITIALIZE_PASS_END(HardwareLoopsLegacy, DEBUG_TYPE, HW_LOOPS_NAME, false,
+                    false)
 
-FunctionPass *llvm::createHardwareLoopsLegacyPass() { return new HardwareLoopsLegacy(); }
+FunctionPass *llvm::createHardwareLoopsLegacyPass() {
+  return new HardwareLoopsLegacy();
+}
diff --git a/llvm/lib/CodeGen/IfConversion.cpp b/llvm/lib/CodeGen/IfConversion.cpp
index e8e276a8558d8a7..7ede9c5b4dfa7c9 100644
--- a/llvm/lib/CodeGen/IfConversion.cpp
+++ b/llvm/lib/CodeGen/IfConversion.cpp
@@ -61,369 +61,363 @@ using namespace llvm;
 static cl::opt<int> IfCvtFnStart("ifcvt-fn-start", cl::init(-1), cl::Hidden);
 static cl::opt<int> IfCvtFnStop("ifcvt-fn-stop", cl::init(-1), cl::Hidden);
 static cl::opt<int> IfCvtLimit("ifcvt-limit", cl::init(-1), cl::Hidden);
-static cl::opt<bool> DisableSimple("disable-ifcvt-simple",
-                                   cl::init(false), cl::Hidden);
+static cl::opt<bool> DisableSimple("disable-ifcvt-simple", cl::init(false),
+                                   cl::Hidden);
 static cl::opt<bool> DisableSimpleF("disable-ifcvt-simple-false",
                                     cl::init(false), cl::Hidden);
-static cl::opt<bool> DisableTriangle("disable-ifcvt-triangle",
-                                     cl::init(false), cl::Hidden);
+static cl::opt<bool> DisableTriangle("disable-ifcvt-triangle", cl::init(false),
+                                     cl::Hidden);
 static cl::opt<bool> DisableTriangleR("disable-ifcvt-triangle-rev",
                                       cl::init(false), cl::Hidden);
 static cl::opt<bool> DisableTriangleF("disable-ifcvt-triangle-false",
                                       cl::init(false), cl::Hidden);
-static cl::opt<bool> DisableDiamond("disable-ifcvt-diamond",
-                                    cl::init(false), cl::Hidden);
+static cl::opt<bool> DisableDiamond("disable-ifcvt-diamond", cl::init(false),
+                                    cl::Hidden);
 static cl::opt<bool> DisableForkedDiamond("disable-ifcvt-forked-diamond",
-                                        cl::init(false), cl::Hidden);
-static cl::opt<bool> IfCvtBranchFold("ifcvt-branch-fold",
-                                     cl::init(true), cl::Hidden);
-
-STATISTIC(NumSimple,       "Number of simple if-conversions performed");
-STATISTIC(NumSimpleFalse,  "Number of simple (F) if-conversions performed");
-STATISTIC(NumTriangle,     "Number of triangle if-conversions performed");
-STATISTIC(NumTriangleRev,  "Number of triangle (R) if-conversions performed");
-STATISTIC(NumTriangleFalse,"Number of triangle (F) if-conversions performed");
+                                          cl::init(false), cl::Hidden);
+static cl::opt<bool> IfCvtBranchFold("ifcvt-branch-fold", cl::init(true),
+                                     cl::Hidden);
+
+STATISTIC(NumSimple, "Number of simple if-conversions performed");
+STATISTIC(NumSimpleFalse, "Number of simple (F) if-conversions performed");
+STATISTIC(NumTriangle, "Number of triangle if-conversions performed");
+STATISTIC(NumTriangleRev, "Number of triangle (R) if-conversions performed");
+STATISTIC(NumTriangleFalse, "Number of triangle (F) if-conversions performed");
 STATISTIC(NumTriangleFRev, "Number of triangle (F/R) if-conversions performed");
-STATISTIC(NumDiamonds,     "Number of diamond if-conversions performed");
-STATISTIC(NumForkedDiamonds, "Number of forked-diamond if-conversions performed");
-STATISTIC(NumIfConvBBs,    "Number of if-converted blocks");
-STATISTIC(NumDupBBs,       "Number of duplicated blocks");
-STATISTIC(NumUnpred,       "Number of true blocks of diamonds unpredicated");
+STATISTIC(NumDiamonds, "Number of diamond if-conversions performed");
+STATISTIC(NumForkedDiamonds,
+          "Number of forked-diamond if-conversions performed");
+STATISTIC(NumIfConvBBs, "Number of if-converted blocks");
+STATISTIC(NumDupBBs, "Number of duplicated blocks");
+STATISTIC(NumUnpred, "Number of true blocks of diamonds unpredicated");
 
 namespace {
 
-  class IfConverter : public MachineFunctionPass {
-    enum IfcvtKind {
-      ICNotClassfied,  // BB data valid, but not classified.
-      ICSimpleFalse,   // Same as ICSimple, but on the false path.
-      ICSimple,        // BB is entry of an one split, no rejoin sub-CFG.
-      ICTriangleFRev,  // Same as ICTriangleFalse, but false path rev condition.
-      ICTriangleRev,   // Same as ICTriangle, but true path rev condition.
-      ICTriangleFalse, // Same as ICTriangle, but on the false path.
-      ICTriangle,      // BB is entry of a triangle sub-CFG.
-      ICDiamond,       // BB is entry of a diamond sub-CFG.
-      ICForkedDiamond  // BB is entry of an almost diamond sub-CFG, with a
-                       // common tail that can be shared.
-    };
-
-    /// One per MachineBasicBlock, this is used to cache the result
-    /// if-conversion feasibility analysis. This includes results from
-    /// TargetInstrInfo::analyzeBranch() (i.e. TBB, FBB, and Cond), and its
-    /// classification, and common tail block of its successors (if it's a
-    /// diamond shape), its size, whether it's predicable, and whether any
-    /// instruction can clobber the 'would-be' predicate.
-    ///
-    /// IsDone          - True if BB is not to be considered for ifcvt.
-    /// IsBeingAnalyzed - True if BB is currently being analyzed.
-    /// IsAnalyzed      - True if BB has been analyzed (info is still valid).
-    /// IsEnqueued      - True if BB has been enqueued to be ifcvt'ed.
-    /// IsBrAnalyzable  - True if analyzeBranch() returns false.
-    /// HasFallThrough  - True if BB may fallthrough to the following BB.
-    /// IsUnpredicable  - True if BB is known to be unpredicable.
-    /// ClobbersPred    - True if BB could modify predicates (e.g. has
-    ///                   cmp, call, etc.)
-    /// NonPredSize     - Number of non-predicated instructions.
-    /// ExtraCost       - Extra cost for multi-cycle instructions.
-    /// ExtraCost2      - Some instructions are slower when predicated
-    /// BB              - Corresponding MachineBasicBlock.
-    /// TrueBB / FalseBB- See analyzeBranch().
-    /// BrCond          - Conditions for end of block conditional branches.
-    /// Predicate       - Predicate used in the BB.
-    struct BBInfo {
-      bool IsDone          : 1;
-      bool IsBeingAnalyzed : 1;
-      bool IsAnalyzed      : 1;
-      bool IsEnqueued      : 1;
-      bool IsBrAnalyzable  : 1;
-      bool IsBrReversible  : 1;
-      bool HasFallThrough  : 1;
-      bool IsUnpredicable  : 1;
-      bool CannotBeCopied  : 1;
-      bool ClobbersPred    : 1;
-      unsigned NonPredSize = 0;
-      unsigned ExtraCost = 0;
-      unsigned ExtraCost2 = 0;
-      MachineBasicBlock *BB = nullptr;
-      MachineBasicBlock *TrueBB = nullptr;
-      MachineBasicBlock *FalseBB = nullptr;
-      SmallVector<MachineOperand, 4> BrCond;
-      SmallVector<MachineOperand, 4> Predicate;
-
-      BBInfo() : IsDone(false), IsBeingAnalyzed(false),
-                 IsAnalyzed(false), IsEnqueued(false), IsBrAnalyzable(false),
-                 IsBrReversible(false), HasFallThrough(false),
-                 IsUnpredicable(false), CannotBeCopied(false),
-                 ClobbersPred(false) {}
-    };
-
-    /// Record information about pending if-conversions to attempt:
-    /// BBI             - Corresponding BBInfo.
-    /// Kind            - Type of block. See IfcvtKind.
-    /// NeedSubsumption - True if the to-be-predicated BB has already been
-    ///                   predicated.
-    /// NumDups      - Number of instructions that would be duplicated due
-    ///                   to this if-conversion. (For diamonds, the number of
-    ///                   identical instructions at the beginnings of both
-    ///                   paths).
-    /// NumDups2     - For diamonds, the number of identical instructions
-    ///                   at the ends of both paths.
-    struct IfcvtToken {
-      BBInfo &BBI;
-      IfcvtKind Kind;
-      unsigned NumDups;
-      unsigned NumDups2;
-      bool NeedSubsumption : 1;
-      bool TClobbersPred : 1;
-      bool FClobbersPred : 1;
-
-      IfcvtToken(BBInfo &b, IfcvtKind k, bool s, unsigned d, unsigned d2 = 0,
-                 bool tc = false, bool fc = false)
+class IfConverter : public MachineFunctionPass {
+  enum IfcvtKind {
+    ICNotClassfied,  // BB data valid, but not classified.
+    ICSimpleFalse,   // Same as ICSimple, but on the false path.
+    ICSimple,        // BB is entry of a one-split, no-rejoin sub-CFG.
+    ICTriangleFRev,  // Same as ICTriangleFalse, but false path rev condition.
+    ICTriangleRev,   // Same as ICTriangle, but true path rev condition.
+    ICTriangleFalse, // Same as ICTriangle, but on the false path.
+    ICTriangle,      // BB is entry of a triangle sub-CFG.
+    ICDiamond,       // BB is entry of a diamond sub-CFG.
+    ICForkedDiamond  // BB is entry of an almost diamond sub-CFG, with a
+                     // common tail that can be shared.
+  };
+
+  /// One per MachineBasicBlock, this is used to cache the result of the
+  /// if-conversion feasibility analysis. This includes results from
+  /// TargetInstrInfo::analyzeBranch() (i.e. TBB, FBB, and Cond), and its
+  /// classification, and common tail block of its successors (if it's a
+  /// diamond shape), its size, whether it's predicable, and whether any
+  /// instruction can clobber the 'would-be' predicate.
+  ///
+  /// IsDone          - True if BB is not to be considered for ifcvt.
+  /// IsBeingAnalyzed - True if BB is currently being analyzed.
+  /// IsAnalyzed      - True if BB has been analyzed (info is still valid).
+  /// IsEnqueued      - True if BB has been enqueued to be ifcvt'ed.
+  /// IsBrAnalyzable  - True if analyzeBranch() returns false.
+  /// HasFallThrough  - True if BB may fall through to the following BB.
+  /// IsUnpredicable  - True if BB is known to be unpredicable.
+  /// ClobbersPred    - True if BB could modify predicates (e.g. has
+  ///                   cmp, call, etc.)
+  /// NonPredSize     - Number of non-predicated instructions.
+  /// ExtraCost       - Extra cost for multi-cycle instructions.
+  /// ExtraCost2      - Some instructions are slower when predicated.
+  /// BB              - Corresponding MachineBasicBlock.
+  /// TrueBB / FalseBB- See analyzeBranch().
+  /// BrCond          - Conditions for end of block conditional branches.
+  /// Predicate       - Predicate used in the BB.
+  struct BBInfo {
+    bool IsDone : 1;
+    bool IsBeingAnalyzed : 1;
+    bool IsAnalyzed : 1;
+    bool IsEnqueued : 1;
+    bool IsBrAnalyzable : 1;
+    bool IsBrReversible : 1;
+    bool HasFallThrough : 1;
+    bool IsUnpredicable : 1;
+    bool CannotBeCopied : 1;
+    bool ClobbersPred : 1;
+    unsigned NonPredSize = 0;
+    unsigned ExtraCost = 0;
+    unsigned ExtraCost2 = 0;
+    MachineBasicBlock *BB = nullptr;
+    MachineBasicBlock *TrueBB = nullptr;
+    MachineBasicBlock *FalseBB = nullptr;
+    SmallVector<MachineOperand, 4> BrCond;
+    SmallVector<MachineOperand, 4> Predicate;
+
+    BBInfo()
+        : IsDone(false), IsBeingAnalyzed(false), IsAnalyzed(false),
+          IsEnqueued(false), IsBrAnalyzable(false), IsBrReversible(false),
+          HasFallThrough(false), IsUnpredicable(false), CannotBeCopied(false),
+          ClobbersPred(false) {}
+  };
+
+  /// Record information about pending if-conversions to attempt:
+  /// BBI             - Corresponding BBInfo.
+  /// Kind            - Type of block. See IfcvtKind.
+  /// NeedSubsumption - True if the to-be-predicated BB has already been
+  ///                   predicated.
+  /// NumDups      - Number of instructions that would be duplicated due
+  ///                   to this if-conversion. (For diamonds, the number of
+  ///                   identical instructions at the beginnings of both
+  ///                   paths).
+  /// NumDups2     - For diamonds, the number of identical instructions
+  ///                   at the ends of both paths.
+  struct IfcvtToken {
+    BBInfo &BBI;
+    IfcvtKind Kind;
+    unsigned NumDups;
+    unsigned NumDups2;
+    bool NeedSubsumption : 1;
+    bool TClobbersPred : 1;
+    bool FClobbersPred : 1;
+
+    IfcvtToken(BBInfo &b, IfcvtKind k, bool s, unsigned d, unsigned d2 = 0,
+               bool tc = false, bool fc = false)
         : BBI(b), Kind(k), NumDups(d), NumDups2(d2), NeedSubsumption(s),
           TClobbersPred(tc), FClobbersPred(fc) {}
-    };
+  };
 
-    /// Results of if-conversion feasibility analysis indexed by basic block
-    /// number.
-    std::vector<BBInfo> BBAnalysis;
-    TargetSchedModel SchedModel;
+  /// Results of if-conversion feasibility analysis indexed by basic block
+  /// number.
+  std::vector<BBInfo> BBAnalysis;
+  TargetSchedModel SchedModel;
 
-    const TargetLoweringBase *TLI = nullptr;
-    const TargetInstrInfo *TII = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    const MachineBranchProbabilityInfo *MBPI = nullptr;
-    MachineRegisterInfo *MRI = nullptr;
+  const TargetLoweringBase *TLI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  const MachineBranchProbabilityInfo *MBPI = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
 
-    LivePhysRegs Redefs;
+  LivePhysRegs Redefs;
 
-    bool PreRegAlloc = true;
-    bool MadeChange = false;
-    int FnNum = -1;
-    std::function<bool(const MachineFunction &)> PredicateFtor;
+  bool PreRegAlloc = true;
+  bool MadeChange = false;
+  int FnNum = -1;
+  std::function<bool(const MachineFunction &)> PredicateFtor;
 
-  public:
-    static char ID;
+public:
+  static char ID;
 
-    IfConverter(std::function<bool(const MachineFunction &)> Ftor = nullptr)
-        : MachineFunctionPass(ID), PredicateFtor(std::move(Ftor)) {
-      initializeIfConverterPass(*PassRegistry::getPassRegistry());
-    }
+  IfConverter(std::function<bool(const MachineFunction &)> Ftor = nullptr)
+      : MachineFunctionPass(ID), PredicateFtor(std::move(Ftor)) {
+    initializeIfConverterPass(*PassRegistry::getPassRegistry());
+  }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addRequired<MachineBlockFrequencyInfo>();
-      AU.addRequired<MachineBranchProbabilityInfo>();
-      AU.addRequired<ProfileSummaryInfoWrapperPass>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<MachineBlockFrequencyInfo>();
+    AU.addRequired<MachineBranchProbabilityInfo>();
+    AU.addRequired<ProfileSummaryInfoWrapperPass>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties().set(
-          MachineFunctionProperties::Property::NoVRegs);
-    }
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::NoVRegs);
+  }
 
-  private:
-    bool reverseBranchCondition(BBInfo &BBI) const;
-    bool ValidSimple(BBInfo &TrueBBI, unsigned &Dups,
-                     BranchProbability Prediction) const;
-    bool ValidTriangle(BBInfo &TrueBBI, BBInfo &FalseBBI,
-                       bool FalseBranch, unsigned &Dups,
-                       BranchProbability Prediction) const;
-    bool CountDuplicatedInstructions(
-        MachineBasicBlock::iterator &TIB, MachineBasicBlock::iterator &FIB,
-        MachineBasicBlock::iterator &TIE, MachineBasicBlock::iterator &FIE,
-        unsigned &Dups1, unsigned &Dups2,
-        MachineBasicBlock &TBB, MachineBasicBlock &FBB,
-        bool SkipUnconditionalBranches) const;
-    bool ValidDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI,
-                      unsigned &Dups1, unsigned &Dups2,
-                      BBInfo &TrueBBICalc, BBInfo &FalseBBICalc) const;
-    bool ValidForkedDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI,
-                            unsigned &Dups1, unsigned &Dups2,
-                            BBInfo &TrueBBICalc, BBInfo &FalseBBICalc) const;
-    void AnalyzeBranches(BBInfo &BBI);
-    void ScanInstructions(BBInfo &BBI,
-                          MachineBasicBlock::iterator &Begin,
-                          MachineBasicBlock::iterator &End,
-                          bool BranchUnpredicable = false) const;
-    bool RescanInstructions(
-        MachineBasicBlock::iterator &TIB, MachineBasicBlock::iterator &FIB,
-        MachineBasicBlock::iterator &TIE, MachineBasicBlock::iterator &FIE,
-        BBInfo &TrueBBI, BBInfo &FalseBBI) const;
-    void AnalyzeBlock(MachineBasicBlock &MBB,
-                      std::vector<std::unique_ptr<IfcvtToken>> &Tokens);
-    bool FeasibilityAnalysis(BBInfo &BBI, SmallVectorImpl<MachineOperand> &Pred,
-                             bool isTriangle = false, bool RevBranch = false,
-                             bool hasCommonTail = false);
-    void AnalyzeBlocks(MachineFunction &MF,
-                       std::vector<std::unique_ptr<IfcvtToken>> &Tokens);
-    void InvalidatePreds(MachineBasicBlock &MBB);
-    bool IfConvertSimple(BBInfo &BBI, IfcvtKind Kind);
-    bool IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind);
-    bool IfConvertDiamondCommon(BBInfo &BBI, BBInfo &TrueBBI, BBInfo &FalseBBI,
-                                unsigned NumDups1, unsigned NumDups2,
-                                bool TClobbersPred, bool FClobbersPred,
-                                bool RemoveBranch, bool MergeAddEdges);
-    bool IfConvertDiamond(BBInfo &BBI, IfcvtKind Kind,
-                          unsigned NumDups1, unsigned NumDups2,
-                          bool TClobbers, bool FClobbers);
-    bool IfConvertForkedDiamond(BBInfo &BBI, IfcvtKind Kind,
+private:
+  bool reverseBranchCondition(BBInfo &BBI) const;
+  bool ValidSimple(BBInfo &TrueBBI, unsigned &Dups,
+                   BranchProbability Prediction) const;
+  bool ValidTriangle(BBInfo &TrueBBI, BBInfo &FalseBBI, bool FalseBranch,
+                     unsigned &Dups, BranchProbability Prediction) const;
+  bool CountDuplicatedInstructions(
+      MachineBasicBlock::iterator &TIB, MachineBasicBlock::iterator &FIB,
+      MachineBasicBlock::iterator &TIE, MachineBasicBlock::iterator &FIE,
+      unsigned &Dups1, unsigned &Dups2, MachineBasicBlock &TBB,
+      MachineBasicBlock &FBB, bool SkipUnconditionalBranches) const;
+  bool ValidDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI, unsigned &Dups1,
+                    unsigned &Dups2, BBInfo &TrueBBICalc,
+                    BBInfo &FalseBBICalc) const;
+  bool ValidForkedDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI, unsigned &Dups1,
+                          unsigned &Dups2, BBInfo &TrueBBICalc,
+                          BBInfo &FalseBBICalc) const;
+  void AnalyzeBranches(BBInfo &BBI);
+  void ScanInstructions(BBInfo &BBI, MachineBasicBlock::iterator &Begin,
+                        MachineBasicBlock::iterator &End,
+                        bool BranchUnpredicable = false) const;
+  bool RescanInstructions(MachineBasicBlock::iterator &TIB,
+                          MachineBasicBlock::iterator &FIB,
+                          MachineBasicBlock::iterator &TIE,
+                          MachineBasicBlock::iterator &FIE, BBInfo &TrueBBI,
+                          BBInfo &FalseBBI) const;
+  void AnalyzeBlock(MachineBasicBlock &MBB,
+                    std::vector<std::unique_ptr<IfcvtToken>> &Tokens);
+  bool FeasibilityAnalysis(BBInfo &BBI, SmallVectorImpl<MachineOperand> &Pred,
+                           bool isTriangle = false, bool RevBranch = false,
+                           bool hasCommonTail = false);
+  void AnalyzeBlocks(MachineFunction &MF,
+                     std::vector<std::unique_ptr<IfcvtToken>> &Tokens);
+  void InvalidatePreds(MachineBasicBlock &MBB);
+  bool IfConvertSimple(BBInfo &BBI, IfcvtKind Kind);
+  bool IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind);
+  bool IfConvertDiamondCommon(BBInfo &BBI, BBInfo &TrueBBI, BBInfo &FalseBBI,
                               unsigned NumDups1, unsigned NumDups2,
-                              bool TClobbers, bool FClobbers);
-    void PredicateBlock(BBInfo &BBI,
-                        MachineBasicBlock::iterator E,
-                        SmallVectorImpl<MachineOperand> &Cond,
-                        SmallSet<MCPhysReg, 4> *LaterRedefs = nullptr);
-    void CopyAndPredicateBlock(BBInfo &ToBBI, BBInfo &FromBBI,
-                               SmallVectorImpl<MachineOperand> &Cond,
-                               bool IgnoreBr = false);
-    void MergeBlocks(BBInfo &ToBBI, BBInfo &FromBBI, bool AddEdges = true);
-
-    bool MeetIfcvtSizeLimit(MachineBasicBlock &BB,
-                            unsigned Cycle, unsigned Extra,
-                            BranchProbability Prediction) const {
-      return Cycle > 0 && TII->isProfitableToIfCvt(BB, Cycle, Extra,
-                                                   Prediction);
-    }
+                              bool TClobbersPred, bool FClobbersPred,
+                              bool RemoveBranch, bool MergeAddEdges);
+  bool IfConvertDiamond(BBInfo &BBI, IfcvtKind Kind, unsigned NumDups1,
+                        unsigned NumDups2, bool TClobbers, bool FClobbers);
+  bool IfConvertForkedDiamond(BBInfo &BBI, IfcvtKind Kind, unsigned NumDups1,
+                              unsigned NumDups2, bool TClobbers,
+                              bool FClobbers);
+  void PredicateBlock(BBInfo &BBI, MachineBasicBlock::iterator E,
+                      SmallVectorImpl<MachineOperand> &Cond,
+                      SmallSet<MCPhysReg, 4> *LaterRedefs = nullptr);
+  void CopyAndPredicateBlock(BBInfo &ToBBI, BBInfo &FromBBI,
+                             SmallVectorImpl<MachineOperand> &Cond,
+                             bool IgnoreBr = false);
+  void MergeBlocks(BBInfo &ToBBI, BBInfo &FromBBI, bool AddEdges = true);
+
+  bool MeetIfcvtSizeLimit(MachineBasicBlock &BB, unsigned Cycle, unsigned Extra,
+                          BranchProbability Prediction) const {
+    return Cycle > 0 && TII->isProfitableToIfCvt(BB, Cycle, Extra, Prediction);
+  }
+
+  bool MeetIfcvtSizeLimit(BBInfo &TBBInfo, BBInfo &FBBInfo,
+                          MachineBasicBlock &CommBB, unsigned Dups,
+                          BranchProbability Prediction, bool Forked) const {
+    const MachineFunction &MF = *TBBInfo.BB->getParent();
+    if (MF.getFunction().hasMinSize()) {
+      MachineBasicBlock::iterator TIB = TBBInfo.BB->begin();
+      MachineBasicBlock::iterator FIB = FBBInfo.BB->begin();
+      MachineBasicBlock::iterator TIE = TBBInfo.BB->end();
+      MachineBasicBlock::iterator FIE = FBBInfo.BB->end();
+
+      unsigned Dups1 = 0, Dups2 = 0;
+      if (!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
+                                       *TBBInfo.BB, *FBBInfo.BB,
+                                       /*SkipUnconditionalBranches*/ true))
+        llvm_unreachable("should already have been checked by ValidDiamond");
+
+      unsigned BranchBytes = 0;
+      unsigned CommonBytes = 0;
+
+      // Count common instructions at the start of the true and false blocks.
+      for (auto &I : make_range(TBBInfo.BB->begin(), TIB)) {
+        LLVM_DEBUG(dbgs() << "Common inst: " << I);
+        CommonBytes += TII->getInstSizeInBytes(I);
+      }
+      for (auto &I : make_range(FBBInfo.BB->begin(), FIB)) {
+        LLVM_DEBUG(dbgs() << "Common inst: " << I);
+        CommonBytes += TII->getInstSizeInBytes(I);
+      }
 
-    bool MeetIfcvtSizeLimit(BBInfo &TBBInfo, BBInfo &FBBInfo,
-                            MachineBasicBlock &CommBB, unsigned Dups,
-                            BranchProbability Prediction, bool Forked) const {
-      const MachineFunction &MF = *TBBInfo.BB->getParent();
-      if (MF.getFunction().hasMinSize()) {
-        MachineBasicBlock::iterator TIB = TBBInfo.BB->begin();
-        MachineBasicBlock::iterator FIB = FBBInfo.BB->begin();
-        MachineBasicBlock::iterator TIE = TBBInfo.BB->end();
-        MachineBasicBlock::iterator FIE = FBBInfo.BB->end();
-
-        unsigned Dups1 = 0, Dups2 = 0;
-        if (!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
-                                         *TBBInfo.BB, *FBBInfo.BB,
-                                         /*SkipUnconditionalBranches*/ true))
-          llvm_unreachable("should already have been checked by ValidDiamond");
-
-        unsigned BranchBytes = 0;
-        unsigned CommonBytes = 0;
-
-        // Count common instructions at the start of the true and false blocks.
-        for (auto &I : make_range(TBBInfo.BB->begin(), TIB)) {
+      // Count instructions at the end of the true and false blocks, after
+      // the ones we plan to predicate. Analyzable branches will be removed
+      // (unless this is a forked diamond), and all other instructions are
+      // common between the two blocks.
+      for (auto &I : make_range(TIE, TBBInfo.BB->end())) {
+        if (I.isBranch() && TBBInfo.IsBrAnalyzable && !Forked) {
+          LLVM_DEBUG(dbgs() << "Saving branch: " << I);
+          BranchBytes += TII->predictBranchSizeForIfCvt(I);
+        } else {
           LLVM_DEBUG(dbgs() << "Common inst: " << I);
           CommonBytes += TII->getInstSizeInBytes(I);
         }
-        for (auto &I : make_range(FBBInfo.BB->begin(), FIB)) {
+      }
+      for (auto &I : make_range(FIE, FBBInfo.BB->end())) {
+        if (I.isBranch() && FBBInfo.IsBrAnalyzable && !Forked) {
+          LLVM_DEBUG(dbgs() << "Saving branch: " << I);
+          BranchBytes += TII->predictBranchSizeForIfCvt(I);
+        } else {
           LLVM_DEBUG(dbgs() << "Common inst: " << I);
           CommonBytes += TII->getInstSizeInBytes(I);
         }
-
-        // Count instructions at the end of the true and false blocks, after
-        // the ones we plan to predicate. Analyzable branches will be removed
-        // (unless this is a forked diamond), and all other instructions are
-        // common between the two blocks.
-        for (auto &I : make_range(TIE, TBBInfo.BB->end())) {
-          if (I.isBranch() && TBBInfo.IsBrAnalyzable && !Forked) {
-            LLVM_DEBUG(dbgs() << "Saving branch: " << I);
-            BranchBytes += TII->predictBranchSizeForIfCvt(I);
-          } else {
-            LLVM_DEBUG(dbgs() << "Common inst: " << I);
-            CommonBytes += TII->getInstSizeInBytes(I);
-          }
-        }
-        for (auto &I : make_range(FIE, FBBInfo.BB->end())) {
-          if (I.isBranch() && FBBInfo.IsBrAnalyzable && !Forked) {
-            LLVM_DEBUG(dbgs() << "Saving branch: " << I);
-            BranchBytes += TII->predictBranchSizeForIfCvt(I);
-          } else {
-            LLVM_DEBUG(dbgs() << "Common inst: " << I);
-            CommonBytes += TII->getInstSizeInBytes(I);
-          }
-        }
-        for (auto &I : CommBB.terminators()) {
-          if (I.isBranch()) {
-            LLVM_DEBUG(dbgs() << "Saving branch: " << I);
-            BranchBytes += TII->predictBranchSizeForIfCvt(I);
-          }
+      }
+      for (auto &I : CommBB.terminators()) {
+        if (I.isBranch()) {
+          LLVM_DEBUG(dbgs() << "Saving branch: " << I);
+          BranchBytes += TII->predictBranchSizeForIfCvt(I);
         }
+      }
 
-        // The common instructions in one branch will be eliminated, halving
-        // their code size.
-        CommonBytes /= 2;
-
-        // Count the instructions which we need to predicate.
-        unsigned NumPredicatedInstructions = 0;
-        for (auto &I : make_range(TIB, TIE)) {
-          if (!I.isDebugInstr()) {
-            LLVM_DEBUG(dbgs() << "Predicating: " << I);
-            NumPredicatedInstructions++;
-          }
+      // The common instructions in one branch will be eliminated, halving
+      // their code size.
+      CommonBytes /= 2;
+
+      // Count the instructions which we need to predicate.
+      unsigned NumPredicatedInstructions = 0;
+      for (auto &I : make_range(TIB, TIE)) {
+        if (!I.isDebugInstr()) {
+          LLVM_DEBUG(dbgs() << "Predicating: " << I);
+          NumPredicatedInstructions++;
         }
-        for (auto &I : make_range(FIB, FIE)) {
-          if (!I.isDebugInstr()) {
-            LLVM_DEBUG(dbgs() << "Predicating: " << I);
-            NumPredicatedInstructions++;
-          }
+      }
+      for (auto &I : make_range(FIB, FIE)) {
+        if (!I.isDebugInstr()) {
+          LLVM_DEBUG(dbgs() << "Predicating: " << I);
+          NumPredicatedInstructions++;
         }
-
-        // Even though we're optimising for size at the expense of performance,
-        // avoid creating really long predicated blocks.
-        if (NumPredicatedInstructions > 15)
-          return false;
-
-        // Some targets (e.g. Thumb2) need to insert extra instructions to
-        // start predicated blocks.
-        unsigned ExtraPredicateBytes = TII->extraSizeToPredicateInstructions(
-            MF, NumPredicatedInstructions);
-
-        LLVM_DEBUG(dbgs() << "MeetIfcvtSizeLimit(BranchBytes=" << BranchBytes
-                          << ", CommonBytes=" << CommonBytes
-                          << ", NumPredicatedInstructions="
-                          << NumPredicatedInstructions
-                          << ", ExtraPredicateBytes=" << ExtraPredicateBytes
-                          << ")\n");
-        return (BranchBytes + CommonBytes) > ExtraPredicateBytes;
-      } else {
-        unsigned TCycle = TBBInfo.NonPredSize + TBBInfo.ExtraCost - Dups;
-        unsigned FCycle = FBBInfo.NonPredSize + FBBInfo.ExtraCost - Dups;
-        bool Res = TCycle > 0 && FCycle > 0 &&
-                   TII->isProfitableToIfCvt(
-                       *TBBInfo.BB, TCycle, TBBInfo.ExtraCost2, *FBBInfo.BB,
-                       FCycle, FBBInfo.ExtraCost2, Prediction);
-        LLVM_DEBUG(dbgs() << "MeetIfcvtSizeLimit(TCycle=" << TCycle
-                          << ", FCycle=" << FCycle
-                          << ", TExtra=" << TBBInfo.ExtraCost2 << ", FExtra="
-                          << FBBInfo.ExtraCost2 << ") = " << Res << "\n");
-        return Res;
       }
-    }
 
-    /// Returns true if Block ends without a terminator.
-    bool blockAlwaysFallThrough(BBInfo &BBI) const {
-      return BBI.IsBrAnalyzable && BBI.TrueBB == nullptr;
+      // Even though we're optimising for size at the expense of performance,
+      // avoid creating really long predicated blocks.
+      if (NumPredicatedInstructions > 15)
+        return false;
+
+      // Some targets (e.g. Thumb2) need to insert extra instructions to
+      // start predicated blocks.
+      unsigned ExtraPredicateBytes =
+          TII->extraSizeToPredicateInstructions(MF, NumPredicatedInstructions);
+
+      LLVM_DEBUG(
+          dbgs() << "MeetIfcvtSizeLimit(BranchBytes=" << BranchBytes
+                 << ", CommonBytes=" << CommonBytes
+                 << ", NumPredicatedInstructions=" << NumPredicatedInstructions
+                 << ", ExtraPredicateBytes=" << ExtraPredicateBytes << ")\n");
+      return (BranchBytes + CommonBytes) > ExtraPredicateBytes;
+    } else {
+      unsigned TCycle = TBBInfo.NonPredSize + TBBInfo.ExtraCost - Dups;
+      unsigned FCycle = FBBInfo.NonPredSize + FBBInfo.ExtraCost - Dups;
+      bool Res = TCycle > 0 && FCycle > 0 &&
+                 TII->isProfitableToIfCvt(
+                     *TBBInfo.BB, TCycle, TBBInfo.ExtraCost2, *FBBInfo.BB,
+                     FCycle, FBBInfo.ExtraCost2, Prediction);
+      LLVM_DEBUG(dbgs() << "MeetIfcvtSizeLimit(TCycle=" << TCycle << ", FCycle="
+                        << FCycle << ", TExtra=" << TBBInfo.ExtraCost2
+                        << ", FExtra=" << FBBInfo.ExtraCost2 << ") = " << Res
+                        << "\n");
+      return Res;
     }
+  }
+
+  /// Returns true if Block ends without a terminator.
+  bool blockAlwaysFallThrough(BBInfo &BBI) const {
+    return BBI.IsBrAnalyzable && BBI.TrueBB == nullptr;
+  }
 
-    /// Used to sort if-conversion candidates.
-    static bool IfcvtTokenCmp(const std::unique_ptr<IfcvtToken> &C1,
-                              const std::unique_ptr<IfcvtToken> &C2) {
-      int Incr1 = (C1->Kind == ICDiamond)
-        ? -(int)(C1->NumDups + C1->NumDups2) : (int)C1->NumDups;
-      int Incr2 = (C2->Kind == ICDiamond)
-        ? -(int)(C2->NumDups + C2->NumDups2) : (int)C2->NumDups;
-      if (Incr1 > Incr2)
+  /// Used to sort if-conversion candidates.
+  static bool IfcvtTokenCmp(const std::unique_ptr<IfcvtToken> &C1,
+                            const std::unique_ptr<IfcvtToken> &C2) {
+    int Incr1 = (C1->Kind == ICDiamond) ? -(int)(C1->NumDups + C1->NumDups2)
+                                        : (int)C1->NumDups;
+    int Incr2 = (C2->Kind == ICDiamond) ? -(int)(C2->NumDups + C2->NumDups2)
+                                        : (int)C2->NumDups;
+    if (Incr1 > Incr2)
+      return true;
+    else if (Incr1 == Incr2) {
+      // Favors subsumption.
+      if (!C1->NeedSubsumption && C2->NeedSubsumption)
         return true;
-      else if (Incr1 == Incr2) {
-        // Favors subsumption.
-        if (!C1->NeedSubsumption && C2->NeedSubsumption)
+      else if (C1->NeedSubsumption == C2->NeedSubsumption) {
+        // Favors diamond over triangle, etc.
+        if ((unsigned)C1->Kind < (unsigned)C2->Kind)
           return true;
-        else if (C1->NeedSubsumption == C2->NeedSubsumption) {
-          // Favors diamond over triangle, etc.
-          if ((unsigned)C1->Kind < (unsigned)C2->Kind)
-            return true;
-          else if (C1->Kind == C2->Kind)
-            return C1->BBI.BB->getNumber() < C2->BBI.BB->getNumber();
-        }
+        else if (C1->Kind == C2->Kind)
+          return C1->BBI.BB->getNumber() < C2->BBI.BB->getNumber();
       }
-      return false;
     }
-  };
+    return false;
+  }
+};
 
 } // end anonymous namespace
 
@@ -451,7 +445,8 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
   MRI = &MF.getRegInfo();
   SchedModel.init(&ST);
 
-  if (!TII) return false;
+  if (!TII)
+    return false;
 
   PreRegAlloc = MRI->isSSA();
 
@@ -477,7 +472,8 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
   std::vector<std::unique_ptr<IfcvtToken>> Tokens;
   MadeChange = false;
   unsigned NumIfCvts = NumSimple + NumSimpleFalse + NumTriangle +
-    NumTriangleRev + NumTriangleFalse + NumTriangleFRev + NumDiamonds;
+                       NumTriangleRev + NumTriangleFalse + NumTriangleFRev +
+                       NumDiamonds;
   while (IfCvtLimit == -1 || (int)NumIfCvts < IfCvtLimit) {
     // Do an initial analysis for each basic block and find all the potential
     // candidates to perform if-conversion.
@@ -502,11 +498,13 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
 
       bool RetVal = false;
       switch (Kind) {
-      default: llvm_unreachable("Unexpected!");
+      default:
+        llvm_unreachable("Unexpected!");
       case ICSimple:
       case ICSimpleFalse: {
         bool isFalse = Kind == ICSimpleFalse;
-        if ((isFalse && DisableSimpleF) || (!isFalse && DisableSimple)) break;
+        if ((isFalse && DisableSimpleF) || (!isFalse && DisableSimple))
+          break;
         LLVM_DEBUG(dbgs() << "Ifcvt (Simple"
                           << (Kind == ICSimpleFalse ? " false" : "")
                           << "): " << printMBBReference(*BBI.BB) << " ("
@@ -516,20 +514,25 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
         RetVal = IfConvertSimple(BBI, Kind);
         LLVM_DEBUG(dbgs() << (RetVal ? "succeeded!" : "failed!") << "\n");
         if (RetVal) {
-          if (isFalse) ++NumSimpleFalse;
-          else         ++NumSimple;
+          if (isFalse)
+            ++NumSimpleFalse;
+          else
+            ++NumSimple;
         }
-       break;
+        break;
       }
       case ICTriangle:
       case ICTriangleRev:
       case ICTriangleFalse:
       case ICTriangleFRev: {
         bool isFalse = Kind == ICTriangleFalse;
-        bool isRev   = (Kind == ICTriangleRev || Kind == ICTriangleFRev);
-        if (DisableTriangle && !isFalse && !isRev) break;
-        if (DisableTriangleR && !isFalse && isRev) break;
-        if (DisableTriangleF && isFalse && !isRev) break;
+        bool isRev = (Kind == ICTriangleRev || Kind == ICTriangleFRev);
+        if (DisableTriangle && !isFalse && !isRev)
+          break;
+        if (DisableTriangleR && !isFalse && isRev)
+          break;
+        if (DisableTriangleF && isFalse && !isRev)
+          break;
         LLVM_DEBUG(dbgs() << "Ifcvt (Triangle");
         if (isFalse)
           LLVM_DEBUG(dbgs() << " false");
@@ -551,27 +554,30 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
         break;
       }
       case ICDiamond:
-        if (DisableDiamond) break;
+        if (DisableDiamond)
+          break;
         LLVM_DEBUG(dbgs() << "Ifcvt (Diamond): " << printMBBReference(*BBI.BB)
                           << " (T:" << BBI.TrueBB->getNumber()
                           << ",F:" << BBI.FalseBB->getNumber() << ") ");
         RetVal = IfConvertDiamond(BBI, Kind, NumDups, NumDups2,
-                                  Token->TClobbersPred,
-                                  Token->FClobbersPred);
+                                  Token->TClobbersPred, Token->FClobbersPred);
         LLVM_DEBUG(dbgs() << (RetVal ? "succeeded!" : "failed!") << "\n");
-        if (RetVal) ++NumDiamonds;
+        if (RetVal)
+          ++NumDiamonds;
         break;
       case ICForkedDiamond:
-        if (DisableForkedDiamond) break;
+        if (DisableForkedDiamond)
+          break;
         LLVM_DEBUG(dbgs() << "Ifcvt (Forked Diamond): "
                           << printMBBReference(*BBI.BB)
                           << " (T:" << BBI.TrueBB->getNumber()
                           << ",F:" << BBI.FalseBB->getNumber() << ") ");
-        RetVal = IfConvertForkedDiamond(BBI, Kind, NumDups, NumDups2,
-                                      Token->TClobbersPred,
-                                      Token->FClobbersPred);
+        RetVal =
+            IfConvertForkedDiamond(BBI, Kind, NumDups, NumDups2,
+                                   Token->TClobbersPred, Token->FClobbersPred);
         LLVM_DEBUG(dbgs() << (RetVal ? "succeeded!" : "failed!") << "\n");
-        if (RetVal) ++NumForkedDiamonds;
+        if (RetVal)
+          ++NumForkedDiamonds;
         break;
       }
 
@@ -581,7 +587,7 @@ bool IfConverter::runOnMachineFunction(MachineFunction &MF) {
       Change |= RetVal;
 
       NumIfCvts = NumSimple + NumSimpleFalse + NumTriangle + NumTriangleRev +
-        NumTriangleFalse + NumTriangleFRev + NumDiamonds;
+                  NumTriangleFalse + NumTriangleFRev + NumDiamonds;
       if (IfCvtLimit != -1 && (int)NumIfCvts >= IfCvtLimit)
         break;
     }
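[Editor's note between hunks] The conversion loop above consumes candidates ordered by `IfcvtTokenCmp`, reflowed in the first hunk. The comparator below is a hypothetical standalone model of that ordering (the `Cand` struct and field names are invented for illustration; only the comparison logic tracks the patch): diamonds count duplicated instructions as a credit, other kinds as a cost; ties favor candidates that need no subsumption, then "better" kinds (smaller enum value, e.g. diamond before triangle), then lower block numbers for determinism.

```cpp
// Hypothetical stand-in for an IfcvtToken, for illustration only.
struct Cand {
  int Kind;              // smaller value = preferred shape
  bool IsDiamond;
  unsigned NumDups, NumDups2;
  bool NeedSubsumption;
  int BBNumber;
};

// Sketch of the IfcvtTokenCmp ordering: returns true if C1 should be
// converted before C2.
bool candLess(const Cand &C1, const Cand &C2) {
  // Diamonds get a negative increment (duplicated instructions are
  // removed); other kinds pay for their duplicated instructions.
  int Incr1 = C1.IsDiamond ? -(int)(C1.NumDups + C1.NumDups2) : (int)C1.NumDups;
  int Incr2 = C2.IsDiamond ? -(int)(C2.NumDups + C2.NumDups2) : (int)C2.NumDups;
  if (Incr1 != Incr2)
    return Incr1 > Incr2;
  if (C1.NeedSubsumption != C2.NeedSubsumption)
    return !C1.NeedSubsumption; // favor no subsumption
  if (C1.Kind != C2.Kind)
    return C1.Kind < C2.Kind;   // favor diamond over triangle, etc.
  return C1.BBNumber < C2.BBNumber;
}
```

Used with a stable sort, this yields the deterministic candidate order the `while (IfCvtLimit == -1 || ...)` loop iterates over.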
@@ -616,7 +622,7 @@ static MachineBasicBlock *findFalseBlock(MachineBasicBlock *BB,
 /// Reverse the condition of the end of the block branch. Swap block's 'true'
 /// and 'false' successors.
 bool IfConverter::reverseBranchCondition(BBInfo &BBI) const {
-  DebugLoc dl;  // FIXME: this is nowhere
+  DebugLoc dl; // FIXME: this is nowhere
   if (!TII->reverseBranchCondition(BBI.BrCond)) {
     TII->removeBranch(*BBI.BB);
     TII->insertBranch(*BBI.BB, BBI.FalseBB, BBI.TrueBB, BBI.BrCond, dl);
@@ -684,8 +690,8 @@ bool IfConverter::ValidTriangle(BBInfo &TrueBBI, BBInfo &FalseBBI,
         // Ends with an unconditional branch. It will be removed.
         --Size;
       else {
-        MachineBasicBlock *FExit = FalseBranch
-          ? TrueBBI.TrueBB : TrueBBI.FalseBB;
+        MachineBasicBlock *FExit =
+            FalseBranch ? TrueBBI.TrueBB : TrueBBI.FalseBB;
         if (FExit)
           // Require a conditional branch
           ++Size;
@@ -727,13 +733,10 @@ bool IfConverter::ValidTriangle(BBInfo &TrueBBI, BBInfo &FalseBBI,
 /// handled.
 /// @return false if the shared portion prevents if conversion.
 bool IfConverter::CountDuplicatedInstructions(
-    MachineBasicBlock::iterator &TIB,
-    MachineBasicBlock::iterator &FIB,
-    MachineBasicBlock::iterator &TIE,
-    MachineBasicBlock::iterator &FIE,
-    unsigned &Dups1, unsigned &Dups2,
-    MachineBasicBlock &TBB, MachineBasicBlock &FBB,
-    bool SkipUnconditionalBranches) const {
+    MachineBasicBlock::iterator &TIB, MachineBasicBlock::iterator &FIB,
+    MachineBasicBlock::iterator &TIE, MachineBasicBlock::iterator &FIE,
+    unsigned &Dups1, unsigned &Dups2, MachineBasicBlock &TBB,
+    MachineBasicBlock &FBB, bool SkipUnconditionalBranches) const {
   while (TIB != TIE && FIB != FIE) {
     // Skip dbg_value instructions. These do not count.
     TIB = skipDebugInstructionsForward(TIB, TIE, false);
@@ -806,10 +809,11 @@ bool IfConverter::CountDuplicatedInstructions(
 /// @param FalseBBI - BBInfo to update for the false block.
 /// @returns - false if either block cannot be predicated or if both blocks end
 ///   with a predicate-clobbering instruction.
-bool IfConverter::RescanInstructions(
-    MachineBasicBlock::iterator &TIB, MachineBasicBlock::iterator &FIB,
-    MachineBasicBlock::iterator &TIE, MachineBasicBlock::iterator &FIE,
-    BBInfo &TrueBBI, BBInfo &FalseBBI) const {
+bool IfConverter::RescanInstructions(MachineBasicBlock::iterator &TIB,
+                                     MachineBasicBlock::iterator &FIB,
+                                     MachineBasicBlock::iterator &TIE,
+                                     MachineBasicBlock::iterator &FIE,
+                                     BBInfo &TrueBBI, BBInfo &FalseBBI) const {
   bool BranchUnpredicable = true;
   TrueBBI.IsUnpredicable = FalseBBI.IsUnpredicable = false;
   ScanInstructions(TrueBBI, TIB, TIE, BranchUnpredicable);
@@ -824,9 +828,8 @@ bool IfConverter::RescanInstructions(
 }
 
 #ifndef NDEBUG
-static void verifySameBranchInstructions(
-    MachineBasicBlock *MBB1,
-    MachineBasicBlock *MBB2) {
+static void verifySameBranchInstructions(MachineBasicBlock *MBB1,
+                                         MachineBasicBlock *MBB2) {
   const MachineBasicBlock::reverse_iterator B1 = MBB1->rend();
   const MachineBasicBlock::reverse_iterator B2 = MBB2->rend();
   MachineBasicBlock::reverse_iterator E1 = MBB1->rbegin();
@@ -870,25 +873,24 @@ static void verifySameBranchInstructions(
 ///  FalseBB TrueBB FalseBB
 /// Currently only handles analyzable branches.
 /// Specifically excludes actual diamonds to avoid overlap.
-bool IfConverter::ValidForkedDiamond(
-    BBInfo &TrueBBI, BBInfo &FalseBBI,
-    unsigned &Dups1, unsigned &Dups2,
-    BBInfo &TrueBBICalc, BBInfo &FalseBBICalc) const {
+bool IfConverter::ValidForkedDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI,
+                                     unsigned &Dups1, unsigned &Dups2,
+                                     BBInfo &TrueBBICalc,
+                                     BBInfo &FalseBBICalc) const {
   Dups1 = Dups2 = 0;
-  if (TrueBBI.IsBeingAnalyzed || TrueBBI.IsDone ||
-      FalseBBI.IsBeingAnalyzed || FalseBBI.IsDone)
+  if (TrueBBI.IsBeingAnalyzed || TrueBBI.IsDone || FalseBBI.IsBeingAnalyzed ||
+      FalseBBI.IsDone)
     return false;
 
   if (!TrueBBI.IsBrAnalyzable || !FalseBBI.IsBrAnalyzable)
     return false;
   // Don't IfConvert blocks that can't be folded into their predecessor.
-  if  (TrueBBI.BB->pred_size() > 1 || FalseBBI.BB->pred_size() > 1)
+  if (TrueBBI.BB->pred_size() > 1 || FalseBBI.BB->pred_size() > 1)
     return false;
 
   // This function is specifically looking for conditional tails, as
   // unconditional tails are already handled by the standard diamond case.
-  if (TrueBBI.BrCond.size() == 0 ||
-      FalseBBI.BrCond.size() == 0)
+  if (TrueBBI.BrCond.size() == 0 || FalseBBI.BrCond.size() == 0)
     return false;
 
   MachineBasicBlock *TT = TrueBBI.TrueBB;
@@ -930,9 +932,9 @@ bool IfConverter::ValidForkedDiamond(
   MachineBasicBlock::iterator FIB = FalseBBI.BB->begin();
   MachineBasicBlock::iterator TIE = TrueBBI.BB->end();
   MachineBasicBlock::iterator FIE = FalseBBI.BB->end();
-  if(!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
-                                  *TrueBBI.BB, *FalseBBI.BB,
-                                  /* SkipUnconditionalBranches */ true))
+  if (!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
+                                   *TrueBBI.BB, *FalseBBI.BB,
+                                   /* SkipUnconditionalBranches */ true))
     return false;
 
   TrueBBICalc.BB = TrueBBI.BB;
@@ -952,13 +954,13 @@ bool IfConverter::ValidForkedDiamond(
 
 /// ValidDiamond - Returns true if the 'true' and 'false' blocks (along
 /// with their common predecessor) forms a valid diamond shape for ifcvt.
-bool IfConverter::ValidDiamond(
-    BBInfo &TrueBBI, BBInfo &FalseBBI,
-    unsigned &Dups1, unsigned &Dups2,
-    BBInfo &TrueBBICalc, BBInfo &FalseBBICalc) const {
+bool IfConverter::ValidDiamond(BBInfo &TrueBBI, BBInfo &FalseBBI,
+                               unsigned &Dups1, unsigned &Dups2,
+                               BBInfo &TrueBBICalc,
+                               BBInfo &FalseBBICalc) const {
   Dups1 = Dups2 = 0;
-  if (TrueBBI.IsBeingAnalyzed || TrueBBI.IsDone ||
-      FalseBBI.IsBeingAnalyzed || FalseBBI.IsDone)
+  if (TrueBBI.IsBeingAnalyzed || TrueBBI.IsDone || FalseBBI.IsBeingAnalyzed ||
+      FalseBBI.IsDone)
     return false;
 
   // If the True and False BBs are equal we're dealing with a degenerate case
@@ -977,7 +979,7 @@ bool IfConverter::ValidDiamond(
     return false;
   if (!TT && (TrueBBI.IsBrAnalyzable || FalseBBI.IsBrAnalyzable))
     return false;
-  if  (TrueBBI.BB->pred_size() > 1 || FalseBBI.BB->pred_size() > 1)
+  if (TrueBBI.BB->pred_size() > 1 || FalseBBI.BB->pred_size() > 1)
     return false;
 
   // FIXME: Allow true block to have an early exit?
@@ -994,9 +996,9 @@ bool IfConverter::ValidDiamond(
   MachineBasicBlock::iterator FIB = FalseBBI.BB->begin();
   MachineBasicBlock::iterator TIE = TrueBBI.BB->end();
   MachineBasicBlock::iterator FIE = FalseBBI.BB->end();
-  if(!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
-                                  *TrueBBI.BB, *FalseBBI.BB,
-                                  SkipUnconditionalBranches))
+  if (!CountDuplicatedInstructions(TIB, FIB, TIE, FIE, Dups1, Dups2,
+                                   *TrueBBI.BB, *FalseBBI.BB,
+                                   SkipUnconditionalBranches))
     return false;
 
   TrueBBICalc.BB = TrueBBI.BB;
@@ -1030,8 +1032,8 @@ void IfConverter::AnalyzeBranches(BBInfo &BBI) {
   }
 
   SmallVector<MachineOperand, 4> RevCond(BBI.BrCond.begin(), BBI.BrCond.end());
-  BBI.IsBrReversible = (RevCond.size() == 0) ||
-      !TII->reverseBranchCondition(RevCond);
+  BBI.IsBrReversible =
+      (RevCond.size() == 0) || !TII->reverseBranchCondition(RevCond);
   BBI.HasFallThrough = BBI.IsBrAnalyzable && BBI.FalseBB == nullptr;
 
   if (BBI.BrCond.size()) {
@@ -1118,7 +1120,7 @@ void IfConverter::ScanInstructions(BBInfo &BBI,
       unsigned ExtraPredCost = TII->getPredicationCost(MI);
       unsigned NumCycles = SchedModel.computeInstrLatency(&MI, false);
       if (NumCycles > 1)
-        BBI.ExtraCost += NumCycles-1;
+        BBI.ExtraCost += NumCycles - 1;
       BBI.ExtraCost2 += ExtraPredCost;
     } else if (!AlreadyPredicated) {
       // FIXME: This instruction is already predicated before the
@@ -1276,8 +1278,8 @@ void IfConverter::AnalyzeBlock(
       continue;
     }
 
-    SmallVector<MachineOperand, 4>
-        RevCond(BBI.BrCond.begin(), BBI.BrCond.end());
+    SmallVector<MachineOperand, 4> RevCond(BBI.BrCond.begin(),
+                                           BBI.BrCond.end());
     bool CanRevCond = !TII->reverseBranchCondition(RevCond);
 
     unsigned Dups = 0;
@@ -1293,17 +1295,19 @@ void IfConverter::AnalyzeBlock(
       auto feasibleDiamond = [&](bool Forked) {
         bool MeetsSize = MeetIfcvtSizeLimit(TrueBBICalc, FalseBBICalc, *BB,
                                             Dups + Dups2, Prediction, Forked);
-        bool TrueFeasible = FeasibilityAnalysis(TrueBBI, BBI.BrCond,
-                                                /* IsTriangle */ false, /* RevCond */ false,
-                                                /* hasCommonTail */ true);
-        bool FalseFeasible = FeasibilityAnalysis(FalseBBI, RevCond,
-                                                 /* IsTriangle */ false, /* RevCond */ false,
-                                                 /* hasCommonTail */ true);
+        bool TrueFeasible =
+            FeasibilityAnalysis(TrueBBI, BBI.BrCond,
+                                /* IsTriangle */ false, /* RevCond */ false,
+                                /* hasCommonTail */ true);
+        bool FalseFeasible =
+            FeasibilityAnalysis(FalseBBI, RevCond,
+                                /* IsTriangle */ false, /* RevCond */ false,
+                                /* hasCommonTail */ true);
         return MeetsSize && TrueFeasible && FalseFeasible;
       };
 
-      if (ValidDiamond(TrueBBI, FalseBBI, Dups, Dups2,
-                       TrueBBICalc, FalseBBICalc)) {
+      if (ValidDiamond(TrueBBI, FalseBBI, Dups, Dups2, TrueBBICalc,
+                       FalseBBICalc)) {
         if (feasibleDiamond(false)) {
           // Diamond:
           //   EBB
@@ -1315,11 +1319,11 @@ void IfConverter::AnalyzeBlock(
           // Note TailBB can be empty.
           Tokens.push_back(std::make_unique<IfcvtToken>(
               BBI, ICDiamond, TNeedSub | FNeedSub, Dups, Dups2,
-              (bool) TrueBBICalc.ClobbersPred, (bool) FalseBBICalc.ClobbersPred));
+              (bool)TrueBBICalc.ClobbersPred, (bool)FalseBBICalc.ClobbersPred));
           Enqueued = true;
         }
-      } else if (ValidForkedDiamond(TrueBBI, FalseBBI, Dups, Dups2,
-                                    TrueBBICalc, FalseBBICalc)) {
+      } else if (ValidForkedDiamond(TrueBBI, FalseBBI, Dups, Dups2, TrueBBICalc,
+                                    FalseBBICalc)) {
         if (feasibleDiamond(true)) {
           // ForkedDiamond:
           // if TBB and FBB have a common tail that includes their conditional
@@ -1333,7 +1337,7 @@ void IfConverter::AnalyzeBlock(
           //
           Tokens.push_back(std::make_unique<IfcvtToken>(
               BBI, ICForkedDiamond, TNeedSub | FNeedSub, Dups, Dups2,
-              (bool) TrueBBICalc.ClobbersPred, (bool) FalseBBICalc.ClobbersPred));
+              (bool)TrueBBICalc.ClobbersPred, (bool)FalseBBICalc.ClobbersPred));
           Enqueued = true;
         }
       }
@@ -1388,17 +1392,16 @@ void IfConverter::AnalyzeBlock(
                              FalseBBI.NonPredSize + FalseBBI.ExtraCost,
                              FalseBBI.ExtraCost2, Prediction.getCompl()) &&
           FeasibilityAnalysis(FalseBBI, RevCond, true)) {
-        Tokens.push_back(std::make_unique<IfcvtToken>(BBI, ICTriangleFalse,
-                                                       FNeedSub, Dups));
+        Tokens.push_back(
+            std::make_unique<IfcvtToken>(BBI, ICTriangleFalse, FNeedSub, Dups));
         Enqueued = true;
       }
 
-      if (ValidTriangle(FalseBBI, TrueBBI, true, Dups,
-                        Prediction.getCompl()) &&
+      if (ValidTriangle(FalseBBI, TrueBBI, true, Dups, Prediction.getCompl()) &&
           MeetIfcvtSizeLimit(*FalseBBI.BB,
                              FalseBBI.NonPredSize + FalseBBI.ExtraCost,
-                           FalseBBI.ExtraCost2, Prediction.getCompl()) &&
-        FeasibilityAnalysis(FalseBBI, RevCond, true, true)) {
+                             FalseBBI.ExtraCost2, Prediction.getCompl()) &&
+          FeasibilityAnalysis(FalseBBI, RevCond, true, true)) {
         Tokens.push_back(
             std::make_unique<IfcvtToken>(BBI, ICTriangleFRev, FNeedSub, Dups));
         Enqueued = true;
@@ -1465,7 +1468,7 @@ void IfConverter::InvalidatePreds(MachineBasicBlock &MBB) {
 /// Inserts an unconditional branch from \p MBB to \p ToMBB.
 static void InsertUncondBranch(MachineBasicBlock &MBB, MachineBasicBlock &ToMBB,
                                const TargetInstrInfo *TII) {
-  DebugLoc dl;  // FIXME: this is nowhere
+  DebugLoc dl; // FIXME: this is nowhere
   SmallVector<MachineOperand, 0> NoCond;
   TII->insertBranch(MBB, &ToMBB, nullptr, NoCond, dl);
 }
@@ -1483,7 +1486,7 @@ static void UpdatePredRedefs(MachineInstr &MI, LivePhysRegs &Redefs) {
   for (unsigned Reg : Redefs)
     LiveBeforeMI.insert(Reg);
 
-  SmallVector<std::pair<MCPhysReg, const MachineOperand*>, 4> Clobbers;
+  SmallVector<std::pair<MCPhysReg, const MachineOperand *>, 4> Clobbers;
   Redefs.stepForward(MI, Clobbers);
 
   // Now add the implicit uses for each of the clobbered values.
@@ -1491,7 +1494,7 @@ static void UpdatePredRedefs(MachineInstr &MI, LivePhysRegs &Redefs) {
     // FIXME: Const cast here is nasty, but better than making StepForward
     // take a mutable instruction instead of const.
     unsigned Reg = Clobber.first;
-    MachineOperand &Op = const_cast<MachineOperand&>(*Clobber.second);
+    MachineOperand &Op = const_cast<MachineOperand &>(*Clobber.second);
     MachineInstr *OpMI = Op.getParent();
     MachineInstrBuilder MIB(*OpMI->getMF(), OpMI);
     if (Op.isRegMask()) {
@@ -1516,7 +1519,7 @@ static void UpdatePredRedefs(MachineInstr &MI, LivePhysRegs &Redefs) {
 
 /// If convert a simple (split, no rejoin) sub-CFG.
 bool IfConverter::IfConvertSimple(BBInfo &BBI, IfcvtKind Kind) {
-  BBInfo &TrueBBI  = BBAnalysis[BBI.TrueBB->getNumber()];
+  BBInfo &TrueBBI = BBAnalysis[BBI.TrueBB->getNumber()];
   BBInfo &FalseBBI = BBAnalysis[BBI.FalseBB->getNumber()];
   BBInfo *CvtBBI = &TrueBBI;
   BBInfo *NextBBI = &FalseBBI;
@@ -1527,8 +1530,7 @@ bool IfConverter::IfConvertSimple(BBInfo &BBI, IfcvtKind Kind) {
 
   MachineBasicBlock &CvtMBB = *CvtBBI->BB;
   MachineBasicBlock &NextMBB = *NextBBI->BB;
-  if (CvtBBI->IsDone ||
-      (CvtBBI->CannotBeCopied && CvtMBB.pred_size() > 1)) {
+  if (CvtBBI->IsDone || (CvtBBI->CannotBeCopied && CvtMBB.pred_size() > 1)) {
     // Something has changed. It's no longer safe to predicate this block.
     BBI.IsAnalyzed = false;
     CvtBBI->IsAnalyzed = false;
@@ -1605,7 +1607,7 @@ bool IfConverter::IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind) {
   BBInfo &FalseBBI = BBAnalysis[BBI.FalseBB->getNumber()];
   BBInfo *CvtBBI = &TrueBBI;
   BBInfo *NextBBI = &FalseBBI;
-  DebugLoc dl;  // FIXME: this is nowhere
+  DebugLoc dl; // FIXME: this is nowhere
 
   SmallVector<MachineOperand, 4> Cond(BBI.BrCond.begin(), BBI.BrCond.end());
   if (Kind == ICTriangleFalse || Kind == ICTriangleFRev)
@@ -1613,8 +1615,7 @@ bool IfConverter::IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind) {
 
   MachineBasicBlock &CvtMBB = *CvtBBI->BB;
   MachineBasicBlock &NextMBB = *NextBBI->BB;
-  if (CvtBBI->IsDone ||
-      (CvtBBI->CannotBeCopied && CvtMBB.pred_size() > 1)) {
+  if (CvtBBI->IsDone || (CvtBBI->CannotBeCopied && CvtMBB.pred_size() > 1)) {
     // Something has changed. It's no longer safe to predicate this block.
     BBI.IsAnalyzed = false;
     CvtBBI->IsAnalyzed = false;
@@ -1717,8 +1718,7 @@ bool IfConverter::IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind) {
     // Only merge them if the true block does not fallthrough to the false
     // block. By not merging them, we make it possible to iteratively
     // ifcvt the blocks.
-    if (!HasEarlyExit &&
-        NextMBB.pred_size() == 1 && !NextBBI->HasFallThrough &&
+    if (!HasEarlyExit && NextMBB.pred_size() == 1 && !NextBBI->HasFallThrough &&
         !NextMBB.hasAddressTaken()) {
       MergeBlocks(BBI, *NextBBI);
       FalseBBDead = true;
@@ -1754,14 +1754,14 @@ bool IfConverter::IfConvertTriangle(BBInfo &BBI, IfcvtKind Kind) {
 ///                   cases. The caller will replace the branch if necessary.
 /// \p MergeAddEdges - Add successor edges when merging blocks. Only false for
 ///                    unanalyzable fallthrough
-bool IfConverter::IfConvertDiamondCommon(
-    BBInfo &BBI, BBInfo &TrueBBI, BBInfo &FalseBBI,
-    unsigned NumDups1, unsigned NumDups2,
-    bool TClobbersPred, bool FClobbersPred,
-    bool RemoveBranch, bool MergeAddEdges) {
-
-  if (TrueBBI.IsDone || FalseBBI.IsDone ||
-      TrueBBI.BB->pred_size() > 1 || FalseBBI.BB->pred_size() > 1) {
+bool IfConverter::IfConvertDiamondCommon(BBInfo &BBI, BBInfo &TrueBBI,
+                                         BBInfo &FalseBBI, unsigned NumDups1,
+                                         unsigned NumDups2, bool TClobbersPred,
+                                         bool FClobbersPred, bool RemoveBranch,
+                                         bool MergeAddEdges) {
+
+  if (TrueBBI.IsDone || FalseBBI.IsDone || TrueBBI.BB->pred_size() > 1 ||
+      FalseBBI.BB->pred_size() > 1) {
     // Something has changed. It's no longer safe to predicate these blocks.
     BBI.IsAnalyzed = false;
     TrueBBI.IsAnalyzed = false;
@@ -1847,7 +1847,7 @@ bool IfConverter::IfConvertDiamondCommon(
 
   if (MRI->tracksLiveness()) {
     for (const MachineInstr &MI : make_range(MBB1.begin(), DI1)) {
-      SmallVector<std::pair<MCPhysReg, const MachineOperand*>, 4> Dummy;
+      SmallVector<std::pair<MCPhysReg, const MachineOperand *>, 4> Dummy;
       Redefs.stepForward(MI, Dummy);
     }
   }
@@ -1875,7 +1875,7 @@ bool IfConverter::IfConvertDiamondCommon(
       break;
     DI1 = Prev;
   }
-  for (unsigned i = 0; i != NumDups2; ) {
+  for (unsigned i = 0; i != NumDups2;) {
     // NumDups2 only counted non-dbg_value instructions, so this won't
     // run off the head of the list.
     assert(DI1 != MBB1.begin());
@@ -1988,11 +1988,11 @@ bool IfConverter::IfConvertDiamondCommon(
 
 /// If convert an almost-diamond sub-CFG where the true
 /// and false blocks share a common tail.
-bool IfConverter::IfConvertForkedDiamond(
-    BBInfo &BBI, IfcvtKind Kind,
-    unsigned NumDups1, unsigned NumDups2,
-    bool TClobbersPred, bool FClobbersPred) {
-  BBInfo &TrueBBI  = BBAnalysis[BBI.TrueBB->getNumber()];
+bool IfConverter::IfConvertForkedDiamond(BBInfo &BBI, IfcvtKind Kind,
+                                         unsigned NumDups1, unsigned NumDups2,
+                                         bool TClobbersPred,
+                                         bool FClobbersPred) {
+  BBInfo &TrueBBI = BBAnalysis[BBI.TrueBB->getNumber()];
   BBInfo &FalseBBI = BBAnalysis[BBI.FalseBB->getNumber()];
 
   // Save the debug location for later.
@@ -2003,17 +2003,16 @@ bool IfConverter::IfConvertForkedDiamond(
   // Removing branches from both blocks is safe, because we have already
   // determined that both blocks have the same branch instructions. The branch
   // will be added back at the end, unpredicated.
-  if (!IfConvertDiamondCommon(
-      BBI, TrueBBI, FalseBBI,
-      NumDups1, NumDups2,
-      TClobbersPred, FClobbersPred,
-      /* RemoveBranch */ true, /* MergeAddEdges */ true))
+  if (!IfConvertDiamondCommon(BBI, TrueBBI, FalseBBI, NumDups1, NumDups2,
+                              TClobbersPred, FClobbersPred,
+                              /* RemoveBranch */ true,
+                              /* MergeAddEdges */ true))
     return false;
 
   // Add back the branch.
   // Debug location saved above when removing the branch from BBI2
-  TII->insertBranch(*BBI.BB, TrueBBI.TrueBB, TrueBBI.FalseBB,
-                    TrueBBI.BrCond, dl);
+  TII->insertBranch(*BBI.BB, TrueBBI.TrueBB, TrueBBI.FalseBB, TrueBBI.BrCond,
+                    dl);
 
   // Update block info.
   BBI.IsDone = TrueBBI.IsDone = FalseBBI.IsDone = true;
@@ -2027,7 +2026,7 @@ bool IfConverter::IfConvertForkedDiamond(
 bool IfConverter::IfConvertDiamond(BBInfo &BBI, IfcvtKind Kind,
                                    unsigned NumDups1, unsigned NumDups2,
                                    bool TClobbersPred, bool FClobbersPred) {
-  BBInfo &TrueBBI  = BBAnalysis[BBI.TrueBB->getNumber()];
+  BBInfo &TrueBBI = BBAnalysis[BBI.TrueBB->getNumber()];
   BBInfo &FalseBBI = BBAnalysis[BBI.FalseBB->getNumber()];
   MachineBasicBlock *TailBB = TrueBBI.TrueBB;
 
@@ -2038,12 +2037,10 @@ bool IfConverter::IfConvertDiamond(BBInfo &BBI, IfcvtKind Kind,
     assert((TailBB || !TrueBBI.IsBrAnalyzable) && "Unexpected!");
   }
 
-  if (!IfConvertDiamondCommon(
-      BBI, TrueBBI, FalseBBI,
-      NumDups1, NumDups2,
-      TClobbersPred, FClobbersPred,
-      /* RemoveBranch */ TrueBBI.IsBrAnalyzable,
-      /* MergeAddEdges */ TailBB == nullptr))
+  if (!IfConvertDiamondCommon(BBI, TrueBBI, FalseBBI, NumDups1, NumDups2,
+                              TClobbersPred, FClobbersPred,
+                              /* RemoveBranch */ TrueBBI.IsBrAnalyzable,
+                              /* MergeAddEdges */ TailBB == nullptr))
     return false;
 
   // If the if-converted block falls through or unconditionally branches into
@@ -2057,8 +2054,8 @@ bool IfConverter::IfConvertDiamond(BBInfo &BBI, IfcvtKind Kind,
     BBI.BB->removeSuccessor(FalseBBI.BB, true);
 
     BBInfo &TailBBI = BBAnalysis[TailBB->getNumber()];
-    bool CanMergeTail = !TailBBI.HasFallThrough &&
-      !TailBBI.BB->hasAddressTaken();
+    bool CanMergeTail =
+        !TailBBI.HasFallThrough && !TailBBI.BB->hasAddressTaken();
     // The if-converted block can still have a predicated terminator
     // (e.g. a predicated return). If that is the case, we cannot merge
     // it with the tail block.
@@ -2114,8 +2111,7 @@ static bool MaySpeculate(const MachineInstr &MI,
 
 /// Predicate instructions from the start of the block to the specified end with
 /// the specified condition.
-void IfConverter::PredicateBlock(BBInfo &BBI,
-                                 MachineBasicBlock::iterator E,
+void IfConverter::PredicateBlock(BBInfo &BBI, MachineBasicBlock::iterator E,
                                  SmallVectorImpl<MachineOperand> &Cond,
                                  SmallSet<MCPhysReg, 4> *LaterRedefs) {
   bool AnyUnpred = false;
@@ -2178,7 +2174,7 @@ void IfConverter::CopyAndPredicateBlock(BBInfo &ToBBI, BBInfo &FromBBI,
     unsigned ExtraPredCost = TII->getPredicationCost(I);
     unsigned NumCycles = SchedModel.computeInstrLatency(&I, false);
     if (NumCycles > 1)
-      ToBBI.ExtraCost += NumCycles-1;
+      ToBBI.ExtraCost += NumCycles - 1;
     ToBBI.ExtraCost2 += ExtraPredCost;
 
     if (!TII->isPredicated(I) && !MI->isDebugInstr()) {
@@ -2225,8 +2221,7 @@ void IfConverter::CopyAndPredicateBlock(BBInfo &ToBBI, BBInfo &FromBBI,
 /// edge from ToBBI to FromBBI.
 void IfConverter::MergeBlocks(BBInfo &ToBBI, BBInfo &FromBBI, bool AddEdges) {
   MachineBasicBlock &FromMBB = *FromBBI.BB;
-  assert(!FromMBB.hasAddressTaken() &&
-         "Removing a BB whose address is taken!");
+  assert(!FromMBB.hasAddressTaken() && "Removing a BB whose address is taken!");
 
   // If we're about to splice an INLINEASM_BR from FromBBI, we need to update
   // ToBBI's successor list accordingly.
@@ -2318,9 +2313,9 @@ void IfConverter::MergeBlocks(BBInfo &ToBBI, BBInfo &FromBBI, bool AddEdges) {
       //    C  D             C  D
       //
       if (ToBBI.BB->isSuccessor(Succ))
-        ToBBI.BB->setSuccProbability(
-            find(ToBBI.BB->successors(), Succ),
-            MBPI->getEdgeProbability(ToBBI.BB, Succ) + NewProb);
+        ToBBI.BB->setSuccProbability(find(ToBBI.BB->successors(), Succ),
+                                     MBPI->getEdgeProbability(ToBBI.BB, Succ) +
+                                         NewProb);
       else
         ToBBI.BB->addSuccessor(Succ, NewProb);
     }
diff --git a/llvm/lib/CodeGen/ImplicitNullChecks.cpp b/llvm/lib/CodeGen/ImplicitNullChecks.cpp
index b2a7aad734115d7..62e396031035f07 100644
--- a/llvm/lib/CodeGen/ImplicitNullChecks.cpp
+++ b/llvm/lib/CodeGen/ImplicitNullChecks.cpp
@@ -170,11 +170,7 @@ class ImplicitNullChecks : public MachineFunctionPass {
                                     MachineBasicBlock *HandlerMBB);
   void rewriteNullChecks(ArrayRef<NullCheck> NullCheckList);
 
-  enum AliasResult {
-    AR_NoAlias,
-    AR_MayAlias,
-    AR_WillAliasEverything
-  };
+  enum AliasResult { AR_NoAlias, AR_MayAlias, AR_WillAliasEverything };
 
   /// Returns AR_NoAlias if \p MI memory operation does not alias with
   /// \p PrevMI, AR_MayAlias if they may alias and AR_WillAliasEverything if
@@ -182,11 +178,7 @@ class ImplicitNullChecks : public MachineFunctionPass {
   AliasResult areMemoryOpsAliased(const MachineInstr &MI,
                                   const MachineInstr *PrevMI) const;
 
-  enum SuitabilityResult {
-    SR_Suitable,
-    SR_Unsuitable,
-    SR_Impossible
-  };
+  enum SuitabilityResult { SR_Suitable, SR_Unsuitable, SR_Impossible };
 
   /// Return SR_Suitable if \p MI a memory operation that can be used to
   /// implicitly null check the value in \p PointerReg, SR_Unsuitable if
@@ -367,7 +359,7 @@ ImplicitNullChecks::isSuitableMemoryOp(const MachineInstr &MI,
   // Implementation restriction for faulting_op insertion
   // TODO: This could be relaxed if we find a test case which warrants it.
   if (MI.getDesc().getNumDefs() > 1)
-   return SR_Unsuitable;
+    return SR_Unsuitable;
 
   if (!MI.mayLoadOrStore() || MI.isPredicable())
     return SR_Unsuitable;
@@ -501,7 +493,6 @@ bool ImplicitNullChecks::canDependenceHoistingClobberLiveIns(
     // branched to NullSucc directly.
     if (AnyAliasLiveIn(TRI, NullSucc, DependenceMO.getReg()))
       return true;
-
   }
 
   // The dependence does not clobber live-ins in NullSucc block.
@@ -612,8 +603,7 @@ bool ImplicitNullChecks::analyzeBlockForNullChecks(
     //
     // we must ensure that there are no instructions between the 'test' and
     // conditional jump that modify %rax.
-    assert(MBP.ConditionDef->getParent() ==  &MBB &&
-           "Should be in basic block");
+    assert(MBP.ConditionDef->getParent() == &MBB && "Should be in basic block");
 
     for (auto I = MBB.rbegin(); MBP.ConditionDef != &*I; ++I)
       if (I->modifiesRegister(PointerReg, TRI))
@@ -811,8 +801,8 @@ char ImplicitNullChecks::ID = 0;
 
 char &llvm::ImplicitNullChecksID = ImplicitNullChecks::ID;
 
-INITIALIZE_PASS_BEGIN(ImplicitNullChecks, DEBUG_TYPE,
-                      "Implicit null checks", false, false)
+INITIALIZE_PASS_BEGIN(ImplicitNullChecks, DEBUG_TYPE, "Implicit null checks",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
-INITIALIZE_PASS_END(ImplicitNullChecks, DEBUG_TYPE,
-                    "Implicit null checks", false, false)
+INITIALIZE_PASS_END(ImplicitNullChecks, DEBUG_TYPE, "Implicit null checks",
+                    false, false)
diff --git a/llvm/lib/CodeGen/InlineSpiller.cpp b/llvm/lib/CodeGen/InlineSpiller.cpp
index c62f3db9d321562..8322319e76b2272 100644
--- a/llvm/lib/CodeGen/InlineSpiller.cpp
+++ b/llvm/lib/CodeGen/InlineSpiller.cpp
@@ -62,22 +62,22 @@ using namespace llvm;
 
 #define DEBUG_TYPE "regalloc"
 
-STATISTIC(NumSpilledRanges,   "Number of spilled live ranges");
-STATISTIC(NumSnippets,        "Number of spilled snippets");
-STATISTIC(NumSpills,          "Number of spills inserted");
-STATISTIC(NumSpillsRemoved,   "Number of spills removed");
-STATISTIC(NumReloads,         "Number of reloads inserted");
-STATISTIC(NumReloadsRemoved,  "Number of reloads removed");
-STATISTIC(NumFolded,          "Number of folded stack accesses");
-STATISTIC(NumFoldedLoads,     "Number of folded loads");
-STATISTIC(NumRemats,          "Number of rematerialized defs for spilling");
+STATISTIC(NumSpilledRanges, "Number of spilled live ranges");
+STATISTIC(NumSnippets, "Number of spilled snippets");
+STATISTIC(NumSpills, "Number of spills inserted");
+STATISTIC(NumSpillsRemoved, "Number of spills removed");
+STATISTIC(NumReloads, "Number of reloads inserted");
+STATISTIC(NumReloadsRemoved, "Number of reloads removed");
+STATISTIC(NumFolded, "Number of folded stack accesses");
+STATISTIC(NumFoldedLoads, "Number of folded loads");
+STATISTIC(NumRemats, "Number of rematerialized defs for spilling");
 
 static cl::opt<bool> DisableHoisting("disable-spill-hoist", cl::Hidden,
                                      cl::desc("Disable inline spill hoisting"));
 static cl::opt<bool>
-RestrictStatepointRemat("restrict-statepoint-remat",
-                       cl::init(false), cl::Hidden,
-                       cl::desc("Restrict remat for statepoint operands"));
+    RestrictStatepointRemat("restrict-statepoint-remat", cl::init(false),
+                            cl::Hidden,
+                            cl::desc("Restrict remat for statepoint operands"));
 
 namespace {
 
@@ -176,13 +176,13 @@ class InlineSpiller : public Spiller {
   // All COPY instructions to/from snippets.
   // They are ignored since both operands refer to the same stack slot.
   // For bundled copies, this will only include the first header copy.
-  SmallPtrSet<MachineInstr*, 8> SnippetCopies;
+  SmallPtrSet<MachineInstr *, 8> SnippetCopies;
 
   // Values that failed to remat at some point.
-  SmallPtrSet<VNInfo*, 8> UsedValues;
+  SmallPtrSet<VNInfo *, 8> UsedValues;
 
   // Dead defs generated during spilling.
-  SmallVector<MachineInstr*, 8> DeadDefs;
+  SmallVector<MachineInstr *, 8> DeadDefs;
 
   // Object records spills information and does the hoisting.
   HoistSpillHelper HSpiller;
@@ -217,7 +217,7 @@ class InlineSpiller : public Spiller {
   bool hoistSpillInsideBB(LiveInterval &SpillLI, MachineInstr &CopyMI);
   void eliminateRedundantSpills(LiveInterval &LI, VNInfo *VNI);
 
-  void markValueUsed(LiveInterval*, VNInfo*);
+  void markValueUsed(LiveInterval *, VNInfo *);
   bool canGuaranteeAssignmentAfterRemat(Register VReg, MachineInstr &MI);
   bool reMaterializeFor(LiveInterval &, MachineInstr &MI);
   void reMaterializeAll();
@@ -499,7 +499,7 @@ bool InlineSpiller::hoistSpillInsideBB(LiveInterval &SpillLI,
 /// redundant spills of this value in SLI.reg and sibling copies.
 void InlineSpiller::eliminateRedundantSpills(LiveInterval &SLI, VNInfo *VNI) {
   assert(VNI && "Missing value");
-  SmallVector<std::pair<LiveInterval*, VNInfo*>, 8> WorkList;
+  SmallVector<std::pair<LiveInterval *, VNInfo *>, 8> WorkList;
   WorkList.push_back(std::make_pair(&SLI, VNI));
   assert(StackInt && "No stack slot assigned yet.");
 
@@ -562,7 +562,7 @@ void InlineSpiller::eliminateRedundantSpills(LiveInterval &SLI, VNInfo *VNI) {
 /// markValueUsed - Remember that VNI failed to rematerialize, so its defining
 /// instruction cannot be eliminated. See through snippet copies
 void InlineSpiller::markValueUsed(LiveInterval *LI, VNInfo *VNI) {
-  SmallVector<std::pair<LiveInterval*, VNInfo*>, 8> WorkList;
+  SmallVector<std::pair<LiveInterval *, VNInfo *>, 8> WorkList;
   WorkList.push_back(std::make_pair(LI, VNI));
   do {
     std::tie(LI, VNI) = WorkList.pop_back_val();
@@ -670,8 +670,7 @@ bool InlineSpiller::reMaterializeFor(LiveInterval &VirtReg, MachineInstr &MI) {
 
   // Before rematerializing into a register for a single instruction, try to
   // fold a load into the instruction. That avoids allocating a new register.
-  if (RM.OrigMI->canFoldAsLoad() &&
-      foldMemoryOperand(Ops, RM.OrigMI)) {
+  if (RM.OrigMI->canFoldAsLoad() && foldMemoryOperand(Ops, RM.OrigMI)) {
     Edit->markRematerialized(RM.ParentVNI);
     ++NumFoldedLoads;
     return true;
@@ -733,7 +732,7 @@ void InlineSpiller::reMaterializeAll() {
         continue;
 
       assert(!MI.isDebugInstr() && "Did not expect to find a use in debug "
-             "instruction that isn't a DBG_VALUE");
+                                   "instruction that isn't a DBG_VALUE");
 
       anyRemat |= reMaterializeFor(LI, MI);
     }
@@ -862,9 +861,8 @@ static void dumpMachineInstrRangeWithSlotIndex(MachineBasicBlock::iterator B,
 /// @param Ops    Operand indices from AnalyzeVirtRegInBundle().
 /// @param LoadMI Load instruction to use instead of stack slot when non-null.
 /// @return       True on success.
-bool InlineSpiller::
-foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned>> Ops,
-                  MachineInstr *LoadMI) {
+bool InlineSpiller::foldMemoryOperand(
+    ArrayRef<std::pair<MachineInstr *, unsigned>> Ops, MachineInstr *LoadMI) {
   if (Ops.empty())
     return false;
   // Don't attempt folding in bundles.
@@ -926,7 +924,7 @@ foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned>> Ops,
 
   MachineInstrSpan MIS(MI, MI->getParent());
 
-  SmallVector<std::pair<unsigned, unsigned> > TiedOps;
+  SmallVector<std::pair<unsigned, unsigned>> TiedOps;
   if (UntieRegs)
     for (unsigned Idx : FoldOps) {
       MachineOperand &MO = MI->getOperand(Idx);
@@ -988,13 +986,14 @@ foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned>> Ops,
   // substituted / preserved with more analysis.
   if (MI->peekDebugInstrNum() && Ops[0].second == 0) {
     // Helper lambda.
-    auto MakeSubstitution = [this,FoldMI,MI,&Ops]() {
+    auto MakeSubstitution = [this, FoldMI, MI, &Ops]() {
       // Substitute old operand zero to the new instructions memory operand.
       unsigned OldOperandNum = Ops[0].second;
       unsigned NewNum = FoldMI->getDebugInstrNum();
       unsigned OldNum = MI->getDebugInstrNum();
-      MF.makeDebugValueSubstitution({OldNum, OldOperandNum},
-                         {NewNum, MachineFunction::DebugOperandMemNumber});
+      MF.makeDebugValueSubstitution(
+          {OldNum, OldOperandNum},
+          {NewNum, MachineFunction::DebugOperandMemNumber});
     };
 
     const MachineOperand &Op0 = MI->getOperand(Ops[0].second);
@@ -1049,8 +1048,7 @@ foldMemoryOperand(ArrayRef<std::pair<MachineInstr *, unsigned>> Ops,
   return true;
 }
 
-void InlineSpiller::insertReload(Register NewVReg,
-                                 SlotIndex Idx,
+void InlineSpiller::insertReload(Register NewVReg, SlotIndex Idx,
                                  MachineBasicBlock::iterator MI) {
   MachineBasicBlock &MBB = *MI->getParent();
 
@@ -1081,7 +1079,7 @@ static bool isRealSpill(const MachineInstr &Def) {
 
 /// insertSpill - Insert a spill of NewVReg after MI.
 void InlineSpiller::insertSpill(Register NewVReg, bool isKill,
-                                 MachineBasicBlock::iterator MI) {
+                                MachineBasicBlock::iterator MI) {
   // Spill are not terminators, so inserting spills after terminators will
   // violate invariants in MachineVerifier.
   assert(!MI->isTerminator() && "Inserting a spill after a terminator");
@@ -1135,7 +1133,7 @@ void InlineSpiller::spillAroundUses(Register Reg) {
     }
 
     assert(!MI.isDebugInstr() && "Did not expect to find a use in debug "
-           "instruction that isn't a DBG_VALUE");
+                                 "instruction that isn't a DBG_VALUE");
 
     // Ignore copies to/from snippets. We'll delete them.
     if (SnippetCopies.count(&MI))
@@ -1146,7 +1144,7 @@ void InlineSpiller::spillAroundUses(Register Reg) {
       continue;
 
     // Analyze instruction.
-    SmallVector<std::pair<MachineInstr*, unsigned>, 8> Ops;
+    SmallVector<std::pair<MachineInstr *, unsigned>, 8> Ops;
     VirtRegInfo RI = AnalyzeVirtRegInBundle(MI, Reg, &Ops);
 
     // Find the slot index where this instruction reads and writes OldLI.
@@ -1533,7 +1531,7 @@ void HoistSpillHelper::runHoistSpills(
     }
 
     SmallPtrSet<MachineDomTreeNode *, 16> &SpillsInSubTree =
-          SpillsInSubTreeMap[*RIt].first;
+        SpillsInSubTreeMap[*RIt].first;
     BlockFrequency &SubTreeCost = SpillsInSubTreeMap[*RIt].second;
     // No spills in subtree, simply continue.
     if (SpillsInSubTree.empty())
diff --git a/llvm/lib/CodeGen/InterferenceCache.cpp b/llvm/lib/CodeGen/InterferenceCache.cpp
index ae197ee5553aef1..3c2b94208756aa3 100644
--- a/llvm/lib/CodeGen/InterferenceCache.cpp
+++ b/llvm/lib/CodeGen/InterferenceCache.cpp
@@ -40,17 +40,16 @@ const InterferenceCache::BlockInterference
 // also support for when pass managers are reused for targets with different
 // numbers of PhysRegs: in this case PhysRegEntries is freed and reinitialized.
 void InterferenceCache::reinitPhysRegEntries() {
-  if (PhysRegEntriesCount == TRI->getNumRegs()) return;
+  if (PhysRegEntriesCount == TRI->getNumRegs())
+    return;
   free(PhysRegEntries);
   PhysRegEntriesCount = TRI->getNumRegs();
-  PhysRegEntries = static_cast<unsigned char*>(
+  PhysRegEntries = static_cast<unsigned char *>(
       safe_calloc(PhysRegEntriesCount, sizeof(unsigned char)));
 }
 
-void InterferenceCache::init(MachineFunction *mf,
-                             LiveIntervalUnion *liuarray,
-                             SlotIndexes *indexes,
-                             LiveIntervals *lis,
+void InterferenceCache::init(MachineFunction *mf, LiveIntervalUnion *liuarray,
+                             SlotIndexes *indexes, LiveIntervals *lis,
                              const TargetRegisterInfo *tri) {
   MF = mf;
   LIUArray = liuarray;
@@ -156,7 +155,7 @@ void InterferenceCache::Entry::update(unsigned MBBNum) {
       MF->getBlockNumbered(MBBNum)->getIterator();
   BlockInterference *BI = &Blocks[MBBNum];
   ArrayRef<SlotIndex> RegMaskSlots;
-  ArrayRef<const uint32_t*> RegMaskBits;
+  ArrayRef<const uint32_t *> RegMaskBits;
   while (true) {
     BI->Tag = Tag;
     BI->First = BI->Last = SlotIndex();
@@ -248,11 +247,11 @@ void InterferenceCache::Entry::update(unsigned MBBNum) {
   // Also check for register mask interference.
   SlotIndex Limit = BI->Last.isValid() ? BI->Last : Start;
   for (unsigned i = RegMaskSlots.size();
-       i && RegMaskSlots[i-1].getDeadSlot() > Limit; --i)
-    if (MachineOperand::clobbersPhysReg(RegMaskBits[i-1], PhysReg)) {
+       i && RegMaskSlots[i - 1].getDeadSlot() > Limit; --i)
+    if (MachineOperand::clobbersPhysReg(RegMaskBits[i - 1], PhysReg)) {
       // Register mask i-1 clobbers PhysReg after the LIU interference.
       // Model the regmask clobber as a dead def.
-      BI->Last = RegMaskSlots[i-1].getDeadSlot();
+      BI->Last = RegMaskSlots[i - 1].getDeadSlot();
       break;
     }
 }
diff --git a/llvm/lib/CodeGen/InterferenceCache.h b/llvm/lib/CodeGen/InterferenceCache.h
index 2a176b4f2cf7b18..dfe829162249133 100644
--- a/llvm/lib/CodeGen/InterferenceCache.h
+++ b/llvm/lib/CodeGen/InterferenceCache.h
@@ -142,7 +142,7 @@ class LLVM_LIBRARY_VISIBILITY InterferenceCache {
 
   // Point to an entry for each physreg. The entry pointed to may not be up to
   // date, and it may have been reused for a different physreg.
-  unsigned char* PhysRegEntries = nullptr;
+  unsigned char *PhysRegEntries = nullptr;
   size_t PhysRegEntriesCount = 0;
 
   // Next round-robin entry to be picked.
@@ -158,9 +158,7 @@ class LLVM_LIBRARY_VISIBILITY InterferenceCache {
   InterferenceCache() = default;
   InterferenceCache &operator=(const InterferenceCache &other) = delete;
   InterferenceCache(const InterferenceCache &other) = delete;
-  ~InterferenceCache() {
-    free(PhysRegEntries);
-  }
+  ~InterferenceCache() { free(PhysRegEntries); }
 
   void reinitPhysRegEntries();
 
@@ -194,9 +192,7 @@ class LLVM_LIBRARY_VISIBILITY InterferenceCache {
     /// Cursor - Create a dangling cursor.
     Cursor() = default;
 
-    Cursor(const Cursor &O) {
-      setEntry(O.CacheEntry);
-    }
+    Cursor(const Cursor &O) { setEntry(O.CacheEntry); }
 
     Cursor &operator=(const Cursor &O) {
       setEntry(O.CacheEntry);
@@ -220,21 +216,15 @@ class LLVM_LIBRARY_VISIBILITY InterferenceCache {
     }
 
     /// hasInterference - Return true if the current block has any interference.
-    bool hasInterference() {
-      return Current->First.isValid();
-    }
+    bool hasInterference() { return Current->First.isValid(); }
 
     /// first - Return the starting index of the first interfering range in the
     /// current block.
-    SlotIndex first() {
-      return Current->First;
-    }
+    SlotIndex first() { return Current->First; }
 
     /// last - Return the ending index of the last interfering range in the
     /// current block.
-    SlotIndex last() {
-      return Current->Last;
-    }
+    SlotIndex last() { return Current->Last; }
   };
 };
 
diff --git a/llvm/lib/CodeGen/InterleavedAccessPass.cpp b/llvm/lib/CodeGen/InterleavedAccessPass.cpp
index 6b3848531569c0b..e61ea9ffc67daa4 100644
--- a/llvm/lib/CodeGen/InterleavedAccessPass.cpp
+++ b/llvm/lib/CodeGen/InterleavedAccessPass.cpp
@@ -145,11 +145,13 @@ class InterleavedAccess : public FunctionPass {
 
 char InterleavedAccess::ID = 0;
 
-INITIALIZE_PASS_BEGIN(InterleavedAccess, DEBUG_TYPE,
+INITIALIZE_PASS_BEGIN(
+    InterleavedAccess, DEBUG_TYPE,
     "Lower interleaved memory accesses to target specific intrinsics", false,
     false)
 INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
-INITIALIZE_PASS_END(InterleavedAccess, DEBUG_TYPE,
+INITIALIZE_PASS_END(
+    InterleavedAccess, DEBUG_TYPE,
     "Lower interleaved memory accesses to target specific intrinsics", false,
     false)
 
@@ -207,8 +209,8 @@ static bool isDeInterleaveMask(ArrayRef<int> Mask, unsigned &Factor,
 /// It checks for a more general pattern than the RE-interleave mask.
 /// I.e. <x, y, ... z, x+1, y+1, ...z+1, x+2, y+2, ...z+2, ...>
 /// E.g. For a Factor of 2 (LaneLen=4): <4, 32, 5, 33, 6, 34, 7, 35>
-/// E.g. For a Factor of 3 (LaneLen=4): <4, 32, 16, 5, 33, 17, 6, 34, 18, 7, 35, 19>
-/// E.g. For a Factor of 4 (LaneLen=2): <8, 2, 12, 4, 9, 3, 13, 5>
+/// E.g. For a Factor of 3 (LaneLen=4): <4, 32, 16, 5, 33, 17, 6, 34, 18, 7, 35,
+/// 19> E.g. For a Factor of 4 (LaneLen=2): <8, 2, 12, 4, 9, 3, 13, 5>
 ///
 /// The particular case of an RE-interleave mask is:
 /// I.e. <0, LaneLen, ... , LaneLen*(Factor - 1), 1, LaneLen + 1, ...>
@@ -290,8 +292,7 @@ bool InterleavedAccess::lowerInterleavedLoad(
   for (auto *Shuffle : Shuffles) {
     if (Shuffle->getType() != VecTy)
       return false;
-    if (!isDeInterleaveMaskOfFactor(Shuffle->getShuffleMask(), Factor,
-                                    Index))
+    if (!isDeInterleaveMaskOfFactor(Shuffle->getShuffleMask(), Factor, Index))
       return false;
 
     assert(Shuffle->getShuffleMask().size() <= NumLoadElements);
@@ -300,8 +301,7 @@ bool InterleavedAccess::lowerInterleavedLoad(
   for (auto *Shuffle : BinOpShuffles) {
     if (Shuffle->getType() != VecTy)
       return false;
-    if (!isDeInterleaveMaskOfFactor(Shuffle->getShuffleMask(), Factor,
-                                    Index))
+    if (!isDeInterleaveMaskOfFactor(Shuffle->getShuffleMask(), Factor, Index))
       return false;
 
     assert(Shuffle->getShuffleMask().size() <= NumLoadElements);
diff --git a/llvm/lib/CodeGen/InterleavedLoadCombinePass.cpp b/llvm/lib/CodeGen/InterleavedLoadCombinePass.cpp
index 88e43e67493f6be..ec70806424db5b2 100644
--- a/llvm/lib/CodeGen/InterleavedLoadCombinePass.cpp
+++ b/llvm/lib/CodeGen/InterleavedLoadCombinePass.cpp
@@ -412,12 +412,15 @@ class Polynomial {
     //
     //      =    ([b_h + a_h + (b_m + a_m) >> (n-2)] % 2) * 2^(n-2) +
     //         + ((b_m + a_m) % 2^(n-2)) +
-    //         + o_h * 2^(e'-1) * 2^(n-e') +               | pre(2), move 2^(e'-1)
-    //                                                     | out of the old exponent
+    //         + o_h * 2^(e'-1) * 2^(n-e') +               | pre(2), move
+    //         2^(e'-1)
+    //                                                     | out of the old
+    //                                                     exponent
     //         + E * 2^(n-e') =
     //      =    ([b_h + a_h + (b_m + a_m) >> (n-2)] % 2) * 2^(n-2) +
     //         + ((b_m + a_m) % 2^(n-2)) +
-    //         + [o_h * 2^(e'-1) + E] * 2^(n-e') +         | move 2^(e'-1) out of
+    //         + [o_h * 2^(e'-1) + E] * 2^(n-e') +         | move 2^(e'-1) out
+    //         of
     //                                                     | the old exponent
     //
     //    Let E' = o_h * 2^(e'-1) + E
@@ -630,8 +633,8 @@ static raw_ostream &operator<<(raw_ostream &OS, const Polynomial &S) {
 ///
 /// 1) The memory address loaded into the element as Polynomial
 /// 2) a set of load instruction necessary to construct the vector,
-/// 3) a set of all other instructions that are necessary to create the vector and
-/// 4) a pointer value that can be used as relative base for all elements.
+/// 3) a set of all other instructions that are necessary to create the vector
+/// and 4) a pointer value that can be used as relative base for all elements.
 struct VectorInfo {
 private:
   VectorInfo(const VectorInfo &c) : VTy(c.VTy) {
@@ -1226,7 +1229,7 @@ bool InterleavedLoadCombineImpl::combine(std::list<VectorInfo> &InterleavedLoad,
   auto MSSAU = MemorySSAUpdater(&MSSA);
   MemoryUse *MSSALoad = cast<MemoryUse>(MSSAU.createMemoryAccessBefore(
       LI, nullptr, MSSA.getMemoryAccess(InsertionPoint)));
-  MSSAU.insertUse(MSSALoad, /*RenameUses=*/ true);
+  MSSAU.insertUse(MSSALoad, /*RenameUses=*/true);
 
   // Create the final SVIs and replace all uses.
   int i = 0;
@@ -1356,8 +1359,7 @@ INITIALIZE_PASS_END(
     "Combine interleaved loads into wide loads and shufflevector instructions",
     false, false)
 
-FunctionPass *
-llvm::createInterleavedLoadCombinePass() {
+FunctionPass *llvm::createInterleavedLoadCombinePass() {
   auto P = new InterleavedLoadCombine();
   return P;
 }
diff --git a/llvm/lib/CodeGen/IntrinsicLowering.cpp b/llvm/lib/CodeGen/IntrinsicLowering.cpp
index 61920a0e04ab59e..8ff59f06db62fd0 100644
--- a/llvm/lib/CodeGen/IntrinsicLowering.cpp
+++ b/llvm/lib/CodeGen/IntrinsicLowering.cpp
@@ -28,8 +28,7 @@ using namespace llvm;
 /// arguments we expect to pass in.
 template <class ArgIt>
 static CallInst *ReplaceCallWith(const char *NewFn, CallInst *CI,
-                                 ArgIt ArgBegin, ArgIt ArgEnd,
-                                 Type *RetTy) {
+                                 ArgIt ArgBegin, ArgIt ArgEnd, Type *RetTy) {
   // If we haven't already looked up this function, check to see if the
   // program already contains a function with this name.
   Module *M = CI->getModule();
@@ -51,36 +50,36 @@ static CallInst *ReplaceCallWith(const char *NewFn, CallInst *CI,
 
 /// Emit the code to lower bswap of V before the specified instruction IP.
 static Value *LowerBSWAP(LLVMContext &Context, Value *V, Instruction *IP) {
-  assert(V->getType()->isIntOrIntVectorTy() && "Can't bswap a non-integer type!");
+  assert(V->getType()->isIntOrIntVectorTy() &&
+         "Can't bswap a non-integer type!");
 
   unsigned BitSize = V->getType()->getScalarSizeInBits();
 
   IRBuilder<> Builder(IP);
 
-  switch(BitSize) {
-  default: llvm_unreachable("Unhandled type size of value to byteswap!");
+  switch (BitSize) {
+  default:
+    llvm_unreachable("Unhandled type size of value to byteswap!");
   case 16: {
-    Value *Tmp1 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 8),
-                                    "bswap.2");
-    Value *Tmp2 = Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8),
-                                     "bswap.1");
+    Value *Tmp1 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 8), "bswap.2");
+    Value *Tmp2 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8), "bswap.1");
     V = Builder.CreateOr(Tmp1, Tmp2, "bswap.i16");
     break;
   }
   case 32: {
-    Value *Tmp4 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 24),
-                                    "bswap.4");
-    Value *Tmp3 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 8),
-                                    "bswap.3");
-    Value *Tmp2 = Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8),
-                                     "bswap.2");
-    Value *Tmp1 = Builder.CreateLShr(V,ConstantInt::get(V->getType(), 24),
-                                     "bswap.1");
-    Tmp3 = Builder.CreateAnd(Tmp3,
-                         ConstantInt::get(V->getType(), 0xFF0000),
+    Value *Tmp4 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 24), "bswap.4");
+    Value *Tmp3 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 8), "bswap.3");
+    Value *Tmp2 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8), "bswap.2");
+    Value *Tmp1 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 24), "bswap.1");
+    Tmp3 = Builder.CreateAnd(Tmp3, ConstantInt::get(V->getType(), 0xFF0000),
                              "bswap.and3");
-    Tmp2 = Builder.CreateAnd(Tmp2,
-                           ConstantInt::get(V->getType(), 0xFF00),
+    Tmp2 = Builder.CreateAnd(Tmp2, ConstantInt::get(V->getType(), 0xFF00),
                              "bswap.and2");
     Tmp4 = Builder.CreateOr(Tmp4, Tmp3, "bswap.or1");
     Tmp2 = Builder.CreateOr(Tmp2, Tmp1, "bswap.or2");
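As context for the hunk above: the 32-bit case lowers `bswap` to the classic shift-and-mask sequence. A minimal standalone sketch of what the emitted IR computes (plain C++ mirroring the `bswap.N` value names, not the actual IRBuilder calls):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the IR built for the 32-bit bswap case: four shifts, two
// masks, three ors. Variable names follow the "bswap.N" names above.
static uint32_t bswap32(uint32_t V) {
  uint32_t Tmp4 = V << 24;              // bswap.4
  uint32_t Tmp3 = (V << 8) & 0xFF0000u; // bswap.3, masked by bswap.and3
  uint32_t Tmp2 = (V >> 8) & 0xFF00u;   // bswap.2, masked by bswap.and2
  uint32_t Tmp1 = V >> 24;              // bswap.1
  return (Tmp4 | Tmp3) | (Tmp2 | Tmp1); // bswap.or1, bswap.or2, bswap.i32
}
```

The 16- and 64-bit cases follow the same pattern with fewer or more shift/mask pairs.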
@@ -88,48 +87,34 @@ static Value *LowerBSWAP(LLVMContext &Context, Value *V, Instruction *IP) {
     break;
   }
   case 64: {
-    Value *Tmp8 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 56),
-                                    "bswap.8");
-    Value *Tmp7 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 40),
-                                    "bswap.7");
-    Value *Tmp6 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 24),
-                                    "bswap.6");
-    Value *Tmp5 = Builder.CreateShl(V, ConstantInt::get(V->getType(), 8),
-                                    "bswap.5");
-    Value* Tmp4 = Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8),
-                                     "bswap.4");
-    Value* Tmp3 = Builder.CreateLShr(V,
-                                     ConstantInt::get(V->getType(), 24),
-                                     "bswap.3");
-    Value* Tmp2 = Builder.CreateLShr(V,
-                                     ConstantInt::get(V->getType(), 40),
-                                     "bswap.2");
-    Value* Tmp1 = Builder.CreateLShr(V,
-                                     ConstantInt::get(V->getType(), 56),
-                                     "bswap.1");
-    Tmp7 = Builder.CreateAnd(Tmp7,
-                             ConstantInt::get(V->getType(),
-                                              0xFF000000000000ULL),
-                             "bswap.and7");
-    Tmp6 = Builder.CreateAnd(Tmp6,
-                             ConstantInt::get(V->getType(),
-                                              0xFF0000000000ULL),
-                             "bswap.and6");
-    Tmp5 = Builder.CreateAnd(Tmp5,
-                        ConstantInt::get(V->getType(),
-                             0xFF00000000ULL),
-                             "bswap.and5");
-    Tmp4 = Builder.CreateAnd(Tmp4,
-                        ConstantInt::get(V->getType(),
-                             0xFF000000ULL),
-                             "bswap.and4");
-    Tmp3 = Builder.CreateAnd(Tmp3,
-                             ConstantInt::get(V->getType(),
-                             0xFF0000ULL),
+    Value *Tmp8 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 56), "bswap.8");
+    Value *Tmp7 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 40), "bswap.7");
+    Value *Tmp6 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 24), "bswap.6");
+    Value *Tmp5 =
+        Builder.CreateShl(V, ConstantInt::get(V->getType(), 8), "bswap.5");
+    Value *Tmp4 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 8), "bswap.4");
+    Value *Tmp3 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 24), "bswap.3");
+    Value *Tmp2 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 40), "bswap.2");
+    Value *Tmp1 =
+        Builder.CreateLShr(V, ConstantInt::get(V->getType(), 56), "bswap.1");
+    Tmp7 = Builder.CreateAnd(
+        Tmp7, ConstantInt::get(V->getType(), 0xFF000000000000ULL),
+        "bswap.and7");
+    Tmp6 = Builder.CreateAnd(
+        Tmp6, ConstantInt::get(V->getType(), 0xFF0000000000ULL), "bswap.and6");
+    Tmp5 = Builder.CreateAnd(
+        Tmp5, ConstantInt::get(V->getType(), 0xFF00000000ULL), "bswap.and5");
+    Tmp4 = Builder.CreateAnd(
+        Tmp4, ConstantInt::get(V->getType(), 0xFF000000ULL), "bswap.and4");
+    Tmp3 = Builder.CreateAnd(Tmp3, ConstantInt::get(V->getType(), 0xFF0000ULL),
                              "bswap.and3");
-    Tmp2 = Builder.CreateAnd(Tmp2,
-                             ConstantInt::get(V->getType(),
-                             0xFF00ULL),
+    Tmp2 = Builder.CreateAnd(Tmp2, ConstantInt::get(V->getType(), 0xFF00ULL),
                              "bswap.and2");
     Tmp8 = Builder.CreateOr(Tmp8, Tmp7, "bswap.or1");
     Tmp6 = Builder.CreateOr(Tmp6, Tmp5, "bswap.or2");
@@ -149,10 +134,8 @@ static Value *LowerCTPOP(LLVMContext &Context, Value *V, Instruction *IP) {
   assert(V->getType()->isIntegerTy() && "Can't ctpop a non-integer type!");
 
   static const uint64_t MaskValues[6] = {
-    0x5555555555555555ULL, 0x3333333333333333ULL,
-    0x0F0F0F0F0F0F0F0FULL, 0x00FF00FF00FF00FFULL,
-    0x0000FFFF0000FFFFULL, 0x00000000FFFFFFFFULL
-  };
+      0x5555555555555555ULL, 0x3333333333333333ULL, 0x0F0F0F0F0F0F0F0FULL,
+      0x00FF00FF00FF00FFULL, 0x0000FFFF0000FFFFULL, 0x00000000FFFFFFFFULL};
 
   IRBuilder<> Builder(IP);
 
@@ -162,13 +145,12 @@ static Value *LowerCTPOP(LLVMContext &Context, Value *V, Instruction *IP) {
 
   for (unsigned n = 0; n < WordSize; ++n) {
     Value *PartValue = V;
-    for (unsigned i = 1, ct = 0; i < (BitSize>64 ? 64 : BitSize);
+    for (unsigned i = 1, ct = 0; i < (BitSize > 64 ? 64 : BitSize);
          i <<= 1, ++ct) {
       Value *MaskCst = ConstantInt::get(V->getType(), MaskValues[ct]);
       Value *LHS = Builder.CreateAnd(PartValue, MaskCst, "cppop.and1");
-      Value *VShift = Builder.CreateLShr(PartValue,
-                                        ConstantInt::get(V->getType(), i),
-                                         "ctpop.sh");
+      Value *VShift = Builder.CreateLShr(
+          PartValue, ConstantInt::get(V->getType(), i), "ctpop.sh");
       Value *RHS = Builder.CreateAnd(VShift, MaskCst, "cppop.and2");
       PartValue = Builder.CreateAdd(LHS, RHS, "ctpop.step");
     }
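The loop reformatted above implements the standard parallel bit-count: each iteration sums adjacent bit fields whose width doubles, using the `MaskValues` table. A self-contained sketch of the same computation for a 64-bit word (illustrative only; the real code builds IR per machine word):

```cpp
#include <cassert>
#include <cstdint>

// Parallel popcount with the same six masks as MaskValues above.
// Each iteration adds adjacent i-bit fields, doubling the field width,
// matching the and/shift/add steps the lowering emits.
static uint64_t ctpop64(uint64_t V) {
  static const uint64_t MaskValues[6] = {
      0x5555555555555555ULL, 0x3333333333333333ULL, 0x0F0F0F0F0F0F0F0FULL,
      0x00FF00FF00FF00FFULL, 0x0000FFFF0000FFFFULL, 0x00000000FFFFFFFFULL};
  for (unsigned i = 1, ct = 0; i < 64; i <<= 1, ++ct) {
    uint64_t Mask = MaskValues[ct];
    V = (V & Mask) + ((V >> i) & Mask); // the and1/and2 and ctpop.step ops
  }
  return V;
}
```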
@@ -200,10 +182,10 @@ static Value *LowerCTLZ(LLVMContext &Context, Value *V, Instruction *IP) {
 }
 
 static void ReplaceFPIntrinsicWithCall(CallInst *CI, const char *Fname,
-                                       const char *Dname,
-                                       const char *LDname) {
+                                       const char *Dname, const char *LDname) {
   switch (CI->getArgOperand(0)->getType()->getTypeID()) {
-  default: llvm_unreachable("Invalid type in intrinsic");
+  default:
+    llvm_unreachable("Invalid type in intrinsic");
   case Type::FloatTyID:
     ReplaceCallWith(Fname, CI, CI->arg_begin(), CI->arg_end(),
                     Type::getFloatTy(CI->getContext()));
@@ -230,11 +212,11 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
 
   switch (Callee->getIntrinsicID()) {
   case Intrinsic::not_intrinsic:
-    report_fatal_error("Cannot lower a call to a non-intrinsic function '"+
-                      Callee->getName() + "'!");
+    report_fatal_error("Cannot lower a call to a non-intrinsic function '" +
+                       Callee->getName() + "'!");
   default:
-    report_fatal_error("Code generator does not support intrinsic function '"+
-                      Callee->getName()+"'!");
+    report_fatal_error("Code generator does not support intrinsic function '" +
+                       Callee->getName() + "'!");
 
   case Intrinsic::expect: {
     // Just replace __builtin_expect(exp, c) with EXP.
@@ -271,8 +253,9 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
   case Intrinsic::stackrestore: {
     if (!Warned)
       errs() << "WARNING: this target does not support the llvm.stack"
-             << (Callee->getIntrinsicID() == Intrinsic::stacksave ?
-               "save" : "restore") << " intrinsic.\n";
+             << (Callee->getIntrinsicID() == Intrinsic::stacksave ? "save"
+                                                                  : "restore")
+             << " intrinsic.\n";
     Warned = true;
     if (Callee->getIntrinsicID() == Intrinsic::stacksave)
       CI->replaceAllUsesWith(Constant::getNullValue(CI->getType()));
@@ -289,8 +272,9 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
   case Intrinsic::returnaddress:
   case Intrinsic::frameaddress:
     errs() << "WARNING: this target does not support the llvm."
-           << (Callee->getIntrinsicID() == Intrinsic::returnaddress ?
-             "return" : "frame") << "address intrinsic.\n";
+           << (Callee->getIntrinsicID() == Intrinsic::returnaddress ? "return"
+                                                                    : "frame")
+           << "address intrinsic.\n";
     CI->replaceAllUsesWith(
         ConstantPointerNull::get(cast<PointerType>(CI->getType())));
     break;
@@ -302,10 +286,10 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     break;
 
   case Intrinsic::prefetch:
-    break;    // Simply strip out prefetches on unsupported architectures
+    break; // Simply strip out prefetches on unsupported architectures
 
   case Intrinsic::pcmarker:
-    break;    // Simply strip out pcmarker on unsupported architectures
+    break; // Simply strip out pcmarker on unsupported architectures
   case Intrinsic::readcyclecounter: {
     errs() << "WARNING: this target does not support the llvm.readcyclecoun"
            << "ter intrinsic.  It is being lowered to a constant 0\n";
@@ -315,7 +299,7 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
 
   case Intrinsic::dbg_declare:
   case Intrinsic::dbg_label:
-    break;    // Simply strip out debugging intrinsics
+    break; // Simply strip out debugging intrinsics
 
   case Intrinsic::eh_typeid_for:
     // Return something different to eh_selector.
@@ -331,7 +315,7 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
   case Intrinsic::assume:
   case Intrinsic::experimental_noalias_scope_decl:
   case Intrinsic::var_annotation:
-    break;   // Strip out these intrinsics
+    break; // Strip out these intrinsics
 
   case Intrinsic::memcpy: {
     Type *IntPtr = DL.getIntPtrType(Context);
@@ -341,7 +325,8 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     Ops[0] = CI->getArgOperand(0);
     Ops[1] = CI->getArgOperand(1);
     Ops[2] = Size;
-    ReplaceCallWith("memcpy", CI, Ops, Ops+3, CI->getArgOperand(0)->getType());
+    ReplaceCallWith("memcpy", CI, Ops, Ops + 3,
+                    CI->getArgOperand(0)->getType());
     break;
   }
   case Intrinsic::memmove: {
@@ -352,7 +337,8 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     Ops[0] = CI->getArgOperand(0);
     Ops[1] = CI->getArgOperand(1);
     Ops[2] = Size;
-    ReplaceCallWith("memmove", CI, Ops, Ops+3, CI->getArgOperand(0)->getType());
+    ReplaceCallWith("memmove", CI, Ops, Ops + 3,
+                    CI->getArgOperand(0)->getType());
     break;
   }
   case Intrinsic::memset: {
@@ -363,11 +349,12 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     Value *Ops[3];
     Ops[0] = Op0;
     // Extend the amount to i32.
-    Ops[1] = Builder.CreateIntCast(CI->getArgOperand(1),
-                                   Type::getInt32Ty(Context),
-                                   /* isSigned */ false);
+    Ops[1] =
+        Builder.CreateIntCast(CI->getArgOperand(1), Type::getInt32Ty(Context),
+                              /* isSigned */ false);
     Ops[2] = Size;
-    ReplaceCallWith("memset", CI, Ops, Ops+3, CI->getArgOperand(0)->getType());
+    ReplaceCallWith("memset", CI, Ops, Ops + 3,
+                    CI->getArgOperand(0)->getType());
     break;
   }
   case Intrinsic::sqrt: {
@@ -431,10 +418,10 @@ void IntrinsicLowering::LowerIntrinsicCall(CallInst *CI) {
     break;
   }
   case Intrinsic::get_rounding:
-     // Lower to "round to the nearest"
-     if (!CI->getType()->isVoidTy())
-       CI->replaceAllUsesWith(ConstantInt::get(CI->getType(), 1));
-     break;
+    // Lower to "round to the nearest"
+    if (!CI->getType()->isVoidTy())
+      CI->replaceAllUsesWith(ConstantInt::get(CI->getType(), 1));
+    break;
   case Intrinsic::invariant_start:
   case Intrinsic::lifetime_start:
     // Discard region information.
diff --git a/llvm/lib/CodeGen/JMCInstrumenter.cpp b/llvm/lib/CodeGen/JMCInstrumenter.cpp
index a1f0a50406643e5..aaf66409c872af2 100644
--- a/llvm/lib/CodeGen/JMCInstrumenter.cpp
+++ b/llvm/lib/CodeGen/JMCInstrumenter.cpp
@@ -151,7 +151,8 @@ bool JMCInstrumenter::runOnModule(Module &M) {
   bool IsELF = ModuleTriple.isOSBinFormatELF();
   assert((IsELF || IsMSVC) && "Unsupported triple for JMC");
   bool UseX86FastCall = IsMSVC && ModuleTriple.getArch() == Triple::x86;
-  const char *const FlagSymbolSection = IsELF ? ".data.just.my.code" : ".msvcjmc";
+  const char *const FlagSymbolSection =
+      IsELF ? ".data.just.my.code" : ".msvcjmc";
 
   GlobalValue *CheckFunction = nullptr;
   DenseMap<DISubprogram *, Constant *> SavedFlags(8);
diff --git a/llvm/lib/CodeGen/LLVMTargetMachine.cpp b/llvm/lib/CodeGen/LLVMTargetMachine.cpp
index d02ec1db1165d41..2263e9903c93488 100644
--- a/llvm/lib/CodeGen/LLVMTargetMachine.cpp
+++ b/llvm/lib/CodeGen/LLVMTargetMachine.cpp
@@ -57,8 +57,8 @@ void LLVMTargetMachine::initAsmInfo() {
   // we'll crash later.
   // Provide the user with a useful error message about what's wrong.
   assert(TmpAsmInfo && "MCAsmInfo not initialized. "
-         "Make sure you include the correct TargetSelect.h"
-         "and that InitializeAllTargetMCs() is being invoked!");
+                       "Make sure you include the correct TargetSelect.h "
+                       "and that InitializeAllTargetMCs() is being invoked!");
 
   if (Options.BinutilsVersion.first > 0)
     TmpAsmInfo->setBinutilsVersion(Options.BinutilsVersion);
diff --git a/llvm/lib/CodeGen/LatencyPriorityQueue.cpp b/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
index fab6b8d10a334e9..7a530b5ed1c6c50 100644
--- a/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
+++ b/llvm/lib/CodeGen/LatencyPriorityQueue.cpp
@@ -35,22 +35,25 @@ bool latency_sort::operator()(const SUnit *LHS, const SUnit *RHS) const {
   // The most important heuristic is scheduling the critical path.
   unsigned LHSLatency = PQ->getLatency(LHSNum);
   unsigned RHSLatency = PQ->getLatency(RHSNum);
-  if (LHSLatency < RHSLatency) return true;
-  if (LHSLatency > RHSLatency) return false;
+  if (LHSLatency < RHSLatency)
+    return true;
+  if (LHSLatency > RHSLatency)
+    return false;
 
   // After that, if two nodes have identical latencies, look to see if one will
   // unblock more other nodes than the other.
   unsigned LHSBlocked = PQ->getNumSolelyBlockNodes(LHSNum);
   unsigned RHSBlocked = PQ->getNumSolelyBlockNodes(RHSNum);
-  if (LHSBlocked < RHSBlocked) return true;
-  if (LHSBlocked > RHSBlocked) return false;
+  if (LHSBlocked < RHSBlocked)
+    return true;
+  if (LHSBlocked > RHSBlocked)
+    return false;
 
   // Finally, just to provide a stable ordering, use the node number as a
   // deciding factor.
   return RHSNum < LHSNum;
 }
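The comparator reformatted above is a lexicographic three-key ordering: critical-path latency first, then the number of solely-blocked nodes, then node number as a stable tie-break. A sketch of the same structure over a hypothetical plain struct (standing in for `SUnit`; not the real scheduler types):

```cpp
#include <cassert>

// Hypothetical stand-in for the fields latency_sort consults.
struct FakeSU { unsigned Latency, Blocked, Num; };

// Same three-stage comparison shape as latency_sort::operator():
// compare latency, then blocked count, then fall back to node number
// (reversed, as in the source) for a stable total order.
static bool latencyLess(const FakeSU &LHS, const FakeSU &RHS) {
  if (LHS.Latency != RHS.Latency)
    return LHS.Latency < RHS.Latency;
  if (LHS.Blocked != RHS.Blocked)
    return LHS.Blocked < RHS.Blocked;
  return RHS.Num < LHS.Num;
}
```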
 
-
 /// getSingleUnscheduledPred - If there is exactly one unscheduled predecessor
 /// of SU, return it, otherwise return null.
 SUnit *LatencyPriorityQueue::getSingleUnscheduledPred(SUnit *SU) {
@@ -81,7 +84,6 @@ void LatencyPriorityQueue::push(SUnit *SU) {
   Queue.push_back(SU);
 }
 
-
 // scheduledNode - As nodes are scheduled, we look to see if there are any
 // successor nodes that have a single unscheduled predecessor.  If so, that
 // single predecessor has a higher priority, since scheduling it will make
@@ -98,10 +100,12 @@ void LatencyPriorityQueue::scheduledNode(SUnit *SU) {
 /// scheduled will make this node available, so it is better than some other
 /// node of the same priority that will not make a node available.
 void LatencyPriorityQueue::AdjustPriorityOfUnscheduledPreds(SUnit *SU) {
-  if (SU->isAvailable) return;  // All preds scheduled.
+  if (SU->isAvailable)
+    return; // All preds scheduled.
 
   SUnit *OnlyAvailablePred = getSingleUnscheduledPred(SU);
-  if (!OnlyAvailablePred || !OnlyAvailablePred->isAvailable) return;
+  if (!OnlyAvailablePred || !OnlyAvailablePred->isAvailable)
+    return;
 
   // Okay, we found a single predecessor that is available, but not scheduled.
   // Since it is available, it must be in the priority queue.  First remove it.
@@ -113,10 +117,12 @@ void LatencyPriorityQueue::AdjustPriorityOfUnscheduledPreds(SUnit *SU) {
 }
 
 SUnit *LatencyPriorityQueue::pop() {
-  if (empty()) return nullptr;
+  if (empty())
+    return nullptr;
   std::vector<SUnit *>::iterator Best = Queue.begin();
   for (std::vector<SUnit *>::iterator I = std::next(Queue.begin()),
-       E = Queue.end(); I != E; ++I)
+                                      E = Queue.end();
+       I != E; ++I)
     if (Picker(*Best, *I))
       Best = I;
   SUnit *V = *Best;
diff --git a/llvm/lib/CodeGen/LazyMachineBlockFrequencyInfo.cpp b/llvm/lib/CodeGen/LazyMachineBlockFrequencyInfo.cpp
index 39b44b917d9e373..dcac3bf4921ca56 100644
--- a/llvm/lib/CodeGen/LazyMachineBlockFrequencyInfo.cpp
+++ b/llvm/lib/CodeGen/LazyMachineBlockFrequencyInfo.cpp
@@ -1,7 +1,7 @@
 ///===- LazyMachineBlockFrequencyInfo.cpp - Lazy Machine Block Frequency --===//
 ///
-/// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-/// See https://llvm.org/LICENSE.txt for license information.
+/// Part of the LLVM Project, under the Apache License v2.0 with LLVM
+/// Exceptions. See https://llvm.org/LICENSE.txt for license information.
 /// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
 ///
 ///===---------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/LexicalScopes.cpp b/llvm/lib/CodeGen/LexicalScopes.cpp
index 47c19c3d8ec4505..b68f159a2aafb85 100644
--- a/llvm/lib/CodeGen/LexicalScopes.cpp
+++ b/llvm/lib/CodeGen/LexicalScopes.cpp
@@ -169,10 +169,10 @@ LexicalScopes::getOrCreateRegularScope(const DILocalScope *Scope) {
   LexicalScope *Parent = nullptr;
   if (auto *Block = dyn_cast<DILexicalBlockBase>(Scope))
     Parent = getOrCreateLexicalScope(Block->getScope());
-  I = LexicalScopeMap.emplace(std::piecewise_construct,
-                              std::forward_as_tuple(Scope),
-                              std::forward_as_tuple(Parent, Scope, nullptr,
-                                                    false)).first;
+  I = LexicalScopeMap
+          .emplace(std::piecewise_construct, std::forward_as_tuple(Scope),
+                   std::forward_as_tuple(Parent, Scope, nullptr, false))
+          .first;
 
   if (!Parent) {
     assert(cast<DISubprogram>(Scope)->describes(&MF->getFunction()));
@@ -221,10 +221,10 @@ LexicalScopes::getOrCreateAbstractScope(const DILocalScope *Scope) {
   if (auto *Block = dyn_cast<DILexicalBlockBase>(Scope))
     Parent = getOrCreateAbstractScope(Block->getScope());
 
-  I = AbstractScopeMap.emplace(std::piecewise_construct,
-                               std::forward_as_tuple(Scope),
-                               std::forward_as_tuple(Parent, Scope,
-                                                     nullptr, true)).first;
+  I = AbstractScopeMap
+          .emplace(std::piecewise_construct, std::forward_as_tuple(Scope),
+                   std::forward_as_tuple(Parent, Scope, nullptr, true))
+          .first;
   if (isa<DISubprogram>(Scope))
     AbstractScopesList.push_back(&I->second);
   return &I->second;
diff --git a/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.cpp b/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.cpp
index b2c2b40139eda6e..0a357df6f54325d 100644
--- a/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.cpp
+++ b/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.cpp
@@ -382,8 +382,8 @@ class TransferTracker {
         // Continue processing values so that we add any other UseBeforeDef
         // entries needed for later.
         if (Num.getBlock() == (unsigned)MBB.getNumber() && !Num.isPHI()) {
-          LastUseBeforeDef = std::max(LastUseBeforeDef,
-                                      static_cast<unsigned>(Num.getInst()));
+          LastUseBeforeDef =
+              std::max(LastUseBeforeDef, static_cast<unsigned>(Num.getInst()));
           continue;
         }
         recoverAsEntryValue(Var, Value.Properties, Num);
@@ -404,8 +404,7 @@ class TransferTracker {
 
     // Add UseBeforeDef entry for the last value to be defined in this block.
     if (LastUseBeforeDef) {
-      addUseBeforeDef(Var, Value.Properties, DbgOps,
-                      LastUseBeforeDef);
+      addUseBeforeDef(Var, Value.Properties, DbgOps, LastUseBeforeDef);
       return;
     }
 
@@ -1124,8 +1123,8 @@ std::string MLocTracker::LocIdxToName(LocIdx Idx) const {
     ID -= NumRegs;
     unsigned Slot = ID / NumSlotIdxes;
     return Twine("slot ")
-        .concat(Twine(Slot).concat(Twine(" sz ").concat(Twine(Pos.first)
-        .concat(Twine(" offs ").concat(Twine(Pos.second))))))
+        .concat(Twine(Slot).concat(Twine(" sz ").concat(Twine(Pos.first).concat(
+            Twine(" offs ").concat(Twine(Pos.second))))))
         .str();
   } else {
     return TRI.getRegAsmName(ID).str();
@@ -2171,7 +2170,8 @@ bool InstrRefBasedLDV::transferRegisterCopy(MachineInstr &MI) {
   // alternative locations (or else terminating those variables).
   if (TTracker) {
     for (auto LocVal : ClobberedLocs) {
-      TTracker->clobberMloc(LocVal.first, LocVal.second, MI.getIterator(), false);
+      TTracker->clobberMloc(LocVal.first, LocVal.second, MI.getIterator(),
+                            false);
     }
   }
 
@@ -2375,7 +2375,7 @@ void InstrRefBasedLDV::produceMLocTransferFunction(
       // this block never used anyway.
       ValueIDNum NotGeneratedNum = ValueIDNum(I, 1, Idx);
       auto Result =
-        TransferMap.insert(std::make_pair(Idx.asU64(), NotGeneratedNum));
+          TransferMap.insert(std::make_pair(Idx.asU64(), NotGeneratedNum));
       if (!Result.second) {
         ValueIDNum &ValueID = Result.first->second;
         if (ValueID.getBlock() == I && ValueID.isPHI())
@@ -2896,7 +2896,8 @@ std::optional<ValueIDNum> InstrRefBasedLDV::pickOperandPHILoc(
     auto &LocVec = Locs[I];
     SmallVector<LocIdx, 4> NewCandidates;
     std::set_intersection(CandidateLocs.begin(), CandidateLocs.end(),
-                          LocVec.begin(), LocVec.end(), std::inserter(NewCandidates, NewCandidates.begin()));
+                          LocVec.begin(), LocVec.end(),
+                          std::inserter(NewCandidates, NewCandidates.begin()));
     CandidateLocs = NewCandidates;
   }
   if (CandidateLocs.empty())
@@ -3325,7 +3326,8 @@ void InstrRefBasedLDV::buildVLocValueMap(
       if (BlockLiveIn->Kind == DbgValue::VPHI)
         BlockLiveIn->Kind = DbgValue::Def;
       assert(BlockLiveIn->Properties.DIExpr->getFragmentInfo() ==
-             Var.getFragment() && "Fragment info missing during value prop");
+                 Var.getFragment() &&
+             "Fragment info missing during value prop");
       Output[MBB->getNumber()].push_back(std::make_pair(Var, *BlockLiveIn));
     }
   } // Per-variable loop.
@@ -4039,7 +4041,7 @@ template <> class SSAUpdaterTraits<LDVSSAUpdater> {
   /// solution determined it to be. This includes non-phi values if SSAUpdater
   /// tries to create a PHI where the incoming values are identical.
   static BlockValueNum CreateEmptyPHI(LDVSSABlock *BB, unsigned NumPreds,
-                                   LDVSSAUpdater *Updater) {
+                                      LDVSSAUpdater *Updater) {
     BlockValueNum PHIValNum = Updater->getValue(BB);
     LDVSSAPhi *PHI = BB->newPHI(PHIValNum);
     Updater->PHIs[PHIValNum] = PHI;
@@ -4048,7 +4050,8 @@ template <> class SSAUpdaterTraits<LDVSSAUpdater> {
 
   /// AddPHIOperand - Add the specified value as an operand of the PHI for
   /// the specified predecessor block.
-  static void AddPHIOperand(LDVSSAPhi *PHI, BlockValueNum Val, LDVSSABlock *Pred) {
+  static void AddPHIOperand(LDVSSAPhi *PHI, BlockValueNum Val,
+                            LDVSSABlock *Pred) {
     PHI->IncomingValues.push_back(std::make_pair(Pred, Val));
   }
 
@@ -4158,7 +4161,8 @@ std::optional<ValueIDNum> InstrRefBasedLDV::resolveDbgPHIsImpl(
   // Otherwise, we must use the SSA Updater. It will identify the value number
   // that we are to use, and the PHIs that must happen along the way.
   SSAUpdaterImpl<LDVSSAUpdater> Impl(&Updater, &AvailableValues, &CreatedPHIs);
-  BlockValueNum ResultInt = Impl.GetValue(Updater.getSSALDVBlock(Here.getParent()));
+  BlockValueNum ResultInt =
+      Impl.GetValue(Updater.getSSALDVBlock(Here.getParent()));
   ValueIDNum Result = ValueIDNum::fromU64(ResultInt);
 
   // We have the number for a PHI, or possibly live-through value, to be used
diff --git a/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.h b/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.h
index 30de18e53c4fb43..57db45b4d8df69a 100644
--- a/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.h
+++ b/llvm/lib/CodeGen/LiveDebugValues/InstrRefBasedImpl.h
@@ -1067,12 +1067,14 @@ class InstrRefBasedLDV : public LDVImpl {
   using ScopeToDILocT = DenseMap<const LexicalScope *, const DILocation *>;
 
   /// Mapping from lexical scopes to variables in that scope.
-  using ScopeToVarsT = DenseMap<const LexicalScope *, SmallSet<DebugVariable, 4>>;
+  using ScopeToVarsT =
+      DenseMap<const LexicalScope *, SmallSet<DebugVariable, 4>>;
 
   /// Mapping from lexical scopes to blocks where variables in that scope are
   /// assigned. Such blocks aren't necessarily "in" the lexical scope, it's
   /// just a block where an assignment happens.
-  using ScopeToAssignBlocksT = DenseMap<const LexicalScope *, SmallPtrSet<MachineBasicBlock *, 4>>;
+  using ScopeToAssignBlocksT =
+      DenseMap<const LexicalScope *, SmallPtrSet<MachineBasicBlock *, 4>>;
 
 private:
   MachineDominatorTree *DomTree;
@@ -1291,9 +1293,9 @@ class InstrRefBasedLDV : public LDVImpl {
   /// performance as it doesn't have to find the dominance frontier between
   /// different assignments.
   void placePHIsForSingleVarDefinition(
-          const SmallPtrSetImpl<MachineBasicBlock *> &InScopeBlocks,
-          MachineBasicBlock *MBB, SmallVectorImpl<VLocTracker> &AllTheVLocs,
-          const DebugVariable &Var, LiveInsT &Output);
+      const SmallPtrSetImpl<MachineBasicBlock *> &InScopeBlocks,
+      MachineBasicBlock *MBB, SmallVectorImpl<VLocTracker> &AllTheVLocs,
+      const DebugVariable &Var, LiveInsT &Output);
 
   /// Calculate the iterated-dominance-frontier for a set of defs, using the
   /// existing LLVM facilities for this. Works for a single "value" or
@@ -1427,10 +1429,10 @@ class InstrRefBasedLDV : public LDVImpl {
     if (!MI.hasOneMemOperand())
       return false;
     auto *MemOperand = *MI.memoperands_begin();
-    return MemOperand->isStore() &&
-           MemOperand->getPseudoValue() &&
-           MemOperand->getPseudoValue()->kind() == PseudoSourceValue::FixedStack
-           && !MemOperand->getPseudoValue()->isAliased(MFI);
+    return MemOperand->isStore() && MemOperand->getPseudoValue() &&
+           MemOperand->getPseudoValue()->kind() ==
+               PseudoSourceValue::FixedStack &&
+           !MemOperand->getPseudoValue()->isAliased(MFI);
   }
 
   std::optional<LocIdx> findLocationForMemOperand(const MachineInstr &MI);
diff --git a/llvm/lib/CodeGen/LiveDebugValues/VarLocBasedImpl.cpp b/llvm/lib/CodeGen/LiveDebugValues/VarLocBasedImpl.cpp
index 116c6b7e2d19efa..20e6240da568409 100644
--- a/llvm/lib/CodeGen/LiveDebugValues/VarLocBasedImpl.cpp
+++ b/llvm/lib/CodeGen/LiveDebugValues/VarLocBasedImpl.cpp
@@ -246,7 +246,7 @@ struct LocIndex {
     return (static_cast<uint64_t>(Location) << 32) | Index;
   }
 
-  template<typename IntT> static LocIndex fromRawInteger(IntT ID) {
+  template <typename IntT> static LocIndex fromRawInteger(IntT ID) {
     static_assert(std::is_unsigned_v<IntT> && sizeof(ID) == sizeof(uint64_t),
                   "Cannot convert raw integer to LocIndex");
     return {static_cast<u32_location_t>(ID >> 32),
@@ -303,9 +303,7 @@ class VarLocBasedLDV : public LDVImpl {
       bool operator==(const SpillLoc &Other) const {
         return SpillBase == Other.SpillBase && SpillOffset == Other.SpillOffset;
       }
-      bool operator!=(const SpillLoc &Other) const {
-        return !(*this == Other);
-      }
+      bool operator!=(const SpillLoc &Other) const { return !(*this == Other); }
     };
 
     // Target indices used for wasm-specific locations.
@@ -941,8 +939,7 @@ class VarLocBasedLDV : public LDVImpl {
     /// Return whether the set is empty or not.
     bool empty() const {
       assert(Vars.empty() == EntryValuesBackupVars.empty() &&
-             Vars.empty() == VarLocs.empty() &&
-             "open ranges are inconsistent");
+             Vars.empty() == VarLocs.empty() && "open ranges are inconsistent");
       return VarLocs.empty();
     }
 
@@ -1264,10 +1261,9 @@ void VarLocBasedLDV::getUsedRegs(const VarLocSet &CollectFrom,
 
 #ifndef NDEBUG
 void VarLocBasedLDV::printVarLocInMBB(const MachineFunction &MF,
-                                       const VarLocInMBB &V,
-                                       const VarLocMap &VarLocIDs,
-                                       const char *msg,
-                                       raw_ostream &Out) const {
+                                      const VarLocInMBB &V,
+                                      const VarLocMap &VarLocIDs,
+                                      const char *msg, raw_ostream &Out) const {
   Out << '\n' << msg << '\n';
   for (const MachineBasicBlock &BB : MF) {
     if (!V.count(&BB))
@@ -1667,7 +1663,7 @@ void VarLocBasedLDV::transferWasmDef(MachineInstr &MI,
 }
 
 bool VarLocBasedLDV::isSpillInstruction(const MachineInstr &MI,
-                                         MachineFunction *MF) {
+                                        MachineFunction *MF) {
   // TODO: Handle multiple stores folded into one.
   if (!MI.hasOneMemOperand())
     return false;
@@ -1680,7 +1676,7 @@ bool VarLocBasedLDV::isSpillInstruction(const MachineInstr &MI,
 }
 
 bool VarLocBasedLDV::isLocationSpill(const MachineInstr &MI,
-                                      MachineFunction *MF, Register &Reg) {
+                                     MachineFunction *MF, Register &Reg) {
   if (!isSpillInstruction(MI, MF))
     return false;
 
@@ -1742,9 +1738,9 @@ VarLocBasedLDV::isRestoreInstruction(const MachineInstr &MI,
 /// It will be inserted into the BB when we're done iterating over the
 /// instructions.
 void VarLocBasedLDV::transferSpillOrRestoreInst(MachineInstr &MI,
-                                                 OpenRangesSet &OpenRanges,
-                                                 VarLocMap &VarLocIDs,
-                                                 TransferMap &Transfers) {
+                                                OpenRangesSet &OpenRanges,
+                                                VarLocMap &VarLocIDs,
+                                                TransferMap &Transfers) {
   MachineFunction *MF = MI.getMF();
   TransferKind TKind;
   Register Reg;
@@ -1837,9 +1833,9 @@ void VarLocBasedLDV::transferSpillOrRestoreInst(MachineInstr &MI,
 /// value from one register to another register that is callee saved, we
 /// create new DBG_VALUE instruction  described with copy destination register.
 void VarLocBasedLDV::transferRegisterCopy(MachineInstr &MI,
-                                           OpenRangesSet &OpenRanges,
-                                           VarLocMap &VarLocIDs,
-                                           TransferMap &Transfers) {
+                                          OpenRangesSet &OpenRanges,
+                                          VarLocMap &VarLocIDs,
+                                          TransferMap &Transfers) {
   auto DestSrc = TII->isCopyInstr(MI);
   if (!DestSrc)
     return;
@@ -1909,9 +1905,9 @@ void VarLocBasedLDV::transferRegisterCopy(MachineInstr &MI,
 
 /// Terminate all open ranges at the end of the current basic block.
 bool VarLocBasedLDV::transferTerminator(MachineBasicBlock *CurMBB,
-                                         OpenRangesSet &OpenRanges,
-                                         VarLocInMBB &OutLocs,
-                                         const VarLocMap &VarLocIDs) {
+                                        OpenRangesSet &OpenRanges,
+                                        VarLocInMBB &OutLocs,
+                                        const VarLocMap &VarLocIDs) {
   bool Changed = false;
   LLVM_DEBUG({
     VarVec VarLocs;
@@ -1943,8 +1939,8 @@ bool VarLocBasedLDV::transferTerminator(MachineBasicBlock *CurMBB,
 /// \param OverlappingFragments The overlap map being constructed, from one
 ///           Var/Fragment pair to a vector of fragments known to overlap.
 void VarLocBasedLDV::accumulateFragmentMap(MachineInstr &MI,
-                                            VarToFragments &SeenFragments,
-                                            OverlapMap &OverlappingFragments) {
+                                           VarToFragments &SeenFragments,
+                                           OverlapMap &OverlappingFragments) {
   DebugVariable MIVar(MI.getDebugVariable(), MI.getDebugExpression(),
                       MI.getDebugLoc()->getInlinedAt());
   FragmentInfo ThisFragment = MIVar.getFragmentOrDefault();
@@ -2095,7 +2091,7 @@ bool VarLocBasedLDV::join(
 }
 
 void VarLocBasedLDV::flushPendingLocs(VarLocInMBB &PendingInLocs,
-                                       VarLocMap &VarLocIDs) {
+                                      VarLocMap &VarLocIDs) {
   // PendingInLocs records all locations propagated into blocks, which have
   // not had DBG_VALUE insts created. Go through and create those insts now.
   for (auto &Iter : PendingInLocs) {
@@ -2175,9 +2171,9 @@ static void collectRegDefs(const MachineInstr &MI, DefinedRegsSet &Regs,
 /// could be used as backup values. If we loose the track of some unmodified
 /// parameters, the backup values will be used as a primary locations.
 void VarLocBasedLDV::recordEntryValue(const MachineInstr &MI,
-                                       const DefinedRegsSet &DefinedRegs,
-                                       OpenRangesSet &OpenRanges,
-                                       VarLocMap &VarLocIDs) {
+                                      const DefinedRegsSet &DefinedRegs,
+                                      OpenRangesSet &OpenRanges,
+                                      VarLocMap &VarLocIDs) {
   if (TPC) {
     auto &TM = TPC->getTM<TargetMachine>();
     if (!TM.Options.ShouldEmitDebugEntryValues())
@@ -2234,11 +2230,11 @@ bool VarLocBasedLDV::ExtendRanges(MachineFunction &MF,
   VarLocMap VarLocIDs;         // Map VarLoc<>unique ID for use in bitvectors.
   OverlapMap OverlapFragments; // Map of overlapping variable fragments.
   OpenRangesSet OpenRanges(Alloc, OverlapFragments);
-                              // Ranges that are open until end of bb.
-  VarLocInMBB OutLocs;        // Ranges that exist beyond bb.
-  VarLocInMBB InLocs;         // Ranges that are incoming after joining.
-  TransferMap Transfers;      // DBG_VALUEs associated with transfers (such as
-                              // spills, copies and restores).
+  // Ranges that are open until end of bb.
+  VarLocInMBB OutLocs;   // Ranges that exist beyond bb.
+  VarLocInMBB InLocs;    // Ranges that are incoming after joining.
+  TransferMap Transfers; // DBG_VALUEs associated with transfers (such as
+                         // spills, copies and restores).
   // Map responsible MI to attached Transfer emitted from Backup Entry Value.
   InstToEntryLocMap EntryValTransfers;
   // Map a Register to the last MI which clobbered it.
@@ -2328,8 +2324,8 @@ bool VarLocBasedLDV::ExtendRanges(MachineFunction &MF,
     while (!Worklist.empty()) {
       MachineBasicBlock *MBB = OrderToBB[Worklist.top()];
       Worklist.pop();
-      MBBJoined = join(*MBB, OutLocs, InLocs, VarLocIDs, Visited,
-                       ArtificialBlocks);
+      MBBJoined =
+          join(*MBB, OutLocs, InLocs, VarLocIDs, Visited, ArtificialBlocks);
       MBBJoined |= Visited.insert(MBB).second;
       if (MBBJoined) {
         MBBJoined = false;
@@ -2398,8 +2394,4 @@ bool VarLocBasedLDV::ExtendRanges(MachineFunction &MF,
   return Changed;
 }
 
-LDVImpl *
-llvm::makeVarLocBasedLiveDebugValues()
-{
-  return new VarLocBasedLDV();
-}
+LDVImpl *llvm::makeVarLocBasedLiveDebugValues() { return new VarLocBasedLDV(); }
diff --git a/llvm/lib/CodeGen/LiveDebugVariables.cpp b/llvm/lib/CodeGen/LiveDebugVariables.cpp
index 9603c1f01e08569..e32018f75bdf9d3 100644
--- a/llvm/lib/CodeGen/LiveDebugVariables.cpp
+++ b/llvm/lib/CodeGen/LiveDebugVariables.cpp
@@ -66,21 +66,21 @@ using namespace llvm;
 
 #define DEBUG_TYPE "livedebugvars"
 
-static cl::opt<bool>
-EnableLDV("live-debug-variables", cl::init(true),
-          cl::desc("Enable the live debug variables pass"), cl::Hidden);
+static cl::opt<bool> EnableLDV("live-debug-variables", cl::init(true),
+                               cl::desc("Enable the live debug variables pass"),
+                               cl::Hidden);
 
 STATISTIC(NumInsertedDebugValues, "Number of DBG_VALUEs inserted");
 STATISTIC(NumInsertedDebugLabels, "Number of DBG_LABELs inserted");
 
 char LiveDebugVariables::ID = 0;
 
-INITIALIZE_PASS_BEGIN(LiveDebugVariables, DEBUG_TYPE,
-                "Debug Variable Analysis", false, false)
+INITIALIZE_PASS_BEGIN(LiveDebugVariables, DEBUG_TYPE, "Debug Variable Analysis",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
-INITIALIZE_PASS_END(LiveDebugVariables, DEBUG_TYPE,
-                "Debug Variable Analysis", false, false)
+INITIALIZE_PASS_END(LiveDebugVariables, DEBUG_TYPE, "Debug Variable Analysis",
+                    false, false)
 
 void LiveDebugVariables::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<MachineDominatorTree>();
@@ -287,9 +287,9 @@ class UserValue {
   const DILocalVariable *Variable; ///< The debug info variable we are part of.
   /// The part of the variable we describe.
   const std::optional<DIExpression::FragmentInfo> Fragment;
-  DebugLoc dl;            ///< The debug location for the variable. This is
-                          ///< used by dwarf writer to find lexical scope.
-  UserValue *leader;      ///< Equivalence class leader.
+  DebugLoc dl;               ///< The debug location for the variable. This is
+                             ///< used by dwarf writer to find lexical scope.
+  UserValue *leader;         ///< Equivalence class leader.
   UserValue *next = nullptr; ///< Next value in equivalence class, or null.
 
   /// Numbered locations referenced by locmap.
@@ -367,8 +367,7 @@ class UserValue {
         return UndefLocNo;
       // For register locations we dont care about use/def and other flags.
       for (unsigned i = 0, e = locations.size(); i != e; ++i)
-        if (locations[i].isReg() &&
-            locations[i].getReg() == LocMO.getReg() &&
+        if (locations[i].isReg() && locations[i].getReg() == LocMO.getReg() &&
             locations[i].getSubReg() == LocMO.getSubReg())
           return i;
     } else
@@ -513,7 +512,7 @@ class UserLabel {
 
   /// Does this UserLabel match the parameters?
   bool matches(const DILabel *L, const DILocation *IA,
-             const SlotIndex Index) const {
+               const SlotIndex Index) const {
     return Label == L && dl->getInlinedAt() == IA && loc == Index;
   }
 
@@ -648,8 +647,7 @@ class LDVImpl {
     virtRegToEqClass.clear();
     userVarMap.clear();
     // Make sure we call emitDebugValues if the machine function was modified.
-    assert((!ModifiedMF || EmitDone) &&
-           "Dbg values are not emitted in LDV");
+    assert((!ModifiedMF || EmitDone) && "Dbg values are not emitted in LDV");
     EmitDone = false;
     ModifiedMF = false;
   }
@@ -667,7 +665,7 @@ class LDVImpl {
   /// Recreate DBG_VALUE instruction from data structures.
   void emitDebugValues(VirtRegMap *VRM);
 
-  void print(raw_ostream&);
+  void print(raw_ostream &);
 };
 
 } // end anonymous namespace
@@ -942,8 +940,9 @@ bool LDVImpl::collectDebugValues(MachineFunction &mf, bool InstrRef) {
                          MBBI->isDebugRef())) {
           MBBI = handleDebugInstr(*MBBI, Idx);
           Changed = true;
-        // In normal debug mode, use the dedicated DBG_VALUE / DBG_LABEL handler
-        // to track things through register allocation, and erase the instr.
+          // In normal debug mode, use the dedicated DBG_VALUE / DBG_LABEL
+          // handler to track things through register allocation, and erase the
+          // instr.
         } else if ((MBBI->isDebugValue() && handleDebugValue(*MBBI, Idx)) ||
                    (MBBI->isDebugLabel() && handleDebugLabel(*MBBI, Idx))) {
           MBBI = MBB.erase(MBBI);
@@ -1316,21 +1315,20 @@ bool LiveDebugVariables::runOnMachineFunction(MachineFunction &mf) {
 
 void LiveDebugVariables::releaseMemory() {
   if (pImpl)
-    static_cast<LDVImpl*>(pImpl)->clear();
+    static_cast<LDVImpl *>(pImpl)->clear();
 }
 
 LiveDebugVariables::~LiveDebugVariables() {
   if (pImpl)
-    delete static_cast<LDVImpl*>(pImpl);
+    delete static_cast<LDVImpl *>(pImpl);
 }
 
 //===----------------------------------------------------------------------===//
 //                           Live Range Splitting
 //===----------------------------------------------------------------------===//
 
-bool
-UserValue::splitLocation(unsigned OldLocNo, ArrayRef<Register> NewRegs,
-                         LiveIntervals& LIS) {
+bool UserValue::splitLocation(unsigned OldLocNo, ArrayRef<Register> NewRegs,
+                              LiveIntervals &LIS) {
   LLVM_DEBUG({
     dbgs() << "Splitting Loc" << OldLocNo << '\t';
     print(dbgs(), nullptr);
@@ -1428,14 +1426,13 @@ UserValue::splitLocation(unsigned OldLocNo, ArrayRef<Register> NewRegs,
   return DidChange;
 }
 
-bool
-UserValue::splitRegister(Register OldReg, ArrayRef<Register> NewRegs,
-                         LiveIntervals &LIS) {
+bool UserValue::splitRegister(Register OldReg, ArrayRef<Register> NewRegs,
+                              LiveIntervals &LIS) {
   bool DidChange = false;
   // Split locations referring to OldReg. Iterate backwards so splitLocation can
   // safely erase unused locations.
-  for (unsigned i = locations.size(); i ; --i) {
-    unsigned LocNo = i-1;
+  for (unsigned i = locations.size(); i; --i) {
+    unsigned LocNo = i - 1;
     const MachineOperand *Loc = &locations[LocNo];
     if (!Loc->isReg() || Loc->getReg() != OldReg)
       continue;
@@ -1501,10 +1498,11 @@ void LDVImpl::splitRegister(Register OldReg, ArrayRef<Register> NewRegs) {
     mapVirtReg(NewReg, UV);
 }
 
-void LiveDebugVariables::
-splitRegister(Register OldReg, ArrayRef<Register> NewRegs, LiveIntervals &LIS) {
+void LiveDebugVariables::splitRegister(Register OldReg,
+                                       ArrayRef<Register> NewRegs,
+                                       LiveIntervals &LIS) {
   if (pImpl)
-    static_cast<LDVImpl*>(pImpl)->splitRegister(OldReg, NewRegs);
+    static_cast<LDVImpl *>(pImpl)->splitRegister(OldReg, NewRegs);
 }
 
 void UserValue::rewriteLocations(VirtRegMap &VRM, const MachineFunction &MF,
@@ -1625,8 +1623,8 @@ findInsertLocation(MachineBasicBlock *MBB, SlotIndex Idx, LiveIntervals &LIS,
   }
 
   // Don't insert anything after the first terminator, though.
-  return MI->isTerminator() ? MBB->getFirstTerminator() :
-                              std::next(MachineBasicBlock::iterator(MI));
+  return MI->isTerminator() ? MBB->getFirstTerminator()
+                            : std::next(MachineBasicBlock::iterator(MI));
 }
 
 /// Find an iterator for inserting the next DBG_VALUE instruction
@@ -1686,8 +1684,8 @@ void UserValue::insertDebugValue(MachineBasicBlock *MBB, SlotIndex StartIdx,
 
   ++NumInsertedDebugValues;
 
-  assert(cast<DILocalVariable>(Variable)
-             ->isValidLocationForIntrinsic(getDebugLoc()) &&
+  assert(cast<DILocalVariable>(Variable)->isValidLocationForIntrinsic(
+             getDebugLoc()) &&
          "Expected inlined-at fields to agree");
 
   // If the location was spilled, the new DBG_VALUE will be indirect. If the
@@ -1874,12 +1872,10 @@ void LDVImpl::emitDebugValues(VirtRegMap *VRM) {
         Builder.addImm(regSizeInBits);
       }
 
-      LLVM_DEBUG(
-      if (SpillOffset != 0) {
-        dbgs() << "DBG_PHI for Vreg " << Reg << " subreg " << SubReg <<
-                  " has nonzero offset\n";
-      }
-      );
+      LLVM_DEBUG(if (SpillOffset != 0) {
+        dbgs() << "DBG_PHI for Vreg " << Reg << " subreg " << SubReg
+               << " has nonzero offset\n";
+      });
     }
     // If there was no mapping for a value ID, it's optimized out. Create no
     // DBG_PHI, and any variables using this value will become optimized out.
@@ -1910,7 +1906,7 @@ void LDVImpl::emitDebugValues(VirtRegMap *VRM) {
       auto NextItem = std::next(StashIt);
       while (NextItem != StashedDebugInstrs.end() && NextItem->Idx == Idx) {
         assert(NextItem->MBB == MBB && "Instrs with same slot index should be"
-               "in the same block");
+                                       "in the same block");
         MBB->insert(InsertPos, NextItem->MI);
         StashIt = NextItem;
         NextItem = std::next(StashIt);
@@ -1959,12 +1955,12 @@ void LDVImpl::emitDebugValues(VirtRegMap *VRM) {
 
 void LiveDebugVariables::emitDebugValues(VirtRegMap *VRM) {
   if (pImpl)
-    static_cast<LDVImpl*>(pImpl)->emitDebugValues(VRM);
+    static_cast<LDVImpl *>(pImpl)->emitDebugValues(VRM);
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 LLVM_DUMP_METHOD void LiveDebugVariables::dump() const {
   if (pImpl)
-    static_cast<LDVImpl*>(pImpl)->print(dbgs());
+    static_cast<LDVImpl *>(pImpl)->print(dbgs());
 }
 #endif
diff --git a/llvm/lib/CodeGen/LiveInterval.cpp b/llvm/lib/CodeGen/LiveInterval.cpp
index 1cf354349c5610c..507e241e6a7b3c9 100644
--- a/llvm/lib/CodeGen/LiveInterval.cpp
+++ b/llvm/lib/CodeGen/LiveInterval.cpp
@@ -120,8 +120,7 @@ class CalcLiveRangeUtilBase {
   VNInfo *extendInBlock(SlotIndex StartIdx, SlotIndex Use) {
     if (segments().empty())
       return nullptr;
-    iterator I =
-      impl().findInsertPos(Segment(Use.getPrevSlot(), Use, nullptr));
+    iterator I = impl().findInsertPos(Segment(Use.getPrevSlot(), Use, nullptr));
     if (I == segments().begin())
       return nullptr;
     --I;
@@ -132,17 +131,19 @@ class CalcLiveRangeUtilBase {
     return I->valno;
   }
 
-  std::pair<VNInfo*,bool> extendInBlock(ArrayRef<SlotIndex> Undefs,
-      SlotIndex StartIdx, SlotIndex Use) {
+  std::pair<VNInfo *, bool> extendInBlock(ArrayRef<SlotIndex> Undefs,
+                                          SlotIndex StartIdx, SlotIndex Use) {
     if (segments().empty())
       return std::make_pair(nullptr, false);
     SlotIndex BeforeUse = Use.getPrevSlot();
     iterator I = impl().findInsertPos(Segment(BeforeUse, Use, nullptr));
     if (I == segments().begin())
-      return std::make_pair(nullptr, LR->isUndefIn(Undefs, StartIdx, BeforeUse));
+      return std::make_pair(nullptr,
+                            LR->isUndefIn(Undefs, StartIdx, BeforeUse));
     --I;
     if (I->end <= StartIdx)
-      return std::make_pair(nullptr, LR->isUndefIn(Undefs, StartIdx, BeforeUse));
+      return std::make_pair(nullptr,
+                            LR->isUndefIn(Undefs, StartIdx, BeforeUse));
     if (I->end < Use) {
       if (LR->isUndefIn(Undefs, I->end, BeforeUse))
         return std::make_pair(nullptr, true);
@@ -386,7 +387,7 @@ VNInfo *LiveRange::createDeadDef(VNInfo *VNI) {
 // A->overlaps(C) should return false since we want to be able to join
 // A and C.
 //
-bool LiveRange::overlapsFrom(const LiveRange& other,
+bool LiveRange::overlapsFrom(const LiveRange &other,
                              const_iterator StartPos) const {
   assert(!empty() && "empty range");
   const_iterator i = begin();
@@ -399,19 +400,22 @@ bool LiveRange::overlapsFrom(const LiveRange& other,
 
   if (i->start < j->start) {
     i = std::upper_bound(i, ie, j->start);
-    if (i != begin()) --i;
+    if (i != begin())
+      --i;
   } else if (j->start < i->start) {
     ++StartPos;
     if (StartPos != other.end() && StartPos->start <= i->start) {
       assert(StartPos < other.end() && i < end());
       j = std::upper_bound(j, je, i->start);
-      if (j != other.begin()) --j;
+      if (j != other.begin())
+        --j;
     }
   } else {
     return true;
   }
 
-  if (j == je) return false;
+  if (j == je)
+    return false;
 
   while (i != ie) {
     if (i->start > j->start) {
@@ -502,7 +506,7 @@ bool LiveRange::covers(const LiveRange &Other) const {
 /// (and any other deleted values neighboring it), otherwise mark it as ~1U so
 /// it can be nuked later.
 void LiveRange::markValNoForDeletion(VNInfo *ValNo) {
-  if (ValNo->id == getNumValNums()-1) {
+  if (ValNo->id == getNumValNums() - 1) {
     do {
       valnos.pop_back();
     } while (!valnos.empty() && valnos.back()->isUnused());
@@ -514,7 +518,7 @@ void LiveRange::markValNoForDeletion(VNInfo *ValNo) {
 /// RenumberValues - Renumber all values in order of appearance and delete the
 /// remaining unused values.
 void LiveRange::RenumberValues() {
-  SmallPtrSet<VNInfo*, 8> Seen;
+  SmallPtrSet<VNInfo *, 8> Seen;
   valnos.clear();
   for (const Segment &S : segments) {
     VNInfo *VNI = S.valno;
@@ -546,8 +550,9 @@ void LiveRange::append(const Segment S) {
   segments.push_back(S);
 }
 
-std::pair<VNInfo*,bool> LiveRange::extendInBlock(ArrayRef<SlotIndex> Undefs,
-    SlotIndex StartIdx, SlotIndex Kill) {
+std::pair<VNInfo *, bool> LiveRange::extendInBlock(ArrayRef<SlotIndex> Undefs,
+                                                   SlotIndex StartIdx,
+                                                   SlotIndex Kill) {
   // Use the segment set, if it is available.
   if (segmentSet != nullptr)
     return CalcLiveRangeUtilSet(this).extendInBlock(Undefs, StartIdx, Kill);
@@ -570,14 +575,14 @@ void LiveRange::removeSegment(SlotIndex Start, SlotIndex End,
   // Find the Segment containing this span.
   iterator I = find(Start);
   assert(I != end() && "Segment is not in range!");
-  assert(I->containsInterval(Start, End)
-         && "Segment is not entirely in range!");
+  assert(I->containsInterval(Start, End) &&
+         "Segment is not entirely in range!");
 
   // If the span we are removing is at the start of the Segment, adjust it.
   VNInfo *ValNo = I->valno;
   if (I->start == Start) {
     if (I->end == End) {
-      segments.erase(I);  // Removed the whole Segment.
+      segments.erase(I); // Removed the whole Segment.
 
       if (RemoveDeadValNo)
         removeValNoIfDead(ValNo);
@@ -595,7 +600,7 @@ void LiveRange::removeSegment(SlotIndex Start, SlotIndex End,
 
   // Otherwise, we are splitting the Segment into two pieces.
   SlotIndex OldEnd = I->end;
-  I->end = Start;   // Trim the old segment.
+  I->end = Start; // Trim the old segment.
 
   // Insert the new one.
   segments.insert(std::next(I), Segment(End, OldEnd, ValNo));
@@ -617,15 +622,15 @@ void LiveRange::removeValNoIfDead(VNInfo *ValNo) {
 /// removeValNo - Remove all the segments defined by the specified value#.
 /// Also remove the value# from value# list.
 void LiveRange::removeValNo(VNInfo *ValNo) {
-  if (empty()) return;
+  if (empty())
+    return;
   llvm::erase_if(segments,
                  [ValNo](const Segment &S) { return S.valno == ValNo; });
   // Now that ValNo is dead, remove it.
   markValNoForDeletion(ValNo);
 }
 
-void LiveRange::join(LiveRange &Other,
-                     const int *LHSValNoAssignments,
+void LiveRange::join(LiveRange &Other, const int *LHSValNoAssignments,
                      const int *RHSValNoAssignments,
                      SmallVectorImpl<VNInfo *> &NewVNInfo) {
   verify();
@@ -651,7 +656,7 @@ void LiveRange::join(LiveRange &Other,
     iterator OutIt = begin();
     OutIt->valno = NewVNInfo[LHSValNoAssignments[OutIt->valno->id]];
     for (iterator I = std::next(OutIt), E = end(); I != E; ++I) {
-      VNInfo* nextValNo = NewVNInfo[LHSValNoAssignments[I->valno->id]];
+      VNInfo *nextValNo = NewVNInfo[LHSValNoAssignments[I->valno->id]];
       assert(nextValNo && "Huh?");
 
       // If this live range has the same value # as its immediate predecessor,
@@ -691,11 +696,11 @@ void LiveRange::join(LiveRange &Other,
         valnos.push_back(VNI);
       else
         valnos[NumValNos] = VNI;
-      VNI->id = NumValNos++;  // Renumber val#.
+      VNI->id = NumValNos++; // Renumber val#.
     }
   }
   if (NumNewVals < NumVals)
-    valnos.resize(NumNewVals);  // shrinkify
+    valnos.resize(NumNewVals); // shrinkify
 
   // Okay, now insert the RHS live segments into the LHS.
   LiveRangeUpdater Updater(this);
@@ -707,8 +712,7 @@ void LiveRange::join(LiveRange &Other,
 /// value number.  The segments in RHS are allowed to overlap with segments in
 /// the current range, but only if the overlapping segments have the
 /// specified value number.
-void LiveRange::MergeSegmentsInAsValue(const LiveRange &RHS,
-                                       VNInfo *LHSValNo) {
+void LiveRange::MergeSegmentsInAsValue(const LiveRange &RHS, VNInfo *LHSValNo) {
   LiveRangeUpdater Updater(this);
   for (const Segment &S : RHS.segments)
     Updater.add(S.start, S.end, LHSValNo);
@@ -720,8 +724,7 @@ void LiveRange::MergeSegmentsInAsValue(const LiveRange &RHS,
 /// current range, it will replace the value numbers of the overlaped
 /// segments with the specified value number.
 void LiveRange::MergeValueInAsValue(const LiveRange &RHS,
-                                    const VNInfo *RHSValNo,
-                                    VNInfo *LHSValNo) {
+                                    const VNInfo *RHSValNo, VNInfo *LHSValNo) {
   LiveRangeUpdater Updater(this);
   for (const Segment &S : RHS.segments)
     if (S.valno == RHSValNo)
@@ -747,20 +750,21 @@ VNInfo *LiveRange::MergeValueNumberInto(VNInfo *V1, VNInfo *V2) {
   }
 
   // Merge V1 segments into V2.
-  for (iterator I = begin(); I != end(); ) {
+  for (iterator I = begin(); I != end();) {
     iterator S = I++;
-    if (S->valno != V1) continue;  // Not a V1 Segment.
+    if (S->valno != V1)
+      continue; // Not a V1 Segment.
 
     // Okay, we found a V1 live range.  If it had a previous, touching, V2 live
     // range, extend it.
     if (S != begin()) {
-      iterator Prev = S-1;
+      iterator Prev = S - 1;
       if (Prev->valno == V2 && Prev->end == S->start) {
         Prev->end = S->end;
 
         // Erase this live-range.
         segments.erase(S);
-        I = Prev+1;
+        I = Prev + 1;
         S = Prev;
       }
     }
@@ -776,7 +780,7 @@ VNInfo *LiveRange::MergeValueNumberInto(VNInfo *V1, VNInfo *V2) {
       if (I->start == S->end && I->valno == V2) {
         S->end = I->end;
         segments.erase(I);
-        I = S+1;
+        I = S + 1;
       }
     }
   }
@@ -814,7 +818,7 @@ bool LiveRange::isLiveAtIndexes(ArrayRef<SlotIndex> Slots) const {
     return false;
 
   // Look for each slot in the live range.
-  for ( ; SlotI != SlotE; ++SlotI) {
+  for (; SlotI != SlotE; ++SlotI) {
     // Go to the next segment that ends after the current slot.
     // The slot may be within a hole in the range.
     SegmentI = advanceTo(SegmentI, *SlotI);
@@ -983,7 +987,7 @@ void LiveInterval::computeSubRangeUndefs(SmallVectorImpl<SlotIndex> &Undefs,
   }
 }
 
-raw_ostream& llvm::operator<<(raw_ostream& OS, const LiveRange::Segment &S) {
+raw_ostream &llvm::operator<<(raw_ostream &OS, const LiveRange::Segment &S) {
   return OS << '[' << S.start << ',' << S.end << ':' << S.valno->id << ')';
 }
 
@@ -1010,7 +1014,8 @@ void LiveRange::print(raw_ostream &OS) const {
     for (const_vni_iterator i = vni_begin(), e = vni_end(); i != e;
          ++i, ++vnum) {
       const VNInfo *vni = *i;
-      if (vnum) OS << ' ';
+      if (vnum)
+        OS << ' ';
       OS << vnum << '@';
       if (vni->isUnused()) {
         OS << 'x';
@@ -1038,17 +1043,13 @@ void LiveInterval::print(raw_ostream &OS) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void LiveRange::dump() const {
-  dbgs() << *this << '\n';
-}
+LLVM_DUMP_METHOD void LiveRange::dump() const { dbgs() << *this << '\n'; }
 
 LLVM_DUMP_METHOD void LiveInterval::SubRange::dump() const {
   dbgs() << *this << '\n';
 }
 
-LLVM_DUMP_METHOD void LiveInterval::dump() const {
-  dbgs() << *this << '\n';
-}
+LLVM_DUMP_METHOD void LiveInterval::dump() const { dbgs() << *this << '\n'; }
 #endif
 
 #ifndef NDEBUG
@@ -1132,8 +1133,7 @@ void LiveRangeUpdater::print(raw_ostream &OS) const {
   }
   assert(LR && "Can't have null LR in dirty updater.");
   OS << " updater with gap = " << (ReadI - WriteI)
-     << ", last start = " << LastStart
-     << ":\n  Area 1:";
+     << ", last start = " << LastStart << ":\n  Area 1:";
   for (const auto &S : make_range(LR->begin(), WriteI))
     OS << ' ' << S;
   OS << "\n  Spills:";
@@ -1145,9 +1145,7 @@ void LiveRangeUpdater::print(raw_ostream &OS) const {
   OS << '\n';
 }
 
-LLVM_DUMP_METHOD void LiveRangeUpdater::dump() const {
-  print(errs());
-}
+LLVM_DUMP_METHOD void LiveRangeUpdater::dump() const { print(errs()); }
 #endif
 
 // Determine if A and B should be coalesced.
@@ -1372,7 +1370,7 @@ void ConnectedVNInfoEqClasses::Distribute(LiveInterval &LI, LiveInterval *LIV[],
   if (LI.hasSubRanges()) {
     unsigned NumComponents = EqClass.getNumClasses();
     SmallVector<unsigned, 8> VNIMapping;
-    SmallVector<LiveInterval::SubRange*, 8> SubRanges;
+    SmallVector<LiveInterval::SubRange *, 8> SubRanges;
     BumpPtrAllocator &Allocator = LIS.getVNInfoAllocator();
     for (LiveInterval::SubRange &SR : LI.subranges()) {
       // Create new subranges in the split intervals and construct a mapping
@@ -1381,7 +1379,7 @@ void ConnectedVNInfoEqClasses::Distribute(LiveInterval &LI, LiveInterval *LIV[],
       VNIMapping.clear();
       VNIMapping.reserve(NumValNos);
       SubRanges.clear();
-      SubRanges.resize(NumComponents-1, nullptr);
+      SubRanges.resize(NumComponents - 1, nullptr);
       for (unsigned I = 0; I < NumValNos; ++I) {
         const VNInfo &VNI = *SR.valnos[I];
         unsigned ComponentNum;
@@ -1389,12 +1387,12 @@ void ConnectedVNInfoEqClasses::Distribute(LiveInterval &LI, LiveInterval *LIV[],
           ComponentNum = 0;
         } else {
           const VNInfo *MainRangeVNI = LI.getVNInfoAt(VNI.def);
-          assert(MainRangeVNI != nullptr
-                 && "SubRange def must have corresponding main range def");
+          assert(MainRangeVNI != nullptr &&
+                 "SubRange def must have corresponding main range def");
           ComponentNum = getEqClass(MainRangeVNI);
-          if (ComponentNum > 0 && SubRanges[ComponentNum-1] == nullptr) {
-            SubRanges[ComponentNum-1]
-              = LIV[ComponentNum-1]->createSubRange(Allocator, SR.LaneMask);
+          if (ComponentNum > 0 && SubRanges[ComponentNum - 1] == nullptr) {
+            SubRanges[ComponentNum - 1] =
+                LIV[ComponentNum - 1]->createSubRange(Allocator, SR.LaneMask);
           }
         }
         VNIMapping.push_back(ComponentNum);
diff --git a/llvm/lib/CodeGen/LiveIntervalUnion.cpp b/llvm/lib/CodeGen/LiveIntervalUnion.cpp
index 11a4ecf0bef9dfe..c82b7300aa54e38 100644
--- a/llvm/lib/CodeGen/LiveIntervalUnion.cpp
+++ b/llvm/lib/CodeGen/LiveIntervalUnion.cpp
@@ -80,8 +80,8 @@ void LiveIntervalUnion::extract(const LiveInterval &VirtReg,
   }
 }
 
-void
-LiveIntervalUnion::print(raw_ostream &OS, const TargetRegisterInfo *TRI) const {
+void LiveIntervalUnion::print(raw_ostream &OS,
+                              const TargetRegisterInfo *TRI) const {
   if (empty()) {
     OS << " empty\n";
     return;
@@ -95,11 +95,11 @@ LiveIntervalUnion::print(raw_ostream &OS, const TargetRegisterInfo *TRI) const {
 
 #ifndef NDEBUG
 // Verify the live intervals in this union and add them to the visited set.
-void LiveIntervalUnion::verify(LiveVirtRegBitSet& VisitedVRegs) {
+void LiveIntervalUnion::verify(LiveVirtRegBitSet &VisitedVRegs) {
   for (SegmentIter SI = Segments.begin(); SI.valid(); ++SI)
     VisitedVRegs.set(SI.value()->reg());
 }
-#endif //!NDEBUG
+#endif //! NDEBUG
 
 const LiveInterval *LiveIntervalUnion::getOneVReg() const {
   if (empty())
@@ -198,10 +198,10 @@ void LiveIntervalUnion::Array::init(LiveIntervalUnion::Allocator &Alloc,
     return;
   clear();
   Size = NSize;
-  LIUs = static_cast<LiveIntervalUnion*>(
-      safe_malloc(sizeof(LiveIntervalUnion)*NSize));
+  LIUs = static_cast<LiveIntervalUnion *>(
+      safe_malloc(sizeof(LiveIntervalUnion) * NSize));
   for (unsigned i = 0; i != Size; ++i)
-    new(LIUs + i) LiveIntervalUnion(Alloc);
+    new (LIUs + i) LiveIntervalUnion(Alloc);
 }
 
 void LiveIntervalUnion::Array::clear() {
@@ -210,6 +210,6 @@ void LiveIntervalUnion::Array::clear() {
   for (unsigned i = 0; i != Size; ++i)
     LIUs[i].~LiveIntervalUnion();
   free(LIUs);
-  Size =  0;
+  Size = 0;
   LIUs = nullptr;
 }
diff --git a/llvm/lib/CodeGen/LiveIntervals.cpp b/llvm/lib/CodeGen/LiveIntervals.cpp
index da55e7f7284b364..771f89513d70de2 100644
--- a/llvm/lib/CodeGen/LiveIntervals.cpp
+++ b/llvm/lib/CodeGen/LiveIntervals.cpp
@@ -63,13 +63,13 @@ INITIALIZE_PASS_BEGIN(LiveIntervals, "liveintervals", "Live Interval Analysis",
                       false, false)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
-INITIALIZE_PASS_END(LiveIntervals, "liveintervals",
-                "Live Interval Analysis", false, false)
+INITIALIZE_PASS_END(LiveIntervals, "liveintervals", "Live Interval Analysis",
+                    false, false)
 
 #ifndef NDEBUG
 static cl::opt<bool> EnablePrecomputePhysRegs(
-  "precompute-phys-liveness", cl::Hidden,
-  cl::desc("Eagerly compute live intervals for all physreg units."));
+    "precompute-phys-liveness", cl::Hidden,
+    cl::desc("Eagerly compute live intervals for all physreg units."));
 #else
 static bool EnablePrecomputePhysRegs = false;
 #endif // NDEBUG
@@ -145,7 +145,7 @@ bool LiveIntervals::runOnMachineFunction(MachineFunction &fn) {
   return false;
 }
 
-void LiveIntervals::print(raw_ostream &OS, const Module* ) const {
+void LiveIntervals::print(raw_ostream &OS, const Module *) const {
   OS << "********** INTERVALS **********\n";
 
   // Dump the regunits.
@@ -174,9 +174,7 @@ void LiveIntervals::printInstrs(raw_ostream &OS) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void LiveIntervals::dumpInstrs() const {
-  printInstrs(dbgs());
-}
+LLVM_DUMP_METHOD void LiveIntervals::dumpInstrs() const { printInstrs(dbgs()); }
 #endif
 
 LiveInterval *LiveIntervals::createInterval(Register reg) {
@@ -201,7 +199,7 @@ void LiveIntervals::computeVirtRegs() {
     LiveInterval &LI = createEmptyInterval(Reg);
     bool NeedSplit = computeVirtRegInterval(LI);
     if (NeedSplit) {
-      SmallVector<LiveInterval*, 8> SplitLIs;
+      SmallVector<LiveInterval *, 8> SplitLIs;
       splitSeparateComponents(LI, SplitLIs);
     }
   }
@@ -350,8 +348,9 @@ void LiveIntervals::computeLiveInRegUnits() {
     computeRegUnitRange(*RegUnitRanges[Unit], Unit);
 }
 
-static void createSegmentsForValues(LiveRange &LR,
-    iterator_range<LiveInterval::vni_iterator> VNIs) {
+static void
+createSegmentsForValues(LiveRange &LR,
+                        iterator_range<LiveInterval::vni_iterator> VNIs) {
   for (VNInfo *VNI : VNIs) {
     if (VNI->isUnused())
       continue;
@@ -364,12 +363,12 @@ void LiveIntervals::extendSegmentsToUses(LiveRange &Segments,
                                          ShrinkToUsesWorkList &WorkList,
                                          Register Reg, LaneBitmask LaneMask) {
   // Keep track of the PHIs that are in use.
-  SmallPtrSet<VNInfo*, 8> UsedPHIs;
+  SmallPtrSet<VNInfo *, 8> UsedPHIs;
   // Blocks that have already been added to WorkList as live-out.
-  SmallPtrSet<const MachineBasicBlock*, 16> LiveOut;
+  SmallPtrSet<const MachineBasicBlock *, 16> LiveOut;
 
-  auto getSubRange = [](const LiveInterval &I, LaneBitmask M)
-        -> const LiveRange& {
+  auto getSubRange = [](const LiveInterval &I,
+                        LaneBitmask M) -> const LiveRange & {
     if (M.none())
       return I;
     for (const LiveInterval::SubRange &SR : I.subranges()) {
@@ -431,7 +430,7 @@ void LiveIntervals::extendSegmentsToUses(LiveRange &Segments,
         // by <undef>s for this live range.
         assert(LaneMask.any() &&
                "Missing value out of predecessor for main range");
-        SmallVector<SlotIndex,8> Undefs;
+        SmallVector<SlotIndex, 8> Undefs;
         LI.computeSubRangeUndefs(Undefs, LaneMask, *MRI, *Indexes);
         assert(LiveRangeCalc::isJointlyDominated(Pred, Undefs, *Indexes) &&
                "Missing value out of predecessor for subrange");
@@ -442,7 +441,7 @@ void LiveIntervals::extendSegmentsToUses(LiveRange &Segments,
 }
 
 bool LiveIntervals::shrinkToUses(LiveInterval *li,
-                                 SmallVectorImpl<MachineInstr*> *dead) {
+                                 SmallVectorImpl<MachineInstr *> *dead) {
   LLVM_DEBUG(dbgs() << "Shrink: " << *li << '\n');
   assert(li->reg().isVirtual() && "Can only shrink virtual registers");
 
@@ -500,7 +499,7 @@ bool LiveIntervals::shrinkToUses(LiveInterval *li,
 }
 
 bool LiveIntervals::computeDeadValues(LiveInterval &LI,
-                                      SmallVectorImpl<MachineInstr*> *dead) {
+                                      SmallVectorImpl<MachineInstr *> *dead) {
   bool MayHaveSplitComponents = false;
 
   for (VNInfo *VNI : LI.valnos) {
@@ -612,8 +611,7 @@ void LiveIntervals::shrinkToUses(LiveInterval::SubRange &SR, Register Reg) {
   LLVM_DEBUG(dbgs() << "Shrunk: " << SR << '\n');
 }
 
-void LiveIntervals::extendToIndices(LiveRange &LR,
-                                    ArrayRef<SlotIndex> Indices,
+void LiveIntervals::extendToIndices(LiveRange &LR, ArrayRef<SlotIndex> Indices,
                                     ArrayRef<SlotIndex> Undefs) {
   assert(LICalc && "LICalc not initialized.");
   LICalc->reset(MF, getSlotIndexes(), DomTree, &getVNInfoAllocator());
@@ -634,22 +632,25 @@ void LiveIntervals::pruneValue(LiveRange &LR, SlotIndex Kill,
   // If VNI isn't live out from KillMBB, the value is trivially pruned.
   if (LRQ.endPoint() < MBBEnd) {
     LR.removeSegment(Kill, LRQ.endPoint());
-    if (EndPoints) EndPoints->push_back(LRQ.endPoint());
+    if (EndPoints)
+      EndPoints->push_back(LRQ.endPoint());
     return;
   }
 
   // VNI is live out of KillMBB.
   LR.removeSegment(Kill, MBBEnd);
-  if (EndPoints) EndPoints->push_back(MBBEnd);
+  if (EndPoints)
+    EndPoints->push_back(MBBEnd);
 
   // Find all blocks that are reachable from KillMBB without leaving VNI's live
   // range. It is possible that KillMBB itself is reachable, so start a DFS
   // from each successor.
-  using VisitedTy = df_iterator_default_set<MachineBasicBlock*,9>;
+  using VisitedTy = df_iterator_default_set<MachineBasicBlock *, 9>;
   VisitedTy Visited;
   for (MachineBasicBlock *Succ : KillMBB->successors()) {
-    for (df_ext_iterator<MachineBasicBlock*, VisitedTy>
-         I = df_ext_begin(Succ, Visited), E = df_ext_end(Succ, Visited);
+    for (df_ext_iterator<MachineBasicBlock *, VisitedTy>
+             I = df_ext_begin(Succ, Visited),
+             E = df_ext_end(Succ, Visited);
          I != E;) {
       MachineBasicBlock *MBB = *I;
 
@@ -666,14 +667,16 @@ void LiveIntervals::pruneValue(LiveRange &LR, SlotIndex Kill,
       // Prune the search if VNI is killed in MBB.
       if (LRQ.endPoint() < MBBEnd) {
         LR.removeSegment(MBBStart, LRQ.endPoint());
-        if (EndPoints) EndPoints->push_back(LRQ.endPoint());
+        if (EndPoints)
+          EndPoints->push_back(LRQ.endPoint());
         I.skipChildren();
         continue;
       }
 
       // VNI is live through MBB.
       LR.removeSegment(MBBStart, MBBEnd);
-      if (EndPoints) EndPoints->push_back(MBBEnd);
+      if (EndPoints)
+        EndPoints->push_back(MBBEnd);
       ++I;
     }
   }
@@ -685,7 +688,7 @@ void LiveIntervals::pruneValue(LiveRange &LR, SlotIndex Kill,
 
 void LiveIntervals::addKillFlags(const VirtRegMap *VRM) {
   // Keep track of regunit ranges.
-  SmallVector<std::pair<const LiveRange*, LiveRange::const_iterator>, 8> RU;
+  SmallVector<std::pair<const LiveRange *, LiveRange::const_iterator>, 8> RU;
 
   for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
     Register Reg = Register::index2VirtReg(i);
@@ -800,13 +803,13 @@ void LiveIntervals::addKillFlags(const VirtRegMap *VRM) {
 
       MI->addRegisterKilled(Reg, nullptr);
       continue;
-CancelKill:
+    CancelKill:
       MI->clearRegisterKills(Reg, nullptr);
     }
   }
 }
 
-MachineBasicBlock*
+MachineBasicBlock *
 LiveIntervals::intervalIsInOneMBB(const LiveInterval &LI) const {
   assert(!LI.empty() && "LiveInterval is empty.");
 
@@ -832,8 +835,8 @@ LiveIntervals::intervalIsInOneMBB(const LiveInterval &LI) const {
   return MBB1 == MBB2 ? MBB1 : nullptr;
 }
 
-bool
-LiveIntervals::hasPHIKill(const LiveInterval &LI, const VNInfo *VNI) const {
+bool LiveIntervals::hasPHIKill(const LiveInterval &LI,
+                               const VNInfo *VNI) const {
   for (const VNInfo *PHI : LI.valnos) {
     if (PHI->isUnused() || !PHI->isPHIDef())
       continue;
@@ -902,7 +905,7 @@ bool LiveIntervals::checkRegMaskInterference(const LiveInterval &LI,
 
   // Use a smaller arrays for local live ranges.
   ArrayRef<SlotIndex> Slots;
-  ArrayRef<const uint32_t*> Bits;
+  ArrayRef<const uint32_t *> Bits;
   if (MachineBasicBlock *MBB = intervalIsInOneMBB(LI)) {
     Slots = getRegMaskSlotsInBlock(MBB->getNumber());
     Bits = getRegMaskBitsInBlock(MBB->getNumber());
@@ -923,14 +926,14 @@ bool LiveIntervals::checkRegMaskInterference(const LiveInterval &LI,
   bool Found = false;
   // Utility to union regmasks.
   auto unionBitMask = [&](unsigned Idx) {
-      if (!Found) {
-        // This is the first overlap. Initialize UsableRegs to all ones.
-        UsableRegs.clear();
-        UsableRegs.resize(TRI->getNumRegs(), true);
-        Found = true;
-      }
-      // Remove usable registers clobbered by this mask.
-      UsableRegs.clearBitsNotInMask(Bits[Idx]);
+    if (!Found) {
+      // This is the first overlap. Initialize UsableRegs to all ones.
+      UsableRegs.clear();
+      UsableRegs.resize(TRI->getNumRegs(), true);
+      Found = true;
+    }
+    // Remove usable registers clobbered by this mask.
+    UsableRegs.clearBitsNotInMask(Bits[Idx]);
   };
   while (true) {
     assert(*SlotI >= LiveI->start);
@@ -966,20 +969,20 @@ bool LiveIntervals::checkRegMaskInterference(const LiveInterval &LI,
 /// Toolkit used by handleMove to trim or extend live intervals.
 class LiveIntervals::HMEditor {
 private:
-  LiveIntervals& LIS;
-  const MachineRegisterInfo& MRI;
-  const TargetRegisterInfo& TRI;
+  LiveIntervals &LIS;
+  const MachineRegisterInfo &MRI;
+  const TargetRegisterInfo &TRI;
   SlotIndex OldIdx;
   SlotIndex NewIdx;
-  SmallPtrSet<LiveRange*, 8> Updated;
+  SmallPtrSet<LiveRange *, 8> Updated;
   bool UpdateFlags;
 
 public:
-  HMEditor(LiveIntervals& LIS, const MachineRegisterInfo& MRI,
-           const TargetRegisterInfo& TRI,
-           SlotIndex OldIdx, SlotIndex NewIdx, bool UpdateFlags)
-    : LIS(LIS), MRI(MRI), TRI(TRI), OldIdx(OldIdx), NewIdx(NewIdx),
-      UpdateFlags(UpdateFlags) {}
+  HMEditor(LiveIntervals &LIS, const MachineRegisterInfo &MRI,
+           const TargetRegisterInfo &TRI, SlotIndex OldIdx, SlotIndex NewIdx,
+           bool UpdateFlags)
+      : LIS(LIS), MRI(MRI), TRI(TRI), OldIdx(OldIdx), NewIdx(NewIdx),
+        UpdateFlags(UpdateFlags) {}
 
   // FIXME: UpdateFlags is a workaround that creates live intervals for all
   // physregs, even those that aren't needed for regalloc, in order to update
@@ -1115,7 +1118,7 @@ class LiveIntervals::HMEditor {
         // If we are here then OldIdx was just a use but not a def. We only have
         // to ensure liveness extends to NewIdx.
         LiveRange::iterator NewIdxIn =
-          LR.advanceTo(Next, NewIdx.getBaseIndex());
+            LR.advanceTo(Next, NewIdx.getBaseIndex());
         // Extend the segment before NewIdx if necessary.
         if (NewIdxIn == E ||
             !SlotIndex::isEarlierInstr(NewIdxIn->start, NewIdx)) {
@@ -1163,8 +1166,8 @@ class LiveIntervals::HMEditor {
     // NewIdx.
 
     // Is there an existing Def at NewIdx?
-    LiveRange::iterator AfterNewIdx
-      = LR.advanceTo(OldIdxOut, NewIdx.getRegSlot());
+    LiveRange::iterator AfterNewIdx =
+        LR.advanceTo(OldIdxOut, NewIdx.getRegSlot());
     bool OldIdxDefIsDead = OldIdxOut->end.isDead();
     if (!OldIdxDefIsDead &&
         SlotIndex::isEarlierInstr(OldIdxOut->end, NewIdxDef)) {
@@ -1199,8 +1202,8 @@ class LiveIntervals::HMEditor {
         std::copy(std::next(OldIdxOut), E, OldIdxOut);
         // The last segment is undefined now, reuse it for a dead def.
         LiveRange::iterator NewSegment = std::prev(E);
-        *NewSegment = LiveRange::Segment(NewIdxDef, NewIdxDef.getDeadSlot(),
-                                         DefVNI);
+        *NewSegment =
+            LiveRange::Segment(NewIdxDef, NewIdxDef.getDeadSlot(), DefVNI);
         DefVNI->def = NewIdxDef;
 
         LiveRange::iterator Prev = std::prev(NewSegment);
@@ -1251,8 +1254,8 @@ class LiveIntervals::HMEditor {
       LiveRange::iterator NewSegment = std::prev(AfterNewIdx);
       VNInfo *NewSegmentVNI = OldIdxVNI;
       NewSegmentVNI->def = NewIdxDef;
-      *NewSegment = LiveRange::Segment(NewIdxDef, NewIdxDef.getDeadSlot(),
-                                       NewSegmentVNI);
+      *NewSegment =
+          LiveRange::Segment(NewIdxDef, NewIdxDef.getDeadSlot(), NewSegmentVNI);
     }
   }
 
@@ -1279,8 +1282,8 @@ class LiveIntervals::HMEditor {
 
       // At this point we have to move OldIdxIn->end back to the nearest
       // previous use or (dead-)def but no further than NewIdx.
-      SlotIndex DefBeforeOldIdx
-        = std::max(OldIdxIn->start.getDeadSlot(),
+      SlotIndex DefBeforeOldIdx =
+          std::max(OldIdxIn->start.getDeadSlot(),
                    NewIdx.getRegSlot(OldIdxIn->end.isEarlyClobber()));
       OldIdxIn->end = findLastUseBefore(DefBeforeOldIdx, Reg, LaneMask);
 
@@ -1341,8 +1344,8 @@ class LiveIntervals::HMEditor {
 
             // Extend to where the previous range started, unless there is
             // another redef first.
-            NewDefEndPoint = std::min(OldIdxIn->start,
-                                      std::next(NewIdxOut)->start);
+            NewDefEndPoint =
+                std::min(OldIdxIn->start, std::next(NewIdxOut)->start);
           }
 
           // Merge the OldIdxIn and OldIdxOut segments into OldIdxOut.
@@ -1360,8 +1363,8 @@ class LiveIntervals::HMEditor {
           LiveRange::iterator Next = std::next(NewSegment);
           if (SlotIndex::isEarlierInstr(Next->start, NewIdx)) {
             // There is no gap between NewSegment and its predecessor.
-            *NewSegment = LiveRange::Segment(Next->start, SplitPos,
-                                             Next->valno);
+            *NewSegment =
+                LiveRange::Segment(Next->start, SplitPos, Next->valno);
 
             *Next = LiveRange::Segment(SplitPos, NewDefEndPoint, OldIdxVNI);
             Next->valno->def = SplitPos;
@@ -1378,9 +1381,9 @@ class LiveIntervals::HMEditor {
           if (OldIdxIn != E && SlotIndex::isEarlierInstr(NewIdx, OldIdxIn->end))
             OldIdxIn->end = NewIdxDef;
         }
-      } else if (OldIdxIn != E
-          && SlotIndex::isEarlierInstr(NewIdxOut->start, NewIdx)
-          && SlotIndex::isEarlierInstr(NewIdx, NewIdxOut->end)) {
+      } else if (OldIdxIn != E &&
+                 SlotIndex::isEarlierInstr(NewIdxOut->start, NewIdx) &&
+                 SlotIndex::isEarlierInstr(NewIdx, NewIdxOut->end)) {
         // OldIdxVNI is a dead def that has been moved into the middle of
         // another value in LR. That can happen when LR is a whole register,
         // but the dead def is a write to a subreg that is dead at NewIdx.
@@ -1395,8 +1398,8 @@ class LiveIntervals::HMEditor {
         // OldIdxVNI as its value number.
         *NewIdxOut = LiveRange::Segment(
             NewIdxOut->start, NewIdxDef.getRegSlot(), NewIdxOut->valno);
-        *(NewIdxOut + 1) = LiveRange::Segment(
-            NewIdxDef.getRegSlot(), (NewIdxOut + 1)->end, OldIdxVNI);
+        *(NewIdxOut + 1) = LiveRange::Segment(NewIdxDef.getRegSlot(),
+                                              (NewIdxOut + 1)->end, OldIdxVNI);
         OldIdxVNI->def = NewIdxDef;
         // Modify subsequent segments to be defined by the moved def OldIdxVNI.
         for (auto *Idx = NewIdxOut + 2; Idx <= OldIdxOut; ++Idx)
@@ -1448,8 +1451,8 @@ class LiveIntervals::HMEditor {
         if (MO.isUndef())
           continue;
         unsigned SubReg = MO.getSubReg();
-        if (SubReg != 0 && LaneMask.any()
-            && (TRI.getSubRegIndexLaneMask(SubReg) & LaneMask).none())
+        if (SubReg != 0 && LaneMask.any() &&
+            (TRI.getSubRegIndexLaneMask(SubReg) & LaneMask).none())
           continue;
 
         const MachineInstr &MI = *MO.getParent();
@@ -1470,7 +1473,7 @@ class LiveIntervals::HMEditor {
     // point to the next instruction after OldIdx, or MBB->end().
     MachineBasicBlock::iterator MII = MBB->end();
     if (MachineInstr *MI = Indexes->getInstructionFromIndex(
-                           Indexes->getNextNonNullIndex(OldIdx)))
+            Indexes->getNextNonNullIndex(OldIdx)))
       if (MI->getParent() == MBB)
         MII = MI;
 
@@ -1606,8 +1609,8 @@ void LiveIntervals::repairOldRegInRange(const MachineBasicBlock::iterator Begin,
 
         if (!lastUseIdx.isValid()) {
           VNInfo *VNI = LR.getNextValue(instrIdx.getRegSlot(), VNInfoAllocator);
-          LiveRange::Segment S(instrIdx.getRegSlot(),
-                               instrIdx.getDeadSlot(), VNI);
+          LiveRange::Segment S(instrIdx.getRegSlot(), instrIdx.getDeadSlot(),
+                               VNI);
           LII = LR.addSegment(S);
         } else if (LII->start != instrIdx.getRegSlot()) {
           VNInfo *VNI = LR.getNextValue(instrIdx.getRegSlot(), VNInfoAllocator);
@@ -1636,11 +1639,10 @@ void LiveIntervals::repairOldRegInRange(const MachineBasicBlock::iterator Begin,
     LR.removeSegment(*LII, true);
 }
 
-void
-LiveIntervals::repairIntervalsInRange(MachineBasicBlock *MBB,
-                                      MachineBasicBlock::iterator Begin,
-                                      MachineBasicBlock::iterator End,
-                                      ArrayRef<Register> OrigRegs) {
+void LiveIntervals::repairIntervalsInRange(MachineBasicBlock *MBB,
+                                           MachineBasicBlock::iterator Begin,
+                                           MachineBasicBlock::iterator End,
+                                           ArrayRef<Register> OrigRegs) {
   // Find anchor points, which are at the beginning/end of blocks or at
   // instructions that already have indexes.
   while (Begin != MBB->begin() && !Indexes->hasIndex(*std::prev(Begin)))
@@ -1725,8 +1727,8 @@ void LiveIntervals::removeVRegDefAt(LiveInterval &LI, SlotIndex Pos) {
   LI.removeEmptySubRanges();
 }
 
-void LiveIntervals::splitSeparateComponents(LiveInterval &LI,
-    SmallVectorImpl<LiveInterval*> &SplitLIs) {
+void LiveIntervals::splitSeparateComponents(
+    LiveInterval &LI, SmallVectorImpl<LiveInterval *> &SplitLIs) {
   ConnectedVNInfoEqClasses ConEQ(*this);
   unsigned NumComp = ConEQ.Classify(LI);
   if (NumComp <= 1)
diff --git a/llvm/lib/CodeGen/LivePhysRegs.cpp b/llvm/lib/CodeGen/LivePhysRegs.cpp
index 96380d408482573..96e3b9474f5e80a 100644
--- a/llvm/lib/CodeGen/LivePhysRegs.cpp
+++ b/llvm/lib/CodeGen/LivePhysRegs.cpp
@@ -23,13 +23,13 @@
 #include "llvm/Support/raw_ostream.h"
 using namespace llvm;
 
-
 /// Remove all registers from the set that get clobbered by the register
 /// mask.
 /// The clobbers set will be the list of live registers clobbered
 /// by the regmask.
-void LivePhysRegs::removeRegsInMask(const MachineOperand &MO,
-    SmallVectorImpl<std::pair<MCPhysReg, const MachineOperand*>> *Clobbers) {
+void LivePhysRegs::removeRegsInMask(
+    const MachineOperand &MO,
+    SmallVectorImpl<std::pair<MCPhysReg, const MachineOperand *>> *Clobbers) {
   RegisterSet::iterator LRI = LiveRegs.begin();
   while (LRI != LiveRegs.end()) {
     if (MO.clobbersPhysReg(*LRI)) {
@@ -77,8 +77,9 @@ void LivePhysRegs::stepBackward(const MachineInstr &MI) {
 /// killed-uses, add defs. This is the not recommended way, because it depends
 /// on accurate kill flags. If possible use stepBackward() instead of this
 /// function.
-void LivePhysRegs::stepForward(const MachineInstr &MI,
-    SmallVectorImpl<std::pair<MCPhysReg, const MachineOperand*>> &Clobbers) {
+void LivePhysRegs::stepForward(
+    const MachineInstr &MI,
+    SmallVectorImpl<std::pair<MCPhysReg, const MachineOperand *>> &Clobbers) {
   // Remove killed registers from the set.
   for (ConstMIBundleOperands O(MI); O.isValid(); ++O) {
     if (O->isReg()) {
@@ -133,9 +134,7 @@ void LivePhysRegs::print(raw_ostream &OS) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void LivePhysRegs::dump() const {
-  dbgs() << "  " << *this;
-}
+LLVM_DUMP_METHOD void LivePhysRegs::dump() const { dbgs() << "  " << *this; }
 #endif
 
 bool LivePhysRegs::available(const MachineRegisterInfo &MRI,
diff --git a/llvm/lib/CodeGen/LiveRangeCalc.cpp b/llvm/lib/CodeGen/LiveRangeCalc.cpp
index f7d9e5c44ac2e54..4fc8c7bb9fe4d2d 100644
--- a/llvm/lib/CodeGen/LiveRangeCalc.cpp
+++ b/llvm/lib/CodeGen/LiveRangeCalc.cpp
@@ -46,10 +46,8 @@ void LiveRangeCalc::resetLiveOutMap() {
   Map.resize(NumBlocks);
 }
 
-void LiveRangeCalc::reset(const MachineFunction *mf,
-                          SlotIndexes *SI,
-                          MachineDominatorTree *MDT,
-                          VNInfo::Allocator *VNIA) {
+void LiveRangeCalc::reset(const MachineFunction *mf, SlotIndexes *SI,
+                          MachineDominatorTree *MDT, VNInfo::Allocator *VNIA) {
   MF = mf;
   MRI = &MF->getRegInfo();
   Indexes = SI;
@@ -220,7 +218,8 @@ bool LiveRangeCalc::findReachingDefs(LiveRange &LR, MachineBasicBlock &UseMBB,
     if (Register::isPhysicalRegister(PhysReg)) {
       const TargetRegisterInfo *TRI = MRI->getTargetRegisterInfo();
       bool IsLiveIn = MBB->isLiveIn(PhysReg);
-      for (MCRegAliasIterator Alias(PhysReg, TRI, false); !IsLiveIn && Alias.isValid(); ++Alias)
+      for (MCRegAliasIterator Alias(PhysReg, TRI, false);
+           !IsLiveIn && Alias.isValid(); ++Alias)
         IsLiveIn = MBB->isLiveIn(*Alias);
       if (!IsLiveIn) {
         MBB->getParent()->verify();
@@ -234,39 +233,39 @@ bool LiveRangeCalc::findReachingDefs(LiveRange &LR, MachineBasicBlock &UseMBB,
     FoundUndef |= MBB->pred_empty();
 
     for (MachineBasicBlock *Pred : MBB->predecessors()) {
-       // Is this a known live-out block?
-       if (Seen.test(Pred->getNumber())) {
-         if (VNInfo *VNI = Map[Pred].first) {
-           if (TheVNI && TheVNI != VNI)
-             UniqueVNI = false;
-           TheVNI = VNI;
-         }
-         continue;
-       }
-
-       SlotIndex Start, End;
-       std::tie(Start, End) = Indexes->getMBBRange(Pred);
-
-       // First time we see Pred.  Try to determine the live-out value, but set
-       // it as null if Pred is live-through with an unknown value.
-       auto EP = LR.extendInBlock(Undefs, Start, End);
-       VNInfo *VNI = EP.first;
-       FoundUndef |= EP.second;
-       setLiveOutValue(Pred, EP.second ? &UndefVNI : VNI);
-       if (VNI) {
-         if (TheVNI && TheVNI != VNI)
-           UniqueVNI = false;
-         TheVNI = VNI;
-       }
-       if (VNI || EP.second)
-         continue;
-
-       // No, we need a live-in value for Pred as well
-       if (Pred != &UseMBB)
-         WorkList.push_back(Pred->getNumber());
-       else
-          // Loopback to UseMBB, so value is really live through.
-         Use = SlotIndex();
+      // Is this a known live-out block?
+      if (Seen.test(Pred->getNumber())) {
+        if (VNInfo *VNI = Map[Pred].first) {
+          if (TheVNI && TheVNI != VNI)
+            UniqueVNI = false;
+          TheVNI = VNI;
+        }
+        continue;
+      }
+
+      SlotIndex Start, End;
+      std::tie(Start, End) = Indexes->getMBBRange(Pred);
+
+      // First time we see Pred.  Try to determine the live-out value, but set
+      // it as null if Pred is live-through with an unknown value.
+      auto EP = LR.extendInBlock(Undefs, Start, End);
+      VNInfo *VNI = EP.first;
+      FoundUndef |= EP.second;
+      setLiveOutValue(Pred, EP.second ? &UndefVNI : VNI);
+      if (VNI) {
+        if (TheVNI && TheVNI != VNI)
+          UniqueVNI = false;
+        TheVNI = VNI;
+      }
+      if (VNI || EP.second)
+        continue;
+
+      // No, we need a live-in value for Pred as well
+      if (Pred != &UseMBB)
+        WorkList.push_back(Pred->getNumber());
+      else
+        // Loopback to UseMBB, so value is really live through.
+        Use = SlotIndex();
     }
   }
 
@@ -362,7 +361,7 @@ void LiveRangeCalc::updateSSA() {
         if (IDomValue.first && IDomValue.first != &UndefVNI &&
             !IDomValue.second) {
           Map[IDom->getBlock()].second = IDomValue.second =
-            DomTree->getNode(Indexes->getMBBFromIndex(IDomValue.first->def));
+              DomTree->getNode(Indexes->getMBBFromIndex(IDomValue.first->def));
         }
 
         for (MachineBasicBlock *Pred : MBB->predecessors()) {
@@ -377,7 +376,7 @@ void LiveRangeCalc::updateSSA() {
           // Cache the DomTree node that defined the value.
           if (!Value.second)
             Value.second =
-              DomTree->getNode(Indexes->getMBBFromIndex(Value.first->def));
+                DomTree->getNode(Indexes->getMBBFromIndex(Value.first->def));
 
           // This predecessor is carrying something other than IDomValue.
           // It could be because IDomValue hasn't propagated yet, or it could be
diff --git a/llvm/lib/CodeGen/LiveRangeEdit.cpp b/llvm/lib/CodeGen/LiveRangeEdit.cpp
index ff49e080090c2bd..d86daf3a4fa8ceb 100644
--- a/llvm/lib/CodeGen/LiveRangeEdit.cpp
+++ b/llvm/lib/CodeGen/LiveRangeEdit.cpp
@@ -24,12 +24,12 @@ using namespace llvm;
 
 #define DEBUG_TYPE "regalloc"
 
-STATISTIC(NumDCEDeleted,        "Number of instructions deleted by DCE");
-STATISTIC(NumDCEFoldedLoads,    "Number of single use loads folded after DCE");
-STATISTIC(NumFracRanges,        "Number of live ranges fractured by DCE");
+STATISTIC(NumDCEDeleted, "Number of instructions deleted by DCE");
+STATISTIC(NumDCEFoldedLoads, "Number of single use loads folded after DCE");
+STATISTIC(NumFracRanges, "Number of live ranges fractured by DCE");
 STATISTIC(NumReMaterialization, "Number of instructions rematerialized");
 
-void LiveRangeEdit::Delegate::anchor() { }
+void LiveRangeEdit::Delegate::anchor() {}
 
 LiveInterval &LiveRangeEdit::createEmptyIntervalFrom(Register OldReg,
                                                      bool createSubRanges) {
@@ -205,7 +205,7 @@ void LiveRangeEdit::eraseVirtReg(Register Reg) {
 }
 
 bool LiveRangeEdit::foldAsLoad(LiveInterval *LI,
-                               SmallVectorImpl<MachineInstr*> &Dead) {
+                               SmallVectorImpl<MachineInstr *> &Dead) {
   MachineInstr *DefMI = nullptr, *UseMI = nullptr;
 
   // Check that there is a single def and a single use.
@@ -380,10 +380,10 @@ void LiveRangeEdit::eliminateDeadDef(MachineInstr *MI, ToShrinkSet &ToShrink) {
     MI->setDesc(TII.get(TargetOpcode::KILL));
     // Remove all operands that aren't physregs.
     for (unsigned i = MI->getNumOperands(); i; --i) {
-      const MachineOperand &MO = MI->getOperand(i-1);
+      const MachineOperand &MO = MI->getOperand(i - 1);
       if (MO.isReg() && MO.getReg().isPhysical())
         continue;
-      MI->removeOperand(i-1);
+      MI->removeOperand(i - 1);
     }
     LLVM_DEBUG(dbgs() << "Converted physregs to:\t" << *MI);
   } else {
@@ -466,7 +466,7 @@ void LiveRangeEdit::eliminateDeadDefs(SmallVectorImpl<MachineInstr *> &Dead,
 
     // LI may have been separated, create new intervals.
     LI->RenumberValues();
-    SmallVector<LiveInterval*, 8> SplitLIs;
+    SmallVector<LiveInterval *, 8> SplitLIs;
     LIS.splitSeparateComponents(*LI, SplitLIs);
     if (!SplitLIs.empty())
       ++NumFracRanges;
@@ -486,8 +486,7 @@ void LiveRangeEdit::eliminateDeadDefs(SmallVectorImpl<MachineInstr *> &Dead,
 
 // Keep track of new virtual registers created via
 // MachineRegisterInfo::createVirtualRegister.
-void
-LiveRangeEdit::MRI_NoteNewVirtualRegister(Register VReg) {
+void LiveRangeEdit::MRI_NoteNewVirtualRegister(Register VReg) {
   if (VRM)
     VRM->grow();
 
diff --git a/llvm/lib/CodeGen/LiveRangeUtils.h b/llvm/lib/CodeGen/LiveRangeUtils.h
index ada5c5be484a399..caeeac6bd7fa364 100644
--- a/llvm/lib/CodeGen/LiveRangeUtils.h
+++ b/llvm/lib/CodeGen/LiveRangeUtils.h
@@ -22,7 +22,7 @@ namespace llvm {
 /// created live ranges \p SplitLRs. \p VNIClasses maps each value number in \p
 /// LR to 0 meaning it should stay or to 1..N meaning it should go to a specific
 /// live range in the \p SplitLRs array.
-template<typename LiveRangeT, typename EqClassesT>
+template <typename LiveRangeT, typename EqClassesT>
 static void DistributeRange(LiveRangeT &LR, LiveRangeT *SplitLRs[],
                             EqClassesT VNIClasses) {
   // Move segments to new intervals.
@@ -31,9 +31,10 @@ static void DistributeRange(LiveRangeT &LR, LiveRangeT *SplitLRs[],
     ++J;
   for (typename LiveRangeT::iterator I = J; I != E; ++I) {
     if (unsigned eq = VNIClasses[I->valno->id]) {
-      assert((SplitLRs[eq-1]->empty() || SplitLRs[eq-1]->expiredAt(I->start)) &&
+      assert((SplitLRs[eq - 1]->empty() ||
+              SplitLRs[eq - 1]->expiredAt(I->start)) &&
              "New intervals should be empty");
-      SplitLRs[eq-1]->segments.push_back(*I);
+      SplitLRs[eq - 1]->segments.push_back(*I);
     } else
       *J++ = *I;
   }
@@ -46,8 +47,8 @@ static void DistributeRange(LiveRangeT &LR, LiveRangeT *SplitLRs[],
   for (unsigned i = j; i != e; ++i) {
     VNInfo *VNI = LR.getValNumInfo(i);
     if (unsigned eq = VNIClasses[i]) {
-      VNI->id = SplitLRs[eq-1]->getNumValNums();
-      SplitLRs[eq-1]->valnos.push_back(VNI);
+      VNI->id = SplitLRs[eq - 1]->getNumValNums();
+      SplitLRs[eq - 1]->valnos.push_back(VNI);
     } else {
       VNI->id = j;
       LR.valnos[j++] = VNI;
@@ -56,6 +57,6 @@ static void DistributeRange(LiveRangeT &LR, LiveRangeT *SplitLRs[],
   LR.valnos.resize(j);
 }
 
-} // End llvm namespace
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/LiveRegMatrix.cpp b/llvm/lib/CodeGen/LiveRegMatrix.cpp
index 6df7e5c10862840..cc23d40ffa6be73 100644
--- a/llvm/lib/CodeGen/LiveRegMatrix.cpp
+++ b/llvm/lib/CodeGen/LiveRegMatrix.cpp
@@ -32,16 +32,16 @@ using namespace llvm;
 
 #define DEBUG_TYPE "regalloc"
 
-STATISTIC(NumAssigned   , "Number of registers assigned");
-STATISTIC(NumUnassigned , "Number of registers unassigned");
+STATISTIC(NumAssigned, "Number of registers assigned");
+STATISTIC(NumUnassigned, "Number of registers unassigned");
 
 char LiveRegMatrix::ID = 0;
-INITIALIZE_PASS_BEGIN(LiveRegMatrix, "liveregmatrix",
-                      "Live Register Matrix", false, false)
+INITIALIZE_PASS_BEGIN(LiveRegMatrix, "liveregmatrix", "Live Register Matrix",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
 INITIALIZE_PASS_DEPENDENCY(VirtRegMap)
-INITIALIZE_PASS_END(LiveRegMatrix, "liveregmatrix",
-                    "Live Register Matrix", false, false)
+INITIALIZE_PASS_END(LiveRegMatrix, "liveregmatrix", "Live Register Matrix",
+                    false, false)
 
 LiveRegMatrix::LiveRegMatrix() : MachineFunctionPass(ID) {}
 
@@ -167,11 +167,11 @@ bool LiveRegMatrix::checkRegUnitInterference(const LiveInterval &VirtReg,
     return false;
   CoalescerPair CP(VirtReg.reg(), PhysReg, *TRI);
 
-  bool Result = foreachUnit(TRI, VirtReg, PhysReg, [&](unsigned Unit,
-                                                       const LiveRange &Range) {
-    const LiveRange &UnitRange = LIS->getRegUnit(Unit);
-    return Range.overlaps(UnitRange, CP, *LIS->getSlotIndexes());
-  });
+  bool Result = foreachUnit(
+      TRI, VirtReg, PhysReg, [&](unsigned Unit, const LiveRange &Range) {
+        const LiveRange &UnitRange = LIS->getRegUnit(Unit);
+        return Range.overlaps(UnitRange, CP, *LIS->getSlotIndexes());
+      });
   return Result;
 }
 
diff --git a/llvm/lib/CodeGen/LiveStacks.cpp b/llvm/lib/CodeGen/LiveStacks.cpp
index 8fc5a929d77b21c..fa940a635dd45f9 100644
--- a/llvm/lib/CodeGen/LiveStacks.cpp
+++ b/llvm/lib/CodeGen/LiveStacks.cpp
@@ -21,11 +21,11 @@ using namespace llvm;
 #define DEBUG_TYPE "livestacks"
 
 char LiveStacks::ID = 0;
-INITIALIZE_PASS_BEGIN(LiveStacks, DEBUG_TYPE,
-                "Live Stack Slot Analysis", false, false)
+INITIALIZE_PASS_BEGIN(LiveStacks, DEBUG_TYPE, "Live Stack Slot Analysis", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
-INITIALIZE_PASS_END(LiveStacks, DEBUG_TYPE,
-                "Live Stack Slot Analysis", false, false)
+INITIALIZE_PASS_END(LiveStacks, DEBUG_TYPE, "Live Stack Slot Analysis", false,
+                    false)
 
 char &llvm::LiveStacksID = LiveStacks::ID;
 
@@ -50,8 +50,8 @@ bool LiveStacks::runOnMachineFunction(MachineFunction &MF) {
   return false;
 }
 
-LiveInterval &
-LiveStacks::getOrCreateInterval(int Slot, const TargetRegisterClass *RC) {
+LiveInterval &LiveStacks::getOrCreateInterval(int Slot,
+                                              const TargetRegisterClass *RC) {
   assert(Slot >= 0 && "Spill slot indice must be >= 0");
   SS2IntervalMap::iterator I = S2IMap.find(Slot);
   if (I == S2IMap.end()) {
@@ -70,7 +70,7 @@ LiveStacks::getOrCreateInterval(int Slot, const TargetRegisterClass *RC) {
 }
 
 /// print - Implement the dump method.
-void LiveStacks::print(raw_ostream &OS, const Module*) const {
+void LiveStacks::print(raw_ostream &OS, const Module *) const {
 
   OS << "********** INTERVALS **********\n";
   for (const_iterator I = begin(), E = end(); I != E; ++I) {
diff --git a/llvm/lib/CodeGen/LiveVariables.cpp b/llvm/lib/CodeGen/LiveVariables.cpp
index 9cd74689ba10b31..2acb6bf5c5a7623 100644
--- a/llvm/lib/CodeGen/LiveVariables.cpp
+++ b/llvm/lib/CodeGen/LiveVariables.cpp
@@ -43,12 +43,11 @@ using namespace llvm;
 
 char LiveVariables::ID = 0;
 char &llvm::LiveVariablesID = LiveVariables::ID;
-INITIALIZE_PASS_BEGIN(LiveVariables, "livevars",
-                "Live Variable Analysis", false, false)
+INITIALIZE_PASS_BEGIN(LiveVariables, "livevars", "Live Variable Analysis",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(UnreachableMachineBlockElim)
-INITIALIZE_PASS_END(LiveVariables, "livevars",
-                "Live Variable Analysis", false, false)
-
+INITIALIZE_PASS_END(LiveVariables, "livevars", "Live Variable Analysis", false,
+                    false)
 
 void LiveVariables::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequiredID(UnreachableMachineBlockElimID);
@@ -96,14 +95,15 @@ void LiveVariables::MarkVirtRegAliveInBlock(
   // remove it.
   for (unsigned i = 0, e = VRInfo.Kills.size(); i != e; ++i)
     if (VRInfo.Kills[i]->getParent() == MBB) {
-      VRInfo.Kills.erase(VRInfo.Kills.begin()+i);  // Erase entry
+      VRInfo.Kills.erase(VRInfo.Kills.begin() + i); // Erase entry
       break;
     }
 
-  if (MBB == DefBlock) return;  // Terminate recursion
+  if (MBB == DefBlock)
+    return; // Terminate recursion
 
   if (VRInfo.AliveBlocks.test(BBNum))
-    return;  // We already know the block is live
+    return; // We already know the block is live
 
   // Mark the variable known alive in this bb
   VRInfo.AliveBlocks.set(BBNum);
@@ -197,8 +197,8 @@ LiveVariables::FindLastPartialDef(Register Reg,
       continue;
     unsigned Dist = DistanceMap[Def];
     if (Dist > LastDefDist) {
-      LastDefReg  = SubReg;
-      LastDef     = Def;
+      LastDefReg = SubReg;
+      LastDef = Def;
       LastDefDist = Dist;
     }
   }
@@ -238,8 +238,8 @@ void LiveVariables::HandlePhysRegUse(Register Reg, MachineInstr &MI) {
     MachineInstr *LastPartialDef = FindLastPartialDef(Reg, PartDefRegs);
     // If LastPartialDef is NULL, it must be using a livein register.
     if (LastPartialDef) {
-      LastPartialDef->addOperand(MachineOperand::CreateReg(Reg, true/*IsDef*/,
-                                                           true/*IsImp*/));
+      LastPartialDef->addOperand(
+          MachineOperand::CreateReg(Reg, true /*IsDef*/, true /*IsImp*/));
       PhysRegDef[Reg] = LastPartialDef;
       SmallSet<unsigned, 8> Processed;
       for (MCPhysReg SubReg : TRI->subregs(Reg)) {
@@ -249,9 +249,8 @@ void LiveVariables::HandlePhysRegUse(Register Reg, MachineInstr &MI) {
           continue;
         // This part of Reg was defined before the last partial def. It's killed
         // here.
-        LastPartialDef->addOperand(MachineOperand::CreateReg(SubReg,
-                                                             false/*IsDef*/,
-                                                             true/*IsImp*/));
+        LastPartialDef->addOperand(
+            MachineOperand::CreateReg(SubReg, false /*IsDef*/, true /*IsImp*/));
         PhysRegDef[SubReg] = LastPartialDef;
         for (MCPhysReg SS : TRI->subregs(SubReg))
           Processed.insert(SS);
@@ -260,8 +259,8 @@ void LiveVariables::HandlePhysRegUse(Register Reg, MachineInstr &MI) {
   } else if (LastDef && !PhysRegUse[Reg] &&
              !LastDef->findRegisterDefOperand(Reg))
     // Last def defines the super register, add an implicit def of reg.
-    LastDef->addOperand(MachineOperand::CreateReg(Reg, true/*IsDef*/,
-                                                  true/*IsImp*/));
+    LastDef->addOperand(
+        MachineOperand::CreateReg(Reg, true /*IsDef*/, true /*IsImp*/));
 
   // Remember this use.
   for (MCPhysReg SubReg : TRI->subregs_inclusive(Reg))
@@ -368,8 +367,8 @@ bool LiveVariables::HandlePhysRegKill(Register Reg, MachineInstr *MI) {
         }
       }
       if (NeedDef)
-        PhysRegDef[Reg]->addOperand(MachineOperand::CreateReg(SubReg,
-                                                 true/*IsDef*/, true/*IsImp*/));
+        PhysRegDef[Reg]->addOperand(
+            MachineOperand::CreateReg(SubReg, true /*IsDef*/, true /*IsImp*/));
       MachineInstr *LastSubRef = FindLastRefOrPartRef(SubReg);
       if (LastSubRef)
         LastSubRef->addRegisterKilled(SubReg, TRI, true);
@@ -384,11 +383,11 @@ bool LiveVariables::HandlePhysRegKill(Register Reg, MachineInstr *MI) {
   } else if (LastRefOrPartRef == PhysRegDef[Reg] && LastRefOrPartRef != MI) {
     if (LastPartDef)
       // The last partial def kills the register.
-      LastPartDef->addOperand(MachineOperand::CreateReg(Reg, false/*IsDef*/,
-                                                true/*IsImp*/, true/*IsKill*/));
+      LastPartDef->addOperand(MachineOperand::CreateReg(
+          Reg, false /*IsDef*/, true /*IsImp*/, true /*IsKill*/));
     else {
       MachineOperand *MO =
-        LastRefOrPartRef->findRegisterDefOperand(Reg, false, false, TRI);
+          LastRefOrPartRef->findRegisterDefOperand(Reg, false, false, TRI);
       bool NeedEC = MO->isEarlyClobber() && MO->getReg() != Reg;
       // If the last reference is the last def, then it's not used at all.
       // That is, unless we are currently processing the last reference itself.
@@ -463,7 +462,7 @@ void LiveVariables::HandlePhysRegDef(Register Reg, MachineInstr *MI,
   }
 
   if (MI)
-    Defs.push_back(Reg);  // Remember this def.
+    Defs.push_back(Reg); // Remember this def.
 }
 
 void LiveVariables::UpdatePhysRegDefs(MachineInstr &MI,
@@ -472,7 +471,7 @@ void LiveVariables::UpdatePhysRegDefs(MachineInstr &MI,
     Register Reg = Defs.pop_back_val();
     for (MCPhysReg SubReg : TRI->subregs_inclusive(Reg)) {
       PhysRegDef[SubReg] = &MI;
-      PhysRegUse[SubReg]  = nullptr;
+      PhysRegUse[SubReg] = nullptr;
     }
   }
 }
@@ -616,7 +615,7 @@ bool LiveVariables::runOnMachineFunction(MachineFunction &mf) {
   // register before its uses due to dominance properties of SSA (except for PHI
   // nodes, which are treated as a special case).
   MachineBasicBlock *Entry = &MF->front();
-  df_iterator_default_set<MachineBasicBlock*,16> Visited;
+  df_iterator_default_set<MachineBasicBlock *, 16> Visited;
 
   for (MachineBasicBlock *MBB : depth_first_ext(Entry, Visited)) {
     runOnBlock(MBB, NumRegs);
@@ -759,15 +758,15 @@ void LiveVariables::removeVirtualRegistersKilled(MachineInstr &MI) {
 /// particular, we want to map the variable information of a virtual register
 /// which is used in a PHI node. We map that to the BB the vreg is coming from.
 ///
-void LiveVariables::analyzePHINodes(const MachineFunction& Fn) {
+void LiveVariables::analyzePHINodes(const MachineFunction &Fn) {
   for (const auto &MBB : Fn)
     for (const auto &BBI : MBB) {
       if (!BBI.isPHI())
         break;
       for (unsigned i = 1, e = BBI.getNumOperands(); i != e; i += 2)
         if (BBI.getOperand(i).readsReg())
-          PHIVarInfo[BBI.getOperand(i + 1).getMBB()->getNumber()]
-            .push_back(BBI.getOperand(i).getReg());
+          PHIVarInfo[BBI.getOperand(i + 1).getMBB()->getNumber()].push_back(
+              BBI.getOperand(i).getReg());
     }
 }
 
@@ -784,7 +783,7 @@ bool LiveVariables::VarInfo::isLiveIn(const MachineBasicBlock &MBB,
   if (Def && Def->getParent() == &MBB)
     return false;
 
- // Reg was not defined in MBB, was it killed here?
+  // Reg was not defined in MBB, was it killed here?
   return findKill(&MBB);
 }
 
@@ -813,8 +812,7 @@ bool LiveVariables::isLiveOut(Register Reg, const MachineBasicBlock &MBB) {
 /// addNewBlock - Add a new basic block BB as an empty succcessor to DomBB. All
 /// variables that are live out of DomBB will be marked as passing live through
 /// BB.
-void LiveVariables::addNewBlock(MachineBasicBlock *BB,
-                                MachineBasicBlock *DomBB,
+void LiveVariables::addNewBlock(MachineBasicBlock *BB, MachineBasicBlock *DomBB,
                                 MachineBasicBlock *SuccBB) {
   const unsigned NumNew = BB->getNumber();
 
@@ -827,7 +825,7 @@ void LiveVariables::addNewBlock(MachineBasicBlock *BB,
 
     // All registers used by PHI nodes in SuccBB must be live through BB.
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
-      if (BBI->getOperand(i+1).getMBB() == BB)
+      if (BBI->getOperand(i + 1).getMBB() == BB)
         getVarInfo(BBI->getOperand(i).getReg()).AliveBlocks.set(NumNew);
   }
 
@@ -863,8 +861,7 @@ void LiveVariables::addNewBlock(MachineBasicBlock *BB,
 /// variables that are live out of DomBB will be marked as passing live through
 /// BB. LiveInSets[BB] is *not* updated (because it is not needed during
 /// PHIElimination).
-void LiveVariables::addNewBlock(MachineBasicBlock *BB,
-                                MachineBasicBlock *DomBB,
+void LiveVariables::addNewBlock(MachineBasicBlock *BB, MachineBasicBlock *DomBB,
                                 MachineBasicBlock *SuccBB,
                                 std::vector<SparseBitVector<>> &LiveInSets) {
   const unsigned NumNew = BB->getNumber();
@@ -876,13 +873,11 @@ void LiveVariables::addNewBlock(MachineBasicBlock *BB,
     VI.AliveBlocks.set(NumNew);
   }
   // All registers used by PHI nodes in SuccBB must be live through BB.
-  for (MachineBasicBlock::iterator BBI = SuccBB->begin(),
-         BBE = SuccBB->end();
+  for (MachineBasicBlock::iterator BBI = SuccBB->begin(), BBE = SuccBB->end();
        BBI != BBE && BBI->isPHI(); ++BBI) {
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2)
       if (BBI->getOperand(i + 1).getMBB() == BB &&
           BBI->getOperand(i).readsReg())
-        getVarInfo(BBI->getOperand(i).getReg())
-          .AliveBlocks.set(NumNew);
+        getVarInfo(BBI->getOperand(i).getReg()).AliveBlocks.set(NumNew);
   }
 }
diff --git a/llvm/lib/CodeGen/LocalStackSlotAllocation.cpp b/llvm/lib/CodeGen/LocalStackSlotAllocation.cpp
index e491ed12034d7d7..e7c3a8c428ac635 100644
--- a/llvm/lib/CodeGen/LocalStackSlotAllocation.cpp
+++ b/llvm/lib/CodeGen/LocalStackSlotAllocation.cpp
@@ -47,67 +47,67 @@ STATISTIC(NumReplacements, "Number of frame indices references replaced");
 
 namespace {
 
-  class FrameRef {
-    MachineBasicBlock::iterator MI; // Instr referencing the frame
-    int64_t LocalOffset;            // Local offset of the frame idx referenced
-    int FrameIdx;                   // The frame index
-
-    // Order reference instruction appears in program. Used to ensure
-    // deterministic order when multiple instructions may reference the same
-    // location.
-    unsigned Order;
-
-  public:
-    FrameRef(MachineInstr *I, int64_t Offset, int Idx, unsigned Ord) :
-      MI(I), LocalOffset(Offset), FrameIdx(Idx), Order(Ord) {}
-
-    bool operator<(const FrameRef &RHS) const {
-      return std::tie(LocalOffset, FrameIdx, Order) <
-             std::tie(RHS.LocalOffset, RHS.FrameIdx, RHS.Order);
-    }
+class FrameRef {
+  MachineBasicBlock::iterator MI; // Instr referencing the frame
+  int64_t LocalOffset;            // Local offset of the frame idx referenced
+  int FrameIdx;                   // The frame index
+
+  // Order reference instruction appears in program. Used to ensure
+  // deterministic order when multiple instructions may reference the same
+  // location.
+  unsigned Order;
+
+public:
+  FrameRef(MachineInstr *I, int64_t Offset, int Idx, unsigned Ord)
+      : MI(I), LocalOffset(Offset), FrameIdx(Idx), Order(Ord) {}
+
+  bool operator<(const FrameRef &RHS) const {
+    return std::tie(LocalOffset, FrameIdx, Order) <
+           std::tie(RHS.LocalOffset, RHS.FrameIdx, RHS.Order);
+  }
 
-    MachineBasicBlock::iterator getMachineInstr() const { return MI; }
-    int64_t getLocalOffset() const { return LocalOffset; }
-    int getFrameIndex() const { return FrameIdx; }
-  };
+  MachineBasicBlock::iterator getMachineInstr() const { return MI; }
+  int64_t getLocalOffset() const { return LocalOffset; }
+  int getFrameIndex() const { return FrameIdx; }
+};
 
-  class LocalStackSlotPass: public MachineFunctionPass {
-    SmallVector<int64_t, 16> LocalOffsets;
+class LocalStackSlotPass : public MachineFunctionPass {
+  SmallVector<int64_t, 16> LocalOffsets;
 
-    /// StackObjSet - A set of stack object indexes
-    using StackObjSet = SmallSetVector<int, 8>;
+  /// StackObjSet - A set of stack object indexes
+  using StackObjSet = SmallSetVector<int, 8>;
 
-    void AdjustStackOffset(MachineFrameInfo &MFI, int FrameIdx, int64_t &Offset,
-                           bool StackGrowsDown, Align &MaxAlign);
-    void AssignProtectedObjSet(const StackObjSet &UnassignedObjs,
-                               SmallSet<int, 16> &ProtectedObjs,
-                               MachineFrameInfo &MFI, bool StackGrowsDown,
-                               int64_t &Offset, Align &MaxAlign);
-    void calculateFrameObjectOffsets(MachineFunction &Fn);
-    bool insertFrameReferenceRegisters(MachineFunction &Fn);
+  void AdjustStackOffset(MachineFrameInfo &MFI, int FrameIdx, int64_t &Offset,
+                         bool StackGrowsDown, Align &MaxAlign);
+  void AssignProtectedObjSet(const StackObjSet &UnassignedObjs,
+                             SmallSet<int, 16> &ProtectedObjs,
+                             MachineFrameInfo &MFI, bool StackGrowsDown,
+                             int64_t &Offset, Align &MaxAlign);
+  void calculateFrameObjectOffsets(MachineFunction &Fn);
+  bool insertFrameReferenceRegisters(MachineFunction &Fn);
 
-  public:
-    static char ID; // Pass identification, replacement for typeid
+public:
+  static char ID; // Pass identification, replacement for typeid
 
-    explicit LocalStackSlotPass() : MachineFunctionPass(ID) {
-      initializeLocalStackSlotPassPass(*PassRegistry::getPassRegistry());
-    }
+  explicit LocalStackSlotPass() : MachineFunctionPass(ID) {
+    initializeLocalStackSlotPassPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-  };
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
+};
 
 } // end anonymous namespace
 
 char LocalStackSlotPass::ID = 0;
 
 char &llvm::LocalStackSlotAllocationID = LocalStackSlotPass::ID;
-INITIALIZE_PASS(LocalStackSlotPass, DEBUG_TYPE,
-                "Local Stack Slot Allocation", false, false)
+INITIALIZE_PASS(LocalStackSlotPass, DEBUG_TYPE, "Local Stack Slot Allocation",
+                false, false)
 
 bool LocalStackSlotPass::runOnMachineFunction(MachineFunction &MF) {
   MachineFrameInfo &MFI = MF.getFrameInfo();
@@ -188,7 +188,7 @@ void LocalStackSlotPass::calculateFrameObjectOffsets(MachineFunction &Fn) {
   MachineFrameInfo &MFI = Fn.getFrameInfo();
   const TargetFrameLowering &TFI = *Fn.getSubtarget().getFrameLowering();
   bool StackGrowsDown =
-    TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
+      TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
   int64_t Offset = 0;
   Align MaxAlign;
 
@@ -268,13 +268,11 @@ void LocalStackSlotPass::calculateFrameObjectOffsets(MachineFunction &Fn) {
   MFI.setLocalFrameMaxAlign(MaxAlign);
 }
 
-static inline bool
-lookupCandidateBaseReg(unsigned BaseReg,
-                       int64_t BaseOffset,
-                       int64_t FrameSizeAdjust,
-                       int64_t LocalFrameOffset,
-                       const MachineInstr &MI,
-                       const TargetRegisterInfo *TRI) {
+static inline bool lookupCandidateBaseReg(unsigned BaseReg, int64_t BaseOffset,
+                                          int64_t FrameSizeAdjust,
+                                          int64_t LocalFrameOffset,
+                                          const MachineInstr &MI,
+                                          const TargetRegisterInfo *TRI) {
   // Check if the relative offset from the where the base register references
   // to the target address is in range for the instruction.
   int64_t Offset = FrameSizeAdjust + LocalFrameOffset - BaseOffset;
@@ -293,7 +291,7 @@ bool LocalStackSlotPass::insertFrameReferenceRegisters(MachineFunction &Fn) {
   const TargetRegisterInfo *TRI = Fn.getSubtarget().getRegisterInfo();
   const TargetFrameLowering &TFI = *Fn.getSubtarget().getFrameLowering();
   bool StackGrowsDown =
-    TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
+      TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
 
   // Collect all of the instructions in the block that reference
   // a frame index. Also store the frame index referenced to ease later
@@ -329,7 +327,8 @@ bool LocalStackSlotPass::insertFrameReferenceRegisters(MachineFunction &Fn) {
           int64_t LocalOffset = LocalOffsets[Idx];
           if (!TRI->needsFrameBaseReg(&MI, LocalOffset))
             break;
-          FrameReferenceInsns.push_back(FrameRef(&MI, LocalOffset, Idx, Order++));
+          FrameReferenceInsns.push_back(
+              FrameRef(&MI, LocalOffset, Idx, Order++));
           break;
         }
       }
@@ -346,7 +345,7 @@ bool LocalStackSlotPass::insertFrameReferenceRegisters(MachineFunction &Fn) {
   int64_t BaseOffset = 0;
 
   // Loop through the frame references and allocate for them as necessary.
-  for (int ref = 0, e = FrameReferenceInsns.size(); ref < e ; ++ref) {
+  for (int ref = 0, e = FrameReferenceInsns.size(); ref < e; ++ref) {
     FrameRef &FR = FrameReferenceInsns[ref];
     MachineInstr &MI = *FR.getMachineInstr();
     int64_t LocalOffset = FR.getLocalOffset();
@@ -418,8 +417,8 @@ bool LocalStackSlotPass::insertFrameReferenceRegisters(MachineFunction &Fn) {
       BaseReg = TRI->materializeFrameBaseRegister(Entry, FrameIdx, InstrOffset);
 
       LLVM_DEBUG(dbgs() << "  Materialized base register at frame local offset "
-                        << LocalOffset + InstrOffset
-                        << " into " << printReg(BaseReg, TRI) << '\n');
+                        << LocalOffset + InstrOffset << " into "
+                        << printReg(BaseReg, TRI) << '\n');
 
       // The base register already includes any offset specified
       // by the instruction, so account for that so it doesn't get
diff --git a/llvm/lib/CodeGen/LowerEmuTLS.cpp b/llvm/lib/CodeGen/LowerEmuTLS.cpp
index f3b5069d351b4e4..710942978212d29 100644
--- a/llvm/lib/CodeGen/LowerEmuTLS.cpp
+++ b/llvm/lib/CodeGen/LowerEmuTLS.cpp
@@ -36,10 +36,10 @@ class LowerEmuTLS : public ModulePass {
   }
 
   bool runOnModule(Module &M) override;
+
 private:
   bool addEmuTlsVar(Module &M, const GlobalVariable *GV);
-  static void copyLinkageVisibility(Module &M,
-                                    const GlobalVariable *from,
+  static void copyLinkageVisibility(Module &M, const GlobalVariable *from,
                                     GlobalVariable *to) {
     to->setLinkage(from->getLinkage());
     to->setVisibility(from->getVisibility());
@@ -50,7 +50,7 @@ class LowerEmuTLS : public ModulePass {
     }
   }
 };
-}
+} // namespace
 
 char LowerEmuTLS::ID = 0;
 
@@ -73,7 +73,7 @@ bool LowerEmuTLS::runOnModule(Module &M) {
     return false;
 
   bool Changed = false;
-  SmallVector<const GlobalVariable*, 8> TlsVars;
+  SmallVector<const GlobalVariable *, 8> TlsVars;
   for (const auto &G : M.globals()) {
     if (G.isThreadLocal())
       TlsVars.append({&G});
@@ -90,7 +90,7 @@ bool LowerEmuTLS::addEmuTlsVar(Module &M, const GlobalVariable *GV) {
   std::string EmuTlsVarName = ("__emutls_v." + GV->getName()).str();
   GlobalVariable *EmuTlsVar = M.getNamedGlobal(EmuTlsVarName);
   if (EmuTlsVar)
-    return false;  // It has been added before.
+    return false; // It has been added before.
 
   const DataLayout &DL = M.getDataLayout();
   Constant *NullPtr = ConstantPointerNull::get(VoidPtrType);
@@ -116,10 +116,10 @@ bool LowerEmuTLS::addEmuTlsVar(Module &M, const GlobalVariable *GV) {
   IntegerType *WordType = DL.getIntPtrType(C);
   PointerType *InitPtrType = PointerType::getUnqual(C);
   Type *ElementTypes[4] = {WordType, WordType, VoidPtrType, InitPtrType};
-  ArrayRef<Type*> ElementTypeArray(ElementTypes, 4);
+  ArrayRef<Type *> ElementTypeArray(ElementTypes, 4);
   StructType *EmuTlsVarType = StructType::create(ElementTypeArray);
-  EmuTlsVar = cast<GlobalVariable>(
-      M.getOrInsertGlobal(EmuTlsVarName, EmuTlsVarType));
+  EmuTlsVar =
+      cast<GlobalVariable>(M.getOrInsertGlobal(EmuTlsVarName, EmuTlsVarType));
   copyLinkageVisibility(M, GV, EmuTlsVar);
 
   // Define "__emutls_t.*" and "__emutls_v.*" only if GV is defined.
@@ -137,7 +137,7 @@ bool LowerEmuTLS::addEmuTlsVar(Module &M, const GlobalVariable *GV) {
         M.getOrInsertGlobal(EmuTlsTmplName, GVType));
     assert(EmuTlsTmplVar && "Failed to create emualted TLS initializer");
     EmuTlsTmplVar->setConstant(true);
-    EmuTlsTmplVar->setInitializer(const_cast<Constant*>(InitValue));
+    EmuTlsTmplVar->setInitializer(const_cast<Constant *>(InitValue));
     EmuTlsTmplVar->setAlignment(GVAlignment);
     copyLinkageVisibility(M, GV, EmuTlsTmplVar);
   }
@@ -147,7 +147,7 @@ bool LowerEmuTLS::addEmuTlsVar(Module &M, const GlobalVariable *GV) {
       ConstantInt::get(WordType, DL.getTypeStoreSize(GVType)),
       ConstantInt::get(WordType, GVAlignment.value()), NullPtr,
       EmuTlsTmplVar ? EmuTlsTmplVar : NullPtr};
-  ArrayRef<Constant*> ElementValueArray(ElementValues, 4);
+  ArrayRef<Constant *> ElementValueArray(ElementValues, 4);
   EmuTlsVar->setInitializer(
       ConstantStruct::get(EmuTlsVarType, ElementValueArray));
   Align MaxAlignment =
diff --git a/llvm/lib/CodeGen/MBFIWrapper.cpp b/llvm/lib/CodeGen/MBFIWrapper.cpp
index 5b388be27839464..2390c5c524e10a0 100644
--- a/llvm/lib/CodeGen/MBFIWrapper.cpp
+++ b/llvm/lib/CodeGen/MBFIWrapper.cpp
@@ -11,8 +11,8 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
 #include "llvm/CodeGen/MBFIWrapper.h"
+#include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
 #include <optional>
 
 using namespace llvm;
@@ -26,8 +26,7 @@ BlockFrequency MBFIWrapper::getBlockFreq(const MachineBasicBlock *MBB) const {
   return MBFI.getBlockFreq(MBB);
 }
 
-void MBFIWrapper::setBlockFreq(const MachineBasicBlock *MBB,
-                               BlockFrequency F) {
+void MBFIWrapper::setBlockFreq(const MachineBasicBlock *MBB, BlockFrequency F) {
   MergedBBFreq[MBB] = F;
 }
 
@@ -43,13 +42,13 @@ MBFIWrapper::getBlockProfileCount(const MachineBasicBlock *MBB) const {
   return MBFI.getBlockProfileCount(MBB);
 }
 
-raw_ostream & MBFIWrapper::printBlockFreq(raw_ostream &OS,
-                                          const MachineBasicBlock *MBB) const {
+raw_ostream &MBFIWrapper::printBlockFreq(raw_ostream &OS,
+                                         const MachineBasicBlock *MBB) const {
   return MBFI.printBlockFreq(OS, getBlockFreq(MBB));
 }
 
-raw_ostream & MBFIWrapper::printBlockFreq(raw_ostream &OS,
-                                          const BlockFrequency Freq) const {
+raw_ostream &MBFIWrapper::printBlockFreq(raw_ostream &OS,
+                                         const BlockFrequency Freq) const {
   return MBFI.printBlockFreq(OS, Freq);
 }
 
@@ -57,6 +56,4 @@ void MBFIWrapper::view(const Twine &Name, bool isSimple) {
   MBFI.view(Name, isSimple);
 }
 
-uint64_t MBFIWrapper::getEntryFreq() const {
-  return MBFI.getEntryFreq();
-}
+uint64_t MBFIWrapper::getEntryFreq() const { return MBFI.getEntryFreq(); }
diff --git a/llvm/lib/CodeGen/MIRCanonicalizerPass.cpp b/llvm/lib/CodeGen/MIRCanonicalizerPass.cpp
index 21b849244d9be24..27ab8114921961a 100644
--- a/llvm/lib/CodeGen/MIRCanonicalizerPass.cpp
+++ b/llvm/lib/CodeGen/MIRCanonicalizerPass.cpp
@@ -356,8 +356,8 @@ static bool doDefKillClear(MachineBasicBlock *MBB) {
   return Changed;
 }
 
-static bool runOnBasicBlock(MachineBasicBlock *MBB,
-                            unsigned BasicBlockNum, VRegRenamer &Renamer) {
+static bool runOnBasicBlock(MachineBasicBlock *MBB, unsigned BasicBlockNum,
+                            VRegRenamer &Renamer) {
   LLVM_DEBUG({
     dbgs() << "\n\n  NEW BASIC BLOCK: " << MBB->getName() << "  \n\n";
     dbgs() << "\n\n================================================\n\n";
diff --git a/llvm/lib/CodeGen/MIRFSDiscriminator.cpp b/llvm/lib/CodeGen/MIRFSDiscriminator.cpp
index 8d17cceeb3cde8e..126cab510ab2ddf 100644
--- a/llvm/lib/CodeGen/MIRFSDiscriminator.cpp
+++ b/llvm/lib/CodeGen/MIRFSDiscriminator.cpp
@@ -195,7 +195,7 @@ bool MIRAddFSDiscriminators::runOnMachineFunction(MachineFunction &MF) {
   if (Changed) {
     createFSDiscriminatorVariable(MF.getFunction().getParent());
     LLVM_DEBUG(dbgs() << "Num of FS Discriminators: " << NumNewD << "\n");
-    (void) NumNewD;
+    (void)NumNewD;
   }
 
   return Changed;
diff --git a/llvm/lib/CodeGen/MIRParser/CMakeLists.txt b/llvm/lib/CodeGen/MIRParser/CMakeLists.txt
index 8e85c0476c7add2..75b034e583ad926 100644
--- a/llvm/lib/CodeGen/MIRParser/CMakeLists.txt
+++ b/llvm/lib/CodeGen/MIRParser/CMakeLists.txt
@@ -1,21 +1,8 @@
-add_llvm_component_library(LLVMMIRParser
-  MILexer.cpp
-  MIParser.cpp
-  MIRParser.cpp
+add_llvm_component_library(LLVMMIRParser MILexer.cpp MIParser.cpp MIRParser.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/MIRParser
+  ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR}/llvm/CodeGen/MIRParser
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  AsmParser
-  BinaryFormat
-  CodeGen
-  CodeGenTypes
-  Core
-  MC
-  Support
-  Target
-  )
+  LINK_COMPONENTS AsmParser BinaryFormat CodeGen CodeGenTypes Core MC Support
+  Target)
diff --git a/llvm/lib/CodeGen/MIRParser/MILexer.cpp b/llvm/lib/CodeGen/MIRParser/MILexer.cpp
index 0a0e386cde61001..9532d7242bf4deb 100644
--- a/llvm/lib/CodeGen/MIRParser/MILexer.cpp
+++ b/llvm/lib/CodeGen/MIRParser/MILexer.cpp
@@ -325,9 +325,10 @@ static Cursor maybeLexMachineBasicBlock(Cursor C, MIToken &Token,
     while (isIdentifierChar(C.peek()))
       C.advance();
   }
-  Token.reset(IsReference ? MIToken::MachineBasicBlock
-                          : MIToken::MachineBasicBlockLabel,
-              Range.upto(C))
+  Token
+      .reset(IsReference ? MIToken::MachineBasicBlock
+                         : MIToken::MachineBasicBlockLabel,
+             Range.upto(C))
       .setIntegerValue(APSInt(Number))
       .setStringValue(Range.upto(C).drop_front(StringOffset));
   return C;
@@ -434,9 +435,7 @@ static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
 }
 
 /// Returns true for a character allowed in a register name.
-static bool isRegisterChar(char C) {
-  return isIdentifierChar(C) && C != '.';
-}
+static bool isRegisterChar(char C) { return isIdentifierChar(C) && C != '.'; }
 
 static Cursor lexNamedVirtualRegister(Cursor C, MIToken &Token) {
   Cursor Range = C;
diff --git a/llvm/lib/CodeGen/MIRParser/MIParser.cpp b/llvm/lib/CodeGen/MIRParser/MIParser.cpp
index 65280c65b68781e..5fcb43a2b00dba0 100644
--- a/llvm/lib/CodeGen/MIRParser/MIParser.cpp
+++ b/llvm/lib/CodeGen/MIRParser/MIParser.cpp
@@ -79,7 +79,7 @@
 using namespace llvm;
 
 void PerTargetMIParsingState::setTarget(
-  const TargetSubtargetInfo &NewSubtarget) {
+    const TargetSubtargetInfo &NewSubtarget) {
 
   // If the subtarget changed, over conservatively assume everything is invalid.
   if (&Subtarget == &NewSubtarget)
@@ -171,8 +171,7 @@ void PerTargetMIParsingState::initNames2SubRegIndices() {
     return;
   const TargetRegisterInfo *TRI = Subtarget.getRegisterInfo();
   for (unsigned I = 1, E = TRI->getNumSubRegIndices(); I < E; ++I)
-    Names2SubRegIndices.insert(
-        std::make_pair(TRI->getSubRegIndexName(I), I));
+    Names2SubRegIndices.insert(std::make_pair(TRI->getSubRegIndexName(I), I));
 }
 
 unsigned PerTargetMIParsingState::getSubRegIndex(StringRef Name) {
@@ -312,9 +311,10 @@ const RegisterBank *PerTargetMIParsingState::getRegBank(StringRef Name) {
 }
 
 PerFunctionMIParsingState::PerFunctionMIParsingState(MachineFunction &MF,
-    SourceMgr &SM, const SlotMapping &IRSlots, PerTargetMIParsingState &T)
-  : MF(MF), SM(&SM), IRSlots(IRSlots), Target(T) {
-}
+                                                     SourceMgr &SM,
+                                                     const SlotMapping &IRSlots,
+                                                     PerTargetMIParsingState &T)
+    : MF(MF), SM(&SM), IRSlots(IRSlots), Target(T) {}
 
 VRegInfo &PerFunctionMIParsingState::getVRegInfo(Register Num) {
   auto I = VRegInfos.insert(std::make_pair(Num, nullptr));
@@ -361,7 +361,7 @@ static void initSlots2Values(const Function &F,
   }
 }
 
-const Value* PerFunctionMIParsingState::getIRValue(unsigned Slot) {
+const Value *PerFunctionMIParsingState::getIRValue(unsigned Slot) {
   if (Slots2Values.empty())
     initSlots2Values(MF.getFunction(), Slots2Values);
   return Slots2Values.lookup(Slot);
@@ -477,7 +477,7 @@ class MIParser {
   bool parseCFIOffset(int &Offset);
   bool parseCFIRegister(Register &Reg);
   bool parseCFIAddressSpace(unsigned &AddressSpace);
-  bool parseCFIEscapeValues(std::string& Values);
+  bool parseCFIEscapeValues(std::string &Values);
   bool parseCFIOperand(MachineOperand &Dest);
   bool parseIRBlock(BasicBlock *&BB, const Function &F);
   bool parseBlockAddressOperand(MachineOperand &Dest);
@@ -569,8 +569,8 @@ class MIParser {
 
 MIParser::MIParser(PerFunctionMIParsingState &PFS, SMDiagnostic &Error,
                    StringRef Source)
-    : MF(PFS.MF), Error(Error), Source(Source), CurrentSource(Source), PFS(PFS)
-{}
+    : MF(PFS.MF), Error(Error), Source(Source), CurrentSource(Source),
+      PFS(PFS) {}
 
 MIParser::MIParser(PerFunctionMIParsingState &PFS, SMDiagnostic &Error,
                    StringRef Source, SMRange SourceRange)
@@ -990,7 +990,7 @@ bool MIParser::parseBasicBlock(MachineBasicBlock &MBB,
 
   // Construct successor list by searching for basic block machine operands.
   if (!ExplicitSuccessors) {
-    SmallVector<MachineBasicBlock*,4> Successors;
+    SmallVector<MachineBasicBlock *, 4> Successors;
     bool IsFallthrough;
     guessSuccessors(MBB, Successors, IsFallthrough);
     for (MachineBasicBlock *Succ : Successors)
@@ -1069,7 +1069,8 @@ bool MIParser::parse(MachineInstr *&MI) {
          Token.isNot(MIToken::coloncolon) && Token.isNot(MIToken::lbrace)) {
     auto Loc = Token.location();
     std::optional<unsigned> TiedDefIdx;
-    if (parseMachineOperandAndTargetFlags(OpCode, Operands.size(), MO, TiedDefIdx))
+    if (parseMachineOperandAndTargetFlags(OpCode, Operands.size(), MO,
+                                          TiedDefIdx))
       return true;
     Operands.push_back(
         ParsedMachineOperand(MO, Loc, Token.location(), TiedDefIdx));
@@ -1583,7 +1584,7 @@ bool MIParser::parseRegisterClassOrBank(VRegInfo &RegInfo) {
       if (RegInfo.Explicit && RegInfo.D.RC != RC) {
         const TargetRegisterInfo &TRI = *MF.getSubtarget().getRegisterInfo();
         return error(Loc, Twine("conflicting register classes, previously: ") +
-                     Twine(TRI.getRegClassName(RegInfo.D.RC)));
+                              Twine(TRI.getRegClassName(RegInfo.D.RC)));
       }
       RegInfo.D.RC = RC;
       RegInfo.Explicit = true;
@@ -1757,7 +1758,7 @@ bool MIParser::parseRegisterOperand(MachineOperand &Dest,
       return error("register class specification expects a virtual register");
     lex();
     if (parseRegisterClassOrBank(*RegInfo))
-        return true;
+      return true;
   }
   MachineRegisterInfo &MRI = MF.getRegInfo();
   if ((Flags & RegState::Define) == 0) {
@@ -2032,7 +2033,7 @@ static bool getHexUint(const MIToken &Token, APInt &Result) {
   if (!isxdigit(S[2]))
     return true;
   StringRef V = S.substr(2);
-  APInt A(V.size()*4, V, 16);
+  APInt A(V.size() * 4, V, 16);
 
   // If A is 0, then A.getActiveBits() is 0. This isn't a valid bitwidth. Make
   // sure it isn't the case before constructing result.
@@ -3120,7 +3121,8 @@ static bool parseIRValue(const MIToken &Token, PerFunctionMIParsingState &PFS,
     llvm_unreachable("The current token should be an IR block reference");
   }
   if (!V)
-    return ErrCB(Token.location(), Twine("use of undefined IR value '") + Token.range() + "'");
+    return ErrCB(Token.location(),
+                 Twine("use of undefined IR value '") + Token.range() + "'");
   return false;
 }
 
@@ -3150,9 +3152,7 @@ bool MIParser::getUint64(uint64_t &Result) {
   return true;
 }
 
-bool MIParser::getHexUint(APInt &Result) {
-  return ::getHexUint(Token, Result);
-}
+bool MIParser::getHexUint(APInt &Result) { return ::getHexUint(Token, Result); }
 
 bool MIParser::parseMemoryOperandFlag(MachineMemOperand::Flags &Flags) {
   const auto OldFlags = Flags;
@@ -3292,8 +3292,7 @@ bool MIParser::parseMachinePointerInfo(MachinePointerInfo &Dest) {
   return false;
 }
 
-bool MIParser::parseOptionalScope(LLVMContext &Context,
-                                  SyncScope::ID &SSID) {
+bool MIParser::parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID) {
   SSID = SyncScope::System;
   if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
     lex();
@@ -3372,10 +3371,10 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
 
   LLT MemoryType;
   if (Token.isNot(MIToken::IntegerLiteral) &&
-      Token.isNot(MIToken::kw_unknown_size) &&
-      Token.isNot(MIToken::lparen))
-    return error("expected memory LLT, the size integer literal or 'unknown-size' after "
-                 "memory operation");
+      Token.isNot(MIToken::kw_unknown_size) && Token.isNot(MIToken::lparen))
+    return error(
+        "expected memory LLT, the size integer literal or 'unknown-size' after "
+        "memory operation");
 
   uint64_t Size = MemoryLocation::UnknownSize;
   if (Token.is(MIToken::IntegerLiteral)) {
@@ -3401,11 +3400,11 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
 
   MachinePointerInfo Ptr = MachinePointerInfo();
   if (Token.is(MIToken::Identifier)) {
-    const char *Word =
-        ((Flags & MachineMemOperand::MOLoad) &&
-         (Flags & MachineMemOperand::MOStore))
-            ? "on"
-            : Flags & MachineMemOperand::MOLoad ? "from" : "into";
+    const char *Word = ((Flags & MachineMemOperand::MOLoad) &&
+                        (Flags & MachineMemOperand::MOStore))
+                           ? "on"
+                       : Flags & MachineMemOperand::MOLoad ? "from"
+                                                           : "into";
     if (Token.stringValue() != Word)
       return error(Twine("expected '") + Word + "'");
     lex();
@@ -3597,9 +3596,8 @@ bool llvm::parseMBBReference(PerFunctionMIParsingState &PFS,
   return MIParser(PFS, Error, Src).parseStandaloneMBB(MBB);
 }
 
-bool llvm::parseRegisterReference(PerFunctionMIParsingState &PFS,
-                                  Register &Reg, StringRef Src,
-                                  SMDiagnostic &Error) {
+bool llvm::parseRegisterReference(PerFunctionMIParsingState &PFS, Register &Reg,
+                                  StringRef Src, SMDiagnostic &Error) {
   return MIParser(PFS, Error, Src).parseStandaloneRegister(Reg);
 }
 
@@ -3615,14 +3613,13 @@ bool llvm::parseVirtualRegisterReference(PerFunctionMIParsingState &PFS,
   return MIParser(PFS, Error, Src).parseStandaloneVirtualRegister(Info);
 }
 
-bool llvm::parseStackObjectReference(PerFunctionMIParsingState &PFS,
-                                     int &FI, StringRef Src,
-                                     SMDiagnostic &Error) {
+bool llvm::parseStackObjectReference(PerFunctionMIParsingState &PFS, int &FI,
+                                     StringRef Src, SMDiagnostic &Error) {
   return MIParser(PFS, Error, Src).parseStandaloneStackObject(FI);
 }
 
-bool llvm::parseMDNode(PerFunctionMIParsingState &PFS,
-                       MDNode *&Node, StringRef Src, SMDiagnostic &Error) {
+bool llvm::parseMDNode(PerFunctionMIParsingState &PFS, MDNode *&Node,
+                       StringRef Src, SMDiagnostic &Error) {
   return MIParser(PFS, Error, Src).parseStandaloneMDNode(Node);
 }
 
diff --git a/llvm/lib/CodeGen/MIRParser/MIRParser.cpp b/llvm/lib/CodeGen/MIRParser/MIRParser.cpp
index b2e570c5e67ec7b..2d8ed590ecab7f5 100644
--- a/llvm/lib/CodeGen/MIRParser/MIRParser.cpp
+++ b/llvm/lib/CodeGen/MIRParser/MIRParser.cpp
@@ -142,8 +142,7 @@ class MIRParserImpl {
                                             const yaml::StringValue &LocStr);
   template <typename T>
   bool parseStackObjectsDebugInfo(PerFunctionMIParsingState &PFS,
-                                  const T &Object,
-                                  int FrameIdx);
+                                  const T &Object, int FrameIdx);
 
   bool initializeConstantPool(PerFunctionMIParsingState &PFS,
                               MachineConstantPool &ConstantPool,
@@ -179,7 +178,8 @@ class MIRParserImpl {
   void computeFunctionProperties(MachineFunction &MF);
 
   void setupDebugValueTracking(MachineFunction &MF,
-    PerFunctionMIParsingState &PFS, const yaml::MachineFunction &YamlMF);
+                               PerFunctionMIParsingState &PFS,
+                               const yaml::MachineFunction &YamlMF);
 };
 
 } // end namespace llvm
@@ -458,9 +458,8 @@ void MIRParserImpl::setupDebugValueTracking(
   MF.setUseDebugInstrRef(YamlMF.UseDebugInstrRef);
 }
 
-bool
-MIRParserImpl::initializeMachineFunction(const yaml::MachineFunction &YamlMF,
-                                         MachineFunction &MF) {
+bool MIRParserImpl::initializeMachineFunction(
+    const yaml::MachineFunction &YamlMF, MachineFunction &MF) {
   // TODO: Recreate the machine function.
   if (Target) {
     // Avoid clearing state if we're using the same subtarget again.
@@ -513,7 +512,8 @@ MIRParserImpl::initializeMachineFunction(const yaml::MachineFunction &YamlMF,
   SMDiagnostic Error;
   SourceMgr BlockSM;
   BlockSM.AddNewSourceBuffer(
-      MemoryBuffer::getMemBuffer(BlockStr, "",/*RequiresNullTerminator=*/false),
+      MemoryBuffer::getMemBuffer(BlockStr, "",
+                                 /*RequiresNullTerminator=*/false),
       SMLoc());
   PFS.SM = &BlockSM;
   if (parseMachineBasicBlockDefinitions(PFS, BlockStr, Error)) {
@@ -630,8 +630,9 @@ bool MIRParserImpl::parseRegisterInfo(PerFunctionMIParsingState &PFS,
 
     if (!VReg.PreferredRegister.Value.empty()) {
       if (Info.Kind != VRegInfo::NORMAL)
-        return error(VReg.Class.SourceRange.Start,
-              Twine("preferred register can only be set for normal vregs"));
+        return error(
+            VReg.Class.SourceRange.Start,
+            Twine("preferred register can only be set for normal vregs"));
 
       if (parseRegisterReference(PFS, Info.PreferredReg,
                                  VReg.PreferredRegister.Value, Error))
@@ -683,8 +684,8 @@ bool MIRParserImpl::setupRegisterInfo(const PerFunctionMIParsingState &PFS,
     Register Reg = Info.VReg;
     switch (Info.Kind) {
     case VRegInfo::UNKNOWN:
-      error(Twine("Cannot determine class/bank of virtual register ") +
-            Name + " in function '" + MF.getName() + "'");
+      error(Twine("Cannot determine class/bank of virtual register ") + Name +
+            " in function '" + MF.getName() + "'");
       Error = true;
       break;
     case VRegInfo::NORMAL:
@@ -790,8 +791,8 @@ bool MIRParserImpl::initializeFrameInfo(PerFunctionMIParsingState &PFS,
                    Twine("StackID is not supported by target"));
     MFI.setStackID(ObjectIdx, Object.StackID);
     MFI.setObjectAlignment(ObjectIdx, Object.Alignment.valueOrOne());
-    if (!PFS.FixedStackObjectSlots.insert(std::make_pair(Object.ID.Value,
-                                                         ObjectIdx))
+    if (!PFS.FixedStackObjectSlots
+             .insert(std::make_pair(Object.ID.Value, ObjectIdx))
              .second)
       return error(Object.ID.SourceRange.Start,
                    Twine("redefinition of fixed stack object '%fixed-stack.") +
@@ -878,7 +879,8 @@ bool MIRParserImpl::initializeFrameInfo(PerFunctionMIParsingState &PFS,
   if (!YamlMFI.FunctionContext.Value.empty()) {
     SMDiagnostic Error;
     int FI;
-    if (parseStackObjectReference(PFS, FI, YamlMFI.FunctionContext.Value, Error))
+    if (parseStackObjectReference(PFS, FI, YamlMFI.FunctionContext.Value,
+                                  Error))
       return error(Error, YamlMFI.FunctionContext.SourceRange);
     MFI.setFunctionContextIndex(FI);
   }
@@ -886,8 +888,8 @@ bool MIRParserImpl::initializeFrameInfo(PerFunctionMIParsingState &PFS,
   return false;
 }
 
-bool MIRParserImpl::parseCalleeSavedRegister(PerFunctionMIParsingState &PFS,
-    std::vector<CalleeSavedInfo> &CSIInfo,
+bool MIRParserImpl::parseCalleeSavedRegister(
+    PerFunctionMIParsingState &PFS, std::vector<CalleeSavedInfo> &CSIInfo,
     const yaml::StringValue &RegisterSource, bool IsRestored, int FrameIdx) {
   if (RegisterSource.Value.empty())
     return false;
@@ -950,8 +952,8 @@ bool MIRParserImpl::parseStackObjectsDebugInfo(PerFunctionMIParsingState &PFS,
   return false;
 }
 
-bool MIRParserImpl::parseMDNode(PerFunctionMIParsingState &PFS,
-    MDNode *&Node, const yaml::StringValue &Source) {
+bool MIRParserImpl::parseMDNode(PerFunctionMIParsingState &PFS, MDNode *&Node,
+                                const yaml::StringValue &Source) {
   if (Source.Value.empty())
     return false;
   SMDiagnostic Error;
@@ -960,8 +962,9 @@ bool MIRParserImpl::parseMDNode(PerFunctionMIParsingState &PFS,
   return false;
 }
 
-bool MIRParserImpl::initializeConstantPool(PerFunctionMIParsingState &PFS,
-    MachineConstantPool &ConstantPool, const yaml::MachineFunction &YamlMF) {
+bool MIRParserImpl::initializeConstantPool(
+    PerFunctionMIParsingState &PFS, MachineConstantPool &ConstantPool,
+    const yaml::MachineFunction &YamlMF) {
   DenseMap<unsigned, unsigned> &ConstantPoolSlots = PFS.ConstantPoolSlots;
   const MachineFunction &MF = PFS.MF;
   const auto &M = *MF.getFunction().getParent();
@@ -988,8 +991,8 @@ bool MIRParserImpl::initializeConstantPool(PerFunctionMIParsingState &PFS,
   return false;
 }
 
-bool MIRParserImpl::initializeJumpTableInfo(PerFunctionMIParsingState &PFS,
-    const yaml::MachineJumpTable &YamlJTI) {
+bool MIRParserImpl::initializeJumpTableInfo(
+    PerFunctionMIParsingState &PFS, const yaml::MachineJumpTable &YamlJTI) {
   MachineJumpTableInfo *JTI = PFS.MF.getOrCreateJumpTableInfo(YamlJTI.Kind);
   for (const auto &Entry : YamlJTI.Entries) {
     std::vector<MachineBasicBlock *> Blocks;
diff --git a/llvm/lib/CodeGen/MIRPrinter.cpp b/llvm/lib/CodeGen/MIRPrinter.cpp
index b70bd0820091749..11aab1b327de71b 100644
--- a/llvm/lib/CodeGen/MIRPrinter.cpp
+++ b/llvm/lib/CodeGen/MIRPrinter.cpp
@@ -230,8 +230,7 @@ void MIRPrinter::print(const MachineFunction &MF) {
     const auto &SubSrc = Sub.Src;
     const auto &SubDest = Sub.Dest;
     YamlMF.DebugValueSubstitutions.push_back({SubSrc.first, SubSrc.second,
-                                              SubDest.first,
-                                              SubDest.second,
+                                              SubDest.first, SubDest.second,
                                               Sub.Subreg});
   }
   if (const auto *ConstantPool = MF.getConstantPool())
@@ -259,7 +258,7 @@ void MIRPrinter::print(const MachineFunction &MF) {
 
   yaml::Output Out(OS);
   if (!SimplifyMIR)
-      Out.setWriteDefaultValues(true);
+    Out.setWriteDefaultValues(true);
   Out << YamlMF;
 }
 
@@ -296,9 +295,8 @@ printStackObjectDbgInfo(const MachineFunction::VariableDbgInfo &DebugVar,
   std::array<std::string *, 3> Outputs{{&Object.DebugVar.Value,
                                         &Object.DebugExpr.Value,
                                         &Object.DebugLoc.Value}};
-  std::array<const Metadata *, 3> Metas{{DebugVar.Var,
-                                        DebugVar.Expr,
-                                        DebugVar.Loc}};
+  std::array<const Metadata *, 3> Metas{
+      {DebugVar.Var, DebugVar.Expr, DebugVar.Loc}};
   for (unsigned i = 0; i < 3; ++i) {
     raw_string_ostream StrOS(*Outputs[i]);
     Metas[i]->printAsOperand(StrOS, MST);
@@ -358,8 +356,8 @@ void MIRPrinter::convert(ModuleSlotTracker &MST,
   YamlMFI.MaxAlignment = MFI.getMaxAlign().value();
   YamlMFI.AdjustsStack = MFI.adjustsStack();
   YamlMFI.HasCalls = MFI.hasCalls();
-  YamlMFI.MaxCallFrameSize = MFI.isMaxCallFrameSizeComputed()
-    ? MFI.getMaxCallFrameSize() : ~0u;
+  YamlMFI.MaxCallFrameSize =
+      MFI.isMaxCallFrameSizeComputed() ? MFI.getMaxCallFrameSize() : ~0u;
   YamlMFI.CVBytesOfCalleeSavedRegisters =
       MFI.getCVBytesOfCalleeSavedRegisters();
   YamlMFI.HasOpaqueSPAdjustment = MFI.hasOpaqueSPAdjustment();
@@ -442,13 +440,13 @@ void MIRPrinter::convertStackObjects(yaml::MachineFunction &YMF,
     yaml::MachineStackObject YamlObject;
     YamlObject.ID = ID;
     if (const auto *Alloca = MFI.getObjectAllocation(I))
-      YamlObject.Name.Value = std::string(
-          Alloca->hasName() ? Alloca->getName() : "");
+      YamlObject.Name.Value =
+          std::string(Alloca->hasName() ? Alloca->getName() : "");
     YamlObject.Type = MFI.isSpillSlotObjectIndex(I)
                           ? yaml::MachineStackObject::SpillSlot
-                          : MFI.isVariableSizedObjectIndex(I)
-                                ? yaml::MachineStackObject::VariableSized
-                                : yaml::MachineStackObject::DefaultType;
+                      : MFI.isVariableSizedObjectIndex(I)
+                          ? yaml::MachineStackObject::VariableSized
+                          : yaml::MachineStackObject::DefaultType;
     YamlObject.Offset = MFI.getObjectOffset(I);
     YamlObject.Size = MFI.getObjectSize(I);
     YamlObject.Alignment = MFI.getObjectAlign(I);
@@ -620,9 +618,9 @@ void MIRPrinter::initRegisterMaskIds(const MachineFunction &MF) {
 }
 
 void llvm::guessSuccessors(const MachineBasicBlock &MBB,
-                           SmallVectorImpl<MachineBasicBlock*> &Result,
+                           SmallVectorImpl<MachineBasicBlock *> &Result,
                            bool &IsFallthrough) {
-  SmallPtrSet<MachineBasicBlock*,8> Seen;
+  SmallPtrSet<MachineBasicBlock *, 8> Seen;
 
   for (const MachineInstr &MI : MBB) {
     if (MI.isPHI())
@@ -640,32 +638,32 @@ void llvm::guessSuccessors(const MachineBasicBlock &MBB,
   IsFallthrough = I == MBB.end() || !I->isBarrier();
 }
 
-bool
-MIPrinter::canPredictBranchProbabilities(const MachineBasicBlock &MBB) const {
+bool MIPrinter::canPredictBranchProbabilities(
+    const MachineBasicBlock &MBB) const {
   if (MBB.succ_size() <= 1)
     return true;
   if (!MBB.hasSuccessorProbabilities())
     return true;
 
-  SmallVector<BranchProbability,8> Normalized(MBB.Probs.begin(),
-                                              MBB.Probs.end());
+  SmallVector<BranchProbability, 8> Normalized(MBB.Probs.begin(),
+                                               MBB.Probs.end());
   BranchProbability::normalizeProbabilities(Normalized.begin(),
                                             Normalized.end());
-  SmallVector<BranchProbability,8> Equal(Normalized.size());
+  SmallVector<BranchProbability, 8> Equal(Normalized.size());
   BranchProbability::normalizeProbabilities(Equal.begin(), Equal.end());
 
   return std::equal(Normalized.begin(), Normalized.end(), Equal.begin());
 }
 
 bool MIPrinter::canPredictSuccessors(const MachineBasicBlock &MBB) const {
-  SmallVector<MachineBasicBlock*,8> GuessedSuccs;
+  SmallVector<MachineBasicBlock *, 8> GuessedSuccs;
   bool GuessedFallthrough;
   guessSuccessors(MBB, GuessedSuccs, GuessedFallthrough);
   if (GuessedFallthrough) {
     const MachineFunction &MF = *MBB.getParent();
     MachineFunction::const_iterator NextI = std::next(MBB.getIterator());
     if (NextI != MF.end()) {
-      MachineBasicBlock *Next = const_cast<MachineBasicBlock*>(&*NextI);
+      MachineBasicBlock *Next = const_cast<MachineBasicBlock *>(&*NextI);
       if (!is_contained(GuessedSuccs, Next))
         GuessedSuccs.push_back(Next);
     }
@@ -902,8 +900,7 @@ static std::string formatOperandComment(std::string Comment) {
 }
 
 void MIPrinter::print(const MachineInstr &MI, unsigned OpIdx,
-                      const TargetRegisterInfo *TRI,
-                      const TargetInstrInfo *TII,
+                      const TargetRegisterInfo *TRI, const TargetInstrInfo *TII,
                       bool ShouldPrintRegisterTies, LLT TypeToPrint,
                       bool PrintDef) {
   const MachineOperand &Op = MI.getOperand(OpIdx);
@@ -941,7 +938,7 @@ void MIPrinter::print(const MachineInstr &MI, unsigned OpIdx,
     const TargetIntrinsicInfo *TII = MI.getMF()->getTarget().getIntrinsicInfo();
     Op.print(OS, MST, TypeToPrint, OpIdx, PrintDef, /*IsStandalone=*/false,
              ShouldPrintRegisterTies, TiedOperandIdx, TRI, TII);
-      OS << formatOperandComment(MOComment);
+    OS << formatOperandComment(MOComment);
     break;
   }
   case MachineOperand::MO_FrameIndex:
diff --git a/llvm/lib/CodeGen/MIRVRegNamerUtils.h b/llvm/lib/CodeGen/MIRVRegNamerUtils.h
index a059bc5333c65e1..11e0f73dad6a433 100644
--- a/llvm/lib/CodeGen/MIRVRegNamerUtils.h
+++ b/llvm/lib/CodeGen/MIRVRegNamerUtils.h
@@ -19,8 +19,8 @@
 
 #include "llvm/CodeGen/Register.h"
 #include <map>
-#include <vector>
 #include <string>
+#include <vector>
 
 namespace llvm {
 
diff --git a/llvm/lib/CodeGen/MLRegAllocEvictAdvisor.cpp b/llvm/lib/CodeGen/MLRegAllocEvictAdvisor.cpp
index 114e7910dc27bbd..24e1c71d178fa73 100644
--- a/llvm/lib/CodeGen/MLRegAllocEvictAdvisor.cpp
+++ b/llvm/lib/CodeGen/MLRegAllocEvictAdvisor.cpp
@@ -551,7 +551,7 @@ class DevelopmentModeEvictionAdvisorAnalysis final
   std::unique_ptr<Logger> Log;
 };
 
-#endif //#ifdef LLVM_HAVE_TFLITE
+#endif // #ifdef LLVM_HAVE_TFLITE
 } // namespace
 
 float MLEvictAdvisor::getInitialQueueSize(const MachineFunction &MF) {
diff --git a/llvm/lib/CodeGen/MLRegAllocPriorityAdvisor.cpp b/llvm/lib/CodeGen/MLRegAllocPriorityAdvisor.cpp
index 422781593a9c6bd..b5431066d66eb6e 100644
--- a/llvm/lib/CodeGen/MLRegAllocPriorityAdvisor.cpp
+++ b/llvm/lib/CodeGen/MLRegAllocPriorityAdvisor.cpp
@@ -79,7 +79,6 @@ static const std::vector<int64_t> PerLiveRangeShape{1};
 static const TensorSpec DecisionSpec =
     TensorSpec::createSpec<float>(DecisionName, {1});
 
-
 // Named features index.
 enum FeatureIDs {
 #define _FEATURE_IDX(_, name, __, ___) name,
@@ -271,7 +270,7 @@ class DevelopmentModePriorityAdvisorAnalysis final
   std::unique_ptr<MLModelRunner> Runner;
   std::unique_ptr<Logger> Log;
 };
-#endif //#ifdef LLVM_HAVE_TFLITE
+#endif // #ifdef LLVM_HAVE_TFLITE
 
 } // namespace llvm
 
diff --git a/llvm/lib/CodeGen/MachineBasicBlock.cpp b/llvm/lib/CodeGen/MachineBasicBlock.cpp
index 280ced65db7d8c0..4a6fe97cc68819f 100644
--- a/llvm/lib/CodeGen/MachineBasicBlock.cpp
+++ b/llvm/lib/CodeGen/MachineBasicBlock.cpp
@@ -211,8 +211,8 @@ MachineBasicBlock::SkipPHIsAndLabels(MachineBasicBlock::iterator I) {
   const TargetInstrInfo *TII = getParent()->getSubtarget().getInstrInfo();
 
   iterator E = end();
-  while (I != E && (I->isPHI() || I->isPosition() ||
-                    TII->isBasicBlockPrologue(*I)))
+  while (I != E &&
+         (I->isPHI() || I->isPosition() || TII->isBasicBlockPrologue(*I)))
     ++I;
   // FIXME: This needs to change if we wish to bundle labels
   // inside the bundle.
@@ -296,9 +296,7 @@ bool MachineBasicBlock::isEntryBlock() const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void MachineBasicBlock::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void MachineBasicBlock::dump() const { print(dbgs()); }
 #endif
 
 bool MachineBasicBlock::mayHaveInlineAsmBr() const {
@@ -372,7 +370,8 @@ void MachineBasicBlock::print(raw_ostream &OS, ModuleSlotTracker &MST,
 
   // Print the preds of this block according to the CFG.
   if (!pred_empty() && IsStandalone) {
-    if (Indexes) OS << '\t';
+    if (Indexes)
+      OS << '\t';
     // Don't indent(2), align with previous line attributes.
     OS << "; predecessors: ";
     ListSeparator LS;
@@ -383,7 +382,8 @@ void MachineBasicBlock::print(raw_ostream &OS, ModuleSlotTracker &MST,
   }
 
   if (!succ_empty()) {
-    if (Indexes) OS << '\t';
+    if (Indexes)
+      OS << '\t';
     // Print the successors
     OS.indent(2) << "successors: ";
     ListSeparator LS;
@@ -414,7 +414,8 @@ void MachineBasicBlock::print(raw_ostream &OS, ModuleSlotTracker &MST,
   }
 
   if (!livein_empty() && MRI.tracksLiveness()) {
-    if (Indexes) OS << '\t';
+    if (Indexes)
+      OS << '\t';
     OS.indent(2) << "liveins: ";
 
     ListSeparator LS;
@@ -457,7 +458,8 @@ void MachineBasicBlock::print(raw_ostream &OS, ModuleSlotTracker &MST,
     OS.indent(2) << "}\n";
 
   if (IrrLoopHeaderWeight && IsStandalone) {
-    if (Indexes) OS << '\t';
+    if (Indexes)
+      OS << '\t';
     OS.indent(2) << "; Irreducible loop header weight: " << *IrrLoopHeaderWeight
                  << '\n';
   }
@@ -631,8 +633,8 @@ void MachineBasicBlock::sortUniqueLiveIns() {
   LiveIns.erase(Out, LiveIns.end());
 }
 
-Register
-MachineBasicBlock::addLiveIn(MCRegister PhysReg, const TargetRegisterClass *RC) {
+Register MachineBasicBlock::addLiveIn(MCRegister PhysReg,
+                                      const TargetRegisterClass *RC) {
   assert(getParent() && "MBB must be inserted in function");
   assert(Register::isPhysicalRegister(PhysReg) && "Expected physreg");
   assert(RC && "Register class is required");
@@ -646,7 +648,7 @@ MachineBasicBlock::addLiveIn(MCRegister PhysReg, const TargetRegisterClass *RC)
 
   // Look for an existing copy.
   if (LiveIn)
-    for (;I != E && I->isCopy(); ++I)
+    for (; I != E && I->isCopy(); ++I)
       if (I->getOperand(1).getReg() == PhysReg) {
         Register VirtReg = I->getOperand(0).getReg();
         if (!MRI.constrainRegClass(VirtReg, RC))
@@ -657,7 +659,7 @@ MachineBasicBlock::addLiveIn(MCRegister PhysReg, const TargetRegisterClass *RC)
   // No luck, create a virtual register.
   Register VirtReg = MRI.createVirtualRegister(RC);
   BuildMI(*this, I, DebugLoc(), TII.get(TargetOpcode::COPY), VirtReg)
-    .addReg(PhysReg, RegState::Kill);
+      .addReg(PhysReg, RegState::Kill);
   if (!LiveIn)
     addLiveIn(PhysReg);
   return VirtReg;
@@ -694,7 +696,7 @@ void MachineBasicBlock::updateTerminator(
   SmallVector<MachineOperand, 4> Cond;
   DebugLoc DL = findBranchDebugLoc();
   bool B = TII->analyzeBranch(*this, TBB, FBB, Cond);
-  (void) B;
+  (void)B;
   assert(!B && "UpdateTerminators requires analyzable predecessors!");
   if (Cond.empty()) {
     if (TBB) {
@@ -923,8 +925,8 @@ void MachineBasicBlock::transferSuccessors(MachineBasicBlock *FromMBB) {
   }
 }
 
-void
-MachineBasicBlock::transferSuccessorsAndUpdatePHIs(MachineBasicBlock *FromMBB) {
+void MachineBasicBlock::transferSuccessorsAndUpdatePHIs(
+    MachineBasicBlock *FromMBB) {
   if (this == FromMBB)
     return;
 
@@ -987,7 +989,8 @@ MachineBasicBlock *MachineBasicBlock::getFallThrough(bool JumpToFallThrough) {
   }
 
   // If there is no branch, control always falls through.
-  if (!TBB) return &*Fallthrough;
+  if (!TBB)
+    return &*Fallthrough;
 
   // If there is some explicit branch to the fallthrough block, it can obviously
   // reach, even though the branch should get folded to fall through implicitly.
@@ -997,16 +1000,15 @@ MachineBasicBlock *MachineBasicBlock::getFallThrough(bool JumpToFallThrough) {
 
   // If it's an unconditional branch to some block not the fall through, it
   // doesn't fall through.
-  if (Cond.empty()) return nullptr;
+  if (Cond.empty())
+    return nullptr;
 
   // Otherwise, if it is conditional and has no explicit false block, it falls
   // through.
   return (FBB == nullptr) ? &*Fallthrough : nullptr;
 }
 
-bool MachineBasicBlock::canFallThrough() {
-  return getFallThrough() != nullptr;
-}
+bool MachineBasicBlock::canFallThrough() { return getFallThrough() != nullptr; }
 
 MachineBasicBlock *MachineBasicBlock::splitAt(MachineInstr &MI,
                                               bool UpdateLiveIns,
@@ -1101,7 +1103,7 @@ MachineBasicBlock *MachineBasicBlock::SplitCriticalEdge(
 
   MachineFunction *MF = getParent();
   MachineBasicBlock *PrevFallthrough = getNextNode();
-  DebugLoc DL;  // FIXME: this is nowhere
+  DebugLoc DL; // FIXME: this is nowhere
 
   MachineBasicBlock *NMBB = MF->CreateMachineBasicBlock();
   NMBB->setCallFrameSize(Succ->getCallFrameSize());
@@ -1168,7 +1170,7 @@ MachineBasicBlock *MachineBasicBlock::SplitCriticalEdge(
 
   // If updateTerminator() removes instructions, we need to remove them from
   // SlotIndexes.
-  SmallVector<MachineInstr*, 4> Terminators;
+  SmallVector<MachineInstr *, 4> Terminators;
   if (Indexes) {
     for (MachineInstr &MI :
          llvm::make_range(getFirstInstrTerminator(), instr_end()))
@@ -1184,7 +1186,7 @@ MachineBasicBlock *MachineBasicBlock::SplitCriticalEdge(
     updateTerminator(PrevFallthrough);
 
   if (Indexes) {
-    SmallVector<MachineInstr*, 4> NewTerminators;
+    SmallVector<MachineInstr *, 4> NewTerminators;
     for (MachineInstr &MI :
          llvm::make_range(getFirstInstrTerminator(), instr_end()))
       NewTerminators.push_back(&MI);
@@ -1251,7 +1253,7 @@ MachineBasicBlock *MachineBasicBlock::SplitCriticalEdge(
     // will extend to the end of the new split block.
 
     bool isLastMBB =
-      std::next(MachineFunction::iterator(NMBB)) == getParent()->end();
+        std::next(MachineFunction::iterator(NMBB)) == getParent()->end();
 
     SlotIndex StartIndex = Indexes->getMBBEndIdx(this);
     SlotIndex PrevIndex = StartIndex.getPrevSlot();
@@ -1259,11 +1261,11 @@ MachineBasicBlock *MachineBasicBlock::SplitCriticalEdge(
 
     // Find the registers used from NMBB in PHIs in Succ.
     SmallSet<Register, 8> PHISrcRegs;
-    for (MachineBasicBlock::instr_iterator
-         I = Succ->instr_begin(), E = Succ->instr_end();
+    for (MachineBasicBlock::instr_iterator I = Succ->instr_begin(),
+                                           E = Succ->instr_end();
          I != E && I->isPHI(); ++I) {
       for (unsigned ni = 1, ne = I->getNumOperands(); ni != ne; ni += 2) {
-        if (I->getOperand(ni+1).getMBB() == NMBB) {
+        if (I->getOperand(ni + 1).getMBB() == NMBB) {
           MachineOperand &MO = I->getOperand(ni);
           Register Reg = MO.getReg();
           PHISrcRegs.insert(Reg);
@@ -1409,8 +1411,8 @@ MachineInstr *MachineBasicBlock::remove_instr(MachineInstr *MI) {
   return Insts.remove(MI);
 }
 
-MachineBasicBlock::instr_iterator
-MachineBasicBlock::insert(instr_iterator I, MachineInstr *MI) {
+MachineBasicBlock::instr_iterator MachineBasicBlock::insert(instr_iterator I,
+                                                            MachineInstr *MI) {
   assert(!MI->isBundledWithPred() && !MI->isBundledWithSucc() &&
          "Cannot insert instruction with bundle flags");
   // Set the bundle flags when inserting inside a bundle.
@@ -1444,13 +1446,13 @@ void MachineBasicBlock::ReplaceUsesOfBlockWith(MachineBasicBlock *Old,
   MachineBasicBlock::instr_iterator I = instr_end();
   while (I != instr_begin()) {
     --I;
-    if (!I->isTerminator()) break;
+    if (!I->isTerminator())
+      break;
 
     // Scan the operands of this machine instruction, replacing any uses of Old
     // with New.
     for (unsigned i = 0, e = I->getNumOperands(); i != e; ++i)
-      if (I->getOperand(i).isMBB() &&
-          I->getOperand(i).getMBB() == Old)
+      if (I->getOperand(i).isMBB() && I->getOperand(i).getMBB() == Old)
         I->getOperand(i).setMBB(New);
   }
 
@@ -1470,8 +1472,7 @@ void MachineBasicBlock::replacePhiUsesWith(MachineBasicBlock *Old,
 
 /// Find the next valid DebugLoc starting at MBBI, skipping any debug
 /// instructions.  Return UnknownLoc if there is none.
-DebugLoc
-MachineBasicBlock::findDebugLoc(instr_iterator MBBI) {
+DebugLoc MachineBasicBlock::findDebugLoc(instr_iterator MBBI) {
   // Skip debug declarations, we don't want a DebugLoc from them.
   MBBI = skipDebugInstructionsForward(MBBI, instr_end());
   if (MBBI != instr_end())
@@ -1513,8 +1514,7 @@ DebugLoc MachineBasicBlock::rfindPrevDebugLoc(reverse_instr_iterator MBBI) {
 
 /// Find and return the merged DebugLoc of the branch instructions of the block.
 /// Return UnknownLoc if there is none.
-DebugLoc
-MachineBasicBlock::findBranchDebugLoc() {
+DebugLoc MachineBasicBlock::findBranchDebugLoc() {
   DebugLoc DL;
   auto TI = getFirstTerminator();
   while (TI != end() && !TI->isBranch())
@@ -1522,7 +1522,7 @@ MachineBasicBlock::findBranchDebugLoc() {
 
   if (TI != end()) {
     DL = TI->getDebugLoc();
-    for (++TI ; TI != end() ; ++TI)
+    for (++TI; TI != end(); ++TI)
       if (TI->isBranch())
         DL = DILocation::getMergedLocation(DL, TI->getDebugLoc());
   }
@@ -1588,7 +1588,8 @@ MachineBasicBlock::getProbabilityIterator(MachineBasicBlock::succ_iterator I) {
 /// instructions after (searching just for defs) MI.
 MachineBasicBlock::LivenessQueryResult
 MachineBasicBlock::computeRegisterLiveness(const TargetRegisterInfo *TRI,
-                                           MCRegister Reg, const_iterator Before,
+                                           MCRegister Reg,
+                                           const_iterator Before,
                                            unsigned Neighborhood) const {
   unsigned N = Neighborhood;
 
@@ -1623,7 +1624,6 @@ MachineBasicBlock::computeRegisterLiveness(const TargetRegisterInfo *TRI,
     return LQR_Dead;
   }
 
-
   N = Neighborhood;
 
   // Start by searching backwards from Before, looking for kills, reads or defs.
@@ -1698,22 +1698,20 @@ MachineBasicBlock::getEndClobberMask(const TargetRegisterInfo *TRI) const {
   return isReturnBlock() && !succ_empty() ? TRI->getNoPreservedMask() : nullptr;
 }
 
-void MachineBasicBlock::clearLiveIns() {
-  LiveIns.clear();
-}
+void MachineBasicBlock::clearLiveIns() { LiveIns.clear(); }
 
 MachineBasicBlock::livein_iterator MachineBasicBlock::livein_begin() const {
   assert(getParent()->getProperties().hasProperty(
-      MachineFunctionProperties::Property::TracksLiveness) &&
-      "Liveness information is accurate");
+             MachineFunctionProperties::Property::TracksLiveness) &&
+         "Liveness information is accurate");
   return LiveIns.begin();
 }
 
 MachineBasicBlock::liveout_iterator MachineBasicBlock::liveout_begin() const {
   const MachineFunction &MF = *getParent();
   assert(MF.getProperties().hasProperty(
-      MachineFunctionProperties::Property::TracksLiveness) &&
-      "Liveness information is accurate");
+             MachineFunctionProperties::Property::TracksLiveness) &&
+         "Liveness information is accurate");
 
   const TargetLowering &TLI = *MF.getSubtarget().getTargetLowering();
   MCPhysReg ExceptionPointer = 0, ExceptionSelector = 0;
diff --git a/llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp b/llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp
index b1cbe525d7e6c16..518fd14e1a9f02a 100644
--- a/llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp
+++ b/llvm/lib/CodeGen/MachineBlockFrequencyInfo.cpp
@@ -41,8 +41,9 @@ static cl::opt<GVDAGType> ViewMachineBlockFreqPropagationDAG(
                clEnumValN(GVDT_Integer, "integer",
                           "display a graph using the raw "
                           "integer fractional block frequency representation."),
-               clEnumValN(GVDT_Count, "count", "display a graph using the real "
-                                               "profile count if available.")));
+               clEnumValN(GVDT_Count, "count",
+                          "display a graph using the real "
+                          "profile count if available.")));
 
 // Similar option above, but used to control BFI display only after MBP pass
 cl::opt<GVDAGType> ViewBlockLayoutWithBFI(
@@ -69,9 +70,9 @@ extern cl::opt<std::string> ViewBlockFreqFuncName;
 // Defined in Analysis/BlockFrequencyInfo.cpp:  -view-hot-freq-perc=
 extern cl::opt<unsigned> ViewHotFreqPercent;
 
-static cl::opt<bool> PrintMachineBlockFreq(
-    "print-machine-bfi", cl::init(false), cl::Hidden,
-    cl::desc("Print the machine block frequency info."));
+static cl::opt<bool>
+    PrintMachineBlockFreq("print-machine-bfi", cl::init(false), cl::Hidden,
+                          cl::desc("Print the machine block frequency info."));
 
 // Command line option to specify the name of the function for block frequency
 // dump. Defined in Analysis/BlockFrequencyInfo.cpp.
@@ -176,9 +177,9 @@ MachineBlockFrequencyInfo::MachineBlockFrequencyInfo()
 }
 
 MachineBlockFrequencyInfo::MachineBlockFrequencyInfo(
-      MachineFunction &F,
-      MachineBranchProbabilityInfo &MBPI,
-      MachineLoopInfo &MLI) : MachineFunctionPass(ID) {
+    MachineFunction &F, MachineBranchProbabilityInfo &MBPI,
+    MachineLoopInfo &MLI)
+    : MachineFunctionPass(ID) {
   calculate(F, MBPI, MLI);
 }
 
@@ -202,9 +203,8 @@ void MachineBlockFrequencyInfo::calculate(
        F.getName().equals(ViewBlockFreqFuncName))) {
     view("MachineBlockFrequencyDAGS." + F.getName());
   }
-  if (PrintMachineBlockFreq &&
-      (PrintBlockFreqFuncName.empty() ||
-       F.getName().equals(PrintBlockFreqFuncName))) {
+  if (PrintMachineBlockFreq && (PrintBlockFreqFuncName.empty() ||
+                                F.getName().equals(PrintBlockFreqFuncName))) {
     MBFI->print(dbgs());
   }
 }
diff --git a/llvm/lib/CodeGen/MachineBlockPlacement.cpp b/llvm/lib/CodeGen/MachineBlockPlacement.cpp
index 912e9ec993e3cc5..b37bb7bd1a00b6a 100644
--- a/llvm/lib/CodeGen/MachineBlockPlacement.cpp
+++ b/llvm/lib/CodeGen/MachineBlockPlacement.cpp
@@ -119,10 +119,10 @@ static cl::opt<unsigned> LoopToColdBlockRatio(
              "(frequency of block) is greater than this ratio"),
     cl::init(5), cl::Hidden);
 
-static cl::opt<bool> ForceLoopColdBlock(
-    "force-loop-cold-block",
-    cl::desc("Force outlining cold blocks from loops."),
-    cl::init(false), cl::Hidden);
+static cl::opt<bool>
+    ForceLoopColdBlock("force-loop-cold-block",
+                       cl::desc("Force outlining cold blocks from loops."),
+                       cl::init(false), cl::Hidden);
 
 static cl::opt<bool>
     PreciseRotationCost("precise-rotation-cost",
@@ -147,43 +147,43 @@ static cl::opt<unsigned> JumpInstCost("jump-inst-cost",
                                       cl::desc("Cost of jump instructions."),
                                       cl::init(1), cl::Hidden);
 static cl::opt<bool>
-TailDupPlacement("tail-dup-placement",
-              cl::desc("Perform tail duplication during placement. "
-                       "Creates more fallthrough opportunites in "
-                       "outline branches."),
-              cl::init(true), cl::Hidden);
+    TailDupPlacement("tail-dup-placement",
+                     cl::desc("Perform tail duplication during placement. "
+                              "Creates more fallthrough opportunities in "
+                              "outline branches."),
+                     cl::init(true), cl::Hidden);
 
 static cl::opt<bool>
-BranchFoldPlacement("branch-fold-placement",
-              cl::desc("Perform branch folding during placement. "
-                       "Reduces code size."),
-              cl::init(true), cl::Hidden);
+    BranchFoldPlacement("branch-fold-placement",
+                        cl::desc("Perform branch folding during placement. "
+                                 "Reduces code size."),
+                        cl::init(true), cl::Hidden);
 
 // Heuristic for tail duplication.
 static cl::opt<unsigned> TailDupPlacementThreshold(
     "tail-dup-placement-threshold",
     cl::desc("Instruction cutoff for tail duplication during layout. "
              "Tail merging during layout is forced to have a threshold "
-             "that won't conflict."), cl::init(2),
-    cl::Hidden);
+             "that won't conflict."),
+    cl::init(2), cl::Hidden);
 
 // Heuristic for aggressive tail duplication.
 static cl::opt<unsigned> TailDupPlacementAggressiveThreshold(
     "tail-dup-placement-aggressive-threshold",
     cl::desc("Instruction cutoff for aggressive tail duplication during "
              "layout. Used at -O3. Tail merging during layout is forced to "
-             "have a threshold that won't conflict."), cl::init(4),
-    cl::Hidden);
+             "have a threshold that won't conflict."),
+    cl::init(4), cl::Hidden);
 
 // Heuristic for tail duplication.
 static cl::opt<unsigned> TailDupPlacementPenalty(
     "tail-dup-placement-penalty",
-    cl::desc("Cost penalty for blocks that can avoid breaking CFG by copying. "
-             "Copying can increase fallthrough, but it also increases icache "
-             "pressure. This parameter controls the penalty to account for that. "
-             "Percent as integer."),
-    cl::init(2),
-    cl::Hidden);
+    cl::desc(
+        "Cost penalty for blocks that can avoid breaking CFG by copying. "
+        "Copying can increase fallthrough, but it also increases icache "
+        "pressure. This parameter controls the penalty to account for that. "
+        "Percent as integer."),
+    cl::init(2), cl::Hidden);
 
 // Heuristic for tail duplication if profile count is used in cost model.
 static cl::opt<unsigned> TailDupProfilePercentThreshold(
@@ -198,8 +198,7 @@ static cl::opt<unsigned> TriangleChainCount(
     "triangle-chain-count",
     cl::desc("Number of triangle-shaped-CFG's that need to be in a row for the "
              "triangle tail duplication heuristic to kick in. 0 to disable."),
-    cl::init(2),
-    cl::Hidden);
+    cl::init(2), cl::Hidden);
 
 // Use case: When block layout is visualized after MBP pass, the basic blocks
 // are labeled in layout order; meanwhile blocks could be numbered in a
@@ -286,8 +285,8 @@ class BlockChain {
   iterator end() { return Blocks.end(); }
   const_iterator end() const { return Blocks.end(); }
 
-  bool remove(MachineBasicBlock* BB) {
-    for(iterator i = begin(); i != end(); ++i) {
+  bool remove(MachineBasicBlock *BB) {
+    for (iterator i = begin(); i != end(); ++i) {
       if (*i == BB) {
         Blocks.erase(i);
         return true;
@@ -457,51 +456,51 @@ class MachineBlockPlacement : public MachineFunctionPass {
 
   /// Decrease the UnscheduledPredecessors count for all blocks in chain, and
   /// if the count goes to 0, add them to the appropriate work list.
-  void markChainSuccessors(
-      const BlockChain &Chain, const MachineBasicBlock *LoopHeaderBB,
-      const BlockFilterSet *BlockFilter = nullptr);
+  void markChainSuccessors(const BlockChain &Chain,
+                           const MachineBasicBlock *LoopHeaderBB,
+                           const BlockFilterSet *BlockFilter = nullptr);
 
   /// Decrease the UnscheduledPredecessors count for a single block, and
   /// if the count goes to 0, add them to the appropriate work list.
-  void markBlockSuccessors(
-      const BlockChain &Chain, const MachineBasicBlock *BB,
-      const MachineBasicBlock *LoopHeaderBB,
-      const BlockFilterSet *BlockFilter = nullptr);
+  void markBlockSuccessors(const BlockChain &Chain, const MachineBasicBlock *BB,
+                           const MachineBasicBlock *LoopHeaderBB,
+                           const BlockFilterSet *BlockFilter = nullptr);
 
   BranchProbability
-  collectViableSuccessors(
-      const MachineBasicBlock *BB, const BlockChain &Chain,
-      const BlockFilterSet *BlockFilter,
-      SmallVector<MachineBasicBlock *, 4> &Successors);
+  collectViableSuccessors(const MachineBasicBlock *BB, const BlockChain &Chain,
+                          const BlockFilterSet *BlockFilter,
+                          SmallVector<MachineBasicBlock *, 4> &Successors);
   bool isBestSuccessor(MachineBasicBlock *BB, MachineBasicBlock *Pred,
                        BlockFilterSet *BlockFilter);
   void findDuplicateCandidates(SmallVectorImpl<MachineBasicBlock *> &Candidates,
                                MachineBasicBlock *BB,
                                BlockFilterSet *BlockFilter);
-  bool repeatedlyTailDuplicateBlock(
-      MachineBasicBlock *BB, MachineBasicBlock *&LPred,
-      const MachineBasicBlock *LoopHeaderBB,
-      BlockChain &Chain, BlockFilterSet *BlockFilter,
-      MachineFunction::iterator &PrevUnplacedBlockIt);
-  bool maybeTailDuplicateBlock(
-      MachineBasicBlock *BB, MachineBasicBlock *LPred,
-      BlockChain &Chain, BlockFilterSet *BlockFilter,
-      MachineFunction::iterator &PrevUnplacedBlockIt,
-      bool &DuplicatedToLPred);
-  bool hasBetterLayoutPredecessor(
-      const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
-      const BlockChain &SuccChain, BranchProbability SuccProb,
-      BranchProbability RealSuccProb, const BlockChain &Chain,
-      const BlockFilterSet *BlockFilter);
-  BlockAndTailDupResult selectBestSuccessor(
-      const MachineBasicBlock *BB, const BlockChain &Chain,
-      const BlockFilterSet *BlockFilter);
-  MachineBasicBlock *selectBestCandidateBlock(
-      const BlockChain &Chain, SmallVectorImpl<MachineBasicBlock *> &WorkList);
-  MachineBasicBlock *getFirstUnplacedBlock(
-      const BlockChain &PlacedChain,
-      MachineFunction::iterator &PrevUnplacedBlockIt,
-      const BlockFilterSet *BlockFilter);
+  bool
+  repeatedlyTailDuplicateBlock(MachineBasicBlock *BB, MachineBasicBlock *&LPred,
+                               const MachineBasicBlock *LoopHeaderBB,
+                               BlockChain &Chain, BlockFilterSet *BlockFilter,
+                               MachineFunction::iterator &PrevUnplacedBlockIt);
+  bool maybeTailDuplicateBlock(MachineBasicBlock *BB, MachineBasicBlock *LPred,
+                               BlockChain &Chain, BlockFilterSet *BlockFilter,
+                               MachineFunction::iterator &PrevUnplacedBlockIt,
+                               bool &DuplicatedToLPred);
+  bool hasBetterLayoutPredecessor(const MachineBasicBlock *BB,
+                                  const MachineBasicBlock *Succ,
+                                  const BlockChain &SuccChain,
+                                  BranchProbability SuccProb,
+                                  BranchProbability RealSuccProb,
+                                  const BlockChain &Chain,
+                                  const BlockFilterSet *BlockFilter);
+  BlockAndTailDupResult selectBestSuccessor(const MachineBasicBlock *BB,
+                                            const BlockChain &Chain,
+                                            const BlockFilterSet *BlockFilter);
+  MachineBasicBlock *
+  selectBestCandidateBlock(const BlockChain &Chain,
+                           SmallVectorImpl<MachineBasicBlock *> &WorkList);
+  MachineBasicBlock *
+  getFirstUnplacedBlock(const BlockChain &PlacedChain,
+                        MachineFunction::iterator &PrevUnplacedBlockIt,
+                        const BlockFilterSet *BlockFilter);
 
   /// Add a basic block to the work list if it is appropriate.
   ///
@@ -525,20 +524,19 @@ class MachineBlockPlacement : public MachineFunctionPass {
                                   const MachineBasicBlock *ExitBB,
                                   const BlockFilterSet &LoopBlockSet);
   MachineBasicBlock *findBestLoopTopHelper(MachineBasicBlock *OldTop,
-      const MachineLoop &L, const BlockFilterSet &LoopBlockSet);
-  MachineBasicBlock *findBestLoopTop(
-      const MachineLoop &L, const BlockFilterSet &LoopBlockSet);
-  MachineBasicBlock *findBestLoopExit(
-      const MachineLoop &L, const BlockFilterSet &LoopBlockSet,
-      BlockFrequency &ExitFreq);
+                                           const MachineLoop &L,
+                                           const BlockFilterSet &LoopBlockSet);
+  MachineBasicBlock *findBestLoopTop(const MachineLoop &L,
+                                     const BlockFilterSet &LoopBlockSet);
+  MachineBasicBlock *findBestLoopExit(const MachineLoop &L,
+                                      const BlockFilterSet &LoopBlockSet,
+                                      BlockFrequency &ExitFreq);
   BlockFilterSet collectLoopBlockSet(const MachineLoop &L);
   void buildLoopChains(const MachineLoop &L);
-  void rotateLoop(
-      BlockChain &LoopChain, const MachineBasicBlock *ExitingBB,
-      BlockFrequency ExitFreq, const BlockFilterSet &LoopBlockSet);
-  void rotateLoopWithProfile(
-      BlockChain &LoopChain, const MachineLoop &L,
-      const BlockFilterSet &LoopBlockSet);
+  void rotateLoop(BlockChain &LoopChain, const MachineBasicBlock *ExitingBB,
+                  BlockFrequency ExitFreq, const BlockFilterSet &LoopBlockSet);
+  void rotateLoopWithProfile(BlockChain &LoopChain, const MachineLoop &L,
+                             const BlockFilterSet &LoopBlockSet);
   void buildCFGChains();
   void optimizeBranches();
   void alignBlocks();
@@ -547,10 +545,10 @@ class MachineBlockPlacement : public MachineFunctionPass {
   bool shouldTailDuplicate(MachineBasicBlock *BB);
   /// Check the edge frequencies to see if tail duplication will increase
   /// fallthroughs.
-  bool isProfitableToTailDup(
-    const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
-    BranchProbability QProb,
-    const BlockChain &Chain, const BlockFilterSet *BlockFilter);
+  bool isProfitableToTailDup(const MachineBasicBlock *BB,
+                             const MachineBasicBlock *Succ,
+                             BranchProbability QProb, const BlockChain &Chain,
+                             const BlockFilterSet *BlockFilter);
 
   /// Check for a trellis layout.
   bool isTrellis(const MachineBasicBlock *BB,
@@ -571,9 +569,10 @@ class MachineBlockPlacement : public MachineFunctionPass {
 
   /// Returns true if a block can tail duplicate into all unplaced
   /// predecessors. Filters based on loop.
-  bool canTailDuplicateUnplacedPreds(
-      const MachineBasicBlock *BB, MachineBasicBlock *Succ,
-      const BlockChain &Chain, const BlockFilterSet *BlockFilter);
+  bool canTailDuplicateUnplacedPreds(const MachineBasicBlock *BB,
+                                     MachineBasicBlock *Succ,
+                                     const BlockChain &Chain,
+                                     const BlockFilterSet *BlockFilter);
 
   /// Find chains of triangles to tail-duplicate where a global analysis works,
   /// but a local analysis would not find them.
@@ -791,8 +790,8 @@ bool MachineBlockPlacement::shouldTailDuplicate(MachineBasicBlock *BB) {
 /// Compare 2 BlockFrequency's with a small penalty for \p A.
 /// In order to be conservative, we apply a X% penalty to account for
 /// increased icache pressure and static heuristics. For small frequencies
-/// we use only the numerators to improve accuracy. For simplicity, we assume the
-/// penalty is less than 100%
+/// we use only the numerators to improve accuracy. For simplicity, we assume
+/// the penalty is less than 100%
 /// TODO(iteratee): Use 64-bit fixed point edge frequencies everywhere.
 static bool greaterWithBias(BlockFrequency A, BlockFrequency B,
                             uint64_t EntryFreq) {
@@ -808,8 +807,8 @@ static bool greaterWithBias(BlockFrequency A, BlockFrequency B,
 /// considering duplication.
 bool MachineBlockPlacement::isProfitableToTailDup(
     const MachineBasicBlock *BB, const MachineBasicBlock *Succ,
-    BranchProbability QProb,
-    const BlockChain &Chain, const BlockFilterSet *BlockFilter) {
+    BranchProbability QProb, const BlockChain &Chain,
+    const BlockFilterSet *BlockFilter) {
   // We need to do a probability calculation to make sure this is profitable.
   // First: does succ have a successor that post-dominates? This affects the
   // calculation. The 2 relevant cases are:
@@ -865,12 +864,12 @@ bool MachineBlockPlacement::isProfitableToTailDup(
   // from BB.
   auto SuccBestPred = BlockFrequency(0);
   for (MachineBasicBlock *SuccPred : Succ->predecessors()) {
-    if (SuccPred == Succ || SuccPred == BB
-        || BlockToChain[SuccPred] == &Chain
-        || (BlockFilter && !BlockFilter->count(SuccPred)))
+    if (SuccPred == Succ || SuccPred == BB ||
+        BlockToChain[SuccPred] == &Chain ||
+        (BlockFilter && !BlockFilter->count(SuccPred)))
       continue;
-    auto Freq = MBFI->getBlockFreq(SuccPred)
-        * MBPI->getEdgeProbability(SuccPred, Succ);
+    auto Freq =
+        MBFI->getBlockFreq(SuccPred) * MBPI->getEdgeProbability(SuccPred, Succ);
     if (Freq > SuccBestPred)
       SuccBestPred = Freq;
   }
@@ -1126,7 +1125,7 @@ MachineBlockPlacement::getBestTrellisSuccessor(
   }
   // We have already computed the optimal edge for the other side of the
   // trellis.
-  ComputedEdges[BestB.Src] = { BestB.Dest, false };
+  ComputedEdges[BestB.Src] = {BestB.Dest, false};
 
   auto TrellisSucc = BestA.Dest;
   LLVM_DEBUG(BranchProbability SuccProb = getAdjustedProbability(
@@ -1158,8 +1157,8 @@ bool MachineBlockPlacement::canTailDuplicateUnplacedPreds(
     // Make sure all unplaced and unfiltered predecessors can be
     // tail-duplicated into.
     // Skip any blocks that are already placed or not in this loop.
-    if (Pred == BB || (BlockFilter && !BlockFilter->count(Pred))
-        || (BlockToChain[Pred] == &Chain && !Succ->succ_empty()))
+    if (Pred == BB || (BlockFilter && !BlockFilter->count(Pred)) ||
+        (BlockToChain[Pred] == &Chain && !Succ->succ_empty()))
       continue;
     if (!TailDup.canTailDuplicate(Succ, Pred)) {
       if (Successors.size() > 1 && hasSameSuccessors(*Pred, Successors))
@@ -1278,9 +1277,7 @@ void MachineBlockPlacement::precomputeTriangleChains() {
 
     unsigned count() const { return Edges.size() - 1; }
 
-    MachineBasicBlock *getKey() const {
-      return Edges.back();
-    }
+    MachineBasicBlock *getKey() const { return Edges.back(); }
   };
 
   if (TriangleChainCount == 0)
@@ -1315,7 +1312,7 @@ void MachineBlockPlacement::precomputeTriangleChains() {
     bool CanTailDuplicate = true;
    // If PDom can't tail-duplicate into its non-BB predecessors, then this
     // isn't the kind of triangle we're looking for.
-    for (MachineBasicBlock* Pred : PDom->predecessors()) {
+    for (MachineBasicBlock *Pred : PDom->predecessors()) {
       if (Pred == &BB)
         continue;
       if (!TailDup.canTailDuplicate(PDom, Pred)) {
@@ -1375,8 +1372,8 @@ void MachineBlockPlacement::precomputeTriangleChains() {
 
 // When profile is not present, return the StaticLikelyProb.
 // When profile is available, we need to handle the triangle-shape CFG.
-static BranchProbability getLayoutSuccessorProbThreshold(
-      const MachineBasicBlock *BB) {
+static BranchProbability
+getLayoutSuccessorProbThreshold(const MachineBasicBlock *BB) {
   if (!BB->getParent()->getFunction().hasProfileData())
     return BranchProbability(StaticLikelyProb, 100);
   if (BB->succ_size() == 2) {
@@ -1540,8 +1537,8 @@ bool MachineBlockPlacement::hasBetterLayoutPredecessor(
   for (MachineBasicBlock *Pred : Succ->predecessors()) {
     BlockChain *PredChain = BlockToChain[Pred];
     if (Pred == Succ || PredChain == &SuccChain ||
-        (BlockFilter && !BlockFilter->count(Pred)) ||
-        PredChain == &Chain || Pred != *std::prev(PredChain->end()) ||
+        (BlockFilter && !BlockFilter->count(Pred)) || PredChain == &Chain ||
+        Pred != *std::prev(PredChain->end()) ||
         // This check is redundant except for look ahead. This function is
         // called for lookahead by isProfitableToTailDup when BB hasn't been
         // placed yet.
@@ -1588,12 +1585,12 @@ bool MachineBlockPlacement::hasBetterLayoutPredecessor(
 /// \returns The best successor block found, or null if none are viable, along
 /// with a boolean indicating if tail duplication is necessary.
 MachineBlockPlacement::BlockAndTailDupResult
-MachineBlockPlacement::selectBestSuccessor(
-    const MachineBasicBlock *BB, const BlockChain &Chain,
-    const BlockFilterSet *BlockFilter) {
+MachineBlockPlacement::selectBestSuccessor(const MachineBasicBlock *BB,
+                                           const BlockChain &Chain,
+                                           const BlockFilterSet *BlockFilter) {
   const BranchProbability HotProb(StaticLikelyProb, 100);
 
-  BlockAndTailDupResult BestSucc = { nullptr, false };
+  BlockAndTailDupResult BestSucc = {nullptr, false};
   auto BestProb = BranchProbability::getZero();
 
   SmallVector<MachineBasicBlock *, 4> Successors;
@@ -1673,8 +1670,8 @@ MachineBlockPlacement::selectBestSuccessor(
     std::tie(DupProb, Succ) = Tup;
     if (DupProb < BestProb)
       break;
-    if (canTailDuplicateUnplacedPreds(BB, Succ, Chain, BlockFilter)
-        && (isProfitableToTailDup(BB, Succ, BestProb, Chain, BlockFilter))) {
+    if (canTailDuplicateUnplacedPreds(BB, Succ, Chain, BlockFilter) &&
+        (isProfitableToTailDup(BB, Succ, BestProb, Chain, BlockFilter))) {
       LLVM_DEBUG(dbgs() << "    Candidate: " << getBlockName(Succ)
                         << ", probability: " << DupProb
                         << " (Tail Duplicate)\n");
@@ -1787,8 +1784,7 @@ MachineBasicBlock *MachineBlockPlacement::getFirstUnplacedBlock(
 }
 
 void MachineBlockPlacement::fillWorkLists(
-    const MachineBasicBlock *MBB,
-    SmallPtrSetImpl<BlockChain *> &UpdatedPreds,
+    const MachineBasicBlock *MBB, SmallPtrSetImpl<BlockChain *> &UpdatedPreds,
     const BlockFilterSet *BlockFilter = nullptr) {
   BlockChain &Chain = *BlockToChain[MBB];
   if (!UpdatedPreds.insert(&Chain).second)
@@ -1819,9 +1815,9 @@ void MachineBlockPlacement::fillWorkLists(
     BlockWorkList.push_back(BB);
 }
 
-void MachineBlockPlacement::buildChain(
-    const MachineBasicBlock *HeadBB, BlockChain &Chain,
-    BlockFilterSet *BlockFilter) {
+void MachineBlockPlacement::buildChain(const MachineBasicBlock *HeadBB,
+                                       BlockChain &Chain,
+                                       BlockFilterSet *BlockFilter) {
   assert(HeadBB && "BB must not be null.\n");
   assert(BlockToChain[HeadBB] == &Chain && "BlockToChainMap mis-match.\n");
   MachineFunction::iterator PrevUnplacedBlockIt = F->begin();
@@ -1834,16 +1830,14 @@ void MachineBlockPlacement::buildChain(
     assert(BlockToChain[BB] == &Chain && "BlockToChainMap mis-match in loop.");
     assert(*std::prev(Chain.end()) == BB && "BB Not found at end of chain.");
 
-
     // Look for the best viable successor if there is one to place immediately
     // after this block.
     auto Result = selectBestSuccessor(BB, Chain, BlockFilter);
-    MachineBasicBlock* BestSucc = Result.BB;
+    MachineBasicBlock *BestSucc = Result.BB;
     bool ShouldTailDup = Result.ShouldTailDup;
     if (allowTailDupPlacement())
-      ShouldTailDup |= (BestSucc && canTailDuplicateUnplacedPreds(BB, BestSucc,
-                                                                  Chain,
-                                                                  BlockFilter));
+      ShouldTailDup |= (BestSucc && canTailDuplicateUnplacedPreds(
+                                        BB, BestSucc, Chain, BlockFilter));
 
     // If an immediate successor isn't available, look for the best viable
     // block among those we've identified as not violating the loop's CFG at
@@ -1866,7 +1860,7 @@ void MachineBlockPlacement::buildChain(
     // Check for that now.
     if (allowTailDupPlacement() && BestSucc && ShouldTailDup) {
       repeatedlyTailDuplicateBlock(BestSucc, BB, LoopHeaderBB, Chain,
-                                       BlockFilter, PrevUnplacedBlockIt);
+                                   BlockFilter, PrevUnplacedBlockIt);
       // If the chosen successor was duplicated into BB, don't bother laying
       // it out, just go round the loop again with BB as the chain end.
       if (!BB->isSuccessor(BestSucc))
@@ -1875,8 +1869,8 @@ void MachineBlockPlacement::buildChain(
 
     // Place this block, updating the datastructures to reflect its placement.
     BlockChain &SuccChain = *BlockToChain[BestSucc];
-    // Zero out UnscheduledPredecessors for the successor we're about to merge in case
-    // we selected a successor that didn't fit naturally into the CFG.
+    // Zero out UnscheduledPredecessors for the successor we're about to merge
+    // in case we selected a successor that didn't fit naturally into the CFG.
     SuccChain.UnscheduledPredecessors = 0;
     LLVM_DEBUG(dbgs() << "Merging from " << getBlockName(BB) << " to "
                       << getBlockName(BestSucc) << "\n");
@@ -1903,10 +1897,8 @@ void MachineBlockPlacement::buildChain(
 // If BB is moved before OldTop, Pred needs a taken branch to BB, and it can't
 // layout the other successor below it, so it can't reduce taken branch.
 // In this case we keep its original layout.
-bool
-MachineBlockPlacement::canMoveBottomBlockToTop(
-    const MachineBasicBlock *BottomBlock,
-    const MachineBasicBlock *OldTop) {
+bool MachineBlockPlacement::canMoveBottomBlockToTop(
+    const MachineBasicBlock *BottomBlock, const MachineBasicBlock *OldTop) {
   if (BottomBlock->pred_size() != 1)
     return true;
   MachineBasicBlock *Pred = *BottomBlock->pred_begin();
@@ -1924,9 +1916,8 @@ MachineBlockPlacement::canMoveBottomBlockToTop(
 
// Find out the possible fall-through frequency to the top of a loop.
 BlockFrequency
-MachineBlockPlacement::TopFallThroughFreq(
-    const MachineBasicBlock *Top,
-    const BlockFilterSet &LoopBlockSet) {
+MachineBlockPlacement::TopFallThroughFreq(const MachineBasicBlock *Top,
+                                          const BlockFilterSet &LoopBlockSet) {
   BlockFrequency MaxFreq = 0;
   for (MachineBasicBlock *Pred : Top->predecessors()) {
     BlockChain *PredChain = BlockToChain[Pred];
@@ -1948,8 +1939,8 @@ MachineBlockPlacement::TopFallThroughFreq(
         }
       }
       if (TopOK) {
-        BlockFrequency EdgeFreq = MBFI->getBlockFreq(Pred) *
-                                  MBPI->getEdgeProbability(Pred, Top);
+        BlockFrequency EdgeFreq =
+            MBFI->getBlockFreq(Pred) * MBPI->getEdgeProbability(Pred, Top);
         if (EdgeFreq > MaxFreq)
           MaxFreq = EdgeFreq;
       }
@@ -1979,73 +1970,70 @@ MachineBlockPlacement::TopFallThroughFreq(
 //              |-
 //              V
 //
-BlockFrequency
-MachineBlockPlacement::FallThroughGains(
-    const MachineBasicBlock *NewTop,
-    const MachineBasicBlock *OldTop,
-    const MachineBasicBlock *ExitBB,
-    const BlockFilterSet &LoopBlockSet) {
+BlockFrequency MachineBlockPlacement::FallThroughGains(
+    const MachineBasicBlock *NewTop, const MachineBasicBlock *OldTop,
+    const MachineBasicBlock *ExitBB, const BlockFilterSet &LoopBlockSet) {
   BlockFrequency FallThrough2Top = TopFallThroughFreq(OldTop, LoopBlockSet);
   BlockFrequency FallThrough2Exit = 0;
   if (ExitBB)
-    FallThrough2Exit = MBFI->getBlockFreq(NewTop) *
-        MBPI->getEdgeProbability(NewTop, ExitBB);
-  BlockFrequency BackEdgeFreq = MBFI->getBlockFreq(NewTop) *
-      MBPI->getEdgeProbability(NewTop, OldTop);
+    FallThrough2Exit =
+        MBFI->getBlockFreq(NewTop) * MBPI->getEdgeProbability(NewTop, ExitBB);
+  BlockFrequency BackEdgeFreq =
+      MBFI->getBlockFreq(NewTop) * MBPI->getEdgeProbability(NewTop, OldTop);
 
   // Find the best Pred of NewTop.
-   MachineBasicBlock *BestPred = nullptr;
-   BlockFrequency FallThroughFromPred = 0;
-   for (MachineBasicBlock *Pred : NewTop->predecessors()) {
-     if (!LoopBlockSet.count(Pred))
-       continue;
-     BlockChain *PredChain = BlockToChain[Pred];
-     if (!PredChain || Pred == *std::prev(PredChain->end())) {
-       BlockFrequency EdgeFreq = MBFI->getBlockFreq(Pred) *
-           MBPI->getEdgeProbability(Pred, NewTop);
-       if (EdgeFreq > FallThroughFromPred) {
-         FallThroughFromPred = EdgeFreq;
-         BestPred = Pred;
-       }
-     }
-   }
-
-   // If NewTop is not placed after Pred, another successor can be placed
-   // after Pred.
-   BlockFrequency NewFreq = 0;
-   if (BestPred) {
-     for (MachineBasicBlock *Succ : BestPred->successors()) {
-       if ((Succ == NewTop) || (Succ == BestPred) || !LoopBlockSet.count(Succ))
-         continue;
-       if (ComputedEdges.contains(Succ))
-         continue;
-       BlockChain *SuccChain = BlockToChain[Succ];
-       if ((SuccChain && (Succ != *SuccChain->begin())) ||
-           (SuccChain == BlockToChain[BestPred]))
-         continue;
-       BlockFrequency EdgeFreq = MBFI->getBlockFreq(BestPred) *
-           MBPI->getEdgeProbability(BestPred, Succ);
-       if (EdgeFreq > NewFreq)
-         NewFreq = EdgeFreq;
-     }
-     BlockFrequency OrigEdgeFreq = MBFI->getBlockFreq(BestPred) *
-         MBPI->getEdgeProbability(BestPred, NewTop);
-     if (NewFreq > OrigEdgeFreq) {
-       // If NewTop is not the best successor of Pred, then Pred doesn't
-       // fallthrough to NewTop. So there is no FallThroughFromPred and
-       // NewFreq.
-       NewFreq = 0;
-       FallThroughFromPred = 0;
-     }
-   }
-
-   BlockFrequency Result = 0;
-   BlockFrequency Gains = BackEdgeFreq + NewFreq;
-   BlockFrequency Lost = FallThrough2Top + FallThrough2Exit +
-       FallThroughFromPred;
-   if (Gains > Lost)
-     Result = Gains - Lost;
-   return Result;
+  MachineBasicBlock *BestPred = nullptr;
+  BlockFrequency FallThroughFromPred = 0;
+  for (MachineBasicBlock *Pred : NewTop->predecessors()) {
+    if (!LoopBlockSet.count(Pred))
+      continue;
+    BlockChain *PredChain = BlockToChain[Pred];
+    if (!PredChain || Pred == *std::prev(PredChain->end())) {
+      BlockFrequency EdgeFreq =
+          MBFI->getBlockFreq(Pred) * MBPI->getEdgeProbability(Pred, NewTop);
+      if (EdgeFreq > FallThroughFromPred) {
+        FallThroughFromPred = EdgeFreq;
+        BestPred = Pred;
+      }
+    }
+  }
+
+  // If NewTop is not placed after Pred, another successor can be placed
+  // after Pred.
+  BlockFrequency NewFreq = 0;
+  if (BestPred) {
+    for (MachineBasicBlock *Succ : BestPred->successors()) {
+      if ((Succ == NewTop) || (Succ == BestPred) || !LoopBlockSet.count(Succ))
+        continue;
+      if (ComputedEdges.contains(Succ))
+        continue;
+      BlockChain *SuccChain = BlockToChain[Succ];
+      if ((SuccChain && (Succ != *SuccChain->begin())) ||
+          (SuccChain == BlockToChain[BestPred]))
+        continue;
+      BlockFrequency EdgeFreq = MBFI->getBlockFreq(BestPred) *
+                                MBPI->getEdgeProbability(BestPred, Succ);
+      if (EdgeFreq > NewFreq)
+        NewFreq = EdgeFreq;
+    }
+    BlockFrequency OrigEdgeFreq = MBFI->getBlockFreq(BestPred) *
+                                  MBPI->getEdgeProbability(BestPred, NewTop);
+    if (NewFreq > OrigEdgeFreq) {
+      // If NewTop is not the best successor of Pred, then Pred doesn't
+      // fallthrough to NewTop. So there is no FallThroughFromPred and
+      // NewFreq.
+      NewFreq = 0;
+      FallThroughFromPred = 0;
+    }
+  }
+
+  BlockFrequency Result = 0;
+  BlockFrequency Gains = BackEdgeFreq + NewFreq;
+  BlockFrequency Lost =
+      FallThrough2Top + FallThrough2Exit + FallThroughFromPred;
+  if (Gains > Lost)
+    Result = Gains - Lost;
+  return Result;
 }
 
 /// Helper function of findBestLoopTop. Find the best loop top block
@@ -2070,10 +2058,8 @@ MachineBlockPlacement::FallThroughGains(
 ///        At the same time, move it before old top increases the taken branch
 ///        to loop exit block, so the reduced taken branch will be compared with
 ///        the increased taken branch to the loop exit block.
-MachineBasicBlock *
-MachineBlockPlacement::findBestLoopTopHelper(
-    MachineBasicBlock *OldTop,
-    const MachineLoop &L,
+MachineBasicBlock *MachineBlockPlacement::findBestLoopTopHelper(
+    MachineBasicBlock *OldTop, const MachineLoop &L,
     const BlockFilterSet &LoopBlockSet) {
   // Check that the header hasn't been fused with a preheader block due to
   // crazy branches. If it has, we need to start with the header at the top to
@@ -2110,10 +2096,11 @@ MachineBlockPlacement::findBestLoopTopHelper(
     if (!canMoveBottomBlockToTop(Pred, OldTop))
       continue;
 
-    BlockFrequency Gains = FallThroughGains(Pred, OldTop, OtherBB,
-                                            LoopBlockSet);
-    if ((Gains > 0) && (Gains > BestGains ||
-        ((Gains == BestGains) && Pred->isLayoutSuccessor(OldTop)))) {
+    BlockFrequency Gains =
+        FallThroughGains(Pred, OldTop, OtherBB, LoopBlockSet);
+    if ((Gains > 0) &&
+        (Gains > BestGains ||
+         ((Gains == BestGains) && Pred->isLayoutSuccessor(OldTop)))) {
       BestPred = Pred;
       BestGains = Gains;
     }
@@ -2160,7 +2147,7 @@ MachineBlockPlacement::findBestLoopTop(const MachineLoop &L,
     OldTop = NewTop;
     NewTop = findBestLoopTopHelper(OldTop, L, LoopBlockSet);
     if (NewTop != OldTop)
-      ComputedEdges[NewTop] = { OldTop, false };
+      ComputedEdges[NewTop] = {OldTop, false};
   }
   return NewTop;
 }
@@ -2292,10 +2279,8 @@ MachineBlockPlacement::findBestLoopExit(const MachineLoop &L,
 ///
 ///   1. Look for a Pred that can be layout before Top.
 ///   2. Check if Top is the most possible successor of Pred.
-bool
-MachineBlockPlacement::hasViableTopFallthrough(
-    const MachineBasicBlock *Top,
-    const BlockFilterSet &LoopBlockSet) {
+bool MachineBlockPlacement::hasViableTopFallthrough(
+    const MachineBasicBlock *Top, const BlockFilterSet &LoopBlockSet) {
   for (MachineBasicBlock *Pred : Top->predecessors()) {
     BlockChain *PredChain = BlockToChain[Pred];
     if (!LoopBlockSet.count(Pred) &&
@@ -2447,7 +2432,7 @@ void MachineBlockPlacement::rotateLoopWithProfile(
     if (!LoopBlockSet.count(Pred) &&
         (!PredChain || Pred == *std::prev(PredChain->end()))) {
       auto EdgeFreq = MBFI->getBlockFreq(Pred) *
-          MBPI->getEdgeProbability(Pred, ChainHeaderBB);
+                      MBPI->getEdgeProbability(Pred, ChainHeaderBB);
       auto FallThruCost = ScaleBlockFrequency(EdgeFreq, MisfetchCost);
       // If the predecessor has only an unconditional jump to the header, we
       // need to consider the cost of this jump.
@@ -2994,14 +2979,13 @@ void MachineBlockPlacement::alignBlocks() {
 /// @return true if \p BB was removed.
 bool MachineBlockPlacement::repeatedlyTailDuplicateBlock(
     MachineBasicBlock *BB, MachineBasicBlock *&LPred,
-    const MachineBasicBlock *LoopHeaderBB,
-    BlockChain &Chain, BlockFilterSet *BlockFilter,
+    const MachineBasicBlock *LoopHeaderBB, BlockChain &Chain,
+    BlockFilterSet *BlockFilter,
     MachineFunction::iterator &PrevUnplacedBlockIt) {
   bool Removed, DuplicatedToLPred;
   bool DuplicatedToOriginalLPred;
   Removed = maybeTailDuplicateBlock(BB, LPred, Chain, BlockFilter,
-                                    PrevUnplacedBlockIt,
-                                    DuplicatedToLPred);
+                                    PrevUnplacedBlockIt, DuplicatedToLPred);
   if (!Removed)
     return false;
   DuplicatedToOriginalLPred = DuplicatedToLPred;
@@ -3023,8 +3007,7 @@ bool MachineBlockPlacement::repeatedlyTailDuplicateBlock(
       break;
     DupPred = *std::prev(ChainEnd);
     Removed = maybeTailDuplicateBlock(DupBB, DupPred, Chain, BlockFilter,
-                                      PrevUnplacedBlockIt,
-                                      DuplicatedToLPred);
+                                      PrevUnplacedBlockIt, DuplicatedToLPred);
   }
   // If BB was duplicated into LPred, it is now scheduled. But because it was
   // removed, markChainSuccessors won't be called for its chain. Instead we
@@ -3051,9 +3034,8 @@ bool MachineBlockPlacement::repeatedlyTailDuplicateBlock(
 /// \p DuplicatedToLPred - True if the block was duplicated into LPred.
 /// \return  - True if the block was duplicated into all preds and removed.
 bool MachineBlockPlacement::maybeTailDuplicateBlock(
-    MachineBasicBlock *BB, MachineBasicBlock *LPred,
-    BlockChain &Chain, BlockFilterSet *BlockFilter,
-    MachineFunction::iterator &PrevUnplacedBlockIt,
+    MachineBasicBlock *BB, MachineBasicBlock *LPred, BlockChain &Chain,
+    BlockFilterSet *BlockFilter, MachineFunction::iterator &PrevUnplacedBlockIt,
     bool &DuplicatedToLPred) {
   DuplicatedToLPred = false;
   if (!shouldTailDuplicate(BB))
@@ -3065,49 +3047,48 @@ bool MachineBlockPlacement::maybeTailDuplicateBlock(
   // This has to be a callback because none of it can be done after
   // BB is deleted.
   bool Removed = false;
-  auto RemovalCallback =
-      [&](MachineBasicBlock *RemBB) {
-        // Signal to outer function
-        Removed = true;
-
-        // Conservative default.
-        bool InWorkList = true;
-        // Remove from the Chain and Chain Map
-        if (BlockToChain.count(RemBB)) {
-          BlockChain *Chain = BlockToChain[RemBB];
-          InWorkList = Chain->UnscheduledPredecessors == 0;
-          Chain->remove(RemBB);
-          BlockToChain.erase(RemBB);
-        }
+  auto RemovalCallback = [&](MachineBasicBlock *RemBB) {
+    // Signal to outer function
+    Removed = true;
+
+    // Conservative default.
+    bool InWorkList = true;
+    // Remove from the Chain and Chain Map
+    if (BlockToChain.count(RemBB)) {
+      BlockChain *Chain = BlockToChain[RemBB];
+      InWorkList = Chain->UnscheduledPredecessors == 0;
+      Chain->remove(RemBB);
+      BlockToChain.erase(RemBB);
+    }
 
-        // Handle the unplaced block iterator
-        if (&(*PrevUnplacedBlockIt) == RemBB) {
-          PrevUnplacedBlockIt++;
-        }
+    // Handle the unplaced block iterator
+    if (&(*PrevUnplacedBlockIt) == RemBB) {
+      PrevUnplacedBlockIt++;
+    }
 
-        // Handle the Work Lists
-        if (InWorkList) {
-          SmallVectorImpl<MachineBasicBlock *> &RemoveList = BlockWorkList;
-          if (RemBB->isEHPad())
-            RemoveList = EHPadWorkList;
-          llvm::erase_value(RemoveList, RemBB);
-        }
+    // Handle the Work Lists
+    if (InWorkList) {
+      SmallVectorImpl<MachineBasicBlock *> &RemoveList = BlockWorkList;
+      if (RemBB->isEHPad())
+        RemoveList = EHPadWorkList;
+      llvm::erase_value(RemoveList, RemBB);
+    }
 
-        // Handle the filter set
-        if (BlockFilter) {
-          BlockFilter->remove(RemBB);
-        }
+    // Handle the filter set
+    if (BlockFilter) {
+      BlockFilter->remove(RemBB);
+    }
 
-        // Remove the block from loop info.
-        MLI->removeBlock(RemBB);
-        if (RemBB == PreferredLoopExit)
-          PreferredLoopExit = nullptr;
+    // Remove the block from loop info.
+    MLI->removeBlock(RemBB);
+    if (RemBB == PreferredLoopExit)
+      PreferredLoopExit = nullptr;
 
-        LLVM_DEBUG(dbgs() << "TailDuplicator deleted block: "
-                          << getBlockName(RemBB) << "\n");
-      };
+    LLVM_DEBUG(dbgs() << "TailDuplicator deleted block: " << getBlockName(RemBB)
+                      << "\n");
+  };
   auto RemovalCallbackRef =
-      function_ref<void(MachineBasicBlock*)>(RemovalCallback);
+      function_ref<void(MachineBasicBlock *)>(RemovalCallback);
 
   SmallVector<MachineBasicBlock *, 8> DuplicatedPreds;
   bool IsSimple = TailDup.isSimpleBB(BB);
@@ -3128,11 +3109,11 @@ bool MachineBlockPlacement::maybeTailDuplicateBlock(
   DuplicatedToLPred = false;
   for (MachineBasicBlock *Pred : DuplicatedPreds) {
     // We're only looking for unscheduled predecessors that match the filter.
-    BlockChain* PredChain = BlockToChain[Pred];
+    BlockChain *PredChain = BlockToChain[Pred];
     if (Pred == LPred)
       DuplicatedToLPred = true;
-    if (Pred == LPred || (BlockFilter && !BlockFilter->count(Pred))
-        || PredChain == &Chain)
+    if (Pred == LPred || (BlockFilter && !BlockFilter->count(Pred)) ||
+        PredChain == &Chain)
       continue;
     for (MachineBasicBlock *NewSucc : Pred->successors()) {
       if (BlockFilter && !BlockFilter->count(NewSucc))
@@ -3202,8 +3183,7 @@ bool MachineBlockPlacement::isBestSuccessor(MachineBasicBlock *BB,
 // Find out the predecessors of BB and BB can be beneficially duplicated into
 // them.
 void MachineBlockPlacement::findDuplicateCandidates(
-    SmallVectorImpl<MachineBasicBlock *> &Candidates,
-    MachineBasicBlock *BB,
+    SmallVectorImpl<MachineBasicBlock *> &Candidates, MachineBasicBlock *BB,
     BlockFilterSet *BlockFilter) {
   MachineBasicBlock *Fallthrough = nullptr;
   BranchProbability DefaultBranchProb = BranchProbability::getZero();
@@ -3348,8 +3328,8 @@ bool MachineBlockPlacement::runOnMachineFunction(MachineFunction &MF) {
 
   F = &MF;
   MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
-  MBFI = std::make_unique<MBFIWrapper>(
-      getAnalysis<MachineBlockFrequencyInfo>());
+  MBFI =
+      std::make_unique<MBFIWrapper>(getAnalysis<MachineBlockFrequencyInfo>());
   MLI = &getAnalysis<MachineLoopInfo>();
   TII = MF.getSubtarget().getInstrInfo();
   TLI = MF.getSubtarget().getTargetLowering();
diff --git a/llvm/lib/CodeGen/MachineCSE.cpp b/llvm/lib/CodeGen/MachineCSE.cpp
index 89c4562e8d38033..4ce392d51d83812 100644
--- a/llvm/lib/CodeGen/MachineCSE.cpp
+++ b/llvm/lib/CodeGen/MachineCSE.cpp
@@ -51,14 +51,14 @@ using namespace llvm;
 #define DEBUG_TYPE "machine-cse"
 
 STATISTIC(NumCoalesces, "Number of copies coalesced");
-STATISTIC(NumCSEs,      "Number of common subexpression eliminated");
-STATISTIC(NumPREs,      "Number of partial redundant expression"
-                        " transformed to fully redundant");
+STATISTIC(NumCSEs, "Number of common subexpression eliminated");
+STATISTIC(NumPREs, "Number of partial redundant expression"
+                   " transformed to fully redundant");
 STATISTIC(NumPhysCSEs,
           "Number of physreg referencing common subexpr eliminated");
 STATISTIC(NumCrossBBCSEs,
           "Number of cross-MBB physreg referencing CS eliminated");
-STATISTIC(NumCommutes,  "Number of copies coalesced after commuting");
+STATISTIC(NumCommutes, "Number of copies coalesced after commuting");
 
 // Threshold to avoid excessive cost to compute isProfitableToCSE.
 static cl::opt<int>
@@ -71,93 +71,92 @@ static cl::opt<bool> AggressiveMachineCSE(
 
 namespace {
 
-  class MachineCSE : public MachineFunctionPass {
-    const TargetInstrInfo *TII = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    AliasAnalysis *AA = nullptr;
-    MachineDominatorTree *DT = nullptr;
-    MachineRegisterInfo *MRI = nullptr;
-    MachineBlockFrequencyInfo *MBFI = nullptr;
+class MachineCSE : public MachineFunctionPass {
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  AliasAnalysis *AA = nullptr;
+  MachineDominatorTree *DT = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
+  MachineBlockFrequencyInfo *MBFI = nullptr;
 
-  public:
-    static char ID; // Pass identification
+public:
+  static char ID; // Pass identification
 
-    MachineCSE() : MachineFunctionPass(ID) {
-      initializeMachineCSEPass(*PassRegistry::getPassRegistry());
-    }
+  MachineCSE() : MachineFunctionPass(ID) {
+    initializeMachineCSEPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-      AU.addRequired<AAResultsWrapperPass>();
-      AU.addPreservedID(MachineLoopInfoID);
-      AU.addRequired<MachineDominatorTree>();
-      AU.addPreserved<MachineDominatorTree>();
-      AU.addRequired<MachineBlockFrequencyInfo>();
-      AU.addPreserved<MachineBlockFrequencyInfo>();
-    }
+  bool runOnMachineFunction(MachineFunction &MF) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+    AU.addRequired<AAResultsWrapperPass>();
+    AU.addPreservedID(MachineLoopInfoID);
+    AU.addRequired<MachineDominatorTree>();
+    AU.addPreserved<MachineDominatorTree>();
+    AU.addRequired<MachineBlockFrequencyInfo>();
+    AU.addPreserved<MachineBlockFrequencyInfo>();
+  }
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties()
-        .set(MachineFunctionProperties::Property::IsSSA);
-    }
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::IsSSA);
+  }
 
-    void releaseMemory() override {
-      ScopeMap.clear();
-      PREMap.clear();
-      Exps.clear();
-    }
+  void releaseMemory() override {
+    ScopeMap.clear();
+    PREMap.clear();
+    Exps.clear();
+  }
 
-  private:
-    using AllocatorTy = RecyclingAllocator<BumpPtrAllocator,
-                            ScopedHashTableVal<MachineInstr *, unsigned>>;
-    using ScopedHTType =
-        ScopedHashTable<MachineInstr *, unsigned, MachineInstrExpressionTrait,
-                        AllocatorTy>;
-    using ScopeType = ScopedHTType::ScopeTy;
-    using PhysDefVector = SmallVector<std::pair<unsigned, unsigned>, 2>;
-
-    unsigned LookAheadLimit = 0;
-    DenseMap<MachineBasicBlock *, ScopeType *> ScopeMap;
-    DenseMap<MachineInstr *, MachineBasicBlock *, MachineInstrExpressionTrait>
-        PREMap;
-    ScopedHTType VNT;
-    SmallVector<MachineInstr *, 64> Exps;
-    unsigned CurrVN = 0;
-
-    bool PerformTrivialCopyPropagation(MachineInstr *MI,
-                                       MachineBasicBlock *MBB);
-    bool isPhysDefTriviallyDead(MCRegister Reg,
-                                MachineBasicBlock::const_iterator I,
-                                MachineBasicBlock::const_iterator E) const;
-    bool hasLivePhysRegDefUses(const MachineInstr *MI,
-                               const MachineBasicBlock *MBB,
-                               SmallSet<MCRegister, 8> &PhysRefs,
-                               PhysDefVector &PhysDefs, bool &PhysUseDef) const;
-    bool PhysRegDefsReach(MachineInstr *CSMI, MachineInstr *MI,
-                          SmallSet<MCRegister, 8> &PhysRefs,
-                          PhysDefVector &PhysDefs, bool &NonLocal) const;
-    bool isCSECandidate(MachineInstr *MI);
-    bool isProfitableToCSE(Register CSReg, Register Reg,
-                           MachineBasicBlock *CSBB, MachineInstr *MI);
-    void EnterScope(MachineBasicBlock *MBB);
-    void ExitScope(MachineBasicBlock *MBB);
-    bool ProcessBlockCSE(MachineBasicBlock *MBB);
-    void ExitScopeIfDone(MachineDomTreeNode *Node,
-                         DenseMap<MachineDomTreeNode*, unsigned> &OpenChildren);
-    bool PerformCSE(MachineDomTreeNode *Node);
-
-    bool isPRECandidate(MachineInstr *MI, SmallSet<MCRegister, 8> &PhysRefs);
-    bool ProcessBlockPRE(MachineDominatorTree *MDT, MachineBasicBlock *MBB);
-    bool PerformSimplePRE(MachineDominatorTree *DT);
-    /// Heuristics to see if it's profitable to move common computations of MBB
-    /// and MBB1 to CandidateBB.
-    bool isProfitableToHoistInto(MachineBasicBlock *CandidateBB,
-                                 MachineBasicBlock *MBB,
-                                 MachineBasicBlock *MBB1);
-  };
+private:
+  using AllocatorTy =
+      RecyclingAllocator<BumpPtrAllocator,
+                         ScopedHashTableVal<MachineInstr *, unsigned>>;
+  using ScopedHTType =
+      ScopedHashTable<MachineInstr *, unsigned, MachineInstrExpressionTrait,
+                      AllocatorTy>;
+  using ScopeType = ScopedHTType::ScopeTy;
+  using PhysDefVector = SmallVector<std::pair<unsigned, unsigned>, 2>;
+
+  unsigned LookAheadLimit = 0;
+  DenseMap<MachineBasicBlock *, ScopeType *> ScopeMap;
+  DenseMap<MachineInstr *, MachineBasicBlock *, MachineInstrExpressionTrait>
+      PREMap;
+  ScopedHTType VNT;
+  SmallVector<MachineInstr *, 64> Exps;
+  unsigned CurrVN = 0;
+
+  bool PerformTrivialCopyPropagation(MachineInstr *MI, MachineBasicBlock *MBB);
+  bool isPhysDefTriviallyDead(MCRegister Reg,
+                              MachineBasicBlock::const_iterator I,
+                              MachineBasicBlock::const_iterator E) const;
+  bool hasLivePhysRegDefUses(const MachineInstr *MI,
+                             const MachineBasicBlock *MBB,
+                             SmallSet<MCRegister, 8> &PhysRefs,
+                             PhysDefVector &PhysDefs, bool &PhysUseDef) const;
+  bool PhysRegDefsReach(MachineInstr *CSMI, MachineInstr *MI,
+                        SmallSet<MCRegister, 8> &PhysRefs,
+                        PhysDefVector &PhysDefs, bool &NonLocal) const;
+  bool isCSECandidate(MachineInstr *MI);
+  bool isProfitableToCSE(Register CSReg, Register Reg, MachineBasicBlock *CSBB,
+                         MachineInstr *MI);
+  void EnterScope(MachineBasicBlock *MBB);
+  void ExitScope(MachineBasicBlock *MBB);
+  bool ProcessBlockCSE(MachineBasicBlock *MBB);
+  void ExitScopeIfDone(MachineDomTreeNode *Node,
+                       DenseMap<MachineDomTreeNode *, unsigned> &OpenChildren);
+  bool PerformCSE(MachineDomTreeNode *Node);
+
+  bool isPRECandidate(MachineInstr *MI, SmallSet<MCRegister, 8> &PhysRefs);
+  bool ProcessBlockPRE(MachineDominatorTree *MDT, MachineBasicBlock *MBB);
+  bool PerformSimplePRE(MachineDominatorTree *DT);
+  /// Heuristics to see if it's profitable to move common computations of MBB
+  /// and MBB1 to CandidateBB.
+  bool isProfitableToHoistInto(MachineBasicBlock *CandidateBB,
+                               MachineBasicBlock *MBB, MachineBasicBlock *MBB1);
+};
 
 } // end anonymous namespace
 
@@ -309,7 +308,8 @@ bool MachineCSE::hasLivePhysRegDefUses(const MachineInstr *MI,
   // Next, collect all defs into PhysDefs.  If any is already in PhysRefs
   // (which currently contains only uses), set the PhysUseDef flag.
   PhysUseDef = false;
-  MachineBasicBlock::const_iterator I = MI; I = std::next(I);
+  MachineBasicBlock::const_iterator I = MI;
+  I = std::next(I);
   for (const auto &MOP : llvm::enumerate(MI->operands())) {
     const MachineOperand &MO = MOP.value();
     if (!MO.isReg() || !MO.isDef())
@@ -357,12 +357,13 @@ bool MachineCSE::PhysRegDefsReach(MachineInstr *CSMI, MachineInstr *MI,
       if (MRI->isAllocatable(PhysDefs[i].second) ||
           MRI->isReserved(PhysDefs[i].second))
         // Avoid extending live range of physical registers if they are
-        //allocatable or reserved.
+        // allocatable or reserved.
         return false;
     }
     CrossMBB = true;
   }
-  MachineBasicBlock::const_iterator I = CSMI; I = std::next(I);
+  MachineBasicBlock::const_iterator I = CSMI;
+  I = std::next(I);
   MachineBasicBlock::const_iterator E = MI;
   MachineBasicBlock::const_iterator EE = CSMBB->end();
   unsigned LookAheadLeft = LookAheadLimit;
@@ -453,7 +454,7 @@ bool MachineCSE::isProfitableToCSE(Register CSReg, Register Reg,
   bool MayIncreasePressure = true;
   if (CSReg.isVirtual() && Reg.isVirtual()) {
     MayIncreasePressure = false;
-    SmallPtrSet<MachineInstr*, 8> CSUses;
+    SmallPtrSet<MachineInstr *, 8> CSUses;
     int NumOfUses = 0;
     for (MachineInstr &MI : MRI->use_nodbg_instructions(CSReg)) {
       CSUses.insert(&MI);
@@ -472,7 +473,8 @@ bool MachineCSE::isProfitableToCSE(Register CSReg, Register Reg,
         }
       }
   }
-  if (!MayIncreasePressure) return true;
+  if (!MayIncreasePressure)
+    return true;
 
   // Heuristics #1: Don't CSE "cheap" computation if the def is not local or in
   // an immediate predecessor. We don't want to increase register pressure and
@@ -525,7 +527,7 @@ void MachineCSE::EnterScope(MachineBasicBlock *MBB) {
 
 void MachineCSE::ExitScope(MachineBasicBlock *MBB) {
   LLVM_DEBUG(dbgs() << "Exiting: " << MBB->getName() << '\n');
-  DenseMap<MachineBasicBlock*, ScopeType*>::iterator SI = ScopeMap.find(MBB);
+  DenseMap<MachineBasicBlock *, ScopeType *>::iterator SI = ScopeMap.find(MBB);
   assert(SI != ScopeMap.end());
   delete SI->second;
   ScopeMap.erase(SI);
@@ -752,9 +754,9 @@ bool MachineCSE::ProcessBlockCSE(MachineBasicBlock *MBB) {
 /// ExitScopeIfDone - Destroy scope for the MBB that corresponds to the given
 /// dominator tree node if its a leaf or all of its children are done. Walk
 /// up the dominator tree to destroy ancestors which are now done.
-void
-MachineCSE::ExitScopeIfDone(MachineDomTreeNode *Node,
-                        DenseMap<MachineDomTreeNode*, unsigned> &OpenChildren) {
+void MachineCSE::ExitScopeIfDone(
+    MachineDomTreeNode *Node,
+    DenseMap<MachineDomTreeNode *, unsigned> &OpenChildren) {
   if (OpenChildren[Node])
     return;
 
@@ -772,9 +774,9 @@ MachineCSE::ExitScopeIfDone(MachineDomTreeNode *Node,
 }
 
 bool MachineCSE::PerformCSE(MachineDomTreeNode *Node) {
-  SmallVector<MachineDomTreeNode*, 32> Scopes;
-  SmallVector<MachineDomTreeNode*, 8> WorkList;
-  DenseMap<MachineDomTreeNode*, unsigned> OpenChildren;
+  SmallVector<MachineDomTreeNode *, 32> Scopes;
+  SmallVector<MachineDomTreeNode *, 8> WorkList;
+  DenseMap<MachineDomTreeNode *, unsigned> OpenChildren;
 
   CurrVN = 0;
 
@@ -805,11 +807,8 @@ bool MachineCSE::PerformCSE(MachineDomTreeNode *Node) {
 // to exclude instrs created by PRE that won't be CSEed later.
 bool MachineCSE::isPRECandidate(MachineInstr *MI,
                                 SmallSet<MCRegister, 8> &PhysRefs) {
-  if (!isCSECandidate(MI) ||
-      MI->isNotDuplicable() ||
-      MI->mayLoad() ||
-      TII->isAsCheapAsAMove(*MI) ||
-      MI->getNumDefs() != 1 ||
+  if (!isCSECandidate(MI) || MI->isNotDuplicable() || MI->mayLoad() ||
+      TII->isAsCheapAsAMove(*MI) || MI->getNumDefs() != 1 ||
       MI->getNumExplicitDefs() != 1)
     return false;
 
diff --git a/llvm/lib/CodeGen/MachineCombiner.cpp b/llvm/lib/CodeGen/MachineCombiner.cpp
index c65937935ed8200..6bf7adf2a1f82fb 100644
--- a/llvm/lib/CodeGen/MachineCombiner.cpp
+++ b/llvm/lib/CodeGen/MachineCombiner.cpp
@@ -38,10 +38,11 @@ using namespace llvm;
 
 STATISTIC(NumInstCombined, "Number of machineinst combined");
 
-static cl::opt<unsigned>
-inc_threshold("machine-combiner-inc-threshold", cl::Hidden,
-              cl::desc("Incremental depth computation will be used for basic "
-                       "blocks with more instructions."), cl::init(500));
+static cl::opt<unsigned> inc_threshold(
+    "machine-combiner-inc-threshold", cl::Hidden,
+    cl::desc("Incremental depth computation will be used for basic "
+             "blocks with more instructions."),
+    cl::init(500));
 
 static cl::opt<bool> dump_intrs("machine-combiner-dump-subst-intrs", cl::Hidden,
                                 cl::desc("Dump all substituted intrs"),
@@ -99,13 +100,13 @@ class MachineCombiner : public MachineFunctionPass {
                     const MachineBasicBlock &MBB);
   unsigned getLatency(MachineInstr *Root, MachineInstr *NewRoot,
                       MachineTraceMetrics::Trace BlockTrace);
-  bool
-  improvesCriticalPathLen(MachineBasicBlock *MBB, MachineInstr *Root,
-                          MachineTraceMetrics::Trace BlockTrace,
-                          SmallVectorImpl<MachineInstr *> &InsInstrs,
-                          SmallVectorImpl<MachineInstr *> &DelInstrs,
-                          DenseMap<unsigned, unsigned> &InstrIdxForVirtReg,
-                          MachineCombinerPattern Pattern, bool SlackIsAccurate);
+  bool improvesCriticalPathLen(MachineBasicBlock *MBB, MachineInstr *Root,
+                               MachineTraceMetrics::Trace BlockTrace,
+                               SmallVectorImpl<MachineInstr *> &InsInstrs,
+                               SmallVectorImpl<MachineInstr *> &DelInstrs,
+                               DenseMap<unsigned, unsigned> &InstrIdxForVirtReg,
+                               MachineCombinerPattern Pattern,
+                               bool SlackIsAccurate);
   bool reduceRegisterPressure(MachineInstr &Root, MachineBasicBlock *MBB,
                               SmallVectorImpl<MachineInstr *> &InsInstrs,
                               SmallVectorImpl<MachineInstr *> &DelInstrs,
@@ -125,17 +126,17 @@ class MachineCombiner : public MachineFunctionPass {
   void verifyPatternOrder(MachineBasicBlock *MBB, MachineInstr &Root,
                           SmallVector<MachineCombinerPattern, 16> &Patterns);
 };
-}
+} // namespace
 
 char MachineCombiner::ID = 0;
 char &llvm::MachineCombinerID = MachineCombiner::ID;
 
-INITIALIZE_PASS_BEGIN(MachineCombiner, DEBUG_TYPE,
-                      "Machine InstCombiner", false, false)
+INITIALIZE_PASS_BEGIN(MachineCombiner, DEBUG_TYPE, "Machine InstCombiner",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
 INITIALIZE_PASS_DEPENDENCY(MachineTraceMetrics)
-INITIALIZE_PASS_END(MachineCombiner, DEBUG_TYPE, "Machine InstCombiner",
-                    false, false)
+INITIALIZE_PASS_END(MachineCombiner, DEBUG_TYPE, "Machine InstCombiner", false,
+                    false)
 
 void MachineCombiner::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesCFG();
@@ -149,8 +150,7 @@ void MachineCombiner::getAnalysisUsage(AnalysisUsage &AU) const {
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
-MachineInstr *
-MachineCombiner::getOperandDef(const MachineOperand &MO) {
+MachineInstr *MachineCombiner::getOperandDef(const MachineOperand &MO) {
   MachineInstr *DefInstr = nullptr;
   // We need a virtual register definition.
   if (MO.isReg() && MO.getReg().isVirtual())
@@ -372,8 +372,7 @@ bool MachineCombiner::improvesCriticalPathLen(
     SmallVectorImpl<MachineInstr *> &InsInstrs,
     SmallVectorImpl<MachineInstr *> &DelInstrs,
     DenseMap<unsigned, unsigned> &InstrIdxForVirtReg,
-    MachineCombinerPattern Pattern,
-    bool SlackIsAccurate) {
+    MachineCombinerPattern Pattern, bool SlackIsAccurate) {
   // Get depth and latency of NewRoot and Root.
   unsigned NewRootDepth =
       getDepth(InsInstrs, InstrIdxForVirtReg, BlockTrace, *MBB);
@@ -450,8 +449,8 @@ bool MachineCombiner::preservesResourceLen(
 
   // Compute current resource length
 
-  //ArrayRef<const MachineBasicBlock *> MBBarr(MBB);
-  SmallVector <const MachineBasicBlock *, 1> MBBarr;
+  // ArrayRef<const MachineBasicBlock *> MBBarr(MBB);
+  SmallVector<const MachineBasicBlock *, 1> MBBarr;
   MBBarr.push_back(MBB);
   unsigned ResLenBeforeCombine = BlockTrace.getResourceLength(MBBarr);
 
@@ -474,7 +473,7 @@ bool MachineCombiner::preservesResourceLen(
                     << " and after: " << ResLenAfterCombine << "\n";);
   LLVM_DEBUG(
       ResLenAfterCombine <=
-      ResLenBeforeCombine + TII->getExtendResourceLenLimit()
+              ResLenBeforeCombine + TII->getExtendResourceLenLimit()
           ? dbgs() << "\t\t  As result it IMPROVES/PRESERVES Resource Length\n"
           : dbgs() << "\t\t  As result it DOES NOT improve/preserve Resource "
                       "Length\n");
@@ -643,12 +642,12 @@ bool MachineCombiner::combineInstructions(MachineBasicBlock *MBB) {
         dbgs() << "\tFor the Pattern (" << (int)P
                << ") these instructions could be removed\n";
         for (auto const *InstrPtr : DelInstrs)
-          InstrPtr->print(dbgs(), /*IsStandalone*/false, /*SkipOpers*/false,
-                          /*SkipDebugLoc*/false, /*AddNewLine*/true, TII);
+          InstrPtr->print(dbgs(), /*IsStandalone*/ false, /*SkipOpers*/ false,
+                          /*SkipDebugLoc*/ false, /*AddNewLine*/ true, TII);
         dbgs() << "\tThese instructions could replace the removed ones\n";
         for (auto const *InstrPtr : InsInstrs)
-          InstrPtr->print(dbgs(), /*IsStandalone*/false, /*SkipOpers*/false,
-                          /*SkipDebugLoc*/false, /*AddNewLine*/true, TII);
+          InstrPtr->print(dbgs(), /*IsStandalone*/ false, /*SkipOpers*/ false,
+                          /*SkipDebugLoc*/ false, /*AddNewLine*/ true, TII);
       });
 
       if (IncrementalUpdate && LastUpdate != BlockIter) {
@@ -679,7 +678,8 @@ bool MachineCombiner::combineInstructions(MachineBasicBlock *MBB) {
       }
 
       if (ML && TII->isThroughputPattern(P)) {
-        LLVM_DEBUG(dbgs() << "\t Replacing due to throughput pattern in loop\n");
+        LLVM_DEBUG(
+            dbgs() << "\t Replacing due to throughput pattern in loop\n");
         insertDeleteInstructions(MBB, MI, InsInstrs, DelInstrs, TraceEnsemble,
                                  RegUnits, TII, P, IncrementalUpdate);
         // Eagerly stop after the first pattern fires.
@@ -687,8 +687,8 @@ bool MachineCombiner::combineInstructions(MachineBasicBlock *MBB) {
         break;
       } else if (OptForSize && InsInstrs.size() < DelInstrs.size()) {
         LLVM_DEBUG(dbgs() << "\t Replacing due to OptForSize ("
-                          << InsInstrs.size() << " < "
-                          << DelInstrs.size() << ")\n");
+                          << InsInstrs.size() << " < " << DelInstrs.size()
+                          << ")\n");
         insertDeleteInstructions(MBB, MI, InsInstrs, DelInstrs, TraceEnsemble,
                                  RegUnits, TII, P, IncrementalUpdate);
         // Eagerly stop after the first pattern fires.
@@ -744,9 +744,9 @@ bool MachineCombiner::runOnMachineFunction(MachineFunction &MF) {
   MLI = &getAnalysis<MachineLoopInfo>();
   Traces = &getAnalysis<MachineTraceMetrics>();
   PSI = &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI();
-  MBFI = (PSI && PSI->hasProfileSummary()) ?
-         &getAnalysis<LazyMachineBlockFrequencyInfoPass>().getBFI() :
-         nullptr;
+  MBFI = (PSI && PSI->hasProfileSummary())
+             ? &getAnalysis<LazyMachineBlockFrequencyInfoPass>().getBFI()
+             : nullptr;
   TraceEnsemble = nullptr;
   OptSize = MF.getFunction().hasOptSize();
   RegClassInfo.runOnMachineFunction(MF);
diff --git a/llvm/lib/CodeGen/MachineCopyPropagation.cpp b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
index 488ef31e1dd1e35..7a9f24d459586bf 100644
--- a/llvm/lib/CodeGen/MachineCopyPropagation.cpp
+++ b/llvm/lib/CodeGen/MachineCopyPropagation.cpp
@@ -209,9 +209,7 @@ class CopyTracker {
     }
   }
 
-  bool hasAnyCopies() {
-    return !Copies.empty();
-  }
+  bool hasAnyCopies() { return !Copies.empty(); }
 
   MachineInstr *findCopyForUnit(MCRegister RegUnit,
                                 const TargetRegisterInfo &TRI,
@@ -336,9 +334,7 @@ class CopyTracker {
     return CI->second.LastSeenUseInCopy;
   }
 
-  void clear() {
-    Copies.clear();
-  }
+  void clear() { Copies.clear(); }
 };
 
 class MachineCopyPropagation : public MachineFunctionPass {
@@ -582,7 +578,8 @@ bool MachineCopyPropagation::isForwardableRegClassCopy(const MachineInstr &Copy,
 /// operand (the register being replaced), since these can sometimes be
 /// implicitly tied to other operands.  For example, on AMDGPU:
 ///
-/// V_MOVRELS_B32_e32 %VGPR2, %M0<imp-use>, %EXEC<imp-use>, %VGPR2_VGPR3_VGPR4_VGPR5<imp-use>
+/// V_MOVRELS_B32_e32 %VGPR2, %M0<imp-use>, %EXEC<imp-use>,
+/// %VGPR2_VGPR3_VGPR4_VGPR5<imp-use>
 ///
 /// the %VGPR2 is implicitly tied to the larger reg operand, but we have no
 /// way of knowing we need to update the latter when updating the former.
@@ -729,8 +726,9 @@ void MachineCopyPropagation::ForwardCopyPropagateBlock(MachineBasicBlock &MBB) {
       Register RegDef = CopyOperands->Destination->getReg();
 
       if (!TRI->regsOverlap(RegDef, RegSrc)) {
-        assert(RegDef.isPhysical() && RegSrc.isPhysical() &&
-              "MachineCopyPropagation should be run after register allocation!");
+        assert(
+            RegDef.isPhysical() && RegSrc.isPhysical() &&
+            "MachineCopyPropagation should be run after register allocation!");
 
         MCRegister Def = RegDef.asMCReg();
         MCRegister Src = RegSrc.asMCReg();
@@ -777,9 +775,8 @@ void MachineCopyPropagation::ForwardCopyPropagateBlock(MachineBasicBlock &MBB) {
         if (!MRI->isReserved(Def))
           MaybeDeadCopies.insert(&MI);
 
-        // If 'Def' is previously source of another copy, then this earlier copy's
-        // source is no longer available. e.g.
-        // %xmm9 = copy %xmm2
+        // If 'Def' is previously source of another copy, then this earlier
+        // copy's source is no longer available. e.g. %xmm9 = copy %xmm2
         // ...
         // %xmm2 = copy %xmm0
         // ...
@@ -1117,8 +1114,8 @@ static void LLVM_ATTRIBUTE_UNUSED printSpillReloadChain(
 // Reg is defined by a COPY, we untrack this Reg via
 // CopyTracker::clobberRegister(Reg, ...).
 void MachineCopyPropagation::EliminateSpillageCopies(MachineBasicBlock &MBB) {
-  // ChainLeader maps MI inside a spill-reload chain to its innermost reload COPY.
-  // Thus we can track if a MI belongs to an existing spill-reload chain.
+  // ChainLeader maps MI inside a spill-reload chain to its innermost reload
+  // COPY. Thus we can track if a MI belongs to an existing spill-reload chain.
   DenseMap<MachineInstr *, MachineInstr *> ChainLeader;
   // SpillChain maps innermost reload COPY of a spill-reload chain to a sequence
   // of COPYs that forms spills of a spill-reload chain.
@@ -1141,8 +1138,8 @@ void MachineCopyPropagation::EliminateSpillageCopies(MachineBasicBlock &MBB) {
         // pairs, we already have the shortest sequence this code can handle:
         // the outermost pair for the temporary spill slot, and the pair that
         // use that temporary spill slot for the other end of the chain.
-        // TODO: We might be able to simplify to one spill-reload pair if collecting
-        // more infomation about the outermost COPY.
+        // TODO: We might be able to simplify to one spill-reload pair if
+        // collecting more information about the outermost COPY.
         if (SC.size() <= 2)
           return;
 
@@ -1269,7 +1266,7 @@ void MachineCopyPropagation::EliminateSpillageCopies(MachineBasicBlock &MBB) {
         // defined by a previous COPY, since we don't want to make COPYs uses
         // Reg unavailable.
         if (Tracker.findLastSeenDefInCopy(MI, Reg.asMCReg(), *TRI, *TII,
-                                    UseCopyInstr))
+                                          UseCopyInstr))
           // Thus we can keep the property#1.
           RegsToClobber.insert(Reg);
       }
@@ -1286,8 +1283,8 @@ void MachineCopyPropagation::EliminateSpillageCopies(MachineBasicBlock &MBB) {
     // Check if we can find a pair spill-reload copy.
     LLVM_DEBUG(dbgs() << "MCP: Searching paired spill for reload: ");
     LLVM_DEBUG(MI.dump());
-    MachineInstr *MaybeSpill =
-        Tracker.findLastSeenDefInCopy(MI, Src.asMCReg(), *TRI, *TII, UseCopyInstr);
+    MachineInstr *MaybeSpill = Tracker.findLastSeenDefInCopy(
+        MI, Src.asMCReg(), *TRI, *TII, UseCopyInstr);
     bool MaybeSpillIsChained = ChainLeader.count(MaybeSpill);
     if (!MaybeSpillIsChained && MaybeSpill &&
         IsSpillReloadPair(*MaybeSpill, MI)) {
diff --git a/llvm/lib/CodeGen/MachineDominanceFrontier.cpp b/llvm/lib/CodeGen/MachineDominanceFrontier.cpp
index 346cfedde390d33..19fe4dcee29f3af 100644
--- a/llvm/lib/CodeGen/MachineDominanceFrontier.cpp
+++ b/llvm/lib/CodeGen/MachineDominanceFrontier.cpp
@@ -19,16 +19,15 @@ namespace llvm {
 template class DominanceFrontierBase<MachineBasicBlock, false>;
 template class DominanceFrontierBase<MachineBasicBlock, true>;
 template class ForwardDominanceFrontierBase<MachineBasicBlock>;
-}
-
+} // namespace llvm
 
 char MachineDominanceFrontier::ID = 0;
 
 INITIALIZE_PASS_BEGIN(MachineDominanceFrontier, "machine-domfrontier",
-                "Machine Dominance Frontier Construction", true, true)
+                      "Machine Dominance Frontier Construction", true, true)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_END(MachineDominanceFrontier, "machine-domfrontier",
-                "Machine Dominance Frontier Construction", true, true)
+                    "Machine Dominance Frontier Construction", true, true)
 
 MachineDominanceFrontier::MachineDominanceFrontier() : MachineFunctionPass(ID) {
   initializeMachineDominanceFrontierPass(*PassRegistry::getPassRegistry());
@@ -42,9 +41,7 @@ bool MachineDominanceFrontier::runOnMachineFunction(MachineFunction &) {
   return false;
 }
 
-void MachineDominanceFrontier::releaseMemory() {
-  Base.releaseMemory();
-}
+void MachineDominanceFrontier::releaseMemory() { Base.releaseMemory(); }
 
 void MachineDominanceFrontier::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
diff --git a/llvm/lib/CodeGen/MachineDominators.cpp b/llvm/lib/CodeGen/MachineDominators.cpp
index 0632cde9c6f4e4a..934f22db69ef040 100644
--- a/llvm/lib/CodeGen/MachineDominators.cpp
+++ b/llvm/lib/CodeGen/MachineDominators.cpp
@@ -37,7 +37,7 @@ static cl::opt<bool, true> VerifyMachineDomInfoX(
 namespace llvm {
 template class DomTreeNodeBase<MachineBasicBlock>;
 template class DominatorTreeBase<MachineBasicBlock, false>; // DomTreeBase
-}
+} // namespace llvm
 
 char MachineDominatorTree::ID = 0;
 
@@ -63,8 +63,7 @@ void MachineDominatorTree::calculate(MachineFunction &F) {
   DT->recalculate(F);
 }
 
-MachineDominatorTree::MachineDominatorTree()
-    : MachineFunctionPass(ID) {
+MachineDominatorTree::MachineDominatorTree() : MachineFunctionPass(ID) {
   initializeMachineDominatorTreePass(*PassRegistry::getPassRegistry());
 }
 
@@ -81,7 +80,7 @@ void MachineDominatorTree::verifyAnalysis() const {
     }
 }
 
-void MachineDominatorTree::print(raw_ostream &OS, const Module*) const {
+void MachineDominatorTree::print(raw_ostream &OS, const Module *) const {
   if (DT)
     DT->print(OS);
 }
diff --git a/llvm/lib/CodeGen/MachineFrameInfo.cpp b/llvm/lib/CodeGen/MachineFrameInfo.cpp
index 280d3a6a41edc95..d0a9ca2d04177ac 100644
--- a/llvm/lib/CodeGen/MachineFrameInfo.cpp
+++ b/llvm/lib/CodeGen/MachineFrameInfo.cpp
@@ -77,7 +77,7 @@ int MachineFrameInfo::CreateVariableSizedObject(Align Alignment,
   Alignment = clampStackAlignment(!StackRealignable, Alignment, StackAlignment);
   Objects.push_back(StackObject(0, Alignment, 0, false, false, Alloca, true));
   ensureMaxAlignment(Alignment);
-  return (int)Objects.size()-NumFixedObjects-1;
+  return (int)Objects.size() - NumFixedObjects - 1;
 }
 
 int MachineFrameInfo::CreateFixedObject(uint64_t Size, int64_t SPOffset,
@@ -122,8 +122,7 @@ BitVector MachineFrameInfo::getPristineRegs(const MachineFunction &MF) const {
     return BV;
 
   const MachineRegisterInfo &MRI = MF.getRegInfo();
-  for (const MCPhysReg *CSR = MRI.getCalleeSavedRegs(); CSR && *CSR;
-       ++CSR)
+  for (const MCPhysReg *CSR = MRI.getCalleeSavedRegs(); CSR && *CSR; ++CSR)
     BV.set(*CSR);
 
   // Saved CSRs are not pristine.
@@ -149,7 +148,8 @@ uint64_t MachineFrameInfo::estimateStackSize(const MachineFunction &MF) const {
     if (getStackID(i) != TargetStackID::Default)
       continue;
     int64_t FixedOff = -getObjectOffset(i);
-    if (FixedOff > Offset) Offset = FixedOff;
+    if (FixedOff > Offset)
+      Offset = FixedOff;
   }
   for (unsigned i = 0, e = getObjectIndexEnd(); i != e; ++i) {
     // Only estimate stack size of live objects on default stack.
@@ -209,8 +209,9 @@ void MachineFrameInfo::computeMaxCallFrameSize(const MachineFunction &MF) {
   }
 }
 
-void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const{
-  if (Objects.empty()) return;
+void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const {
+  if (Objects.empty())
+    return;
 
   const TargetFrameLowering *FI = MF.getSubtarget().getFrameLowering();
   int ValOffset = (FI ? FI->getOffsetOfLocalArea() : 0);
@@ -219,7 +220,7 @@ void MachineFrameInfo::print(const MachineFunction &MF, raw_ostream &OS) const{
 
   for (unsigned i = 0, e = Objects.size(); i != e; ++i) {
     const StackObject &SO = Objects[i];
-    OS << "  fi#" << (int)(i-NumFixedObjects) << ": ";
+    OS << "  fi#" << (int)(i - NumFixedObjects) << ": ";
 
     if (SO.StackID != 0)
       OS << "id=" << static_cast<unsigned>(SO.StackID) << ' ';
diff --git a/llvm/lib/CodeGen/MachineFunction.cpp b/llvm/lib/CodeGen/MachineFunction.cpp
index e1f9488a1c88f34..c714f586e243e8a 100644
--- a/llvm/lib/CodeGen/MachineFunction.cpp
+++ b/llvm/lib/CodeGen/MachineFunction.cpp
@@ -154,7 +154,7 @@ void ilist_alloc_traits<MachineBasicBlock>::deleteNode(MachineBasicBlock *MBB) {
 }
 
 static inline Align getFnStackAlignment(const TargetSubtargetInfo *STI,
-                                           const Function &F) {
+                                        const Function &F) {
   if (auto MA = F.getFnStackAlign())
     return *MA;
   return STI->getFrameLowering()->getStackAlign();
@@ -248,9 +248,7 @@ void MachineFunction::initTargetMachineFunctionInfo(
   MFInfo = Target.createMachineFunctionInfo(Allocator, F, &STI);
 }
 
-MachineFunction::~MachineFunction() {
-  clear();
-}
+MachineFunction::~MachineFunction() { clear(); }
 
 void MachineFunction::clear() {
   Properties.reset();
@@ -304,16 +302,18 @@ const DataLayout &MachineFunction::getDataLayout() const {
 
 /// Get the JumpTableInfo for this function.
 /// If it does not already exist, allocate one.
-MachineJumpTableInfo *MachineFunction::
-getOrCreateJumpTableInfo(unsigned EntryKind) {
-  if (JumpTableInfo) return JumpTableInfo;
+MachineJumpTableInfo *
+MachineFunction::getOrCreateJumpTableInfo(unsigned EntryKind) {
+  if (JumpTableInfo)
+    return JumpTableInfo;
 
   JumpTableInfo = new (Allocator)
-    MachineJumpTableInfo((MachineJumpTableInfo::JTEntryKind)EntryKind);
+      MachineJumpTableInfo((MachineJumpTableInfo::JTEntryKind)EntryKind);
   return JumpTableInfo;
 }
 
-DenormalMode MachineFunction::getDenormalMode(const fltSemantics &FPType) const {
+DenormalMode
+MachineFunction::getDenormalMode(const fltSemantics &FPType) const {
   return F.getDenormalMode(FPType);
 }
 
@@ -333,7 +333,10 @@ MachineFunction::addFrameInst(const MCCFIInstruction &Inst) {
 /// ordering of the blocks within the function.  If a specific MachineBasicBlock
 /// is specified, only that block and those after it are renumbered.
 void MachineFunction::RenumberBlocks(MachineBasicBlock *MBB) {
-  if (empty()) { MBBNumbering.clear(); return; }
+  if (empty()) {
+    MBBNumbering.clear();
+    return;
+  }
   MachineFunction::iterator MBBI, E = end();
   if (MBB == nullptr)
     MBBI = begin();
@@ -395,10 +398,9 @@ MachineInstr *MachineFunction::CreateMachineInstr(const MCInstrDesc &MCID,
 
 /// Create a new MachineInstr which is a copy of the 'Orig' instruction,
 /// identical in all ways except the instruction has no parent, prev, or next.
-MachineInstr *
-MachineFunction::CloneMachineInstr(const MachineInstr *Orig) {
+MachineInstr *MachineFunction::CloneMachineInstr(const MachineInstr *Orig) {
   return new (InstructionRecycler.Allocate<MachineInstr>(Allocator))
-             MachineInstr(*this, *Orig);
+      MachineInstr(*this, *Orig);
 }
 
 MachineInstr &MachineFunction::cloneMachineInstrBundle(
@@ -480,8 +482,8 @@ MachineMemOperand *MachineFunction::getMachineMemOperand(
     SyncScope::ID SSID, AtomicOrdering Ordering,
     AtomicOrdering FailureOrdering) {
   return new (Allocator)
-      MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
-                        SSID, Ordering, FailureOrdering);
+      MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges, SSID,
+                        Ordering, FailureOrdering);
 }
 
 MachineMemOperand *MachineFunction::getMachineMemOperand(
@@ -494,8 +496,10 @@ MachineMemOperand *MachineFunction::getMachineMemOperand(
                         Ordering, FailureOrdering);
 }
 
-MachineMemOperand *MachineFunction::getMachineMemOperand(
-    const MachineMemOperand *MMO, const MachinePointerInfo &PtrInfo, uint64_t Size) {
+MachineMemOperand *
+MachineFunction::getMachineMemOperand(const MachineMemOperand *MMO,
+                                      const MachinePointerInfo &PtrInfo,
+                                      uint64_t Size) {
   return new (Allocator)
       MachineMemOperand(PtrInfo, MMO->getFlags(), Size, MMO->getBaseAlign(),
                         AAMDNodes(), nullptr, MMO->getSyncScopeID(),
@@ -532,9 +536,10 @@ MachineFunction::getMachineMemOperand(const MachineMemOperand *MMO,
 MachineMemOperand *
 MachineFunction::getMachineMemOperand(const MachineMemOperand *MMO,
                                       const AAMDNodes &AAInfo) {
-  MachinePointerInfo MPI = MMO->getValue() ?
-             MachinePointerInfo(MMO->getValue(), MMO->getOffset()) :
-             MachinePointerInfo(MMO->getPseudoValue(), MMO->getOffset());
+  MachinePointerInfo MPI =
+      MMO->getValue()
+          ? MachinePointerInfo(MMO->getValue(), MMO->getOffset())
+          : MachinePointerInfo(MMO->getPseudoValue(), MMO->getOffset());
 
   return new (Allocator) MachineMemOperand(
       MPI, MMO->getFlags(), MMO->getSize(), MMO->getBaseAlign(), AAInfo,
@@ -576,20 +581,16 @@ uint32_t *MachineFunction::allocateRegMask() {
 }
 
 ArrayRef<int> MachineFunction::allocateShuffleMask(ArrayRef<int> Mask) {
-  int* AllocMask = Allocator.Allocate<int>(Mask.size());
+  int *AllocMask = Allocator.Allocate<int>(Mask.size());
   copy(Mask, AllocMask);
   return {AllocMask, Mask.size()};
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void MachineFunction::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void MachineFunction::dump() const { print(dbgs()); }
 #endif
 
-StringRef MachineFunction::getName() const {
-  return getFunction().getName();
-}
+StringRef MachineFunction::getName() const { return getFunction().getName(); }
 
 void MachineFunction::print(raw_ostream &OS, const SlotIndexes *Indexes) const {
   OS << "# Machine code for function " << getName() << ": ";
@@ -610,8 +611,9 @@ void MachineFunction::print(raw_ostream &OS, const SlotIndexes *Indexes) const {
 
   if (RegInfo && !RegInfo->livein_empty()) {
     OS << "Function Live Ins: ";
-    for (MachineRegisterInfo::livein_iterator
-         I = RegInfo->livein_begin(), E = RegInfo->livein_end(); I != E; ++I) {
+    for (MachineRegisterInfo::livein_iterator I = RegInfo->livein_begin(),
+                                              E = RegInfo->livein_end();
+         I != E; ++I) {
       OS << printReg(I->first, TRI);
       if (I->second)
         OS << " in " << printReg(I->second, TRI);
@@ -641,44 +643,44 @@ bool MachineFunction::needsFrameMoves() const {
 
 namespace llvm {
 
-  template<>
-  struct DOTGraphTraits<const MachineFunction*> : public DefaultDOTGraphTraits {
-    DOTGraphTraits(bool isSimple = false) : DefaultDOTGraphTraits(isSimple) {}
+template <>
+struct DOTGraphTraits<const MachineFunction *> : public DefaultDOTGraphTraits {
+  DOTGraphTraits(bool isSimple = false) : DefaultDOTGraphTraits(isSimple) {}
 
-    static std::string getGraphName(const MachineFunction *F) {
-      return ("CFG for '" + F->getName() + "' function").str();
-    }
+  static std::string getGraphName(const MachineFunction *F) {
+    return ("CFG for '" + F->getName() + "' function").str();
+  }
 
-    std::string getNodeLabel(const MachineBasicBlock *Node,
-                             const MachineFunction *Graph) {
-      std::string OutStr;
-      {
-        raw_string_ostream OSS(OutStr);
-
-        if (isSimple()) {
-          OSS << printMBBReference(*Node);
-          if (const BasicBlock *BB = Node->getBasicBlock())
-            OSS << ": " << BB->getName();
-        } else
-          Node->print(OSS);
-      }
+  std::string getNodeLabel(const MachineBasicBlock *Node,
+                           const MachineFunction *Graph) {
+    std::string OutStr;
+    {
+      raw_string_ostream OSS(OutStr);
+
+      if (isSimple()) {
+        OSS << printMBBReference(*Node);
+        if (const BasicBlock *BB = Node->getBasicBlock())
+          OSS << ": " << BB->getName();
+      } else
+        Node->print(OSS);
+    }
 
-      if (OutStr[0] == '\n') OutStr.erase(OutStr.begin());
+    if (OutStr[0] == '\n')
+      OutStr.erase(OutStr.begin());
 
-      // Process string output to make it nicer...
-      for (unsigned i = 0; i != OutStr.length(); ++i)
-        if (OutStr[i] == '\n') {                            // Left justify
-          OutStr[i] = '\\';
-          OutStr.insert(OutStr.begin()+i+1, 'l');
-        }
-      return OutStr;
-    }
-  };
+    // Process string output to make it nicer...
+    for (unsigned i = 0; i != OutStr.length(); ++i)
+      if (OutStr[i] == '\n') { // Left justify
+        OutStr[i] = '\\';
+        OutStr.insert(OutStr.begin() + i + 1, 'l');
+      }
+    return OutStr;
+  }
+};
 
 } // end namespace llvm
 
-void MachineFunction::viewCFG() const
-{
+void MachineFunction::viewCFG() const {
 #ifndef NDEBUG
   ViewGraph(this, "mf" + getName());
 #else
@@ -687,8 +689,7 @@ void MachineFunction::viewCFG() const
 #endif // NDEBUG
 }
 
-void MachineFunction::viewCFGOnly() const
-{
+void MachineFunction::viewCFGOnly() const {
 #ifndef NDEBUG
   ViewGraph(this, "mf" + getName(), true);
 #else
@@ -711,9 +712,9 @@ Register MachineFunction::addLiveIn(MCRegister PReg,
     // may have been constrained to match some operation constraints.
     // In that case, check that the current register class includes the
     // physical register and is a sub class of the specified RC.
-    assert((VRegRC == RC || (VRegRC->contains(PReg) &&
-                             RC->hasSubClassEq(VRegRC))) &&
-            "Register class mismatch!");
+    assert((VRegRC == RC ||
+            (VRegRC->contains(PReg) && RC->hasSubClassEq(VRegRC))) &&
+           "Register class mismatch!");
     return VReg;
   }
   VReg = MRI.createVirtualRegister(RC);
@@ -734,7 +735,7 @@ MCSymbol *MachineFunction::getJTISymbol(unsigned JTI, MCContext &Ctx,
                                      : DL.getPrivateGlobalPrefix();
   SmallString<60> Name;
   raw_svector_ostream(Name)
-    << Prefix << "JTI" << getFunctionNumber() << '_' << JTI;
+      << Prefix << "JTI" << getFunctionNumber() << '_' << JTI;
   return Ctx.getOrCreateSymbol(Name);
 }
 
@@ -821,7 +822,8 @@ void MachineFunction::setCallSiteLandingPad(MCSymbol *Sym,
 
 unsigned MachineFunction::getTypeIDFor(const GlobalValue *TI) {
   for (unsigned i = 0, N = TypeInfos.size(); i != N; ++i)
-    if (TypeInfos[i] == TI) return i + 1;
+    if (TypeInfos[i] == TI)
+      return i + 1;
 
   TypeInfos.push_back(TI);
   return TypeInfos.size();
@@ -842,7 +844,7 @@ int MachineFunction::getFilterIDFor(ArrayRef<unsigned> TyIds) {
       // The new filter coincides with range [i, end) of the existing filter.
       return -(1 + i);
 
-try_next:;
+  try_next:;
   }
 
   // Add the new filter.
@@ -1219,13 +1221,9 @@ bool MachineFunction::shouldUseDebugInstrRef() const {
   return false;
 }
 
-bool MachineFunction::useDebugInstrRef() const {
-  return UseDebugInstrRef;
-}
+bool MachineFunction::useDebugInstrRef() const { return UseDebugInstrRef; }
 
-void MachineFunction::setUseDebugInstrRef(bool Use) {
-  UseDebugInstrRef = Use;
-}
+void MachineFunction::setUseDebugInstrRef(bool Use) { UseDebugInstrRef = Use; }
 
 // Use one million as a high / reserved number.
 const unsigned MachineFunction::DebugOperandMemNumber = 1000000;
@@ -1279,10 +1277,10 @@ unsigned MachineJumpTableInfo::getEntryAlignment(const DataLayout &TD) const {
 
 /// Create a new jump table entry in the jump table info.
 unsigned MachineJumpTableInfo::createJumpTableIndex(
-                               const std::vector<MachineBasicBlock*> &DestBBs) {
+    const std::vector<MachineBasicBlock *> &DestBBs) {
   assert(!DestBBs.empty() && "Cannot create an empty jump table!");
   JumpTables.push_back(MachineJumpTableEntry(DestBBs));
-  return JumpTables.size()-1;
+  return JumpTables.size() - 1;
 }
 
 /// If Old is the target of any jump tables, update the jump tables to branch
@@ -1324,7 +1322,8 @@ bool MachineJumpTableInfo::ReplaceMBBInJumpTable(unsigned Idx,
 }
 
 void MachineJumpTableInfo::print(raw_ostream &OS) const {
-  if (JumpTables.empty()) return;
+  if (JumpTables.empty())
+    return;
 
   OS << "Jump Tables:\n";
 
@@ -1390,7 +1389,7 @@ MachineConstantPoolEntry::getSectionKind(const DataLayout *DL) const {
 MachineConstantPool::~MachineConstantPool() {
   // A constant may be a member of both Constants and MachineCPVsSharingEntries,
   // so keep track of which we've deleted to avoid double deletions.
-  DenseSet<MachineConstantPoolValue*> Deleted;
+  DenseSet<MachineConstantPoolValue *> Deleted;
   for (const MachineConstantPoolEntry &C : Constants)
     if (C.isMachineConstantPoolEntry()) {
       Deleted.insert(C.Val.MachineCPVal);
@@ -1407,11 +1406,13 @@ MachineConstantPool::~MachineConstantPool() {
 static bool CanShareConstantPoolEntry(const Constant *A, const Constant *B,
                                       const DataLayout &DL) {
   // Handle the trivial case quickly.
-  if (A == B) return true;
+  if (A == B)
+    return true;
 
   // If they have the same type but weren't the same constant, quickly
   // reject them.
-  if (A->getType() == B->getType()) return false;
+  if (A->getType() == B->getType())
+    return false;
 
   // We can't handle structs or arrays.
   if (isa<StructType>(A->getType()) || isa<ArrayType>(A->getType()) ||
@@ -1425,7 +1426,7 @@ static bool CanShareConstantPoolEntry(const Constant *A, const Constant *B,
 
   bool ContainsUndefOrPoisonA = A->containsUndefOrPoisonElement();
 
-  Type *IntTy = IntegerType::get(A->getContext(), StoreSize*8);
+  Type *IntTy = IntegerType::get(A->getContext(), StoreSize * 8);
 
   // Try constant folding a bitcast of both instructions to an integer.  If we
   // get two identical ConstantInt's, then we are good to share them.  We use
@@ -1458,7 +1459,8 @@ static bool CanShareConstantPoolEntry(const Constant *A, const Constant *B,
 /// User must specify the log2 of the minimum required alignment for the object.
 unsigned MachineConstantPool::getConstantPoolIndex(const Constant *C,
                                                    Align Alignment) {
-  if (Alignment > PoolAlignment) PoolAlignment = Alignment;
+  if (Alignment > PoolAlignment)
+    PoolAlignment = Alignment;
 
   // Check to see if we already have this constant.
   //
@@ -1472,12 +1474,13 @@ unsigned MachineConstantPool::getConstantPoolIndex(const Constant *C,
     }
 
   Constants.push_back(MachineConstantPoolEntry(C, Alignment));
-  return Constants.size()-1;
+  return Constants.size() - 1;
 }
 
 unsigned MachineConstantPool::getConstantPoolIndex(MachineConstantPoolValue *V,
                                                    Align Alignment) {
-  if (Alignment > PoolAlignment) PoolAlignment = Alignment;
+  if (Alignment > PoolAlignment)
+    PoolAlignment = Alignment;
 
   // Check to see if we already have this constant.
   //
@@ -1489,11 +1492,12 @@ unsigned MachineConstantPool::getConstantPoolIndex(MachineConstantPoolValue *V,
   }
 
   Constants.push_back(MachineConstantPoolEntry(V, Alignment));
-  return Constants.size()-1;
+  return Constants.size() - 1;
 }
 
 void MachineConstantPool::print(raw_ostream &OS) const {
-  if (Constants.empty()) return;
+  if (Constants.empty())
+    return;
 
   OS << "Constant Pool:\n";
   for (unsigned i = 0, e = Constants.size(); i != e; ++i) {
diff --git a/llvm/lib/CodeGen/MachineFunctionPrinterPass.cpp b/llvm/lib/CodeGen/MachineFunctionPrinterPass.cpp
index c31c065b1976701..90273625c470076 100644
--- a/llvm/lib/CodeGen/MachineFunctionPrinterPass.cpp
+++ b/llvm/lib/CodeGen/MachineFunctionPrinterPass.cpp
@@ -31,7 +31,7 @@ struct MachineFunctionPrinterPass : public MachineFunctionPass {
   raw_ostream &OS;
   const std::string Banner;
 
-  MachineFunctionPrinterPass() : MachineFunctionPass(ID), OS(dbgs()) { }
+  MachineFunctionPrinterPass() : MachineFunctionPass(ID), OS(dbgs()) {}
   MachineFunctionPrinterPass(raw_ostream &os, const std::string &banner)
       : MachineFunctionPass(ID), OS(os), Banner(banner) {}
 
@@ -53,7 +53,7 @@ struct MachineFunctionPrinterPass : public MachineFunctionPass {
 };
 
 char MachineFunctionPrinterPass::ID = 0;
-}
+} // namespace
 
 char &llvm::MachineFunctionPrinterPassID = MachineFunctionPrinterPass::ID;
 INITIALIZE_PASS(MachineFunctionPrinterPass, "machineinstr-printer",
@@ -63,9 +63,9 @@ namespace llvm {
 /// Returns a newly-created MachineFunction Printer pass. The
 /// default banner is empty.
 ///
-MachineFunctionPass *createMachineFunctionPrinterPass(raw_ostream &OS,
-                                                      const std::string &Banner){
+MachineFunctionPass *
+createMachineFunctionPrinterPass(raw_ostream &OS, const std::string &Banner) {
   return new MachineFunctionPrinterPass(OS, Banner);
 }
 
-}
+} // namespace llvm
diff --git a/llvm/lib/CodeGen/MachineInstr.cpp b/llvm/lib/CodeGen/MachineInstr.cpp
index fefae6582991944..84d2ffec4097e34 100644
--- a/llvm/lib/CodeGen/MachineInstr.cpp
+++ b/llvm/lib/CodeGen/MachineInstr.cpp
@@ -216,7 +216,8 @@ void MachineInstr::addOperand(MachineFunction &MF, const MachineOperand &Op) {
   unsigned OpNo = getNumOperands();
   bool isImpReg = Op.isReg() && Op.isImplicit();
   if (!isImpReg && !isInlineAsm()) {
-    while (OpNo && Operands[OpNo-1].isReg() && Operands[OpNo-1].isImplicit()) {
+    while (OpNo && Operands[OpNo - 1].isReg() &&
+           Operands[OpNo - 1].isImplicit()) {
       --OpNo;
       assert(!Operands[OpNo].isTied() && "Cannot move tied operands");
     }
@@ -370,8 +371,7 @@ void MachineInstr::setMemRefs(MachineFunction &MF,
                getHeapAllocMarker(), getPCSections(), getCFIType());
 }
 
-void MachineInstr::addMemOperand(MachineFunction &MF,
-                                 MachineMemOperand *MO) {
+void MachineInstr::addMemOperand(MachineFunction &MF, MachineMemOperand *MO) {
   SmallVector<MachineMemOperand *, 2> MMOs;
   MMOs.append(memoperands_begin(), memoperands_end());
   MMOs.push_back(MO);
@@ -897,9 +897,8 @@ bool MachineInstr::isDebugEntryValue() const {
   return isDebugValue() && getDebugExpression()->isEntryValue();
 }
 
-const TargetRegisterClass*
-MachineInstr::getRegClassConstraint(unsigned OpIdx,
-                                    const TargetInstrInfo *TII,
+const TargetRegisterClass *
+MachineInstr::getRegClassConstraint(unsigned OpIdx, const TargetInstrInfo *TII,
                                     const TargetRegisterInfo *TRI) const {
   assert(getParent() && "Can't have an MBB reference here!");
   assert(getMF() && "Can't have an MF reference here!");
@@ -1029,7 +1028,7 @@ int MachineInstr::findRegisterUseOperandIdx(
 /// readsWritesVirtualRegister - Return a pair of bools (reads, writes)
 /// indicating if this instruction reads or writes Reg. This also considers
 /// partial defines.
-std::pair<bool,bool>
+std::pair<bool, bool>
 MachineInstr::readsWritesVirtualRegister(Register Reg,
                                          SmallVectorImpl<unsigned> *Ops) const {
   bool PartDef = false; // Partial redefine.
@@ -1058,9 +1057,9 @@ MachineInstr::readsWritesVirtualRegister(Register Reg,
 /// the specified register or -1 if it is not found. If isDead is true, defs
 /// that are not dead are skipped. If TargetRegisterInfo is non-null, then it
 /// also checks if there is a def of a super-register.
-int
-MachineInstr::findRegisterDefOperandIdx(Register Reg, bool isDead, bool Overlap,
-                                        const TargetRegisterInfo *TRI) const {
+int MachineInstr::findRegisterDefOperandIdx(
+    Register Reg, bool isDead, bool Overlap,
+    const TargetRegisterInfo *TRI) const {
   bool isPhys = Reg.isPhysical();
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = getOperand(i);
@@ -1171,7 +1170,8 @@ unsigned MachineInstr::findTiedOperandIdx(unsigned OpIdx) const {
     // on registers.
     StatepointOpers SO(this);
     unsigned CurUseIdx = SO.getFirstGCPtrIdx();
-    assert(CurUseIdx != -1U && "only gc pointer statepoint operands can be tied");
+    assert(CurUseIdx != -1U &&
+           "only gc pointer statepoint operands can be tied");
     unsigned NumDefs = getNumDefs();
     for (unsigned CurDefIdx = 0; CurDefIdx < NumDefs; ++CurDefIdx) {
       while (!getOperand(CurUseIdx).isReg())
@@ -1397,10 +1397,7 @@ bool MachineInstr::mayAlias(AAResults *AA, const MachineInstr &Other,
 /// memory references.
 bool MachineInstr::hasOrderedMemoryRef() const {
   // An instruction known never to access memory won't have a volatile access.
-  if (!mayStore() &&
-      !mayLoad() &&
-      !isCall() &&
-      !hasUnmodeledSideEffects())
+  if (!mayStore() && !mayLoad() && !isCall() && !hasUnmodeledSideEffects())
     return false;
 
   // Otherwise, if the instruction has no memory reference information,
@@ -1435,7 +1432,8 @@ bool MachineInstr::isDereferenceableInvariantLoad() const {
       // instruction.  Such an instruction is technically an invariant load,
       // but the caller code would need updated to expect that.
       return false;
-    if (MMO->isStore()) return false;
+    if (MMO->isStore())
+      return false;
     if (MMO->isInvariant() && MMO->isDereferenceable())
       continue;
 
@@ -1708,9 +1706,9 @@ void MachineInstr::print(raw_ostream &OS, ModuleSlotTracker &MST,
     const unsigned OpIdx = InlineAsm::MIOp_AsmString;
     LLT TypeToPrint = MRI ? getTypeToPrint(OpIdx, PrintedTypes, *MRI) : LLT{};
     unsigned TiedOperandIdx = getTiedOperandIdx(OpIdx);
-    getOperand(OpIdx).print(OS, MST, TypeToPrint, OpIdx, /*PrintDef=*/true, IsStandalone,
-                            ShouldPrintRegisterTies, TiedOperandIdx, TRI,
-                            IntrinsicInfo);
+    getOperand(OpIdx).print(OS, MST, TypeToPrint, OpIdx, /*PrintDef=*/true,
+                            IsStandalone, ShouldPrintRegisterTies,
+                            TiedOperandIdx, TRI, IntrinsicInfo);
 
     // Print HasSideEffects, MayLoad, MayStore, IsAlignStack
     unsigned ExtraInfo = getOperand(InlineAsm::MIOp_ExtraInfo).getImm();
@@ -1736,7 +1734,10 @@ void MachineInstr::print(raw_ostream &OS, ModuleSlotTracker &MST,
   for (unsigned i = StartOp, e = getNumOperands(); i != e; ++i) {
     const MachineOperand &MO = getOperand(i);
 
-    if (FirstOp) FirstOp = false; else OS << ",";
+    if (FirstOp)
+      FirstOp = false;
+    else
+      OS << ",";
     OS << " ";
 
     if (isDebugValueLike() && MO.isMetadata()) {
@@ -1919,10 +1920,10 @@ bool MachineInstr::addRegisterKilled(Register IncomingReg,
                                      const TargetRegisterInfo *RegInfo,
                                      bool AddIfNotFound) {
   bool isPhysReg = IncomingReg.isPhysical();
-  bool hasAliases = isPhysReg &&
-    MCRegAliasIterator(IncomingReg, RegInfo, false).isValid();
+  bool hasAliases =
+      isPhysReg && MCRegAliasIterator(IncomingReg, RegInfo, false).isValid();
   bool Found = false;
-  SmallVector<unsigned,4> DeadOps;
+  SmallVector<unsigned, 4> DeadOps;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
     MachineOperand &MO = getOperand(i);
     if (!MO.isReg() || !MO.isUse() || MO.isUndef())
@@ -1972,10 +1973,8 @@ bool MachineInstr::addRegisterKilled(Register IncomingReg,
   // If not found, this means an alias of one of the operands is killed. Add a
   // new implicit operand if required.
   if (!Found && AddIfNotFound) {
-    addOperand(MachineOperand::CreateReg(IncomingReg,
-                                         false /*IsDef*/,
-                                         true  /*IsImp*/,
-                                         true  /*IsKill*/));
+    addOperand(MachineOperand::CreateReg(IncomingReg, false /*IsDef*/,
+                                         true /*IsImp*/, true /*IsKill*/));
     return true;
   }
   return Found;
@@ -1998,10 +1997,10 @@ bool MachineInstr::addRegisterDead(Register Reg,
                                    const TargetRegisterInfo *RegInfo,
                                    bool AddIfNotFound) {
   bool isPhysReg = Reg.isPhysical();
-  bool hasAliases = isPhysReg &&
-    MCRegAliasIterator(Reg, RegInfo, false).isValid();
+  bool hasAliases =
+      isPhysReg && MCRegAliasIterator(Reg, RegInfo, false).isValid();
   bool Found = false;
-  SmallVector<unsigned,4> DeadOps;
+  SmallVector<unsigned, 4> DeadOps;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
     MachineOperand &MO = getOperand(i);
     if (!MO.isReg() || !MO.isDef())
@@ -2038,11 +2037,8 @@ bool MachineInstr::addRegisterDead(Register Reg,
   if (Found || !AddIfNotFound)
     return Found;
 
-  addOperand(MachineOperand::CreateReg(Reg,
-                                       true  /*IsDef*/,
-                                       true  /*IsImp*/,
-                                       false /*IsKill*/,
-                                       true  /*IsDead*/));
+  addOperand(MachineOperand::CreateReg(Reg, true /*IsDef*/, true /*IsImp*/,
+                                       false /*IsKill*/, true /*IsDead*/));
   return true;
 }
 
@@ -2070,14 +2066,11 @@ void MachineInstr::addRegisterDefined(Register Reg,
       return;
   } else {
     for (const MachineOperand &MO : operands()) {
-      if (MO.isReg() && MO.getReg() == Reg && MO.isDef() &&
-          MO.getSubReg() == 0)
+      if (MO.isReg() && MO.getReg() == Reg && MO.isDef() && MO.getSubReg() == 0)
         return;
     }
   }
-  addOperand(MachineOperand::CreateReg(Reg,
-                                       true  /*IsDef*/,
-                                       true  /*IsImp*/));
+  addOperand(MachineOperand::CreateReg(Reg, true /*IsDef*/, true /*IsImp*/));
 }
 
 void MachineInstr::setPhysRegsDeadExcept(ArrayRef<Register> UsedRegs,
@@ -2088,13 +2081,15 @@ void MachineInstr::setPhysRegsDeadExcept(ArrayRef<Register> UsedRegs,
       HasRegMask = true;
       continue;
     }
-    if (!MO.isReg() || !MO.isDef()) continue;
+    if (!MO.isReg() || !MO.isDef())
+      continue;
     Register Reg = MO.getReg();
     if (!Reg.isPhysical())
       continue;
     // If there are no uses, including partial uses, the def is dead.
-    if (llvm::none_of(UsedRegs,
-                      [&](MCRegister Use) { return TRI.regsOverlap(Use, Reg); }))
+    if (llvm::none_of(UsedRegs, [&](MCRegister Use) {
+          return TRI.regsOverlap(Use, Reg);
+        }))
       MO.setIsDead();
   }
 
@@ -2106,14 +2101,14 @@ void MachineInstr::setPhysRegsDeadExcept(ArrayRef<Register> UsedRegs,
 }
 
 unsigned
-MachineInstrExpressionTrait::getHashValue(const MachineInstr* const &MI) {
+MachineInstrExpressionTrait::getHashValue(const MachineInstr *const &MI) {
   // Build up a buffer of hash code components.
   SmallVector<size_t, 16> HashComponents;
   HashComponents.reserve(MI->getNumOperands() + 1);
   HashComponents.push_back(MI->getOpcode());
   for (const MachineOperand &MO : MI->operands()) {
     if (MO.isReg() && MO.isDef() && MO.getReg().isVirtual())
-      continue;  // Skip virtual register defs.
+      continue; // Skip virtual register defs.
 
     HashComponents.push_back(hash_value(MO));
   }
@@ -2125,8 +2120,8 @@ void MachineInstr::emitError(StringRef Msg) const {
   uint64_t LocCookie = 0;
   const MDNode *LocMD = nullptr;
   for (unsigned i = getNumOperands(); i != 0; --i) {
-    if (getOperand(i-1).isMetadata() &&
-        (LocMD = getOperand(i-1).getMetadata()) &&
+    if (getOperand(i - 1).isMetadata() &&
+        (LocMD = getOperand(i - 1).getMetadata()) &&
         LocMD->getNumOperands() != 0) {
       if (const ConstantInt *CI =
               mdconst::dyn_extract<ConstantInt>(LocMD->getOperand(0))) {
@@ -2305,14 +2300,14 @@ void llvm::updateDbgValueForSpill(MachineInstr &Orig, int FrameIndex,
 }
 
 void MachineInstr::collectDebugValues(
-                                SmallVectorImpl<MachineInstr *> &DbgValues) {
+    SmallVectorImpl<MachineInstr *> &DbgValues) {
   MachineInstr &MI = *this;
   if (!MI.getOperand(0).isReg())
     return;
 
-  MachineBasicBlock::iterator DI = MI; ++DI;
-  for (MachineBasicBlock::iterator DE = MI.getParent()->end();
-       DI != DE; ++DI) {
+  MachineBasicBlock::iterator DI = MI;
+  ++DI;
+  for (MachineBasicBlock::iterator DE = MI.getParent()->end(); DI != DE; ++DI) {
     if (!DI->isDebugValue())
       return;
     if (DI->hasDebugOperandForReg(MI.getOperand(0).getReg()))
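[Editorial aside, not part of the patch.] The `MachineInstrExpressionTrait::getHashValue` hunk above hashes the opcode together with every operand while skipping virtual-register defs, so two instructions that differ only in which virtual register they define hash alike. A minimal plain-Python sketch of that idea (all names here are illustrative, not LLVM API):

```python
def instr_hash(opcode, operands):
    """Hash an instruction, ignoring virtual-register defs.

    operands: list of (value, is_def, is_virtual_reg) tuples, standing in
    for MachineOperands.
    """
    components = [opcode]
    for value, is_def, is_virtual in operands:
        if is_def and is_virtual:
            continue  # Skip virtual register defs, as getHashValue does.
        components.append(value)
    return hash(tuple(components))
```

With this scheme, `%v1 = add %a, %b` and `%v9 = add %a, %b` produce the same hash, which is what makes the trait usable for CSE-style lookups.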
diff --git a/llvm/lib/CodeGen/MachineInstrBundle.cpp b/llvm/lib/CodeGen/MachineInstrBundle.cpp
index b9db34f7be9548c..f96ff386118ba9e 100644
--- a/llvm/lib/CodeGen/MachineInstrBundle.cpp
+++ b/llvm/lib/CodeGen/MachineInstrBundle.cpp
@@ -22,20 +22,20 @@
 using namespace llvm;
 
 namespace {
-  class UnpackMachineBundles : public MachineFunctionPass {
-  public:
-    static char ID; // Pass identification
-    UnpackMachineBundles(
-        std::function<bool(const MachineFunction &)> Ftor = nullptr)
-        : MachineFunctionPass(ID), PredicateFtor(std::move(Ftor)) {
-      initializeUnpackMachineBundlesPass(*PassRegistry::getPassRegistry());
-    }
+class UnpackMachineBundles : public MachineFunctionPass {
+public:
+  static char ID; // Pass identification
+  UnpackMachineBundles(
+      std::function<bool(const MachineFunction &)> Ftor = nullptr)
+      : MachineFunctionPass(ID), PredicateFtor(std::move(Ftor)) {
+    initializeUnpackMachineBundlesPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-  private:
-    std::function<bool(const MachineFunction &)> PredicateFtor;
-  };
+private:
+  std::function<bool(const MachineFunction &)> PredicateFtor;
+};
 } // end anonymous namespace
 
 char UnpackMachineBundles::ID = 0;
@@ -50,7 +50,8 @@ bool UnpackMachineBundles::runOnMachineFunction(MachineFunction &MF) {
   bool Changed = false;
   for (MachineBasicBlock &MBB : MF) {
     for (MachineBasicBlock::instr_iterator MII = MBB.instr_begin(),
-           MIE = MBB.instr_end(); MII != MIE; ) {
+                                           MIE = MBB.instr_end();
+         MII != MIE;) {
       MachineInstr *MI = &*MII;
 
       // Remove BUNDLE instruction and the InsideBundle flags from bundled
@@ -58,7 +59,7 @@ bool UnpackMachineBundles::runOnMachineFunction(MachineFunction &MF) {
       if (MI->isBundle()) {
         while (++MII != MIE && MII->isBundledWithPred()) {
           MII->unbundleFromPred();
-          for (MachineOperand &MO  : MII->operands()) {
+          for (MachineOperand &MO : MII->operands()) {
             if (MO.isReg() && MO.isInternalRead())
               MO.setIsInternalRead(false);
           }
@@ -76,22 +77,21 @@ bool UnpackMachineBundles::runOnMachineFunction(MachineFunction &MF) {
   return Changed;
 }
 
-FunctionPass *
-llvm::createUnpackMachineBundles(
+FunctionPass *llvm::createUnpackMachineBundles(
     std::function<bool(const MachineFunction &)> Ftor) {
   return new UnpackMachineBundles(std::move(Ftor));
 }
 
 namespace {
-  class FinalizeMachineBundles : public MachineFunctionPass {
-  public:
-    static char ID; // Pass identification
-    FinalizeMachineBundles() : MachineFunctionPass(ID) {
-      initializeFinalizeMachineBundlesPass(*PassRegistry::getPassRegistry());
-    }
+class FinalizeMachineBundles : public MachineFunctionPass {
+public:
+  static char ID; // Pass identification
+  FinalizeMachineBundles() : MachineFunctionPass(ID) {
+    initializeFinalizeMachineBundlesPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
-  };
+  bool runOnMachineFunction(MachineFunction &MF) override;
+};
 } // end anonymous namespace
 
 char FinalizeMachineBundles::ID = 0;
@@ -142,7 +142,7 @@ void llvm::finalizeBundle(MachineBasicBlock &MBB,
   SmallSet<Register, 8> ExternUseSet;
   SmallSet<Register, 8> KilledUseSet;
   SmallSet<Register, 8> UndefUseSet;
-  SmallVector<MachineOperand*, 4> Defs;
+  SmallVector<MachineOperand *, 4> Defs;
   for (auto MII = FirstMI; MII != LastMI; ++MII) {
     // Debug instructions have no effects to track.
     if (MII->isDebugInstr())
@@ -214,7 +214,7 @@ void llvm::finalizeBundle(MachineBasicBlock &MBB,
       // If it's not live beyond end of the bundle, mark it dead.
       bool isDead = DeadDefSet.count(Reg) || KilledDefSet.count(Reg);
       MIB.addReg(Reg, getDefRegState(true) | getDeadRegState(isDead) |
-                 getImplRegState(true));
+                          getImplRegState(true));
     }
   }
 
@@ -223,7 +223,7 @@ void llvm::finalizeBundle(MachineBasicBlock &MBB,
     bool isKill = KilledUseSet.count(Reg);
     bool isUndef = UndefUseSet.count(Reg);
     MIB.addReg(Reg, getKillRegState(isKill) | getUndefRegState(isUndef) |
-               getImplRegState(true));
+                        getImplRegState(true));
   }
 
   // Set FrameSetup/FrameDestroy for the bundle. If any of the instructions got
@@ -264,7 +264,7 @@ bool llvm::finalizeBundles(MachineFunction &MF) {
     assert(!MII->isInsideBundle() &&
            "First instr cannot be inside bundle before finalization!");
 
-    for (++MII; MII != MIE; ) {
+    for (++MII; MII != MIE;) {
       if (!MII->isInsideBundle())
         ++MII;
       else {
diff --git a/llvm/lib/CodeGen/MachineLICM.cpp b/llvm/lib/CodeGen/MachineLICM.cpp
index fa7aa4a7a44ad08..361f3f76734d628 100644
--- a/llvm/lib/CodeGen/MachineLICM.cpp
+++ b/llvm/lib/CodeGen/MachineLICM.cpp
@@ -59,49 +59,44 @@ using namespace llvm;
 #define DEBUG_TYPE "machinelicm"
 
 static cl::opt<bool>
-AvoidSpeculation("avoid-speculation",
-                 cl::desc("MachineLICM should avoid speculation"),
-                 cl::init(true), cl::Hidden);
-
-static cl::opt<bool>
-HoistCheapInsts("hoist-cheap-insts",
-                cl::desc("MachineLICM should hoist even cheap instructions"),
-                cl::init(false), cl::Hidden);
-
-static cl::opt<bool>
-HoistConstStores("hoist-const-stores",
-                 cl::desc("Hoist invariant stores"),
-                 cl::init(true), cl::Hidden);
+    AvoidSpeculation("avoid-speculation",
+                     cl::desc("MachineLICM should avoid speculation"),
+                     cl::init(true), cl::Hidden);
+
+static cl::opt<bool> HoistCheapInsts(
+    "hoist-cheap-insts",
+    cl::desc("MachineLICM should hoist even cheap instructions"),
+    cl::init(false), cl::Hidden);
+
+static cl::opt<bool> HoistConstStores("hoist-const-stores",
+                                      cl::desc("Hoist invariant stores"),
+                                      cl::init(true), cl::Hidden);
 // The default threshold of 100 (i.e. if target block is 100 times hotter)
 // is based on empirical data on a single target and is subject to tuning.
-static cl::opt<unsigned>
-BlockFrequencyRatioThreshold("block-freq-ratio-threshold",
-                             cl::desc("Do not hoist instructions if target"
-                             "block is N times hotter than the source."),
-                             cl::init(100), cl::Hidden);
+static cl::opt<unsigned> BlockFrequencyRatioThreshold(
+    "block-freq-ratio-threshold",
+    cl::desc("Do not hoist instructions if target "
+             "block is N times hotter than the source."),
+    cl::init(100), cl::Hidden);
 
 enum class UseBFI { None, PGO, All };
 
-static cl::opt<UseBFI>
-DisableHoistingToHotterBlocks("disable-hoisting-to-hotter-blocks",
-                              cl::desc("Disable hoisting instructions to"
-                              " hotter blocks"),
-                              cl::init(UseBFI::PGO), cl::Hidden,
-                              cl::values(clEnumValN(UseBFI::None, "none",
-                              "disable the feature"),
-                              clEnumValN(UseBFI::PGO, "pgo",
-                              "enable the feature when using profile data"),
-                              clEnumValN(UseBFI::All, "all",
-                              "enable the feature with/wo profile data")));
-
-STATISTIC(NumHoisted,
-          "Number of machine instructions hoisted out of loops");
+static cl::opt<UseBFI> DisableHoistingToHotterBlocks(
+    "disable-hoisting-to-hotter-blocks",
+    cl::desc("Disable hoisting instructions to"
+             " hotter blocks"),
+    cl::init(UseBFI::PGO), cl::Hidden,
+    cl::values(clEnumValN(UseBFI::None, "none", "disable the feature"),
+               clEnumValN(UseBFI::PGO, "pgo",
+                          "enable the feature when using profile data"),
+               clEnumValN(UseBFI::All, "all",
+                          "enable the feature with/wo profile data")));
+
+STATISTIC(NumHoisted, "Number of machine instructions hoisted out of loops");
 STATISTIC(NumLowRP,
           "Number of instructions hoisted in low reg pressure situation");
-STATISTIC(NumHighLatency,
-          "Number of high latency instructions hoisted");
-STATISTIC(NumCSEed,
-          "Number of hoisted machine instructions CSEed");
+STATISTIC(NumHighLatency, "Number of high latency instructions hoisted");
+STATISTIC(NumCSEed, "Number of hoisted machine instructions CSEed");
 STATISTIC(NumPostRAHoisted,
           "Number of machine instructions hoisted out of loops post regalloc");
 STATISTIC(NumStoreConst,
@@ -111,182 +106,177 @@ STATISTIC(NumNotHoistedDueToHotness,
 
 namespace {
 
-  class MachineLICMBase : public MachineFunctionPass {
-    const TargetInstrInfo *TII = nullptr;
-    const TargetLoweringBase *TLI = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    const MachineFrameInfo *MFI = nullptr;
-    MachineRegisterInfo *MRI = nullptr;
-    TargetSchedModel SchedModel;
-    bool PreRegAlloc = false;
-    bool HasProfileData = false;
-
-    // Various analyses that we use...
-    AliasAnalysis *AA = nullptr;               // Alias analysis info.
-    MachineBlockFrequencyInfo *MBFI = nullptr; // Machine block frequncy info
-    MachineLoopInfo *MLI = nullptr;            // Current MachineLoopInfo
-    MachineDominatorTree *DT = nullptr; // Machine dominator tree for the cur loop
-
-    // State that is updated as we process loops
-    bool Changed = false;           // True if a loop is changed.
-    bool FirstInLoop = false;       // True if it's the first LICM in the loop.
-    MachineLoop *CurLoop = nullptr; // The current loop we are working on.
-    MachineBasicBlock *CurPreheader = nullptr; // The preheader for CurLoop.
-
-    // Exit blocks for CurLoop.
-    SmallVector<MachineBasicBlock *, 8> ExitBlocks;
-
-    bool isExitBlock(const MachineBasicBlock *MBB) const {
-      return is_contained(ExitBlocks, MBB);
-    }
+class MachineLICMBase : public MachineFunctionPass {
+  const TargetInstrInfo *TII = nullptr;
+  const TargetLoweringBase *TLI = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  const MachineFrameInfo *MFI = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
+  TargetSchedModel SchedModel;
+  bool PreRegAlloc = false;
+  bool HasProfileData = false;
+
+  // Various analyses that we use...
+  AliasAnalysis *AA = nullptr;               // Alias analysis info.
+  MachineBlockFrequencyInfo *MBFI = nullptr; // Machine block frequency info
+  MachineLoopInfo *MLI = nullptr;            // Current MachineLoopInfo
+  MachineDominatorTree *DT = nullptr; // Machine dominator tree for the cur loop
+
+  // State that is updated as we process loops
+  bool Changed = false;           // True if a loop is changed.
+  bool FirstInLoop = false;       // True if it's the first LICM in the loop.
+  MachineLoop *CurLoop = nullptr; // The current loop we are working on.
+  MachineBasicBlock *CurPreheader = nullptr; // The preheader for CurLoop.
+
+  // Exit blocks for CurLoop.
+  SmallVector<MachineBasicBlock *, 8> ExitBlocks;
+
+  bool isExitBlock(const MachineBasicBlock *MBB) const {
+    return is_contained(ExitBlocks, MBB);
+  }
 
-    // Track 'estimated' register pressure.
-    SmallSet<Register, 32> RegSeen;
-    SmallVector<unsigned, 8> RegPressure;
-
-    // Register pressure "limit" per register pressure set. If the pressure
-    // is higher than the limit, then it's considered high.
-    SmallVector<unsigned, 8> RegLimit;
-
-    // Register pressure on path leading from loop preheader to current BB.
-    SmallVector<SmallVector<unsigned, 8>, 16> BackTrace;
-
-    // For each opcode, keep a list of potential CSE instructions.
-    DenseMap<unsigned, std::vector<MachineInstr *>> CSEMap;
-
-    enum {
-      SpeculateFalse   = 0,
-      SpeculateTrue    = 1,
-      SpeculateUnknown = 2
-    };
-
-    // If a MBB does not dominate loop exiting blocks then it may not safe
-    // to hoist loads from this block.
-    // Tri-state: 0 - false, 1 - true, 2 - unknown
-    unsigned SpeculationState = SpeculateUnknown;
-
-  public:
-    MachineLICMBase(char &PassID, bool PreRegAlloc)
-        : MachineFunctionPass(PassID), PreRegAlloc(PreRegAlloc) {}
-
-    bool runOnMachineFunction(MachineFunction &MF) override;
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addRequired<MachineLoopInfo>();
-      if (DisableHoistingToHotterBlocks != UseBFI::None)
-        AU.addRequired<MachineBlockFrequencyInfo>();
-      AU.addRequired<MachineDominatorTree>();
-      AU.addRequired<AAResultsWrapperPass>();
-      AU.addPreserved<MachineLoopInfo>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  // Track 'estimated' register pressure.
+  SmallSet<Register, 32> RegSeen;
+  SmallVector<unsigned, 8> RegPressure;
 
-    void releaseMemory() override {
-      RegSeen.clear();
-      RegPressure.clear();
-      RegLimit.clear();
-      BackTrace.clear();
-      CSEMap.clear();
-    }
+  // Register pressure "limit" per register pressure set. If the pressure
+  // is higher than the limit, then it's considered high.
+  SmallVector<unsigned, 8> RegLimit;
+
+  // Register pressure on path leading from loop preheader to current BB.
+  SmallVector<SmallVector<unsigned, 8>, 16> BackTrace;
+
+  // For each opcode, keep a list of potential CSE instructions.
+  DenseMap<unsigned, std::vector<MachineInstr *>> CSEMap;
+
+  enum { SpeculateFalse = 0, SpeculateTrue = 1, SpeculateUnknown = 2 };
+
+  // If an MBB does not dominate loop exiting blocks then it may not be safe
+  // to hoist loads from this block.
+  // Tri-state: 0 - false, 1 - true, 2 - unknown
+  unsigned SpeculationState = SpeculateUnknown;
+
+public:
+  MachineLICMBase(char &PassID, bool PreRegAlloc)
+      : MachineFunctionPass(PassID), PreRegAlloc(PreRegAlloc) {}
+
+  bool runOnMachineFunction(MachineFunction &MF) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addRequired<MachineLoopInfo>();
+    if (DisableHoistingToHotterBlocks != UseBFI::None)
+      AU.addRequired<MachineBlockFrequencyInfo>();
+    AU.addRequired<MachineDominatorTree>();
+    AU.addRequired<AAResultsWrapperPass>();
+    AU.addPreserved<MachineLoopInfo>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-  private:
-    /// Keep track of information about hoisting candidates.
-    struct CandidateInfo {
-      MachineInstr *MI;
-      unsigned      Def;
-      int           FI;
+  void releaseMemory() override {
+    RegSeen.clear();
+    RegPressure.clear();
+    RegLimit.clear();
+    BackTrace.clear();
+    CSEMap.clear();
+  }
+
+private:
+  /// Keep track of information about hoisting candidates.
+  struct CandidateInfo {
+    MachineInstr *MI;
+    unsigned Def;
+    int FI;
 
-      CandidateInfo(MachineInstr *mi, unsigned def, int fi)
+    CandidateInfo(MachineInstr *mi, unsigned def, int fi)
         : MI(mi), Def(def), FI(fi) {}
-    };
+  };
 
-    void HoistRegionPostRA();
+  void HoistRegionPostRA();
 
-    void HoistPostRA(MachineInstr *MI, unsigned Def);
+  void HoistPostRA(MachineInstr *MI, unsigned Def);
 
-    void ProcessMI(MachineInstr *MI, BitVector &PhysRegDefs,
-                   BitVector &PhysRegClobbers, SmallSet<int, 32> &StoredFIs,
-                   SmallVectorImpl<CandidateInfo> &Candidates);
+  void ProcessMI(MachineInstr *MI, BitVector &PhysRegDefs,
+                 BitVector &PhysRegClobbers, SmallSet<int, 32> &StoredFIs,
+                 SmallVectorImpl<CandidateInfo> &Candidates);
 
-    void AddToLiveIns(MCRegister Reg);
+  void AddToLiveIns(MCRegister Reg);
 
-    bool IsLICMCandidate(MachineInstr &I);
+  bool IsLICMCandidate(MachineInstr &I);
 
-    bool IsLoopInvariantInst(MachineInstr &I);
+  bool IsLoopInvariantInst(MachineInstr &I);
 
-    bool HasLoopPHIUse(const MachineInstr *MI) const;
+  bool HasLoopPHIUse(const MachineInstr *MI) const;
 
-    bool HasHighOperandLatency(MachineInstr &MI, unsigned DefIdx,
-                               Register Reg) const;
+  bool HasHighOperandLatency(MachineInstr &MI, unsigned DefIdx,
+                             Register Reg) const;
 
-    bool IsCheapInstruction(MachineInstr &MI) const;
+  bool IsCheapInstruction(MachineInstr &MI) const;
 
-    bool CanCauseHighRegPressure(const DenseMap<unsigned, int> &Cost,
-                                 bool Cheap);
+  bool CanCauseHighRegPressure(const DenseMap<unsigned, int> &Cost, bool Cheap);
 
-    void UpdateBackTraceRegPressure(const MachineInstr *MI);
+  void UpdateBackTraceRegPressure(const MachineInstr *MI);
 
-    bool IsProfitableToHoist(MachineInstr &MI);
+  bool IsProfitableToHoist(MachineInstr &MI);
 
-    bool IsGuaranteedToExecute(MachineBasicBlock *BB);
+  bool IsGuaranteedToExecute(MachineBasicBlock *BB);
 
-    bool isTriviallyReMaterializable(const MachineInstr &MI) const;
+  bool isTriviallyReMaterializable(const MachineInstr &MI) const;
 
-    void EnterScope(MachineBasicBlock *MBB);
+  void EnterScope(MachineBasicBlock *MBB);
 
-    void ExitScope(MachineBasicBlock *MBB);
+  void ExitScope(MachineBasicBlock *MBB);
 
-    void ExitScopeIfDone(
-        MachineDomTreeNode *Node,
-        DenseMap<MachineDomTreeNode *, unsigned> &OpenChildren,
-        const DenseMap<MachineDomTreeNode *, MachineDomTreeNode *> &ParentMap);
+  void ExitScopeIfDone(
+      MachineDomTreeNode *Node,
+      DenseMap<MachineDomTreeNode *, unsigned> &OpenChildren,
+      const DenseMap<MachineDomTreeNode *, MachineDomTreeNode *> &ParentMap);
 
-    void HoistOutOfLoop(MachineDomTreeNode *HeaderN);
+  void HoistOutOfLoop(MachineDomTreeNode *HeaderN);
 
-    void InitRegPressure(MachineBasicBlock *BB);
+  void InitRegPressure(MachineBasicBlock *BB);
 
-    DenseMap<unsigned, int> calcRegisterCost(const MachineInstr *MI,
-                                             bool ConsiderSeen,
-                                             bool ConsiderUnseenAsDef);
+  DenseMap<unsigned, int> calcRegisterCost(const MachineInstr *MI,
+                                           bool ConsiderSeen,
+                                           bool ConsiderUnseenAsDef);
 
-    void UpdateRegPressure(const MachineInstr *MI,
-                           bool ConsiderUnseenAsDef = false);
+  void UpdateRegPressure(const MachineInstr *MI,
+                         bool ConsiderUnseenAsDef = false);
 
-    MachineInstr *ExtractHoistableLoad(MachineInstr *MI);
+  MachineInstr *ExtractHoistableLoad(MachineInstr *MI);
 
-    MachineInstr *LookForDuplicate(const MachineInstr *MI,
-                                   std::vector<MachineInstr *> &PrevMIs);
+  MachineInstr *LookForDuplicate(const MachineInstr *MI,
+                                 std::vector<MachineInstr *> &PrevMIs);
 
-    bool
-    EliminateCSE(MachineInstr *MI,
-                 DenseMap<unsigned, std::vector<MachineInstr *>>::iterator &CI);
+  bool
+  EliminateCSE(MachineInstr *MI,
+               DenseMap<unsigned, std::vector<MachineInstr *>>::iterator &CI);
 
-    bool MayCSE(MachineInstr *MI);
+  bool MayCSE(MachineInstr *MI);
 
-    bool Hoist(MachineInstr *MI, MachineBasicBlock *Preheader);
+  bool Hoist(MachineInstr *MI, MachineBasicBlock *Preheader);
 
-    void InitCSEMap(MachineBasicBlock *BB);
+  void InitCSEMap(MachineBasicBlock *BB);
 
-    bool isTgtHotterThanSrc(MachineBasicBlock *SrcBlock,
-                            MachineBasicBlock *TgtBlock);
-    MachineBasicBlock *getCurPreheader();
-  };
+  bool isTgtHotterThanSrc(MachineBasicBlock *SrcBlock,
+                          MachineBasicBlock *TgtBlock);
+  MachineBasicBlock *getCurPreheader();
+};
 
-  class MachineLICM : public MachineLICMBase {
-  public:
-    static char ID;
-    MachineLICM() : MachineLICMBase(ID, false) {
-      initializeMachineLICMPass(*PassRegistry::getPassRegistry());
-    }
-  };
+class MachineLICM : public MachineLICMBase {
+public:
+  static char ID;
+  MachineLICM() : MachineLICMBase(ID, false) {
+    initializeMachineLICMPass(*PassRegistry::getPassRegistry());
+  }
+};
 
-  class EarlyMachineLICM : public MachineLICMBase {
-  public:
-    static char ID;
-    EarlyMachineLICM() : MachineLICMBase(ID, true) {
-      initializeEarlyMachineLICMPass(*PassRegistry::getPassRegistry());
-    }
-  };
+class EarlyMachineLICM : public MachineLICMBase {
+public:
+  static char ID;
+  EarlyMachineLICM() : MachineLICMBase(ID, true) {
+    initializeEarlyMachineLICMPass(*PassRegistry::getPassRegistry());
+  }
+};
 
 } // end anonymous namespace
 
@@ -363,7 +353,7 @@ bool MachineLICMBase::runOnMachineFunction(MachineFunction &MF) {
   if (DisableHoistingToHotterBlocks != UseBFI::None)
     MBFI = &getAnalysis<MachineBlockFrequencyInfo>();
   MLI = &getAnalysis<MachineLoopInfo>();
-  DT  = &getAnalysis<MachineDominatorTree>();
+  DT = &getAnalysis<MachineDominatorTree>();
   AA = &getAnalysis<AAResultsWrapperPass>().getAAResults();
 
   SmallVector<MachineLoop *, 8> Worklist(MLI->begin(), MLI->end());
@@ -401,7 +391,7 @@ static bool InstructionStoresToFI(const MachineInstr *MI, int FI) {
   // Check mayStore before memory operands so that e.g. DBG_VALUEs will return
   // true since they have no memory operands.
   if (!MI->mayStore())
-     return false;
+    return false;
   // If we lost memory operands, conservatively assume that the instruction
   // writes to all slots.
   if (MI->memoperands_empty())
@@ -410,7 +400,7 @@ static bool InstructionStoresToFI(const MachineInstr *MI, int FI) {
     if (!MemOp->isStore() || !MemOp->getPseudoValue())
       continue;
     if (const FixedStackPseudoSourceValue *Value =
-        dyn_cast<FixedStackPseudoSourceValue>(MemOp->getPseudoValue())) {
+            dyn_cast<FixedStackPseudoSourceValue>(MemOp->getPseudoValue())) {
       if (Value->getFrameIndex() == FI)
         return true;
     }
@@ -420,8 +410,7 @@ static bool InstructionStoresToFI(const MachineInstr *MI, int FI) {
 
 /// Examine the instruction for potentai LICM candidate. Also
 /// gather register def and frame object update information.
-void MachineLICMBase::ProcessMI(MachineInstr *MI,
-                                BitVector &PhysRegDefs,
+void MachineLICMBase::ProcessMI(MachineInstr *MI, BitVector &PhysRegDefs,
                                 BitVector &PhysRegClobbers,
                                 SmallSet<int, 32> &StoredFIs,
                                 SmallVectorImpl<CandidateInfo> &Candidates) {
@@ -432,8 +421,7 @@ void MachineLICMBase::ProcessMI(MachineInstr *MI,
     if (MO.isFI()) {
       // Remember if the instruction stores to the frame index.
       int FI = MO.getIndex();
-      if (!StoredFIs.count(FI) &&
-          MFI->isSpillSlotObjectIndex(FI) &&
+      if (!StoredFIs.count(FI) && MFI->isSpillSlotObjectIndex(FI) &&
           InstructionStoresToFI(MI, FI))
         StoredFIs.insert(FI);
       HasNonInvariantUse = true;
@@ -516,7 +504,7 @@ void MachineLICMBase::HoistRegionPostRA() {
     return;
 
   unsigned NumRegs = TRI->getNumRegs();
-  BitVector PhysRegDefs(NumRegs); // Regs defined once in the loop.
+  BitVector PhysRegDefs(NumRegs);     // Regs defined once in the loop.
   BitVector PhysRegClobbers(NumRegs); // Regs defined more than once.
 
   SmallVector<CandidateInfo, 32> Candidates;
@@ -528,7 +516,8 @@ void MachineLICMBase::HoistRegionPostRA() {
     // If the header of the loop containing this basic block is a landing pad,
     // then don't try to hoist instructions out of this loop.
     const MachineLoop *ML = MLI->getLoopFor(BB);
-    if (ML && ML->getHeader()->isEHPad()) continue;
+    if (ML && ML->getHeader()->isEHPad())
+      continue;
 
     // Conservatively treat live-in's as an external def.
     // FIXME: That means a reload that're reused in successor block(s) will not
@@ -583,8 +572,7 @@ void MachineLICMBase::HoistRegionPostRA() {
         if (!MO.getReg())
           continue;
         Register Reg = MO.getReg();
-        if (PhysRegDefs.test(Reg) ||
-            PhysRegClobbers.test(Reg)) {
+        if (PhysRegDefs.test(Reg) || PhysRegClobbers.test(Reg)) {
           // If it's using a non-loop-invariant register, then it's obviously
           // not safe to hoist.
           Safe = false;
@@ -652,7 +640,7 @@ bool MachineLICMBase::IsGuaranteedToExecute(MachineBasicBlock *BB) {
 
   if (BB != CurLoop->getHeader()) {
     // Check loop exiting blocks.
-    SmallVector<MachineBasicBlock*, 8> CurrentLoopExitingBlocks;
+    SmallVector<MachineBasicBlock *, 8> CurrentLoopExitingBlocks;
     CurLoop->getExitingBlocks(CurrentLoopExitingBlocks);
     for (MachineBasicBlock *CurrentLoopExitingBlock : CurrentLoopExitingBlocks)
       if (!DT->dominates(BB, CurrentLoopExitingBlock)) {
@@ -697,13 +685,14 @@ void MachineLICMBase::ExitScope(MachineBasicBlock *MBB) {
 /// Destroy scope for the MBB that corresponds to the given dominator tree node
 /// if its a leaf or all of its children are done. Walk up the dominator tree to
 /// destroy ancestors which are now done.
-void MachineLICMBase::ExitScopeIfDone(MachineDomTreeNode *Node,
-    DenseMap<MachineDomTreeNode*, unsigned> &OpenChildren,
-    const DenseMap<MachineDomTreeNode*, MachineDomTreeNode*> &ParentMap) {
+void MachineLICMBase::ExitScopeIfDone(
+    MachineDomTreeNode *Node,
+    DenseMap<MachineDomTreeNode *, unsigned> &OpenChildren,
+    const DenseMap<MachineDomTreeNode *, MachineDomTreeNode *> &ParentMap) {
   if (OpenChildren[Node])
     return;
 
-  for(;;) {
+  for (;;) {
     ExitScope(Node->getBlock());
     // Now traverse upwards to pop ancestors whose offsprings are all done.
     MachineDomTreeNode *Parent = ParentMap.lookup(Node);
@@ -722,10 +711,10 @@ void MachineLICMBase::HoistOutOfLoop(MachineDomTreeNode *HeaderN) {
   if (!Preheader)
     return;
 
-  SmallVector<MachineDomTreeNode*, 32> Scopes;
-  SmallVector<MachineDomTreeNode*, 8> WorkList;
-  DenseMap<MachineDomTreeNode*, MachineDomTreeNode*> ParentMap;
-  DenseMap<MachineDomTreeNode*, unsigned> OpenChildren;
+  SmallVector<MachineDomTreeNode *, 32> Scopes;
+  SmallVector<MachineDomTreeNode *, 8> WorkList;
+  DenseMap<MachineDomTreeNode *, MachineDomTreeNode *> ParentMap;
+  DenseMap<MachineDomTreeNode *, unsigned> OpenChildren;
 
   // Perform a DFS walk to determine the order of visit.
   WorkList.push_back(HeaderN);
@@ -929,7 +918,7 @@ static bool isInvariantStore(const MachineInstr &MI,
       else
         FoundCallerPresReg = true;
     } else if (!MO.isImm()) {
-        return false;
+      return false;
     }
   }
   return FoundCallerPresReg;
@@ -1016,7 +1005,7 @@ bool MachineLICMBase::IsLoopInvariantInst(MachineInstr &I) {
 /// Return true if the specified instruction is used by a phi node and hoisting
 /// it could cause a copy to be inserted.
 bool MachineLICMBase::HasLoopPHIUse(const MachineInstr *MI) const {
-  SmallVector<const MachineInstr*, 8> Work(1, MI);
+  SmallVector<const MachineInstr *, 8> Work(1, MI);
   do {
     MI = Work.pop_back_val();
     for (const MachineOperand &MO : MI->all_defs()) {
@@ -1026,8 +1015,8 @@ bool MachineLICMBase::HasLoopPHIUse(const MachineInstr *MI) const {
       for (MachineInstr &UseMI : MRI->use_instructions(Reg)) {
         // A PHI may cause a copy to be inserted.
         if (UseMI.isPHI()) {
-          // A PHI inside the loop causes a copy because the live range of Reg is
-          // extended across the PHI.
+          // A PHI inside the loop causes a copy because the live range of Reg
+          // is extended across the PHI.
           if (CurLoop->contains(&UseMI))
             return true;
           // A PHI in an exit block can cause a copy to be inserted if the PHI
@@ -1104,9 +1093,8 @@ bool MachineLICMBase::IsCheapInstruction(MachineInstr &MI) const {
 
 /// Visit BBs from header to current BB, check if hoisting an instruction of the
 /// given cost matrix can cause high register pressure.
-bool
-MachineLICMBase::CanCauseHighRegPressure(const DenseMap<unsigned, int>& Cost,
-                                         bool CheapInstr) {
+bool MachineLICMBase::CanCauseHighRegPressure(
+    const DenseMap<unsigned, int> &Cost, bool CheapInstr) {
   for (const auto &RPIdAndCost : Cost) {
     if (RPIdAndCost.second <= 0)
       continue;
@@ -1160,7 +1148,7 @@ bool MachineLICMBase::IsProfitableToHoist(MachineInstr &MI) {
   // - When hoisting the last use of a value in the loop, that value no longer
   //   needs to be live in the loop. This lowers register pressure in the loop.
 
-  if (HoistConstStores &&  isCopyFeedingInvariantStore(MI, MRI, TRI))
+  if (HoistConstStores && isCopyFeedingInvariantStore(MI, MRI, TRI))
     return true;
 
   bool CheapInstr = IsCheapInstruction(MI);
@@ -1253,11 +1241,11 @@ MachineInstr *MachineLICMBase::ExtractHoistableLoad(MachineInstr *MI) {
   // Next determine the register class for a temporary register.
   unsigned LoadRegIndex;
   unsigned NewOpc =
-    TII->getOpcodeAfterMemoryUnfold(MI->getOpcode(),
-                                    /*UnfoldLoad=*/true,
-                                    /*UnfoldStore=*/false,
-                                    &LoadRegIndex);
-  if (NewOpc == 0) return nullptr;
+      TII->getOpcodeAfterMemoryUnfold(MI->getOpcode(),
+                                      /*UnfoldLoad=*/true,
+                                      /*UnfoldStore=*/false, &LoadRegIndex);
+  if (NewOpc == 0)
+    return nullptr;
   const MCInstrDesc &MID = TII->get(NewOpc);
   MachineFunction &MF = *MI->getMF();
   const TargetRegisterClass *RC = TII->getRegClass(MID, LoadRegIndex, TRI, MF);
@@ -1272,8 +1260,7 @@ MachineInstr *MachineLICMBase::ExtractHoistableLoad(MachineInstr *MI) {
   assert(Success &&
          "unfoldMemoryOperand failed when getOpcodeAfterMemoryUnfold "
          "succeeded!");
-  assert(NewMIs.size() == 2 &&
-         "Unfolded a load into multiple instructions!");
+  assert(NewMIs.size() == 2 && "Unfolded a load into multiple instructions!");
   MachineBasicBlock *MBB = MI->getParent();
   MachineBasicBlock::iterator Pos = MI;
   MBB->insert(Pos, NewMIs[0]);
@@ -1349,7 +1336,7 @@ bool MachineLICMBase::EliminateCSE(
         Defs.push_back(i);
     }
 
-    SmallVector<const TargetRegisterClass*, 2> OrigRCs;
+    SmallVector<const TargetRegisterClass *, 2> OrigRCs;
     for (unsigned i = 0, e = Defs.size(); i != e; ++i) {
       unsigned Idx = Defs[i];
       Register Reg = MI->getOperand(Idx).getReg();
@@ -1403,7 +1390,7 @@ bool MachineLICMBase::Hoist(MachineInstr *MI, MachineBasicBlock *Preheader) {
 
   // Disable the instruction hoisting due to block hotness
   if ((DisableHoistingToHotterBlocks == UseBFI::All ||
-      (DisableHoistingToHotterBlocks == UseBFI::PGO && HasProfileData)) &&
+       (DisableHoistingToHotterBlocks == UseBFI::PGO && HasProfileData)) &&
       isTgtHotterThanSrc(SrcBlock, Preheader)) {
     ++NumNotHoistedDueToHotness;
     return false;
@@ -1412,7 +1399,8 @@ bool MachineLICMBase::Hoist(MachineInstr *MI, MachineBasicBlock *Preheader) {
   if (!IsLoopInvariantInst(*MI) || !IsProfitableToHoist(*MI)) {
     // If not, try unfolding a hoistable load.
     MI = ExtractHoistableLoad(MI);
-    if (!MI) return false;
+    if (!MI)
+      return false;
   }
 
   // If we have hoisted an instruction that may store, it can only be a constant
@@ -1444,7 +1432,7 @@ bool MachineLICMBase::Hoist(MachineInstr *MI, MachineBasicBlock *Preheader) {
       CSEMap.find(Opcode);
   if (!EliminateCSE(MI, CI)) {
     // Otherwise, splice the instruction to the preheader.
-    Preheader->splice(Preheader->getFirstTerminator(),MI->getParent(),MI);
+    Preheader->splice(Preheader->getFirstTerminator(), MI->getParent(), MI);
 
     // Since we are moving the instruction out of its basic block, we do not
     // retain its debug location. Doing so would degrade the debugging
diff --git a/llvm/lib/CodeGen/MachineLateInstrsCleanup.cpp b/llvm/lib/CodeGen/MachineLateInstrsCleanup.cpp
index c44b968b317d7ac..755e20337bbc721 100644
--- a/llvm/lib/CodeGen/MachineLateInstrsCleanup.cpp
+++ b/llvm/lib/CodeGen/MachineLateInstrsCleanup.cpp
@@ -59,8 +59,7 @@ class MachineLateInstrsCleanup : public MachineFunctionPass {
 
   void removeRedundantDef(MachineInstr *MI);
   void clearKillsForDef(Register Reg, MachineBasicBlock *MBB,
-                        MachineBasicBlock::iterator I,
-                        BitVector &VisitedPreds);
+                        MachineBasicBlock::iterator I, BitVector &VisitedPreds);
 
 public:
   static char ID; // Pass identification, replacement for typeid
@@ -116,10 +115,10 @@ bool MachineLateInstrsCleanup::runOnMachineFunction(MachineFunction &MF) {
 // in MBB and if needed continue in predecessors until a use/def of Reg is
 // encountered. This seems to be faster in practice than tracking kill flags
 // in a map.
-void MachineLateInstrsCleanup::
-clearKillsForDef(Register Reg, MachineBasicBlock *MBB,
-                 MachineBasicBlock::iterator I,
-                 BitVector &VisitedPreds) {
+void MachineLateInstrsCleanup::clearKillsForDef(Register Reg,
+                                                MachineBasicBlock *MBB,
+                                                MachineBasicBlock::iterator I,
+                                                BitVector &VisitedPreds) {
   VisitedPreds.set(MBB->getNumber());
 
   // Kill flag in MBB
diff --git a/llvm/lib/CodeGen/MachineLoopInfo.cpp b/llvm/lib/CodeGen/MachineLoopInfo.cpp
index 37a0ff3d71c87e8..13db6958897fdcd 100644
--- a/llvm/lib/CodeGen/MachineLoopInfo.cpp
+++ b/llvm/lib/CodeGen/MachineLoopInfo.cpp
@@ -35,10 +35,10 @@ MachineLoopInfo::MachineLoopInfo() : MachineFunctionPass(ID) {
   initializeMachineLoopInfoPass(*PassRegistry::getPassRegistry());
 }
 INITIALIZE_PASS_BEGIN(MachineLoopInfo, "machine-loops",
-                "Machine Natural Loop Construction", true, true)
+                      "Machine Natural Loop Construction", true, true)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_END(MachineLoopInfo, "machine-loops",
-                "Machine Natural Loop Construction", true, true)
+                    "Machine Natural Loop Construction", true, true)
 
 char &llvm::MachineLoopInfoID = MachineLoopInfo::ID;
 
@@ -164,7 +164,8 @@ bool MachineLoop::isLoopInvariant(MachineInstr &I) const {
       continue;
 
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
 
     // An instruction that uses or defines a physical register can't e.g. be
     // hoisted, so mark this as not invariant.
@@ -194,8 +195,7 @@ bool MachineLoop::isLoopInvariant(MachineInstr &I) const {
     if (!MO.isUse())
       continue;
 
-    assert(MRI->getVRegDef(Reg) &&
-           "Machine instr not mapped for this vreg?!");
+    assert(MRI->getVRegDef(Reg) && "Machine instr not mapped for this vreg?!");
 
     // If the loop contains the definition of an operand, then the instruction
     // isn't loop invariant.
@@ -208,7 +208,5 @@ bool MachineLoop::isLoopInvariant(MachineInstr &I) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void MachineLoop::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void MachineLoop::dump() const { print(dbgs()); }
 #endif
diff --git a/llvm/lib/CodeGen/MachineLoopUtils.cpp b/llvm/lib/CodeGen/MachineLoopUtils.cpp
index 0e8335d4974d722..d84484ccd35899b 100644
--- a/llvm/lib/CodeGen/MachineLoopUtils.cpp
+++ b/llvm/lib/CodeGen/MachineLoopUtils.cpp
@@ -18,7 +18,8 @@ namespace {
 MachineInstr &findEquivalentInstruction(MachineInstr &MI,
                                         MachineBasicBlock *BB) {
   MachineBasicBlock *PB = MI.getParent();
-  unsigned Offset = std::distance(PB->instr_begin(), MachineBasicBlock::instr_iterator(MI));
+  unsigned Offset =
+      std::distance(PB->instr_begin(), MachineBasicBlock::instr_iterator(MI));
   return *std::next(BB->instr_begin(), Offset);
 }
 } // namespace
diff --git a/llvm/lib/CodeGen/MachineModuleInfo.cpp b/llvm/lib/CodeGen/MachineModuleInfo.cpp
index 921feb253d64226..ba0dbe8f69fe489 100644
--- a/llvm/lib/CodeGen/MachineModuleInfo.cpp
+++ b/llvm/lib/CodeGen/MachineModuleInfo.cpp
@@ -155,9 +155,7 @@ class FreeMachineFunction : public FunctionPass {
     return true;
   }
 
-  StringRef getPassName() const override {
-    return "Free MachineFunction";
-  }
+  StringRef getPassName() const override { return "Free MachineFunction"; }
 };
 
 } // end anonymous namespace
@@ -225,8 +223,8 @@ bool MachineModuleInfoWrapperPass::doInitialization(Module &M) {
         Ctx.diagnose(
             DiagnosticInfoSrcMgr(SMD, M.getName(), IsInlineAsm, LocCookie));
       });
-  MMI.DbgInfoAvailable = !DisableDebugInfoPrinting &&
-                         !M.debug_compile_units().empty();
+  MMI.DbgInfoAvailable =
+      !DisableDebugInfoPrinting && !M.debug_compile_units().empty();
   return false;
 }
 
@@ -241,7 +239,7 @@ MachineModuleInfo MachineModuleAnalysis::run(Module &M,
                                              ModuleAnalysisManager &) {
   MachineModuleInfo MMI(TM);
   MMI.TheModule = &M;
-  MMI.DbgInfoAvailable = !DisableDebugInfoPrinting &&
-                         !M.debug_compile_units().empty();
+  MMI.DbgInfoAvailable =
+      !DisableDebugInfoPrinting && !M.debug_compile_units().empty();
   return MMI;
 }
diff --git a/llvm/lib/CodeGen/MachineOperand.cpp b/llvm/lib/CodeGen/MachineOperand.cpp
index 788c134b6ee8408..69028c37461aa02 100644
--- a/llvm/lib/CodeGen/MachineOperand.cpp
+++ b/llvm/lib/CodeGen/MachineOperand.cpp
@@ -89,7 +89,8 @@ void MachineOperand::substVirtReg(Register Reg, unsigned SubIdx,
     setSubReg(SubIdx);
 }
 
-void MachineOperand::substPhysReg(MCRegister Reg, const TargetRegisterInfo &TRI) {
+void MachineOperand::substPhysReg(MCRegister Reg,
+                                  const TargetRegisterInfo &TRI) {
   assert(Register::isPhysicalRegister(Reg));
   if (getSubReg()) {
     Reg = TRI.getSubReg(Reg, getSubReg());
@@ -179,8 +180,7 @@ void MachineOperand::ChangeToFPImmediate(const ConstantFP *FPImm,
   setTargetFlags(TargetFlags);
 }
 
-void MachineOperand::ChangeToES(const char *SymName,
-                                unsigned TargetFlags) {
+void MachineOperand::ChangeToES(const char *SymName, unsigned TargetFlags) {
   assert((!isReg() || !isTied()) &&
          "Cannot change a tied operand into an external symbol");
 
@@ -376,7 +376,8 @@ hash_code llvm::hash_value(const MachineOperand &MO) {
   switch (MO.getType()) {
   case MachineOperand::MO_Register:
     // Register operands don't have target flags.
-    return hash_combine(MO.getType(), (unsigned)MO.getReg(), MO.getSubReg(), MO.isDef());
+    return hash_combine(MO.getType(), (unsigned)MO.getReg(), MO.getSubReg(),
+                        MO.isDef());
   case MachineOperand::MO_Immediate:
     return hash_combine(MO.getType(), MO.getTargetFlags(), MO.getImm());
   case MachineOperand::MO_CImmediate:
@@ -538,7 +539,7 @@ static const char *getTargetMMOFlagName(const TargetInstrInfo &TII,
   return nullptr;
 }
 
-static void printFrameIndex(raw_ostream& OS, int FrameIndex, bool IsFixed,
+static void printFrameIndex(raw_ostream &OS, int FrameIndex, bool IsFixed,
                             const MachineFrameInfo *MFI) {
   StringRef Name;
   if (MFI) {
@@ -989,8 +990,8 @@ void MachineOperand::print(raw_ostream &OS, ModuleSlotTracker &MST,
   }
   case MachineOperand::MO_Predicate: {
     auto Pred = static_cast<CmpInst::Predicate>(getPredicate());
-    OS << (CmpInst::isIntPredicate(Pred) ? "int" : "float") << "pred("
-       << Pred << ')';
+    OS << (CmpInst::isIntPredicate(Pred) ? "int" : "float") << "pred(" << Pred
+       << ')';
     break;
   }
   case MachineOperand::MO_ShuffleMask:
diff --git a/llvm/lib/CodeGen/MachineOptimizationRemarkEmitter.cpp b/llvm/lib/CodeGen/MachineOptimizationRemarkEmitter.cpp
index 1c31eba909e780a..5fac61fc3401c89 100644
--- a/llvm/lib/CodeGen/MachineOptimizationRemarkEmitter.cpp
+++ b/llvm/lib/CodeGen/MachineOptimizationRemarkEmitter.cpp
@@ -1,7 +1,7 @@
 ///===- MachineOptimizationRemarkEmitter.cpp - Opt Diagnostic -*- C++ -*---===//
 ///
-/// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
-/// See https://llvm.org/LICENSE.txt for license information.
+/// Part of the LLVM Project, under the Apache License v2.0 with LLVM
+/// Exceptions. See https://llvm.org/LICENSE.txt for license information.
 /// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
 ///
 ///===---------------------------------------------------------------------===//
diff --git a/llvm/lib/CodeGen/MachineOutliner.cpp b/llvm/lib/CodeGen/MachineOutliner.cpp
index a0769105c929246..162e360fa23aca6 100644
--- a/llvm/lib/CodeGen/MachineOutliner.cpp
+++ b/llvm/lib/CodeGen/MachineOutliner.cpp
@@ -93,8 +93,7 @@ STATISTIC(NumLegalInUnsignedVec, "Outlinable instructions mapped");
 STATISTIC(NumIllegalInUnsignedVec,
           "Unoutlinable instructions mapped + number of sentinel values");
 STATISTIC(NumSentinels, "Sentinel values inserted during mapping");
-STATISTIC(NumInvisible,
-          "Invisible instructions skipped during mapping");
+STATISTIC(NumInvisible, "Invisible instructions skipped during mapping");
 STATISTIC(UnsignedVecSize,
           "Total number of instructions mapped and saved to mapping vector");
 
@@ -582,8 +581,7 @@ void MachineOutliner::findCandidates(
   // 2.
   std::vector<Candidate> CandidatesForRepeatedSeq;
   LLVM_DEBUG(dbgs() << "*** Discarding overlapping candidates *** \n");
-  LLVM_DEBUG(
-      dbgs() << "Searching for overlaps in all repeated sequences...\n");
+  LLVM_DEBUG(dbgs() << "Searching for overlaps in all repeated sequences...\n");
   for (const SuffixTree::RepeatedSubstring &RS : ST) {
     CandidatesForRepeatedSeq.clear();
     unsigned StringLen = RS.Length;
@@ -623,8 +621,8 @@ void MachineOutliner::findCandidates(
       if (FirstOverlap != CandidatesForRepeatedSeq.end()) {
 #ifndef NDEBUG
         ++NumDiscarded;
-        LLVM_DEBUG(dbgs() << "    .. DISCARD candidate @ [" << StartIdx
-                          << ", " << EndIdx << "]; overlaps with candidate @ ["
+        LLVM_DEBUG(dbgs() << "    .. DISCARD candidate @ [" << StartIdx << ", "
+                          << EndIdx << "]; overlaps with candidate @ ["
                           << FirstOverlap->getStartIdx() << ", "
                           << FirstOverlap->getEndIdx() << "]\n");
 #endif
@@ -644,8 +642,7 @@ void MachineOutliner::findCandidates(
                                             Mapper.MBBFlagsMap[MBB]);
     }
 #ifndef NDEBUG
-    LLVM_DEBUG(dbgs() << "    Candidates discarded: " << NumDiscarded
-                      << "\n");
+    LLVM_DEBUG(dbgs() << "    Candidates discarded: " << NumDiscarded << "\n");
     LLVM_DEBUG(dbgs() << "    Candidates kept: " << NumKept << "\n\n");
 #endif
 
diff --git a/llvm/lib/CodeGen/MachinePipeliner.cpp b/llvm/lib/CodeGen/MachinePipeliner.cpp
index 788ff5b3b5acdfc..32d01a9924758f8 100644
--- a/llvm/lib/CodeGen/MachinePipeliner.cpp
+++ b/llvm/lib/CodeGen/MachinePipeliner.cpp
@@ -121,8 +121,8 @@ static cl::opt<bool> EnableSWPOptSize("enable-pipeliner-opt-size",
 
 /// A command line argument to limit minimum initial interval for pipelining.
 static cl::opt<int> SwpMaxMii("pipeliner-max-mii",
-                              cl::desc("Size limit for the MII."),
-                              cl::Hidden, cl::init(27));
+                              cl::desc("Size limit for the MII."), cl::Hidden,
+                              cl::init(27));
 
 /// A command line argument to force pipeliner to use specified initial
 /// interval.
@@ -203,8 +203,8 @@ INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
-INITIALIZE_PASS_END(MachinePipeliner, DEBUG_TYPE,
-                    "Modulo Software Pipelining", false, false)
+INITIALIZE_PASS_END(MachinePipeliner, DEBUG_TYPE, "Modulo Software Pipelining",
+                    false, false)
 
 /// The "main" function for implementing Swing Modulo Scheduling.
 bool MachinePipeliner::runOnMachineFunction(MachineFunction &mf) {
@@ -318,11 +318,12 @@ void MachinePipeliner::setPragmaPipelineOptions(MachineLoop &L) {
       continue;
 
     if (S->getString() == "llvm.loop.pipeline.initiationinterval") {
-      assert(MD->getNumOperands() == 2 &&
-             "Pipeline initiation interval hint metadata should have two operands.");
+      assert(MD->getNumOperands() == 2 && "Pipeline initiation interval hint "
+                                          "metadata should have two operands.");
       II_setByPragma =
           mdconst::extract<ConstantInt>(MD->getOperand(1))->getZExtValue();
-      assert(II_setByPragma >= 1 && "Pipeline initiation interval must be positive.");
+      assert(II_setByPragma >= 1 &&
+             "Pipeline initiation interval must be positive.");
     } else if (S->getString() == "llvm.loop.pipeline.disable") {
       disabledByPragma = true;
     }
@@ -415,12 +416,12 @@ void MachinePipeliner::preprocessPhiNodes(MachineBasicBlock &B) {
       // If the operand uses a subregister, replace it with a new register
       // without subregisters, and generate a copy to the new register.
       Register NewReg = MRI.createVirtualRegister(RC);
-      MachineBasicBlock &PredB = *PI.getOperand(i+1).getMBB();
+      MachineBasicBlock &PredB = *PI.getOperand(i + 1).getMBB();
       MachineBasicBlock::iterator At = PredB.getFirstTerminator();
       const DebugLoc &DL = PredB.findDebugLoc(At);
-      auto Copy = BuildMI(PredB, At, DL, TII->get(TargetOpcode::COPY), NewReg)
-                    .addReg(RegOp.getReg(), getRegState(RegOp),
-                            RegOp.getSubReg());
+      auto Copy =
+          BuildMI(PredB, At, DL, TII->get(TargetOpcode::COPY), NewReg)
+              .addReg(RegOp.getReg(), getRegState(RegOp), RegOp.getSubReg());
       Slots.insertMachineInstrInMaps(*Copy);
       RegOp.setReg(NewReg);
       RegOp.setSubReg(0);
@@ -583,7 +584,7 @@ void SwingSchedulerDAG::schedule() {
   SMSchedule Schedule(Pass.MF, this);
   Scheduled = schedulePipeline(Schedule);
 
-  if (!Scheduled){
+  if (!Scheduled) {
     LLVM_DEBUG(dbgs() << "No schedule found, return\n");
     NumFailNoSchedule++;
     Pass.ORE->emit([&]() {
@@ -760,7 +761,7 @@ static void getUnderlyingObjects(const MachineInstr *MI,
 void SwingSchedulerDAG::addLoopCarriedDependences(AliasAnalysis *AA) {
   MapVector<const Value *, SmallVector<SUnit *, 4>> PendingLoads;
   Value *UnknownValue =
-    UndefValue::get(Type::getVoidTy(MF.getFunction().getContext()));
+      UndefValue::get(Type::getVoidTy(MF.getFunction().getContext()));
   for (auto &SU : SUnits) {
     MachineInstr &MI = *SU.getInstr();
     if (isDependenceBarrier(MI))
@@ -1572,8 +1573,8 @@ static void computeLiveOuts(MachineFunction &MF, RegPressureTracker &RPTracker,
         Register Reg = MO.getReg();
         if (Reg.isVirtual()) {
           if (!Uses.count(Reg))
-            LiveOutRegs.push_back(RegisterMaskPair(Reg,
-                                                   LaneBitmask::getNone()));
+            LiveOutRegs.push_back(
+                RegisterMaskPair(Reg, LaneBitmask::getNone()));
         } else if (MRI.isAllocatable(Reg)) {
           for (MCRegUnit Unit : TRI->regunits(Reg.asMCReg()))
             if (!Uses.count(Unit))
@@ -1961,8 +1962,8 @@ void SwingSchedulerDAG::computeNodeOrder(NodeSetType &NodeSets) {
 /// of the instructions, if possible. Return true if a schedule is found.
 bool SwingSchedulerDAG::schedulePipeline(SMSchedule &Schedule) {
 
-  if (NodeOrder.empty()){
-    LLVM_DEBUG(dbgs() << "NodeOrder is empty! abort scheduling\n" );
+  if (NodeOrder.empty()) {
+    LLVM_DEBUG(dbgs() << "NodeOrder is empty! abort scheduling\n");
     return false;
   }
 
@@ -2047,8 +2048,7 @@ bool SwingSchedulerDAG::schedulePipeline(SMSchedule &Schedule) {
   }
 
   LLVM_DEBUG(dbgs() << "Schedule Found? " << scheduleFound
-                    << " (II=" << Schedule.getInitiationInterval()
-                    << ")\n");
+                    << " (II=" << Schedule.getInitiationInterval() << ")\n");
 
   if (scheduleFound) {
     scheduleFound = LoopPipelinerInfo->shouldUseSchedule(*this, Schedule);
@@ -2860,7 +2860,7 @@ void SwingSchedulerDAG::fixupRegisterOverlaps(std::deque<SUnit *> &Instrs) {
         // Check that the instruction appears in the InstrChanges structure,
         // which contains instructions that can have the offset updated.
         DenseMap<SUnit *, std::pair<unsigned, int64_t>>::iterator It =
-          InstrChanges.find(SU);
+            InstrChanges.find(SU);
         if (It != InstrChanges.end()) {
           unsigned BasePos, OffsetPos;
           // Update the base register and adjust the offset.
diff --git a/llvm/lib/CodeGen/MachinePostDominators.cpp b/llvm/lib/CodeGen/MachinePostDominators.cpp
index fb96d0efa4d4c21..4691db6135a5161 100644
--- a/llvm/lib/CodeGen/MachinePostDominators.cpp
+++ b/llvm/lib/CodeGen/MachinePostDominators.cpp
@@ -24,7 +24,7 @@ extern bool VerifyMachineDomInfo;
 
 char MachinePostDominatorTree::ID = 0;
 
-//declare initializeMachinePostDominatorTreePass
+// declare initializeMachinePostDominatorTreePass
 INITIALIZE_PASS(MachinePostDominatorTree, "machinepostdomtree",
                 "MachinePostDominator Tree Construction", true, true)
 
diff --git a/llvm/lib/CodeGen/MachineRegionInfo.cpp b/llvm/lib/CodeGen/MachineRegionInfo.cpp
index 45cdcbfeab9f12c..b2df9f1d476028a 100644
--- a/llvm/lib/CodeGen/MachineRegionInfo.cpp
+++ b/llvm/lib/CodeGen/MachineRegionInfo.cpp
@@ -20,7 +20,7 @@
 
 using namespace llvm;
 
-STATISTIC(numMachineRegions,       "The # of machine regions");
+STATISTIC(numMachineRegions, "The # of machine regions");
 STATISTIC(numMachineSimpleRegions, "The # of simple machine regions");
 
 namespace llvm {
@@ -35,9 +35,9 @@ template class RegionInfoBase<RegionTraits<MachineFunction>>;
 // MachineRegion implementation
 
 MachineRegion::MachineRegion(MachineBasicBlock *Entry, MachineBasicBlock *Exit,
-                             MachineRegionInfo* RI,
-                             MachineDominatorTree *DT, MachineRegion *Parent) :
-  RegionBase<RegionTraits<MachineFunction>>(Entry, Exit, RI, DT, Parent) {}
+                             MachineRegionInfo *RI, MachineDominatorTree *DT,
+                             MachineRegion *Parent)
+    : RegionBase<RegionTraits<MachineFunction>>(Entry, Exit, RI, DT, Parent) {}
 
 MachineRegion::~MachineRegion() = default;
 
@@ -64,7 +64,7 @@ void MachineRegionInfo::recalculate(MachineFunction &F,
   PDT = PDT_;
   DF = DF_;
 
-  MachineBasicBlock *Entry = GraphTraits<MachineFunction*>::getEntryNode(&F);
+  MachineBasicBlock *Entry = GraphTraits<MachineFunction *>::getEntryNode(&F);
 
   TopLevelRegion = new MachineRegion(Entry, nullptr, this, DT, nullptr);
   updateStatistics(TopLevelRegion);
@@ -95,9 +95,7 @@ bool MachineRegionInfoPass::runOnMachineFunction(MachineFunction &F) {
   return false;
 }
 
-void MachineRegionInfoPass::releaseMemory() {
-  RI.releaseMemory();
-}
+void MachineRegionInfoPass::releaseMemory() { RI.releaseMemory(); }
 
 void MachineRegionInfoPass::verifyAnalysis() const {
   // Only do verification when user wants to, otherwise this expensive check
@@ -120,9 +118,7 @@ void MachineRegionInfoPass::print(raw_ostream &OS, const Module *) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void MachineRegionInfoPass::dump() const {
-  RI.dump();
-}
+LLVM_DUMP_METHOD void MachineRegionInfoPass::dump() const { RI.dump(); }
 #endif
 
 char MachineRegionInfoPass::ID = 0;
diff --git a/llvm/lib/CodeGen/MachineRegisterInfo.cpp b/llvm/lib/CodeGen/MachineRegisterInfo.cpp
index 7bd8a67ee06c82c..8a51bafc8f389a2 100644
--- a/llvm/lib/CodeGen/MachineRegisterInfo.cpp
+++ b/llvm/lib/CodeGen/MachineRegisterInfo.cpp
@@ -34,8 +34,9 @@
 
 using namespace llvm;
 
-static cl::opt<bool> EnableSubRegLiveness("enable-subreg-liveness", cl::Hidden,
-  cl::init(true), cl::desc("Enable subregister liveness tracking."));
+static cl::opt<bool>
+    EnableSubRegLiveness("enable-subreg-liveness", cl::Hidden, cl::init(true),
+                         cl::desc("Enable subregister liveness tracking."));
 
 // Pin the vtable to this file.
 void MachineRegisterInfo::Delegate::anchor() {}
@@ -47,14 +48,14 @@ MachineRegisterInfo::MachineRegisterInfo(MachineFunction *MF)
   VRegInfo.reserve(256);
   RegAllocHints.reserve(256);
   UsedPhysRegMask.resize(NumRegs);
-  PhysRegUseDefLists.reset(new MachineOperand*[NumRegs]());
+  PhysRegUseDefLists.reset(new MachineOperand *[NumRegs]());
   TheDelegates.clear();
 }
 
 /// setRegClass - Set the register class of the specified virtual register.
 ///
-void
-MachineRegisterInfo::setRegClass(Register Reg, const TargetRegisterClass *RC) {
+void MachineRegisterInfo::setRegClass(Register Reg,
+                                      const TargetRegisterClass *RC) {
   assert(RC && RC->isAllocatable() && "Invalid RC for virtual register");
   VRegInfo[Reg].first = RC;
 }
@@ -87,10 +88,9 @@ const TargetRegisterClass *MachineRegisterInfo::constrainRegClass(
   return ::constrainRegClass(*this, Reg, getRegClass(Reg), RC, MinNumRegs);
 }
 
-bool
-MachineRegisterInfo::constrainRegAttrs(Register Reg,
-                                       Register ConstrainingReg,
-                                       unsigned MinNumRegs) {
+bool MachineRegisterInfo::constrainRegAttrs(Register Reg,
+                                            Register ConstrainingReg,
+                                            unsigned MinNumRegs) {
   const LLT RegTy = getType(Reg);
   const LLT ConstrainingRegTy = getType(ConstrainingReg);
   if (RegTy.isValid() && ConstrainingRegTy.isValid() &&
@@ -117,8 +117,7 @@ MachineRegisterInfo::constrainRegAttrs(Register Reg,
   return true;
 }
 
-bool
-MachineRegisterInfo::recomputeRegClass(Register Reg) {
+bool MachineRegisterInfo::recomputeRegClass(Register Reg) {
   const TargetInstrInfo *TII = MF->getSubtarget().getInstrInfo();
   const TargetRegisterClass *OldRC = getRegClass(Reg);
   const TargetRegisterClass *NewRC =
@@ -181,8 +180,8 @@ void MachineRegisterInfo::setType(Register VReg, LLT Ty) {
   VRegToType[VReg] = Ty;
 }
 
-Register
-MachineRegisterInfo::createGenericVirtualRegister(LLT Ty, StringRef Name) {
+Register MachineRegisterInfo::createGenericVirtualRegister(LLT Ty,
+                                                           StringRef Name) {
   // New virtual register number.
   Register Reg = createIncompleteVirtualRegister(Name);
   // FIXME: Should we use a dummy register class?
@@ -229,22 +228,21 @@ void MachineRegisterInfo::verifyUseList(Register Reg) const {
     }
     MachineOperand *MO0 = &MI->getOperand(0);
     unsigned NumOps = MI->getNumOperands();
-    if (!(MO >= MO0 && MO < MO0+NumOps)) {
+    if (!(MO >= MO0 && MO < MO0 + NumOps)) {
       errs() << printReg(Reg, getTargetRegisterInfo())
              << " use list MachineOperand " << MO
              << " doesn't belong to parent MI: " << *MI;
       Valid = false;
     }
     if (!MO->isReg()) {
-      errs() << printReg(Reg, getTargetRegisterInfo())
-             << " MachineOperand " << MO << ": " << *MO
-             << " is not a register\n";
+      errs() << printReg(Reg, getTargetRegisterInfo()) << " MachineOperand "
+             << MO << ": " << *MO << " is not a register\n";
       Valid = false;
     }
     if (MO->getReg() != Reg) {
       errs() << printReg(Reg, getTargetRegisterInfo())
-             << " use-list MachineOperand " << MO << ": "
-             << *MO << " is the wrong register\n";
+             << " use-list MachineOperand " << MO << ": " << *MO
+             << " is the wrong register\n";
       Valid = false;
     }
   }
@@ -330,8 +328,7 @@ void MachineRegisterInfo::removeRegOperandFromUseList(MachineOperand *MO) {
 /// trivial anyway).
 ///
 /// The Src and Dst ranges may overlap.
-void MachineRegisterInfo::moveOperands(MachineOperand *Dst,
-                                       MachineOperand *Src,
+void MachineRegisterInfo::moveOperands(MachineOperand *Dst, MachineOperand *Src,
                                        unsigned NumOps) {
   assert(Src != Dst && NumOps && "Noop moveOperands");
 
@@ -407,7 +404,8 @@ MachineInstr *MachineRegisterInfo::getVRegDef(Register Reg) const {
 /// specified virtual register or null if none is found.  If there are
 /// multiple definitions or no definition, return null.
 MachineInstr *MachineRegisterInfo::getUniqueVRegDef(Register Reg) const {
-  if (def_empty(Reg)) return nullptr;
+  if (def_empty(Reg))
+    return nullptr;
   def_instr_iterator I = def_instr_begin(Reg);
   if (std::next(I) != def_instr_end())
     return nullptr;
@@ -464,10 +462,9 @@ Register MachineRegisterInfo::getLiveInVirtReg(MCRegister PReg) const {
 
 /// EmitLiveInCopies - Emit copies to initialize livein virtual registers
 /// into the given entry block.
-void
-MachineRegisterInfo::EmitLiveInCopies(MachineBasicBlock *EntryMBB,
-                                      const TargetRegisterInfo &TRI,
-                                      const TargetInstrInfo &TII) {
+void MachineRegisterInfo::EmitLiveInCopies(MachineBasicBlock *EntryMBB,
+                                           const TargetRegisterInfo &TRI,
+                                           const TargetInstrInfo &TII) {
   // Emit the copies into the top of the block.
   for (unsigned i = 0, e = LiveIns.size(); i != e; ++i)
     if (LiveIns[i].second) {
@@ -478,12 +475,13 @@ MachineRegisterInfo::EmitLiveInCopies(MachineBasicBlock *EntryMBB,
         // records for unused arguments in the first place, but it's
         // complicated by the debug info code for arguments.
         LiveIns.erase(LiveIns.begin() + i);
-        --i; --e;
+        --i;
+        --e;
       } else {
         // Emit a copy.
         BuildMI(*EntryMBB, EntryMBB->begin(), DebugLoc(),
                 TII.get(TargetOpcode::COPY), LiveIns[i].second)
-          .addReg(LiveIns[i].first);
+            .addReg(LiveIns[i].first);
 
         // Add the register to the entry block live-in set.
         EntryMBB->addLiveIn(LiveIns[i].first);
@@ -523,8 +521,7 @@ bool MachineRegisterInfo::isConstantPhysReg(MCRegister PhysReg) const {
 
   // Check if any overlapping register is modified, or allocatable so it may be
   // used later.
-  for (MCRegAliasIterator AI(PhysReg, TRI, true);
-       AI.isValid(); ++AI)
+  for (MCRegAliasIterator AI(PhysReg, TRI, true); AI.isValid(); ++AI)
     if (!def_empty(*AI) || isAllocatable(*AI))
       return false;
   return true;
@@ -536,7 +533,8 @@ bool MachineRegisterInfo::isConstantPhysReg(MCRegister PhysReg) const {
 void MachineRegisterInfo::markUsesInDebugValueAsUndef(Register Reg) const {
   // Mark any DBG_VALUE* that uses Reg as undef (but don't delete it.)
   // We use make_early_inc_range because setReg invalidates the iterator.
-  for (MachineInstr &UseMI : llvm::make_early_inc_range(use_instructions(Reg))) {
+  for (MachineInstr &UseMI :
+       llvm::make_early_inc_range(use_instructions(Reg))) {
     if (UseMI.isDebugValue() && UseMI.hasDebugOperandForReg(Reg))
       UseMI.setDebugValueUndef();
   }
diff --git a/llvm/lib/CodeGen/MachineSSAUpdater.cpp b/llvm/lib/CodeGen/MachineSSAUpdater.cpp
index 48076663ddf5382..e1182dba9835066 100644
--- a/llvm/lib/CodeGen/MachineSSAUpdater.cpp
+++ b/llvm/lib/CodeGen/MachineSSAUpdater.cpp
@@ -37,16 +37,16 @@ using namespace llvm;
 using AvailableValsTy = DenseMap<MachineBasicBlock *, Register>;
 
 static AvailableValsTy &getAvailableVals(void *AV) {
-  return *static_cast<AvailableValsTy*>(AV);
+  return *static_cast<AvailableValsTy *>(AV);
 }
 
 MachineSSAUpdater::MachineSSAUpdater(MachineFunction &MF,
-                                     SmallVectorImpl<MachineInstr*> *NewPHI)
-  : InsertedPHIs(NewPHI), TII(MF.getSubtarget().getInstrInfo()),
-    MRI(&MF.getRegInfo()) {}
+                                     SmallVectorImpl<MachineInstr *> *NewPHI)
+    : InsertedPHIs(NewPHI), TII(MF.getSubtarget().getInstrInfo()),
+      MRI(&MF.getRegInfo()) {}
 
 MachineSSAUpdater::~MachineSSAUpdater() {
-  delete static_cast<AvailableValsTy*>(AV);
+  delete static_cast<AvailableValsTy *>(AV);
 }
 
 /// Initialize - Reset this object to get ready for a new set of SSA
@@ -64,8 +64,8 @@ void MachineSSAUpdater::Initialize(Register V) {
   Initialize(MRI->getRegClass(V));
 }
 
-/// HasValueForBlock - Return true if the MachineSSAUpdater already has a value for
-/// the specified block.
+/// HasValueForBlock - Return true if the MachineSSAUpdater already has a value
+/// for the specified block.
 bool MachineSSAUpdater::HasValueForBlock(MachineBasicBlock *BB) const {
   return getAvailableVals(AV).count(BB);
 }
@@ -82,9 +82,9 @@ Register MachineSSAUpdater::GetValueAtEndOfBlock(MachineBasicBlock *BB) {
   return GetValueAtEndOfBlockInternal(BB);
 }
 
-static
-Register LookForIdenticalPHI(MachineBasicBlock *BB,
-        SmallVectorImpl<std::pair<MachineBasicBlock *, Register>> &PredValues) {
+static Register LookForIdenticalPHI(
+    MachineBasicBlock *BB,
+    SmallVectorImpl<std::pair<MachineBasicBlock *, Register>> &PredValues) {
   if (BB->empty())
     return Register();
 
@@ -99,7 +99,7 @@ Register LookForIdenticalPHI(MachineBasicBlock *BB,
     bool Same = true;
     for (unsigned i = 1, e = I->getNumOperands(); i != e; i += 2) {
       Register SrcReg = I->getOperand(i).getReg();
-      MachineBasicBlock *SrcBB = I->getOperand(i+1).getMBB();
+      MachineBasicBlock *SrcBB = I->getOperand(i + 1).getMBB();
       if (AVals[SrcBB] != SrcReg) {
         Same = false;
         break;
@@ -115,12 +115,11 @@ Register LookForIdenticalPHI(MachineBasicBlock *BB,
 /// InsertNewDef - Insert an empty PHI or IMPLICIT_DEF instruction which define
 /// a value of the given register class at the start of the specified basic
 /// block. It returns the virtual register defined by the instruction.
-static
-MachineInstrBuilder InsertNewDef(unsigned Opcode,
-                           MachineBasicBlock *BB, MachineBasicBlock::iterator I,
-                           const TargetRegisterClass *RC,
-                           MachineRegisterInfo *MRI,
-                           const TargetInstrInfo *TII) {
+static MachineInstrBuilder InsertNewDef(unsigned Opcode, MachineBasicBlock *BB,
+                                        MachineBasicBlock::iterator I,
+                                        const TargetRegisterClass *RC,
+                                        MachineRegisterInfo *MRI,
+                                        const TargetInstrInfo *TII) {
   Register NewVR = MRI->createVirtualRegister(RC);
   return BuildMI(*BB, I, DebugLoc(), TII->get(Opcode), NewVR);
 }
@@ -158,15 +157,15 @@ Register MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB,
     if (ExistingValueOnly)
       return Register();
     // Insert an implicit_def to represent an undef value.
-    MachineInstr *NewDef = InsertNewDef(TargetOpcode::IMPLICIT_DEF,
-                                        BB, BB->getFirstTerminator(),
-                                        VRC, MRI, TII);
+    MachineInstr *NewDef =
+        InsertNewDef(TargetOpcode::IMPLICIT_DEF, BB, BB->getFirstTerminator(),
+                     VRC, MRI, TII);
     return NewDef->getOperand(0).getReg();
   }
 
   // Otherwise, we have the hard case.  Get the live-in values for each
   // predecessor.
-  SmallVector<std::pair<MachineBasicBlock*, Register>, 8> PredValues;
+  SmallVector<std::pair<MachineBasicBlock *, Register>, 8> PredValues;
   Register SingularValue;
 
   bool isFirstPred = true;
@@ -197,8 +196,8 @@ Register MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB,
 
   // Otherwise, we do need a PHI: insert one now.
   MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->begin();
-  MachineInstrBuilder InsertedPHI = InsertNewDef(TargetOpcode::PHI, BB,
-                                                 Loc, VRC, MRI, TII);
+  MachineInstrBuilder InsertedPHI =
+      InsertNewDef(TargetOpcode::PHI, BB, Loc, VRC, MRI, TII);
 
   // Fill in all the predecessors of the PHI.
   for (unsigned i = 0, e = PredValues.size(); i != e; ++i)
@@ -212,18 +211,18 @@ Register MachineSSAUpdater::GetValueInMiddleOfBlock(MachineBasicBlock *BB,
   }
 
   // If the client wants to know about all new instructions, tell it.
-  if (InsertedPHIs) InsertedPHIs->push_back(InsertedPHI);
+  if (InsertedPHIs)
+    InsertedPHIs->push_back(InsertedPHI);
 
   LLVM_DEBUG(dbgs() << "  Inserted PHI: " << *InsertedPHI << "\n");
   return InsertedPHI.getReg(0);
 }
 
-static
-MachineBasicBlock *findCorrespondingPred(const MachineInstr *MI,
-                                         MachineOperand *U) {
+static MachineBasicBlock *findCorrespondingPred(const MachineInstr *MI,
+                                                MachineOperand *U) {
   for (unsigned i = 1, e = MI->getNumOperands(); i != e; i += 2) {
     if (&MI->getOperand(i) == U)
-      return MI->getOperand(i+1).getMBB();
+      return MI->getOperand(i + 1).getMBB();
   }
 
   llvm_unreachable("MachineOperand::getParent() failure?");
@@ -248,8 +247,7 @@ namespace llvm {
 
 /// SSAUpdaterTraits<MachineSSAUpdater> - Traits for the SSAUpdaterImpl
 /// template, specialized for MachineSSAUpdater.
-template<>
-class SSAUpdaterTraits<MachineSSAUpdater> {
+template <> class SSAUpdaterTraits<MachineSSAUpdater> {
 public:
   using BlkT = MachineBasicBlock;
   using ValT = Register;
@@ -267,18 +265,21 @@ class SSAUpdaterTraits<MachineSSAUpdater> {
 
   public:
     explicit PHI_iterator(MachineInstr *P) // begin iterator
-      : PHI(P), idx(1) {}
+        : PHI(P), idx(1) {}
     PHI_iterator(MachineInstr *P, bool) // end iterator
-      : PHI(P), idx(PHI->getNumOperands()) {}
+        : PHI(P), idx(PHI->getNumOperands()) {}
 
-    PHI_iterator &operator++() { idx += 2; return *this; }
-    bool operator==(const PHI_iterator& x) const { return idx == x.idx; }
-    bool operator!=(const PHI_iterator& x) const { return !operator==(x); }
+    PHI_iterator &operator++() {
+      idx += 2;
+      return *this;
+    }
+    bool operator==(const PHI_iterator &x) const { return idx == x.idx; }
+    bool operator!=(const PHI_iterator &x) const { return !operator==(x); }
 
     unsigned getIncomingValue() { return PHI->getOperand(idx).getReg(); }
 
     MachineBasicBlock *getIncomingBlock() {
-      return PHI->getOperand(idx+1).getMBB();
+      return PHI->getOperand(idx + 1).getMBB();
     }
   };
 
@@ -290,8 +291,9 @@ class SSAUpdaterTraits<MachineSSAUpdater> {
 
   /// FindPredecessorBlocks - Put the predecessors of BB into the Preds
   /// vector.
-  static void FindPredecessorBlocks(MachineBasicBlock *BB,
-                                    SmallVectorImpl<MachineBasicBlock*> *Preds){
+  static void
+  FindPredecessorBlocks(MachineBasicBlock *BB,
+                        SmallVectorImpl<MachineBasicBlock *> *Preds) {
     append_range(*Preds, BB->predecessors());
   }
 
@@ -300,10 +302,9 @@ class SSAUpdaterTraits<MachineSSAUpdater> {
   static Register GetUndefVal(MachineBasicBlock *BB,
                               MachineSSAUpdater *Updater) {
     // Insert an implicit_def to represent an undef value.
-    MachineInstr *NewDef = InsertNewDef(TargetOpcode::IMPLICIT_DEF,
-                                        BB, BB->getFirstNonPHI(),
-                                        Updater->VRC, Updater->MRI,
-                                        Updater->TII);
+    MachineInstr *NewDef =
+        InsertNewDef(TargetOpcode::IMPLICIT_DEF, BB, BB->getFirstNonPHI(),
+                     Updater->VRC, Updater->MRI, Updater->TII);
     return NewDef->getOperand(0).getReg();
   }
 
@@ -312,9 +313,8 @@ class SSAUpdaterTraits<MachineSSAUpdater> {
   static Register CreateEmptyPHI(MachineBasicBlock *BB, unsigned NumPreds,
                                  MachineSSAUpdater *Updater) {
     MachineBasicBlock::iterator Loc = BB->empty() ? BB->end() : BB->begin();
-    MachineInstr *PHI = InsertNewDef(TargetOpcode::PHI, BB, Loc,
-                                     Updater->VRC, Updater->MRI,
-                                     Updater->TII);
+    MachineInstr *PHI = InsertNewDef(TargetOpcode::PHI, BB, Loc, Updater->VRC,
+                                     Updater->MRI, Updater->TII);
     return PHI->getOperand(0).getReg();
   }
 
diff --git a/llvm/lib/CodeGen/MachineScheduler.cpp b/llvm/lib/CodeGen/MachineScheduler.cpp
index 4add33ba0996af0..4861ad391873c61 100644
--- a/llvm/lib/CodeGen/MachineScheduler.cpp
+++ b/llvm/lib/CodeGen/MachineScheduler.cpp
@@ -82,8 +82,8 @@ cl::opt<bool> ForceTopDown("misched-topdown", cl::Hidden,
 cl::opt<bool> ForceBottomUp("misched-bottomup", cl::Hidden,
                             cl::desc("Force bottom-up list scheduling"));
 cl::opt<bool>
-DumpCriticalPathLength("misched-dcpl", cl::Hidden,
-                       cl::desc("Print critical path length to stdout"));
+    DumpCriticalPathLength("misched-dcpl", cl::Hidden,
+                           cl::desc("Print critical path length to stdout"));
 
 cl::opt<bool> VerifyScheduling(
     "verify-misched", cl::Hidden,
@@ -115,28 +115,38 @@ const bool MISchedDumpReservedCycles = false;
 #ifndef NDEBUG
 /// In some situations a few uninteresting nodes depend on nearly all other
 /// nodes in the graph, provide a cutoff to hide them.
-static cl::opt<unsigned> ViewMISchedCutoff("view-misched-cutoff", cl::Hidden,
-  cl::desc("Hide nodes with more predecessor/successor than cutoff"));
+static cl::opt<unsigned> ViewMISchedCutoff(
+    "view-misched-cutoff", cl::Hidden,
+    cl::desc("Hide nodes with more predecessor/successor than cutoff"));
 
-static cl::opt<unsigned> MISchedCutoff("misched-cutoff", cl::Hidden,
-  cl::desc("Stop scheduling after N instructions"), cl::init(~0U));
+static cl::opt<unsigned>
+    MISchedCutoff("misched-cutoff", cl::Hidden,
+                  cl::desc("Stop scheduling after N instructions"),
+                  cl::init(~0U));
 
-static cl::opt<std::string> SchedOnlyFunc("misched-only-func", cl::Hidden,
-  cl::desc("Only schedule this function"));
+static cl::opt<std::string>
+    SchedOnlyFunc("misched-only-func", cl::Hidden,
+                  cl::desc("Only schedule this function"));
 static cl::opt<unsigned> SchedOnlyBlock("misched-only-block", cl::Hidden,
                                         cl::desc("Only schedule this MBB#"));
 #endif // NDEBUG
 
 /// Avoid quadratic complexity in unusually large basic blocks by limiting the
 /// size of the ready lists.
-static cl::opt<unsigned> ReadyListLimit("misched-limit", cl::Hidden,
-  cl::desc("Limit ready list to N instructions"), cl::init(256));
+static cl::opt<unsigned>
+    ReadyListLimit("misched-limit", cl::Hidden,
+                   cl::desc("Limit ready list to N instructions"),
+                   cl::init(256));
 
-static cl::opt<bool> EnableRegPressure("misched-regpressure", cl::Hidden,
-  cl::desc("Enable register pressure scheduling."), cl::init(true));
+static cl::opt<bool>
+    EnableRegPressure("misched-regpressure", cl::Hidden,
+                      cl::desc("Enable register pressure scheduling."),
+                      cl::init(true));
 
-static cl::opt<bool> EnableCyclicPath("misched-cyclicpath", cl::Hidden,
-  cl::desc("Enable cyclic critical path analysis."), cl::init(true));
+static cl::opt<bool>
+    EnableCyclicPath("misched-cyclicpath", cl::Hidden,
+                     cl::desc("Enable cyclic critical path analysis."),
+                     cl::init(true));
 
 static cl::opt<bool> EnableMemOpCluster("misched-cluster", cl::Hidden,
                                         cl::desc("Enable memop clustering."),
@@ -189,9 +199,7 @@ MachineSchedContext::MachineSchedContext() {
   RegClassInfo = new RegisterClassInfo();
 }
 
-MachineSchedContext::~MachineSchedContext() {
-  delete RegClassInfo;
-}
+MachineSchedContext::~MachineSchedContext() { delete RegClassInfo; }
 
 namespace {
 
@@ -199,9 +207,9 @@ namespace {
 class MachineSchedulerBase : public MachineSchedContext,
                              public MachineFunctionPass {
 public:
-  MachineSchedulerBase(char &ID): MachineFunctionPass(ID) {}
+  MachineSchedulerBase(char &ID) : MachineFunctionPass(ID) {}
 
-  void print(raw_ostream &O, const Module* = nullptr) const override;
+  void print(raw_ostream &O, const Module * = nullptr) const override;
 
 protected:
   void scheduleRegions(ScheduleDAGInstrs &Scheduler, bool FixKillFlags);
@@ -214,7 +222,7 @@ class MachineScheduler : public MachineSchedulerBase {
 
   void getAnalysisUsage(AnalysisUsage &AU) const override;
 
-  bool runOnMachineFunction(MachineFunction&) override;
+  bool runOnMachineFunction(MachineFunction &) override;
 
   static char ID; // Class identification, replacement for typeinfo
 
@@ -229,7 +237,7 @@ class PostMachineScheduler : public MachineSchedulerBase {
 
   void getAnalysisUsage(AnalysisUsage &AU) const override;
 
-  bool runOnMachineFunction(MachineFunction&) override;
+  bool runOnMachineFunction(MachineFunction &) override;
 
   static char ID; // Class identification, replacement for typeinfo
 
@@ -307,13 +315,13 @@ static ScheduleDAGInstrs *useDefaultMachineSched(MachineSchedContext *C) {
 /// MachineSchedOpt allows command line selection of the scheduler.
 static cl::opt<MachineSchedRegistry::ScheduleDAGCtor, false,
                RegisterPassParser<MachineSchedRegistry>>
-MachineSchedOpt("misched",
-                cl::init(&useDefaultMachineSched), cl::Hidden,
-                cl::desc("Machine instruction scheduler to use"));
+    MachineSchedOpt("misched", cl::init(&useDefaultMachineSched), cl::Hidden,
+                    cl::desc("Machine instruction scheduler to use"));
 
 static MachineSchedRegistry
-DefaultSchedRegistry("default", "Use the target's default scheduler choice.",
-                     useDefaultMachineSched);
+    DefaultSchedRegistry("default",
+                         "Use the target's default scheduler choice.",
+                         useDefaultMachineSched);
 
 static cl::opt<bool> EnableMachineSched(
     "enable-misched",
@@ -350,7 +358,7 @@ priorNonDebug(MachineBasicBlock::iterator I,
 static MachineBasicBlock::const_iterator
 nextIfDebug(MachineBasicBlock::const_iterator I,
             MachineBasicBlock::const_iterator End) {
-  for(; I != End; ++I) {
+  for (; I != End; ++I) {
     if (!I->isDebugOrPseudoInstr())
       break;
   }
@@ -491,8 +499,7 @@ bool PostMachineScheduler::runOnMachineFunction(MachineFunction &mf) {
 /// the boundary, but there would be no benefit to postRA scheduling across
 /// calls this late anyway.
 static bool isSchedBoundary(MachineBasicBlock::iterator MI,
-                            MachineBasicBlock *MBB,
-                            MachineFunction *MF,
+                            MachineBasicBlock *MBB, MachineFunction *MF,
                             const TargetInstrInfo *TII) {
   return MI->isCall() || TII->isSchedulingBoundary(*MI, MBB, *MF);
 }
@@ -510,23 +517,21 @@ struct SchedRegion {
   unsigned NumRegionInstrs;
 
   SchedRegion(MachineBasicBlock::iterator B, MachineBasicBlock::iterator E,
-              unsigned N) :
-    RegionBegin(B), RegionEnd(E), NumRegionInstrs(N) {}
+              unsigned N)
+      : RegionBegin(B), RegionEnd(E), NumRegionInstrs(N) {}
 };
 } // end anonymous namespace
 
 using MBBRegionsVector = SmallVector<SchedRegion, 16>;
 
-static void
-getSchedRegions(MachineBasicBlock *MBB,
-                MBBRegionsVector &Regions,
-                bool RegionsTopDown) {
+static void getSchedRegions(MachineBasicBlock *MBB, MBBRegionsVector &Regions,
+                            bool RegionsTopDown) {
   MachineFunction *MF = MBB->getParent();
   const TargetInstrInfo *TII = MF->getSubtarget().getInstrInfo();
 
   MachineBasicBlock::iterator I = nullptr;
-  for(MachineBasicBlock::iterator RegionEnd = MBB->end();
-      RegionEnd != MBB->begin(); RegionEnd = I) {
+  for (MachineBasicBlock::iterator RegionEnd = MBB->end();
+       RegionEnd != MBB->begin(); RegionEnd = I) {
 
     // Avoid decrementing RegionEnd for blocks with no terminator.
     if (RegionEnd != MBB->end() ||
@@ -538,7 +543,7 @@ getSchedRegions(MachineBasicBlock *MBB,
     // instruction stream until we find the nearest boundary.
     unsigned NumRegionInstrs = 0;
     I = RegionEnd;
-    for (;I != MBB->begin(); --I) {
+    for (; I != MBB->begin(); --I) {
       MachineInstr &MI = *std::prev(I);
       if (isSchedBoundary(&MI, &*MBB, MF, TII))
         break;
@@ -574,8 +579,8 @@ void MachineSchedulerBase::scheduleRegions(ScheduleDAGInstrs &Scheduler,
 #ifndef NDEBUG
     if (SchedOnlyFunc.getNumOccurrences() && SchedOnlyFunc != MF->getName())
       continue;
-    if (SchedOnlyBlock.getNumOccurrences()
-        && (int)SchedOnlyBlock != MBB->getNumber())
+    if (SchedOnlyBlock.getNumOccurrences() &&
+        (int)SchedOnlyBlock != MBB->getNumber())
       continue;
 #endif
 
@@ -641,7 +646,7 @@ void MachineSchedulerBase::scheduleRegions(ScheduleDAGInstrs &Scheduler,
   Scheduler.finalizeSchedule();
 }
 
-void MachineSchedulerBase::print(raw_ostream &O, const Module* m) const {
+void MachineSchedulerBase::print(raw_ostream &O, const Module *m) const {
   // unimplemented
 }
 
@@ -752,10 +757,9 @@ void ScheduleDAGMI::finishBlock() {
 /// the region, including the boundary itself and single-instruction regions
 /// that don't get scheduled.
 void ScheduleDAGMI::enterRegion(MachineBasicBlock *bb,
-                                     MachineBasicBlock::iterator begin,
-                                     MachineBasicBlock::iterator end,
-                                     unsigned regioninstrs)
-{
+                                MachineBasicBlock::iterator begin,
+                                MachineBasicBlock::iterator end,
+                                unsigned regioninstrs) {
   ScheduleDAGInstrs::enterRegion(bb, begin, end, regioninstrs);
 
   SchedImpl->initPolicy(begin, end, regioninstrs);
@@ -763,8 +767,8 @@ void ScheduleDAGMI::enterRegion(MachineBasicBlock *bb,
 
 /// This is normally called from the main scheduler loop but may also be invoked
 /// by the scheduling strategy to perform additional code motion.
-void ScheduleDAGMI::moveInstruction(
-  MachineInstr *MI, MachineBasicBlock::iterator InsertPos) {
+void ScheduleDAGMI::moveInstruction(MachineInstr *MI,
+                                    MachineBasicBlock::iterator InsertPos) {
   // Advance RegionBegin if the first instruction moves down.
   if (&*RegionBegin == MI)
     ++RegionBegin;
@@ -805,12 +809,14 @@ void ScheduleDAGMI::schedule() {
 
   postProcessDAG();
 
-  SmallVector<SUnit*, 8> TopRoots, BotRoots;
+  SmallVector<SUnit *, 8> TopRoots, BotRoots;
   findRootsAndBiasEdges(TopRoots, BotRoots);
 
   LLVM_DEBUG(dump());
-  if (PrintDAGs) dump();
-  if (ViewMISchedDAGs) viewGraph();
+  if (PrintDAGs)
+    dump();
+  if (ViewMISchedDAGs)
+    viewGraph();
 
   // Initialize the strategy before modifying the DAG.
   // This may initialize a DFSResult to be used for queue priority.
@@ -823,7 +829,8 @@ void ScheduleDAGMI::schedule() {
   while (true) {
     LLVM_DEBUG(dbgs() << "** ScheduleDAGMI::schedule picking next node\n");
     SUnit *SU = SchedImpl->pickNode(IsTopNode);
-    if (!SU) break;
+    if (!SU)
+      break;
 
     assert(!SU->isScheduled && "Node already scheduled");
     if (!checkSchedLimit())
@@ -839,7 +846,7 @@ void ScheduleDAGMI::schedule() {
     } else {
       assert(SU->isBottomReady() && "node still has unscheduled dependencies");
       MachineBasicBlock::iterator priorII =
-        priorNonDebug(CurrentBottom, CurrentTop);
+          priorNonDebug(CurrentBottom, CurrentTop);
       if (&*priorII == MI)
         CurrentBottom = priorII;
       else {
@@ -875,9 +882,8 @@ void ScheduleDAGMI::postProcessDAG() {
     m->apply(this);
 }
 
-void ScheduleDAGMI::
-findRootsAndBiasEdges(SmallVectorImpl<SUnit*> &TopRoots,
-                      SmallVectorImpl<SUnit*> &BotRoots) {
+void ScheduleDAGMI::findRootsAndBiasEdges(SmallVectorImpl<SUnit *> &TopRoots,
+                                          SmallVectorImpl<SUnit *> &BotRoots) {
   for (SUnit &SU : SUnits) {
     assert(!SU.isBoundaryNode() && "Boundary node should not be in SUnits");
 
@@ -895,8 +901,8 @@ findRootsAndBiasEdges(SmallVectorImpl<SUnit*> &TopRoots,
 }
 
 /// Identify DAG roots and setup scheduler queues.
-void ScheduleDAGMI::initQueues(ArrayRef<SUnit*> TopRoots,
-                               ArrayRef<SUnit*> BotRoots) {
+void ScheduleDAGMI::initQueues(ArrayRef<SUnit *> TopRoots,
+                               ArrayRef<SUnit *> BotRoots) {
   NextClusterSucc = nullptr;
   NextClusterPred = nullptr;
 
@@ -909,8 +915,9 @@ void ScheduleDAGMI::initQueues(ArrayRef<SUnit*> TopRoots,
 
   // Release bottom roots in reverse order so the higher priority nodes appear
   // first. This is more natural and slightly more efficient.
-  for (SmallVectorImpl<SUnit*>::const_reverse_iterator
-         I = BotRoots.rbegin(), E = BotRoots.rend(); I != E; ++I) {
+  for (SmallVectorImpl<SUnit *>::const_reverse_iterator I = BotRoots.rbegin(),
+                                                        E = BotRoots.rend();
+       I != E; ++I) {
     SchedImpl->releaseBottomNode(*I);
   }
 
@@ -944,7 +951,9 @@ void ScheduleDAGMI::placeDebugValues() {
   }
 
   for (std::vector<std::pair<MachineInstr *, MachineInstr *>>::iterator
-         DI = DbgValues.end(), DE = DbgValues.begin(); DI != DE; --DI) {
+           DI = DbgValues.end(),
+           DE = DbgValues.begin();
+       DI != DE; --DI) {
     std::pair<MachineInstr *, MachineInstr *> P = *std::prev(DI);
     MachineInstr *DbgValue = P.first;
     MachineBasicBlock::iterator OrigPrevMI = P.second;
@@ -1148,9 +1157,7 @@ LLVM_DUMP_METHOD void ScheduleDAGMI::dumpSchedule() const {
 // preservation.
 //===----------------------------------------------------------------------===//
 
-ScheduleDAGMILive::~ScheduleDAGMILive() {
-  delete DFSResult;
-}
+ScheduleDAGMILive::~ScheduleDAGMILive() { delete DFSResult; }
 
 void ScheduleDAGMILive::collectVRegUses(SUnit &SU) {
   const MachineInstr &MI = *SU.getInstr();
@@ -1195,10 +1202,9 @@ void ScheduleDAGMILive::collectVRegUses(SUnit &SU) {
 /// the region, including the boundary itself and single-instruction regions
 /// that don't get scheduled.
 void ScheduleDAGMILive::enterRegion(MachineBasicBlock *bb,
-                                MachineBasicBlock::iterator begin,
-                                MachineBasicBlock::iterator end,
-                                unsigned regioninstrs)
-{
+                                    MachineBasicBlock::iterator begin,
+                                    MachineBasicBlock::iterator end,
+                                    unsigned regioninstrs) {
   // ScheduleDAGMI initializes SchedImpl's per-region policy.
   ScheduleDAGMI::enterRegion(bb, begin, end, regioninstrs);
 
@@ -1274,7 +1280,7 @@ void ScheduleDAGMILive::initRegPressure() {
   // the max pressure in the scheduled code for these sets.
   RegionCriticalPSets.clear();
   const std::vector<unsigned> &RegionPressure =
-    RPTracker.getPressure().MaxSetPressure;
+      RPTracker.getPressure().MaxSetPressure;
   for (unsigned i = 0, e = RegionPressure.size(); i < e; ++i) {
     unsigned Limit = RegClassInfo->getRegPressureSetLimit(i);
     if (RegionPressure[i] > Limit) {
@@ -1290,9 +1296,8 @@ void ScheduleDAGMILive::initRegPressure() {
              dbgs() << "\n");
 }
 
-void ScheduleDAGMILive::
-updateScheduledPressure(const SUnit *SU,
-                        const std::vector<unsigned> &NewMaxPressure) {
+void ScheduleDAGMILive::updateScheduledPressure(
+    const SUnit *SU, const std::vector<unsigned> &NewMaxPressure) {
   const PressureDiff &PDiff = getPressureDiff(SU);
   unsigned CritIdx = 0, CritEnd = RegionCriticalPSets.size();
   for (const PressureChange &PC : PDiff) {
@@ -1302,8 +1307,8 @@ updateScheduledPressure(const SUnit *SU,
     while (CritIdx != CritEnd && RegionCriticalPSets[CritIdx].getPSet() < ID)
       ++CritIdx;
     if (CritIdx != CritEnd && RegionCriticalPSets[CritIdx].getPSet() == ID) {
-      if ((int)NewMaxPressure[ID] > RegionCriticalPSets[CritIdx].getUnitInc()
-          && NewMaxPressure[ID] <= (unsigned)std::numeric_limits<int16_t>::max())
+      if ((int)NewMaxPressure[ID] > RegionCriticalPSets[CritIdx].getUnitInc() &&
+          NewMaxPressure[ID] <= (unsigned)std::numeric_limits<int16_t>::max())
         RegionCriticalPSets[CritIdx].setUnitInc(NewMaxPressure[ID]);
     }
     unsigned Limit = RegClassInfo->getRegPressureSetLimit(ID);
@@ -1334,8 +1339,8 @@ void ScheduleDAGMILive::updatePressureDiffs(
       // back to life => increment pressure.
       bool Decrement = P.LaneMask.any();
 
-      for (const VReg2SUnit &V2SU
-           : make_range(VRegUses.find(Reg), VRegUses.end())) {
+      for (const VReg2SUnit &V2SU :
+           make_range(VRegUses.find(Reg), VRegUses.end())) {
         SUnit &SU = *V2SU.SU;
         if (SU.isScheduled || &SU == &ExitSU)
           continue;
@@ -1351,13 +1356,13 @@ void ScheduleDAGMILive::updatePressureDiffs(
       assert(P.LaneMask.any());
       LLVM_DEBUG(dbgs() << "  LiveReg: " << printVRegOrUnit(Reg, TRI) << "\n");
       // This may be called before CurrentBottom has been initialized. However,
-      // BotRPTracker must have a valid position. We want the value live into the
-      // instruction or live out of the block, so ask for the previous
+      // BotRPTracker must have a valid position. We want the value live into
+      // the instruction or live out of the block, so ask for the previous
       // instruction's live-out.
       const LiveInterval &LI = LIS->getInterval(Reg);
       VNInfo *VNI;
       MachineBasicBlock::const_iterator I =
-        nextIfDebug(BotRPTracker.getPos(), BB->end());
+          nextIfDebug(BotRPTracker.getPos(), BB->end());
       if (I == BB->end())
         VNI = LI.getVNInfoBefore(LIS->getMBBEndIdx(BB));
       else {
@@ -1366,8 +1371,8 @@ void ScheduleDAGMILive::updatePressureDiffs(
       }
       // RegisterPressureTracker guarantees that readsReg is true for LiveUses.
       assert(VNI && "No live value at use.");
-      for (const VReg2SUnit &V2SU
-           : make_range(VRegUses.find(Reg), VRegUses.end())) {
+      for (const VReg2SUnit &V2SU :
+           make_range(VRegUses.find(Reg), VRegUses.end())) {
         SUnit *SU = V2SU.SU;
         // If this use comes before the reaching def, it cannot be a last use,
         // so decrease its pressure change.
@@ -1427,7 +1432,7 @@ void ScheduleDAGMILive::schedule() {
 
   postProcessDAG();
 
-  SmallVector<SUnit*, 8> TopRoots, BotRoots;
+  SmallVector<SUnit *, 8> TopRoots, BotRoots;
   findRootsAndBiasEdges(TopRoots, BotRoots);
 
   // Initialize the strategy before modifying the DAG.
@@ -1435,8 +1440,10 @@ void ScheduleDAGMILive::schedule() {
   SchedImpl->initialize(this);
 
   LLVM_DEBUG(dump());
-  if (PrintDAGs) dump();
-  if (ViewMISchedDAGs) viewGraph();
+  if (PrintDAGs)
+    dump();
+  if (ViewMISchedDAGs)
+    viewGraph();
 
   // Initialize ready queues now that the DAG and priority data are finalized.
   initQueues(TopRoots, BotRoots);
@@ -1445,7 +1452,8 @@ void ScheduleDAGMILive::schedule() {
   while (true) {
     LLVM_DEBUG(dbgs() << "** ScheduleDAGMILive::schedule picking next node\n");
     SUnit *SU = SchedImpl->pickNode(IsTopNode);
-    if (!SU) break;
+    if (!SU)
+      break;
 
     assert(!SU->isScheduled && "Node already scheduled");
     if (!checkSchedLimit())
@@ -1505,7 +1513,7 @@ void ScheduleDAGMILive::buildDAGWithRegPressure() {
 
 void ScheduleDAGMILive::computeDFSResult() {
   if (!DFSResult)
-    DFSResult = new SchedDFSResult(/*BottomU*/true, MinSubtreeSize);
+    DFSResult = new SchedDFSResult(/*BottomU*/ true, MinSubtreeSize);
   DFSResult->clear();
   ScheduledTrees.clear();
   DFSResult->resize(SUnits.size());
@@ -1563,8 +1571,8 @@ unsigned ScheduleDAGMILive::computeCyclicCriticalPath() {
     unsigned LiveOutHeight = DefSU->getHeight();
     unsigned LiveOutDepth = DefSU->getDepth() + DefSU->Latency;
     // Visit all local users of the vreg def.
-    for (const VReg2SUnit &V2SU
-         : make_range(VRegUses.find(Reg), VRegUses.end())) {
+    for (const VReg2SUnit &V2SU :
+         make_range(VRegUses.find(Reg), VRegUses.end())) {
       SUnit *SU = V2SU.SU;
       if (SU == &ExitSU)
         continue;
@@ -1600,8 +1608,8 @@ unsigned ScheduleDAGMILive::computeCyclicCriticalPath() {
 
 /// Release ExitSU predecessors and setup scheduler queues. Re-position
 /// the Top RP tracker in case the region beginning has changed.
-void ScheduleDAGMILive::initQueues(ArrayRef<SUnit*> TopRoots,
-                                   ArrayRef<SUnit*> BotRoots) {
+void ScheduleDAGMILive::initQueues(ArrayRef<SUnit *> TopRoots,
+                                   ArrayRef<SUnit *> BotRoots) {
   ScheduleDAGMI::initQueues(TopRoots, BotRoots);
   if (ShouldTrackPressure) {
     assert(TopRPTracker.getPos() == RegionBegin && "bad initial Top tracker");
@@ -1646,7 +1654,7 @@ void ScheduleDAGMILive::scheduleMI(SUnit *SU, bool IsTopNode) {
   } else {
     assert(SU->isBottomReady() && "node still has unscheduled dependencies");
     MachineBasicBlock::iterator priorII =
-      priorNonDebug(CurrentBottom, CurrentTop);
+        priorNonDebug(CurrentBottom, CurrentTop);
     if (&*priorII == MI)
       CurrentBottom = priorII;
     else {
@@ -2104,7 +2112,7 @@ void CopyConstrain::constrainLocalCopy(SUnit *CopySU, ScheduleDAGMILive *DAG) {
 
   // GlobalDef is the bottom of the GlobalLI hole. Open the hole by
   // constraining the uses of the last local def to precede GlobalDef.
-  SmallVector<SUnit*,8> LocalUses;
+  SmallVector<SUnit *, 8> LocalUses;
   const VNInfo *LastLocalVN = LocalLI->getVNInfoBefore(LocalLI->endIndex());
   MachineInstr *LastLocalDef = LIS->getInstructionFromIndex(LastLocalVN->def);
   SUnit *LastLocalSU = DAG->getSUnit(LastLocalDef);
@@ -2119,9 +2127,9 @@ void CopyConstrain::constrainLocalCopy(SUnit *CopySU, ScheduleDAGMILive *DAG) {
   }
   // Open the top of the GlobalLI hole by constraining any earlier global uses
   // to precede the start of LocalLI.
-  SmallVector<SUnit*,8> GlobalUses;
+  SmallVector<SUnit *, 8> GlobalUses;
   MachineInstr *FirstLocalDef =
-    LIS->getInstructionFromIndex(LocalLI->beginIndex());
+      LIS->getInstructionFromIndex(LocalLI->beginIndex());
   SUnit *FirstLocalSU = DAG->getSUnit(FirstLocalDef);
   for (const SDep &Pred : GlobalSU->Preds) {
     if (Pred.getKind() != SDep::Anti || Pred.getReg() != GlobalReg)
@@ -2149,7 +2157,7 @@ void CopyConstrain::constrainLocalCopy(SUnit *CopySU, ScheduleDAGMILive *DAG) {
 /// Callback from DAG postProcessing to create weak edges to encourage
 /// copy elimination.
 void CopyConstrain::apply(ScheduleDAGInstrs *DAGInstrs) {
-  ScheduleDAGMI *DAG = static_cast<ScheduleDAGMI*>(DAGInstrs);
+  ScheduleDAGMI *DAG = static_cast<ScheduleDAGMI *>(DAGInstrs);
   assert(DAG->hasVRegLiveness() && "Expect VRegs with LiveIntervals");
 
   MachineBasicBlock::iterator FirstPos = nextIfDebug(DAG->begin(), DAG->end());
@@ -2163,7 +2171,7 @@ void CopyConstrain::apply(ScheduleDAGInstrs *DAGInstrs) {
     if (!SU.getInstr()->isCopy())
       continue;
 
-    constrainLocalCopy(&SU, static_cast<ScheduleDAGMILive*>(DAG));
+    constrainLocalCopy(&SU, static_cast<ScheduleDAGMILive *>(DAG));
   }
 }
 
@@ -2224,19 +2232,20 @@ void SchedBoundary::reset() {
   assert(!ExecutedResCounts[0] && "nonzero count for bad resource");
 }
 
-void SchedRemainder::
-init(ScheduleDAGMI *DAG, const TargetSchedModel *SchedModel) {
+void SchedRemainder::init(ScheduleDAGMI *DAG,
+                          const TargetSchedModel *SchedModel) {
   reset();
   if (!SchedModel->hasInstrSchedModel())
     return;
   RemainingCounts.resize(SchedModel->getNumProcResourceKinds());
   for (SUnit &SU : DAG->SUnits) {
     const MCSchedClassDesc *SC = DAG->getSchedClass(&SU);
-    RemIssueCount += SchedModel->getNumMicroOps(SU.getInstr(), SC)
-      * SchedModel->getMicroOpFactor();
+    RemIssueCount += SchedModel->getNumMicroOps(SU.getInstr(), SC) *
+                     SchedModel->getMicroOpFactor();
     for (TargetSchedModel::ProcResIter
-           PI = SchedModel->getWriteProcResBegin(SC),
-           PE = SchedModel->getWriteProcResEnd(SC); PI != PE; ++PI) {
+             PI = SchedModel->getWriteProcResBegin(SC),
+             PE = SchedModel->getWriteProcResEnd(SC);
+         PI != PE; ++PI) {
       unsigned PIdx = PI->ProcResourceIdx;
       unsigned Factor = SchedModel->getResourceFactor(PIdx);
       assert(PI->ReleaseAtCycle >= PI->AcquireAtCycle);
@@ -2246,8 +2255,8 @@ init(ScheduleDAGMI *DAG, const TargetSchedModel *SchedModel) {
   }
 }
 
-void SchedBoundary::
-init(ScheduleDAGMI *dag, const TargetSchedModel *smodel, SchedRemainder *rem) {
+void SchedBoundary::init(ScheduleDAGMI *dag, const TargetSchedModel *smodel,
+                         SchedRemainder *rem) {
   reset();
   DAG = dag;
   SchedModel = smodel;
@@ -2293,9 +2302,8 @@ unsigned SchedBoundary::getLatencyStallCycles(SUnit *SU) {
 
 /// Compute the next cycle at which the given processor resource unit
 /// can be scheduled.
-unsigned SchedBoundary::getNextResourceCycleByInstance(unsigned InstanceIdx,
-                                                       unsigned ReleaseAtCycle,
-                                                       unsigned AcquireAtCycle) {
+unsigned SchedBoundary::getNextResourceCycleByInstance(
+    unsigned InstanceIdx, unsigned ReleaseAtCycle, unsigned AcquireAtCycle) {
   if (SchedModel && SchedModel->enableIntervals()) {
     if (isTop())
       return ReservedResourceSegments[InstanceIdx].getFirstAvailableAtFromTop(
@@ -2401,8 +2409,8 @@ SchedBoundary::getNextResourceCycle(const MCSchedClassDesc *SC, unsigned PIdx,
 ///
 /// TODO: Also check whether the SU must start a new group.
 bool SchedBoundary::checkHazard(SUnit *SU) {
-  if (HazardRec->isEnabled()
-      && HazardRec->getHazardType(SU) != ScheduleHazardRecognizer::NoHazard) {
+  if (HazardRec->isEnabled() &&
+      HazardRec->getHazardType(SU) != ScheduleHazardRecognizer::NoHazard) {
     return true;
   }
 
@@ -2424,8 +2432,8 @@ bool SchedBoundary::checkHazard(SUnit *SU) {
   if (SchedModel->hasInstrSchedModel() && SU->hasReservedResource) {
     const MCSchedClassDesc *SC = DAG->getSchedClass(SU);
     for (const MCWriteProcResEntry &PE :
-          make_range(SchedModel->getWriteProcResBegin(SC),
-                     SchedModel->getWriteProcResEnd(SC))) {
+         make_range(SchedModel->getWriteProcResBegin(SC),
+                    SchedModel->getWriteProcResEnd(SC))) {
       unsigned ResIdx = PE.ProcResourceIdx;
       unsigned ReleaseAtCycle = PE.ReleaseAtCycle;
       unsigned AcquireAtCycle = PE.AcquireAtCycle;
@@ -2437,8 +2445,8 @@ bool SchedBoundary::checkHazard(SUnit *SU) {
         MaxObservedStall = std::max(ReleaseAtCycle, MaxObservedStall);
 #endif
         LLVM_DEBUG(dbgs() << "  SU(" << SU->NodeNum << ") "
-                          << SchedModel->getResourceName(ResIdx)
-                          << '[' << InstanceIdx - ReservedCyclesIndex[ResIdx]  << ']'
+                          << SchedModel->getResourceName(ResIdx) << '['
+                          << InstanceIdx - ReservedCyclesIndex[ResIdx] << ']'
                           << "=" << NRCycle << "c\n");
         return true;
       }
@@ -2448,8 +2456,7 @@ bool SchedBoundary::checkHazard(SUnit *SU) {
 }
 
 // Find the unscheduled node in ReadySUs with the highest latency.
-unsigned SchedBoundary::
-findMaxLatency(ArrayRef<SUnit*> ReadySUs) {
+unsigned SchedBoundary::findMaxLatency(ArrayRef<SUnit *> ReadySUs) {
   SUnit *LateSU = nullptr;
   unsigned RemLatency = 0;
   for (SUnit *SU : ReadySUs) {
@@ -2469,14 +2476,13 @@ findMaxLatency(ArrayRef<SUnit*> ReadySUs) {
 // Count resources in this zone and the remaining unscheduled
 // instruction. Return the max count, scaled. Set OtherCritIdx to the critical
 // resource index, or zero if the zone is issue limited.
-unsigned SchedBoundary::
-getOtherResourceCount(unsigned &OtherCritIdx) {
+unsigned SchedBoundary::getOtherResourceCount(unsigned &OtherCritIdx) {
   OtherCritIdx = 0;
   if (!SchedModel->hasInstrSchedModel())
     return 0;
 
-  unsigned OtherCritCount = Rem->RemIssueCount
-    + (RetiredMOps * SchedModel->getMicroOpFactor());
+  unsigned OtherCritCount =
+      Rem->RemIssueCount + (RetiredMOps * SchedModel->getMicroOpFactor());
   LLVM_DEBUG(dbgs() << "  " << Available.getName() << " + Remain MOps: "
                     << OtherCritCount / SchedModel->getMicroOpFactor() << '\n');
   for (unsigned PIdx = 1, PEnd = SchedModel->getNumProcResourceKinds();
@@ -2589,7 +2595,7 @@ unsigned SchedBoundary::countResource(const MCSchedClassDesc *SC, unsigned PIdx,
                                       unsigned NextCycle,
                                       unsigned AcquireAtCycle) {
   unsigned Factor = SchedModel->getResourceFactor(PIdx);
-  unsigned Count = Factor * (ReleaseAtCycle- AcquireAtCycle);
+  unsigned Count = Factor * (ReleaseAtCycle - AcquireAtCycle);
   LLVM_DEBUG(dbgs() << "  " << SchedModel->getResourceName(PIdx) << " +"
                     << ReleaseAtCycle << "x" << Factor << "u\n");
 
@@ -2613,8 +2619,8 @@ unsigned SchedBoundary::countResource(const MCSchedClassDesc *SC, unsigned PIdx,
       getNextResourceCycle(SC, PIdx, ReleaseAtCycle, AcquireAtCycle);
   if (NextAvailable > CurrCycle) {
     LLVM_DEBUG(dbgs() << "  Resource conflict: "
-                      << SchedModel->getResourceName(PIdx)
-                      << '[' << InstanceIdx - ReservedCyclesIndex[PIdx]  << ']'
+                      << SchedModel->getResourceName(PIdx) << '['
+                      << InstanceIdx - ReservedCyclesIndex[PIdx] << ']'
                       << " reserved until @" << NextAvailable << "\n");
   }
   return NextAvailable;
@@ -2673,13 +2679,12 @@ void SchedBoundary::bumpNode(SUnit *SU) {
     Rem->RemIssueCount -= DecRemIssue;
     if (ZoneCritResIdx) {
       // Scale scheduled micro-ops for comparing with the critical resource.
-      unsigned ScaledMOps =
-        RetiredMOps * SchedModel->getMicroOpFactor();
+      unsigned ScaledMOps = RetiredMOps * SchedModel->getMicroOpFactor();
 
       // If scaled micro-ops are now more than the previous critical resource by
       // a full cycle, then micro-ops issue becomes critical.
-      if ((int)(ScaledMOps - getResourceCount(ZoneCritResIdx))
-          >= (int)SchedModel->getLatencyFactor()) {
+      if ((int)(ScaledMOps - getResourceCount(ZoneCritResIdx)) >=
+          (int)SchedModel->getLatencyFactor()) {
         ZoneCritResIdx = 0;
         LLVM_DEBUG(dbgs() << "  *** Critical resource NumMicroOps: "
                           << ScaledMOps / SchedModel->getLatencyFactor()
@@ -2687,8 +2692,9 @@ void SchedBoundary::bumpNode(SUnit *SU) {
       }
     }
     for (TargetSchedModel::ProcResIter
-           PI = SchedModel->getWriteProcResBegin(SC),
-           PE = SchedModel->getWriteProcResEnd(SC); PI != PE; ++PI) {
+             PI = SchedModel->getWriteProcResBegin(SC),
+             PE = SchedModel->getWriteProcResEnd(SC);
+         PI != PE; ++PI) {
       unsigned RCycle =
           countResource(SC, PI->ProcResourceIdx, PI->ReleaseAtCycle, NextCycle,
                         PI->AcquireAtCycle);
@@ -2701,8 +2707,9 @@ void SchedBoundary::bumpNode(SUnit *SU) {
       // instruction plus the number of cycles the operations reserves the
       // resource. For bottom-up is it simply the instruction's cycle.
       for (TargetSchedModel::ProcResIter
-             PI = SchedModel->getWriteProcResBegin(SC),
-             PE = SchedModel->getWriteProcResEnd(SC); PI != PE; ++PI) {
+               PI = SchedModel->getWriteProcResBegin(SC),
+               PE = SchedModel->getWriteProcResEnd(SC);
+           PI != PE; ++PI) {
         unsigned PIdx = PI->ProcResourceIdx;
         if (SchedModel->getProcResource(PIdx)->BufferSize == 0) {
 
@@ -2769,7 +2776,7 @@ void SchedBoundary::bumpNode(SUnit *SU) {
   // This must be done after NextCycle has been adjust for all other stalls.
   // Calling bumpCycle(X) will reduce CurrMOps by one issue group and set
   // currCycle to X.
-  if ((isTop() &&  SchedModel->mustEndGroup(SU->getInstr())) ||
+  if ((isTop() && SchedModel->mustEndGroup(SU->getInstr())) ||
       (!isTop() && SchedModel->mustBeginGroup(SU->getInstr()))) {
     LLVM_DEBUG(dbgs() << "  Bump cycle to " << (isTop() ? "end" : "begin")
                       << " group\n");
@@ -2839,9 +2846,9 @@ SUnit *SchedBoundary::pickOnlyChoice() {
     ++I;
   }
   for (unsigned i = 0; Available.empty(); ++i) {
-//  FIXME: Re-enable assert once PR20057 is resolved.
-//    assert(i <= (HazardRec->getMaxLookAhead() + MaxObservedStall) &&
-//           "permanent hazard");
+    //  FIXME: Re-enable assert once PR20057 is resolved.
+    //    assert(i <= (HazardRec->getMaxLookAhead() + MaxObservedStall) &&
+    //           "permanent hazard");
     (void)i;
     bumpCycle(CurrCycle + 1);
     releasePending();
@@ -2915,16 +2922,15 @@ LLVM_DUMP_METHOD void SchedBoundary::dumpScheduledState() const {
 // GenericScheduler - Generic implementation of MachineSchedStrategy.
 //===----------------------------------------------------------------------===//
 
-void GenericSchedulerBase::SchedCandidate::
-initResourceDelta(const ScheduleDAGMI *DAG,
-                  const TargetSchedModel *SchedModel) {
+void GenericSchedulerBase::SchedCandidate::initResourceDelta(
+    const ScheduleDAGMI *DAG, const TargetSchedModel *SchedModel) {
   if (!Policy.ReduceResIdx && !Policy.DemandResIdx)
     return;
 
   const MCSchedClassDesc *SC = DAG->getSchedClass(SU);
-  for (TargetSchedModel::ProcResIter
-         PI = SchedModel->getWriteProcResBegin(SC),
-         PE = SchedModel->getWriteProcResEnd(SC); PI != PE; ++PI) {
+  for (TargetSchedModel::ProcResIter PI = SchedModel->getWriteProcResBegin(SC),
+                                     PE = SchedModel->getWriteProcResEnd(SC);
+       PI != PE; ++PI) {
     if (PI->ProcResourceIdx == Policy.ReduceResIdx)
       ResDelta.CritResources += PI->ReleaseAtCycle;
     if (PI->ProcResourceIdx == Policy.DemandResIdx)
@@ -2990,7 +2996,7 @@ void GenericSchedulerBase::setPolicy(CandPolicy &Policy, bool IsPostRA,
   // Compute the critical resource outside the zone.
   unsigned OtherCritIdx = 0;
   unsigned OtherCount =
-    OtherZone ? OtherZone->getOtherResourceCount(OtherCritIdx) : 0;
+      OtherZone ? OtherZone->getOtherResourceCount(OtherCritIdx) : 0;
 
   bool OtherResLimited = false;
   unsigned RemLatency = 0;
@@ -3035,26 +3041,43 @@ void GenericSchedulerBase::setPolicy(CandPolicy &Policy, bool IsPostRA,
 }
 
 #ifndef NDEBUG
-const char *GenericSchedulerBase::getReasonStr(
-  GenericSchedulerBase::CandReason Reason) {
+const char *
+GenericSchedulerBase::getReasonStr(GenericSchedulerBase::CandReason Reason) {
   switch (Reason) {
-  case NoCand:         return "NOCAND    ";
-  case Only1:          return "ONLY1     ";
-  case PhysReg:        return "PHYS-REG  ";
-  case RegExcess:      return "REG-EXCESS";
-  case RegCritical:    return "REG-CRIT  ";
-  case Stall:          return "STALL     ";
-  case Cluster:        return "CLUSTER   ";
-  case Weak:           return "WEAK      ";
-  case RegMax:         return "REG-MAX   ";
-  case ResourceReduce: return "RES-REDUCE";
-  case ResourceDemand: return "RES-DEMAND";
-  case TopDepthReduce: return "TOP-DEPTH ";
-  case TopPathReduce:  return "TOP-PATH  ";
-  case BotHeightReduce:return "BOT-HEIGHT";
-  case BotPathReduce:  return "BOT-PATH  ";
-  case NextDefUse:     return "DEF-USE   ";
-  case NodeOrder:      return "ORDER     ";
+  case NoCand:
+    return "NOCAND    ";
+  case Only1:
+    return "ONLY1     ";
+  case PhysReg:
+    return "PHYS-REG  ";
+  case RegExcess:
+    return "REG-EXCESS";
+  case RegCritical:
+    return "REG-CRIT  ";
+  case Stall:
+    return "STALL     ";
+  case Cluster:
+    return "CLUSTER   ";
+  case Weak:
+    return "WEAK      ";
+  case RegMax:
+    return "REG-MAX   ";
+  case ResourceReduce:
+    return "RES-REDUCE";
+  case ResourceDemand:
+    return "RES-DEMAND";
+  case TopDepthReduce:
+    return "TOP-DEPTH ";
+  case TopPathReduce:
+    return "TOP-PATH  ";
+  case BotHeightReduce:
+    return "BOT-HEIGHT";
+  case BotPathReduce:
+    return "BOT-PATH  ";
+  case NextDefUse:
+    return "DEF-USE   ";
+  case NodeOrder:
+    return "ORDER     ";
   };
   llvm_unreachable("Unknown reason!");
 }
@@ -3094,10 +3117,11 @@ void GenericSchedulerBase::traceCandidate(const SchedCandidate &Cand) {
     Latency = Cand.SU->getDepth();
     break;
   }
-  dbgs() << "  Cand SU(" << Cand.SU->NodeNum << ") " << getReasonStr(Cand.Reason);
+  dbgs() << "  Cand SU(" << Cand.SU->NodeNum << ") "
+         << getReasonStr(Cand.Reason);
   if (P.isValid())
-    dbgs() << " " << TRI->getRegPressureSetName(P.getPSet())
-           << ":" << P.getUnitInc() << " ";
+    dbgs() << " " << TRI->getRegPressureSetName(P.getPSet()) << ":"
+           << P.getUnitInc() << " ";
   else
     dbgs() << "      ";
   if (ResIdx)
@@ -3157,12 +3181,12 @@ bool tryLatency(GenericSchedulerBase::SchedCandidate &TryCand,
     // of them could be scheduled now with no stall.
     if (std::max(TryCand.SU->getDepth(), Cand.SU->getDepth()) >
         Zone.getScheduledLatency()) {
-      if (tryLess(TryCand.SU->getDepth(), Cand.SU->getDepth(),
-                  TryCand, Cand, GenericSchedulerBase::TopDepthReduce))
+      if (tryLess(TryCand.SU->getDepth(), Cand.SU->getDepth(), TryCand, Cand,
+                  GenericSchedulerBase::TopDepthReduce))
         return true;
     }
-    if (tryGreater(TryCand.SU->getHeight(), Cand.SU->getHeight(),
-                   TryCand, Cand, GenericSchedulerBase::TopPathReduce))
+    if (tryGreater(TryCand.SU->getHeight(), Cand.SU->getHeight(), TryCand, Cand,
+                   GenericSchedulerBase::TopPathReduce))
       return true;
   } else {
     // Prefer the candidate with the lesser height, but only if one of them has
@@ -3170,12 +3194,12 @@ bool tryLatency(GenericSchedulerBase::SchedCandidate &TryCand,
     // of them could be scheduled now with no stall.
     if (std::max(TryCand.SU->getHeight(), Cand.SU->getHeight()) >
         Zone.getScheduledLatency()) {
-      if (tryLess(TryCand.SU->getHeight(), Cand.SU->getHeight(),
-                  TryCand, Cand, GenericSchedulerBase::BotHeightReduce))
+      if (tryLess(TryCand.SU->getHeight(), Cand.SU->getHeight(), TryCand, Cand,
+                  GenericSchedulerBase::BotHeightReduce))
         return true;
     }
-    if (tryGreater(TryCand.SU->getDepth(), Cand.SU->getDepth(),
-                   TryCand, Cand, GenericSchedulerBase::BotPathReduce))
+    if (tryGreater(TryCand.SU->getDepth(), Cand.SU->getDepth(), TryCand, Cand,
+                   GenericSchedulerBase::BotPathReduce))
       return true;
   }
   return false;
@@ -3194,7 +3218,7 @@ static void tracePick(const GenericSchedulerBase::SchedCandidate &Cand) {
 void GenericScheduler::initialize(ScheduleDAGMI *dag) {
   assert(dag->hasVRegLiveness() &&
          "(PreRA)GenericScheduler needs vreg liveness");
-  DAG = static_cast<ScheduleDAGMILive*>(dag);
+  DAG = static_cast<ScheduleDAGMILive *>(dag);
   SchedModel = DAG->getSchedModel();
   TRI = DAG->TRI;
 
@@ -3239,7 +3263,7 @@ void GenericScheduler::initPolicy(MachineBasicBlock::iterator Begin,
     MVT::SimpleValueType LegalIntVT = (MVT::SimpleValueType)VT;
     if (TLI->isTypeLegal(LegalIntVT)) {
       unsigned NIntRegs = Context->RegClassInfo->getNumAllocatableRegs(
-        TLI->getRegClassFor(LegalIntVT));
+          TLI->getRegClassFor(LegalIntVT));
       RegionPolicy.ShouldTrackPressure = NumRegionInstrs > (NIntRegs / 2);
     }
   }
@@ -3279,8 +3303,7 @@ void GenericScheduler::dumpPolicy() const {
   dbgs() << "GenericScheduler RegionPolicy: "
          << " ShouldTrackPressure=" << RegionPolicy.ShouldTrackPressure
          << " OnlyTopDown=" << RegionPolicy.OnlyTopDown
-         << " OnlyBottomUp=" << RegionPolicy.OnlyBottomUp
-         << "\n";
+         << " OnlyBottomUp=" << RegionPolicy.OnlyBottomUp << "\n";
 #endif
 }
 
@@ -3298,16 +3321,15 @@ void GenericScheduler::checkAcyclicLatency() {
     return;
 
   // Scaled number of cycles per loop iteration.
-  unsigned IterCount =
-    std::max(Rem.CyclicCritPath * SchedModel->getLatencyFactor(),
-             Rem.RemIssueCount);
+  unsigned IterCount = std::max(
+      Rem.CyclicCritPath * SchedModel->getLatencyFactor(), Rem.RemIssueCount);
   // Scaled acyclic critical path.
   unsigned AcyclicCount = Rem.CriticalPath * SchedModel->getLatencyFactor();
   // InFlightCount = (AcyclicPath / IterCycles) * InstrPerLoop
   unsigned InFlightCount =
-    (AcyclicCount * Rem.RemIssueCount + IterCount-1) / IterCount;
+      (AcyclicCount * Rem.RemIssueCount + IterCount - 1) / IterCount;
   unsigned BufferLimit =
-    SchedModel->getMicroOpBufferSize() * SchedModel->getMicroOpFactor();
+      SchedModel->getMicroOpBufferSize() * SchedModel->getMicroOpFactor();
 
   Rem.IsAcyclicLatencyLimited = InFlightCount > BufferLimit;
 
@@ -3341,13 +3363,11 @@ void GenericScheduler::registerRoots() {
 }
 
 namespace llvm {
-bool tryPressure(const PressureChange &TryP,
-                 const PressureChange &CandP,
+bool tryPressure(const PressureChange &TryP, const PressureChange &CandP,
                  GenericSchedulerBase::SchedCandidate &TryCand,
                  GenericSchedulerBase::SchedCandidate &Cand,
                  GenericSchedulerBase::CandReason Reason,
-                 const TargetRegisterInfo *TRI,
-                 const MachineFunction &MF) {
+                 const TargetRegisterInfo *TRI, const MachineFunction &MF) {
   // If one candidate decreases and the other increases, go with it.
   // Invalid candidates have UnitInc==0.
   if (tryGreater(TryP.getUnitInc() < 0, CandP.getUnitInc() < 0, TryCand, Cand,
@@ -3368,11 +3388,11 @@ bool tryPressure(const PressureChange &TryP,
                    Reason);
   }
 
-  int TryRank = TryP.isValid() ? TRI->getRegPressureSetScore(MF, TryPSet) :
-                                 std::numeric_limits<int>::max();
+  int TryRank = TryP.isValid() ? TRI->getRegPressureSetScore(MF, TryPSet)
+                               : std::numeric_limits<int>::max();
 
-  int CandRank = CandP.isValid() ? TRI->getRegPressureSetScore(MF, CandPSet) :
-                                   std::numeric_limits<int>::max();
+  int CandRank = CandP.isValid() ? TRI->getRegPressureSetScore(MF, CandPSet)
+                                 : std::numeric_limits<int>::max();
 
   // If the candidates are decreasing pressure, reverse priority.
   if (TryP.getUnitInc() < 0)
@@ -3437,25 +3457,19 @@ void GenericScheduler::initCandidate(SchedCandidate &Cand, SUnit *SU,
   if (DAG->isTrackingPressure()) {
     if (AtTop) {
       TempTracker.getMaxDownwardPressureDelta(
-        Cand.SU->getInstr(),
-        Cand.RPDelta,
-        DAG->getRegionCriticalPSets(),
-        DAG->getRegPressure().MaxSetPressure);
+          Cand.SU->getInstr(), Cand.RPDelta, DAG->getRegionCriticalPSets(),
+          DAG->getRegPressure().MaxSetPressure);
     } else {
       if (VerifyScheduling) {
         TempTracker.getMaxUpwardPressureDelta(
-          Cand.SU->getInstr(),
-          &DAG->getPressureDiff(Cand.SU),
-          Cand.RPDelta,
-          DAG->getRegionCriticalPSets(),
-          DAG->getRegPressure().MaxSetPressure);
+            Cand.SU->getInstr(), &DAG->getPressureDiff(Cand.SU), Cand.RPDelta,
+            DAG->getRegionCriticalPSets(),
+            DAG->getRegPressure().MaxSetPressure);
       } else {
         RPTracker.getUpwardPressureDelta(
-          Cand.SU->getInstr(),
-          DAG->getPressureDiff(Cand.SU),
-          Cand.RPDelta,
-          DAG->getRegionCriticalPSets(),
-          DAG->getRegPressure().MaxSetPressure);
+            Cand.SU->getInstr(), DAG->getPressureDiff(Cand.SU), Cand.RPDelta,
+            DAG->getRegionCriticalPSets(),
+            DAG->getRegPressure().MaxSetPressure);
       }
     }
   }
@@ -3491,17 +3505,15 @@ bool GenericScheduler::tryCandidate(SchedCandidate &Cand,
     return TryCand.Reason != NoCand;
 
   // Avoid exceeding the target's limit.
-  if (DAG->isTrackingPressure() && tryPressure(TryCand.RPDelta.Excess,
-                                               Cand.RPDelta.Excess,
-                                               TryCand, Cand, RegExcess, TRI,
-                                               DAG->MF))
+  if (DAG->isTrackingPressure() &&
+      tryPressure(TryCand.RPDelta.Excess, Cand.RPDelta.Excess, TryCand, Cand,
+                  RegExcess, TRI, DAG->MF))
     return TryCand.Reason != NoCand;
 
   // Avoid increasing the max critical pressure in the scheduled region.
-  if (DAG->isTrackingPressure() && tryPressure(TryCand.RPDelta.CriticalMax,
-                                               Cand.RPDelta.CriticalMax,
-                                               TryCand, Cand, RegCritical, TRI,
-                                               DAG->MF))
+  if (DAG->isTrackingPressure() &&
+      tryPressure(TryCand.RPDelta.CriticalMax, Cand.RPDelta.CriticalMax,
+                  TryCand, Cand, RegCritical, TRI, DAG->MF))
     return TryCand.Reason != NoCand;
 
   // We only compare a subset of features when comparing nodes between
@@ -3531,27 +3543,24 @@ bool GenericScheduler::tryCandidate(SchedCandidate &Cand,
   // like generating loads of multiple registers should ideally be done within
   // the scheduler pass by combining the loads during DAG postprocessing.
   const SUnit *CandNextClusterSU =
-    Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
+      Cand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
   const SUnit *TryCandNextClusterSU =
-    TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
+      TryCand.AtTop ? DAG->getNextClusterSucc() : DAG->getNextClusterPred();
   if (tryGreater(TryCand.SU == TryCandNextClusterSU,
-                 Cand.SU == CandNextClusterSU,
-                 TryCand, Cand, Cluster))
+                 Cand.SU == CandNextClusterSU, TryCand, Cand, Cluster))
     return TryCand.Reason != NoCand;
 
   if (SameBoundary) {
     // Weak edges are for clustering and other constraints.
     if (tryLess(getWeakLeft(TryCand.SU, TryCand.AtTop),
-                getWeakLeft(Cand.SU, Cand.AtTop),
-                TryCand, Cand, Weak))
+                getWeakLeft(Cand.SU, Cand.AtTop), TryCand, Cand, Weak))
       return TryCand.Reason != NoCand;
   }
 
   // Avoid increasing the max pressure of the entire region.
-  if (DAG->isTrackingPressure() && tryPressure(TryCand.RPDelta.CurrentMax,
-                                               Cand.RPDelta.CurrentMax,
-                                               TryCand, Cand, RegMax, TRI,
-                                               DAG->MF))
+  if (DAG->isTrackingPressure() &&
+      tryPressure(TryCand.RPDelta.CurrentMax, Cand.RPDelta.CurrentMax, TryCand,
+                  Cand, RegMax, TRI, DAG->MF))
     return TryCand.Reason != NoCand;
 
   if (SameBoundary) {
@@ -3561,8 +3570,8 @@ bool GenericScheduler::tryCandidate(SchedCandidate &Cand,
                 TryCand, Cand, ResourceReduce))
       return TryCand.Reason != NoCand;
     if (tryGreater(TryCand.ResDelta.DemandedResources,
-                   Cand.ResDelta.DemandedResources,
-                   TryCand, Cand, ResourceDemand))
+                   Cand.ResDelta.DemandedResources, TryCand, Cand,
+                   ResourceDemand))
       return TryCand.Reason != NoCand;
 
     // Avoid serializing long latency dependence chains.
@@ -3572,8 +3581,8 @@ bool GenericScheduler::tryCandidate(SchedCandidate &Cand,
       return TryCand.Reason != NoCand;
 
     // Fall through to original instruction order.
-    if ((Zone->isTop() && TryCand.SU->NodeNum < Cand.SU->NodeNum)
-        || (!Zone->isTop() && TryCand.SU->NodeNum > Cand.SU->NodeNum)) {
+    if ((Zone->isTop() && TryCand.SU->NodeNum < Cand.SU->NodeNum) ||
+        (!Zone->isTop() && TryCand.SU->NodeNum > Cand.SU->NodeNum)) {
       TryCand.Reason = NodeOrder;
       return true;
     }
@@ -3592,7 +3601,7 @@ void GenericScheduler::pickNodeFromQueue(SchedBoundary &Zone,
                                          const RegPressureTracker &RPTracker,
                                          SchedCandidate &Cand) {
   // getMaxPressureDelta temporarily modifies the tracker.
-  RegPressureTracker &TempTracker = const_cast<RegPressureTracker&>(RPTracker);
+  RegPressureTracker &TempTracker = const_cast<RegPressureTracker &>(RPTracker);
 
   ReadyQueue &Q = Zone.Available;
   for (SUnit *SU : Q) {
@@ -3669,7 +3678,7 @@ SUnit *GenericScheduler::pickNodeBidirectional(bool &IsTopNode) {
       TCand.reset(CandPolicy());
       pickNodeFromQueue(Top, TopPolicy, DAG->getTopRPTracker(), TCand);
       assert(TCand.SU == TopCand.SU &&
-           "Last pick result should correspond to re-picking right now");
+             "Last pick result should correspond to re-picking right now");
     }
 #endif
   }
@@ -3799,8 +3808,8 @@ static ScheduleDAGInstrs *createConvergingSched(MachineSchedContext *C) {
 }
 
 static MachineSchedRegistry
-GenericSchedRegistry("converge", "Standard converging scheduler.",
-                     createConvergingSched);
+    GenericSchedRegistry("converge", "Standard converging scheduler.",
+                         createConvergingSched);
 
 //===----------------------------------------------------------------------===//
 // PostGenericScheduler - Generic PostRA implementation of MachineSchedStrategy.
@@ -3859,8 +3868,7 @@ bool PostGenericScheduler::tryCandidate(SchedCandidate &Cand,
 
   // Keep clustered nodes together.
   if (tryGreater(TryCand.SU == DAG->getNextClusterSucc(),
-                 Cand.SU == DAG->getNextClusterSucc(),
-                 TryCand, Cand, Cluster))
+                 Cand.SU == DAG->getNextClusterSucc(), TryCand, Cand, Cluster))
     return TryCand.Reason != NoCand;
 
   // Avoid critical resource consumption and balance the schedule.
@@ -3868,8 +3876,8 @@ bool PostGenericScheduler::tryCandidate(SchedCandidate &Cand,
               TryCand, Cand, ResourceReduce))
     return TryCand.Reason != NoCand;
   if (tryGreater(TryCand.ResDelta.DemandedResources,
-                 Cand.ResDelta.DemandedResources,
-                 TryCand, Cand, ResourceDemand))
+                 Cand.ResDelta.DemandedResources, TryCand, Cand,
+                 ResourceDemand))
     return TryCand.Reason != NoCand;
 
   // Avoid serializing long latency dependence chains.
@@ -3970,10 +3978,10 @@ struct ILPOrder {
         return ScheduledTrees->test(SchedTreeB);
 
       // Trees with shallower connections have lower priority.
-      if (DFSResult->getSubtreeLevel(SchedTreeA)
-          != DFSResult->getSubtreeLevel(SchedTreeB)) {
-        return DFSResult->getSubtreeLevel(SchedTreeA)
-          < DFSResult->getSubtreeLevel(SchedTreeB);
+      if (DFSResult->getSubtreeLevel(SchedTreeA) !=
+          DFSResult->getSubtreeLevel(SchedTreeB)) {
+        return DFSResult->getSubtreeLevel(SchedTreeA) <
+               DFSResult->getSubtreeLevel(SchedTreeB);
       }
     }
     if (MaximizeILP)
@@ -3988,14 +3996,14 @@ class ILPScheduler : public MachineSchedStrategy {
   ScheduleDAGMILive *DAG = nullptr;
   ILPOrder Cmp;
 
-  std::vector<SUnit*> ReadyQ;
+  std::vector<SUnit *> ReadyQ;
 
 public:
   ILPScheduler(bool MaximizeILP) : Cmp(MaximizeILP) {}
 
   void initialize(ScheduleDAGMI *dag) override {
     assert(dag->hasVRegLiveness() && "ILPScheduler needs vreg liveness");
-    DAG = static_cast<ScheduleDAGMILive*>(dag);
+    DAG = static_cast<ScheduleDAGMILive *>(dag);
     DAG->computeDFSResult();
     Cmp.DFSResult = DAG->getDFSResult();
     Cmp.ScheduledTrees = &DAG->getScheduledTrees();
@@ -4012,7 +4020,8 @@ class ILPScheduler : public MachineSchedStrategy {
 
   /// Callback to select the highest priority node from the ready Q.
   SUnit *pickNode(bool &IsTopNode) override {
-    if (ReadyQ.empty()) return nullptr;
+    if (ReadyQ.empty())
+      return nullptr;
     std::pop_heap(ReadyQ.begin(), ReadyQ.end(), Cmp);
     SUnit *SU = ReadyQ.back();
     ReadyQ.pop_back();
@@ -4040,7 +4049,8 @@ class ILPScheduler : public MachineSchedStrategy {
     assert(!IsTopNode && "SchedDFSResult needs bottom-up");
   }
 
-  void releaseTopNode(SUnit *) override { /*only called for top roots*/ }
+  void releaseTopNode(SUnit *) override { /*only called for top roots*/
+  }
 
   void releaseBottomNode(SUnit *SU) override {
     ReadyQ.push_back(SU);
@@ -4057,10 +4067,12 @@ static ScheduleDAGInstrs *createILPMinScheduler(MachineSchedContext *C) {
   return new ScheduleDAGMILive(C, std::make_unique<ILPScheduler>(false));
 }
 
-static MachineSchedRegistry ILPMaxRegistry(
-  "ilpmax", "Schedule bottom-up for max ILP", createILPMaxScheduler);
-static MachineSchedRegistry ILPMinRegistry(
-  "ilpmin", "Schedule bottom-up for min ILP", createILPMinScheduler);
+static MachineSchedRegistry ILPMaxRegistry("ilpmax",
+                                           "Schedule bottom-up for max ILP",
+                                           createILPMaxScheduler);
+static MachineSchedRegistry ILPMinRegistry("ilpmin",
+                                           "Schedule bottom-up for min ILP",
+                                           createILPMinScheduler);
 
 //===----------------------------------------------------------------------===//
 // Machine Instruction Shuffler for Correctness Testing
@@ -4071,8 +4083,7 @@ namespace {
 
 /// Apply a less-than relation on the node order, which corresponds to the
 /// instruction order prior to scheduling. IsReverse implements greater-than.
-template<bool IsReverse>
-struct SUnitOrder {
+template <bool IsReverse> struct SUnitOrder {
   bool operator()(SUnit *A, SUnit *B) const {
     if (IsReverse)
       return A->NodeNum > B->NodeNum;
@@ -4089,18 +4100,16 @@ class InstructionShuffler : public MachineSchedStrategy {
   // Using a less-than relation (SUnitOrder<false>) for the TopQ priority
   // gives nodes with a higher number higher priority causing the latest
   // instructions to be scheduled first.
-  PriorityQueue<SUnit*, std::vector<SUnit*>, SUnitOrder<false>>
-    TopQ;
+  PriorityQueue<SUnit *, std::vector<SUnit *>, SUnitOrder<false>> TopQ;
 
   // When scheduling bottom-up, use greater-than as the queue priority.
-  PriorityQueue<SUnit*, std::vector<SUnit*>, SUnitOrder<true>>
-    BottomQ;
+  PriorityQueue<SUnit *, std::vector<SUnit *>, SUnitOrder<true>> BottomQ;
 
 public:
   InstructionShuffler(bool alternate, bool topdown)
-    : IsAlternating(alternate), IsTopDown(topdown) {}
+      : IsAlternating(alternate), IsTopDown(topdown) {}
 
-  void initialize(ScheduleDAGMI*) override {
+  void initialize(ScheduleDAGMI *) override {
     TopQ.clear();
     BottomQ.clear();
   }
@@ -4112,14 +4121,16 @@ class InstructionShuffler : public MachineSchedStrategy {
     SUnit *SU;
     if (IsTopDown) {
       do {
-        if (TopQ.empty()) return nullptr;
+        if (TopQ.empty())
+          return nullptr;
         SU = TopQ.top();
         TopQ.pop();
       } while (SU->isScheduled);
       IsTopNode = true;
     } else {
       do {
-        if (BottomQ.empty()) return nullptr;
+        if (BottomQ.empty())
+          return nullptr;
         SU = BottomQ.top();
         BottomQ.pop();
       } while (SU->isScheduled);
@@ -4132,12 +4143,8 @@ class InstructionShuffler : public MachineSchedStrategy {
 
   void schedNode(SUnit *SU, bool IsTopNode) override {}
 
-  void releaseTopNode(SUnit *SU) override {
-    TopQ.push(SU);
-  }
-  void releaseBottomNode(SUnit *SU) override {
-    BottomQ.push(SU);
-  }
+  void releaseTopNode(SUnit *SU) override { TopQ.push(SU); }
+  void releaseBottomNode(SUnit *SU) override { BottomQ.push(SU); }
 };
 
 } // end anonymous namespace
@@ -4151,9 +4158,10 @@ static ScheduleDAGInstrs *createInstructionShuffler(MachineSchedContext *C) {
       C, std::make_unique<InstructionShuffler>(Alternate, TopDown));
 }
 
-static MachineSchedRegistry ShufflerRegistry(
-  "shuffle", "Shuffle machine instructions alternating directions",
-  createInstructionShuffler);
+static MachineSchedRegistry
+    ShufflerRegistry("shuffle",
+                     "Shuffle machine instructions alternating directions",
+                     createInstructionShuffler);
 #endif // !NDEBUG
 
 //===----------------------------------------------------------------------===//
@@ -4163,32 +4171,29 @@ static MachineSchedRegistry ShufflerRegistry(
 #ifndef NDEBUG
 namespace llvm {
 
-template<> struct GraphTraits<
-  ScheduleDAGMI*> : public GraphTraits<ScheduleDAG*> {};
+template <>
+struct GraphTraits<ScheduleDAGMI *> : public GraphTraits<ScheduleDAG *> {};
 
-template<>
-struct DOTGraphTraits<ScheduleDAGMI*> : public DefaultDOTGraphTraits {
+template <>
+struct DOTGraphTraits<ScheduleDAGMI *> : public DefaultDOTGraphTraits {
   DOTGraphTraits(bool isSimple = false) : DefaultDOTGraphTraits(isSimple) {}
 
   static std::string getGraphName(const ScheduleDAG *G) {
     return std::string(G->MF.getName());
   }
 
-  static bool renderGraphFromBottomUp() {
-    return true;
-  }
+  static bool renderGraphFromBottomUp() { return true; }
 
   static bool isNodeHidden(const SUnit *Node, const ScheduleDAG *G) {
     if (ViewMISchedCutoff == 0)
       return false;
-    return (Node->Preds.size() > ViewMISchedCutoff
-         || Node->Succs.size() > ViewMISchedCutoff);
+    return (Node->Preds.size() > ViewMISchedCutoff ||
+            Node->Succs.size() > ViewMISchedCutoff);
   }
 
   /// If you want to override the dot attributes printed for a particular
   /// edge, override this method.
-  static std::string getEdgeAttributes(const SUnit *Node,
-                                       SUnitIterator EI,
+  static std::string getEdgeAttributes(const SUnit *Node, SUnitIterator EI,
                                        const ScheduleDAG *Graph) {
     if (EI.isArtificialDep())
       return "color=cyan,style=dashed";
@@ -4200,9 +4205,11 @@ struct DOTGraphTraits<ScheduleDAGMI*> : public DefaultDOTGraphTraits {
   static std::string getNodeLabel(const SUnit *SU, const ScheduleDAG *G) {
     std::string Str;
     raw_string_ostream SS(Str);
-    const ScheduleDAGMI *DAG = static_cast<const ScheduleDAGMI*>(G);
-    const SchedDFSResult *DFS = DAG->hasVRegLiveness() ?
-      static_cast<const ScheduleDAGMILive*>(G)->getDFSResult() : nullptr;
+    const ScheduleDAGMI *DAG = static_cast<const ScheduleDAGMI *>(G);
+    const SchedDFSResult *DFS =
+        DAG->hasVRegLiveness()
+            ? static_cast<const ScheduleDAGMILive *>(G)->getDFSResult()
+            : nullptr;
     SS << "SU:" << SU->NodeNum;
     if (DFS)
       SS << " I:" << DFS->getNumInstrs(SU);
@@ -4215,9 +4222,11 @@ struct DOTGraphTraits<ScheduleDAGMI*> : public DefaultDOTGraphTraits {
 
   static std::string getNodeAttributes(const SUnit *N, const ScheduleDAG *G) {
     std::string Str("shape=Mrecord");
-    const ScheduleDAGMI *DAG = static_cast<const ScheduleDAGMI*>(G);
-    const SchedDFSResult *DFS = DAG->hasVRegLiveness() ?
-      static_cast<const ScheduleDAGMILive*>(G)->getDFSResult() : nullptr;
+    const ScheduleDAGMI *DAG = static_cast<const ScheduleDAGMI *>(G);
+    const SchedDFSResult *DFS =
+        DAG->hasVRegLiveness()
+            ? static_cast<const ScheduleDAGMILive *>(G)->getDFSResult()
+            : nullptr;
     if (DFS) {
       Str += ",style=filled,fillcolor=\"#";
       Str += DOT::getColorString(DFS->getSubtreeID(N));
@@ -4238,7 +4247,7 @@ void ScheduleDAGMI::viewGraph(const Twine &Name, const Twine &Title) {
 #else
   errs() << "ScheduleDAGMI::viewGraph is only available in debug builds on "
          << "systems with Graphviz or gv!\n";
-#endif  // NDEBUG
+#endif // NDEBUG
 }
 
 /// Out-of-line implementation with no arguments is handy for gdb.
diff --git a/llvm/lib/CodeGen/MachineSink.cpp b/llvm/lib/CodeGen/MachineSink.cpp
index b4cbb93d758ef2f..5b7fb9745491438 100644
--- a/llvm/lib/CodeGen/MachineSink.cpp
+++ b/llvm/lib/CodeGen/MachineSink.cpp
@@ -65,14 +65,14 @@ using namespace llvm;
 #define DEBUG_TYPE "machine-sink"
 
 static cl::opt<bool>
-SplitEdges("machine-sink-split",
-           cl::desc("Split critical edges during machine sinking"),
-           cl::init(true), cl::Hidden);
+    SplitEdges("machine-sink-split",
+               cl::desc("Split critical edges during machine sinking"),
+               cl::init(true), cl::Hidden);
 
-static cl::opt<bool>
-UseBlockFreqInfo("machine-sink-bfi",
-           cl::desc("Use block frequency info to find successors to sink"),
-           cl::init(true), cl::Hidden);
+static cl::opt<bool> UseBlockFreqInfo(
+    "machine-sink-bfi",
+    cl::desc("Use block frequency info to find successors to sink"),
+    cl::init(true), cl::Hidden);
 
 static cl::opt<unsigned> SplitEdgeProbabilityThreshold(
     "machine-sink-split-probability-threshold",
@@ -103,155 +103,152 @@ static cl::opt<bool>
 
 static cl::opt<unsigned> SinkIntoCycleLimit(
     "machine-sink-cycle-limit",
-    cl::desc("The maximum number of instructions considered for cycle sinking."),
+    cl::desc(
+        "The maximum number of instructions considered for cycle sinking."),
     cl::init(50), cl::Hidden);
 
-STATISTIC(NumSunk,      "Number of machine instructions sunk");
-STATISTIC(NumCycleSunk,  "Number of machine instructions sunk into a cycle");
-STATISTIC(NumSplit,     "Number of critical edges split");
+STATISTIC(NumSunk, "Number of machine instructions sunk");
+STATISTIC(NumCycleSunk, "Number of machine instructions sunk into a cycle");
+STATISTIC(NumSplit, "Number of critical edges split");
 STATISTIC(NumCoalesces, "Number of copies coalesced");
 STATISTIC(NumPostRACopySink, "Number of copies sunk after RA");
 
 namespace {
 
-  class MachineSinking : public MachineFunctionPass {
-    const TargetInstrInfo *TII = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    MachineRegisterInfo *MRI = nullptr;      // Machine register information
-    MachineDominatorTree *DT = nullptr;      // Machine dominator tree
-    MachinePostDominatorTree *PDT = nullptr; // Machine post dominator tree
-    MachineCycleInfo *CI = nullptr;
-    MachineBlockFrequencyInfo *MBFI = nullptr;
-    const MachineBranchProbabilityInfo *MBPI = nullptr;
-    AliasAnalysis *AA = nullptr;
-    RegisterClassInfo RegClassInfo;
-
-    // Remember which edges have been considered for breaking.
-    SmallSet<std::pair<MachineBasicBlock*, MachineBasicBlock*>, 8>
-    CEBCandidates;
-    // Remember which edges we are about to split.
-    // This is different from CEBCandidates since those edges
-    // will be split.
-    SetVector<std::pair<MachineBasicBlock *, MachineBasicBlock *>> ToSplit;
-
-    DenseSet<Register> RegsToClearKillFlags;
-
-    using AllSuccsCache =
-        std::map<MachineBasicBlock *, SmallVector<MachineBasicBlock *, 4>>;
-
-    /// DBG_VALUE pointer and flag. The flag is true if this DBG_VALUE is
-    /// post-dominated by another DBG_VALUE of the same variable location.
-    /// This is necessary to detect sequences such as:
-    ///     %0 = someinst
-    ///     DBG_VALUE %0, !123, !DIExpression()
-    ///     %1 = anotherinst
-    ///     DBG_VALUE %1, !123, !DIExpression()
-    /// Where if %0 were to sink, the DBG_VAUE should not sink with it, as that
-    /// would re-order assignments.
-    using SeenDbgUser = PointerIntPair<MachineInstr *, 1>;
-
-    /// Record of DBG_VALUE uses of vregs in a block, so that we can identify
-    /// debug instructions to sink.
-    SmallDenseMap<unsigned, TinyPtrVector<SeenDbgUser>> SeenDbgUsers;
-
-    /// Record of debug variables that have had their locations set in the
-    /// current block.
-    DenseSet<DebugVariable> SeenDbgVars;
-
-    std::map<std::pair<MachineBasicBlock *, MachineBasicBlock *>, bool>
-        HasStoreCache;
-    std::map<std::pair<MachineBasicBlock *, MachineBasicBlock *>,
-             std::vector<MachineInstr *>>
-        StoreInstrCache;
-
-    /// Cached BB's register pressure.
-    std::map<MachineBasicBlock *, std::vector<unsigned>> CachedRegisterPressure;
-
-  public:
-    static char ID; // Pass identification
-
-    MachineSinking() : MachineFunctionPass(ID) {
-      initializeMachineSinkingPass(*PassRegistry::getPassRegistry());
-    }
+class MachineSinking : public MachineFunctionPass {
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  MachineRegisterInfo *MRI = nullptr;      // Machine register information
+  MachineDominatorTree *DT = nullptr;      // Machine dominator tree
+  MachinePostDominatorTree *PDT = nullptr; // Machine post dominator tree
+  MachineCycleInfo *CI = nullptr;
+  MachineBlockFrequencyInfo *MBFI = nullptr;
+  const MachineBranchProbabilityInfo *MBPI = nullptr;
+  AliasAnalysis *AA = nullptr;
+  RegisterClassInfo RegClassInfo;
+
+  // Remember which edges have been considered for breaking.
+  SmallSet<std::pair<MachineBasicBlock *, MachineBasicBlock *>, 8>
+      CEBCandidates;
+  // Remember which edges we are about to split.
+  // This is different from CEBCandidates since those edges
+  // will be split.
+  SetVector<std::pair<MachineBasicBlock *, MachineBasicBlock *>> ToSplit;
+
+  DenseSet<Register> RegsToClearKillFlags;
+
+  using AllSuccsCache =
+      std::map<MachineBasicBlock *, SmallVector<MachineBasicBlock *, 4>>;
+
+  /// DBG_VALUE pointer and flag. The flag is true if this DBG_VALUE is
+  /// post-dominated by another DBG_VALUE of the same variable location.
+  /// This is necessary to detect sequences such as:
+  ///     %0 = someinst
+  ///     DBG_VALUE %0, !123, !DIExpression()
+  ///     %1 = anotherinst
+  ///     DBG_VALUE %1, !123, !DIExpression()
+  /// Where if %0 were to sink, the DBG_VALUE should not sink with it, as that
+  /// would re-order assignments.
+  using SeenDbgUser = PointerIntPair<MachineInstr *, 1>;
+
+  /// Record of DBG_VALUE uses of vregs in a block, so that we can identify
+  /// debug instructions to sink.
+  SmallDenseMap<unsigned, TinyPtrVector<SeenDbgUser>> SeenDbgUsers;
+
+  /// Record of debug variables that have had their locations set in the
+  /// current block.
+  DenseSet<DebugVariable> SeenDbgVars;
+
+  std::map<std::pair<MachineBasicBlock *, MachineBasicBlock *>, bool>
+      HasStoreCache;
+  std::map<std::pair<MachineBasicBlock *, MachineBasicBlock *>,
+           std::vector<MachineInstr *>>
+      StoreInstrCache;
+
+  /// Cached BB's register pressure.
+  std::map<MachineBasicBlock *, std::vector<unsigned>> CachedRegisterPressure;
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      MachineFunctionPass::getAnalysisUsage(AU);
-      AU.addRequired<AAResultsWrapperPass>();
-      AU.addRequired<MachineDominatorTree>();
-      AU.addRequired<MachinePostDominatorTree>();
-      AU.addRequired<MachineCycleInfoWrapperPass>();
-      AU.addRequired<MachineBranchProbabilityInfo>();
-      AU.addPreserved<MachineCycleInfoWrapperPass>();
-      AU.addPreserved<MachineLoopInfo>();
-      if (UseBlockFreqInfo)
-        AU.addRequired<MachineBlockFrequencyInfo>();
-    }
+public:
+  static char ID; // Pass identification
 
-    void releaseMemory() override {
-      CEBCandidates.clear();
-    }
+  MachineSinking() : MachineFunctionPass(ID) {
+    initializeMachineSinkingPass(*PassRegistry::getPassRegistry());
+  }
 
-  private:
-    bool ProcessBlock(MachineBasicBlock &MBB);
-    void ProcessDbgInst(MachineInstr &MI);
-    bool isWorthBreakingCriticalEdge(MachineInstr &MI,
-                                     MachineBasicBlock *From,
-                                     MachineBasicBlock *To);
-
-    bool hasStoreBetween(MachineBasicBlock *From, MachineBasicBlock *To,
-                         MachineInstr &MI);
-
-    /// Postpone the splitting of the given critical
-    /// edge (\p From, \p To).
-    ///
-    /// We do not split the edges on the fly. Indeed, this invalidates
-    /// the dominance information and thus triggers a lot of updates
-    /// of that information underneath.
-    /// Instead, we postpone all the splits after each iteration of
-    /// the main loop. That way, the information is at least valid
-    /// for the lifetime of an iteration.
-    ///
-    /// \return True if the edge is marked as toSplit, false otherwise.
-    /// False can be returned if, for instance, this is not profitable.
-    bool PostponeSplitCriticalEdge(MachineInstr &MI,
-                                   MachineBasicBlock *From,
-                                   MachineBasicBlock *To,
-                                   bool BreakPHIEdge);
-    bool SinkInstruction(MachineInstr &MI, bool &SawStore,
-                         AllSuccsCache &AllSuccessors);
-
-    /// If we sink a COPY inst, some debug users of it's destination may no
-    /// longer be dominated by the COPY, and will eventually be dropped.
-    /// This is easily rectified by forwarding the non-dominated debug uses
-    /// to the copy source.
-    void SalvageUnsunkDebugUsersOfCopy(MachineInstr &,
-                                       MachineBasicBlock *TargetBlock);
-    bool AllUsesDominatedByBlock(Register Reg, MachineBasicBlock *MBB,
-                                 MachineBasicBlock *DefMBB, bool &BreakPHIEdge,
-                                 bool &LocalUse) const;
-    MachineBasicBlock *FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
-               bool &BreakPHIEdge, AllSuccsCache &AllSuccessors);
-
-    void FindCycleSinkCandidates(MachineCycle *Cycle, MachineBasicBlock *BB,
-                                 SmallVectorImpl<MachineInstr *> &Candidates);
-    bool SinkIntoCycle(MachineCycle *Cycle, MachineInstr &I);
-
-    bool isProfitableToSinkTo(Register Reg, MachineInstr &MI,
-                              MachineBasicBlock *MBB,
-                              MachineBasicBlock *SuccToSinkTo,
-                              AllSuccsCache &AllSuccessors);
-
-    bool PerformTrivialForwardCoalescing(MachineInstr &MI,
-                                         MachineBasicBlock *MBB);
-
-    SmallVector<MachineBasicBlock *, 4> &
-    GetAllSortedSuccessors(MachineInstr &MI, MachineBasicBlock *MBB,
-                           AllSuccsCache &AllSuccessors) const;
-
-    std::vector<unsigned> &getBBRegisterPressure(MachineBasicBlock &MBB);
-  };
+  bool runOnMachineFunction(MachineFunction &MF) override;
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    MachineFunctionPass::getAnalysisUsage(AU);
+    AU.addRequired<AAResultsWrapperPass>();
+    AU.addRequired<MachineDominatorTree>();
+    AU.addRequired<MachinePostDominatorTree>();
+    AU.addRequired<MachineCycleInfoWrapperPass>();
+    AU.addRequired<MachineBranchProbabilityInfo>();
+    AU.addPreserved<MachineCycleInfoWrapperPass>();
+    AU.addPreserved<MachineLoopInfo>();
+    if (UseBlockFreqInfo)
+      AU.addRequired<MachineBlockFrequencyInfo>();
+  }
+
+  void releaseMemory() override { CEBCandidates.clear(); }
+
+private:
+  bool ProcessBlock(MachineBasicBlock &MBB);
+  void ProcessDbgInst(MachineInstr &MI);
+  bool isWorthBreakingCriticalEdge(MachineInstr &MI, MachineBasicBlock *From,
+                                   MachineBasicBlock *To);
+
+  bool hasStoreBetween(MachineBasicBlock *From, MachineBasicBlock *To,
+                       MachineInstr &MI);
+
+  /// Postpone the splitting of the given critical
+  /// edge (\p From, \p To).
+  ///
+  /// We do not split the edges on the fly. Indeed, this invalidates
+  /// the dominance information and thus triggers a lot of updates
+  /// of that information underneath.
+  /// Instead, we postpone all the splits after each iteration of
+  /// the main loop. That way, the information is at least valid
+  /// for the lifetime of an iteration.
+  ///
+  /// \return True if the edge is marked as toSplit, false otherwise.
+  /// False can be returned if, for instance, this is not profitable.
+  bool PostponeSplitCriticalEdge(MachineInstr &MI, MachineBasicBlock *From,
+                                 MachineBasicBlock *To, bool BreakPHIEdge);
+  bool SinkInstruction(MachineInstr &MI, bool &SawStore,
+                       AllSuccsCache &AllSuccessors);
+
+  /// If we sink a COPY inst, some debug users of its destination may no
+  /// longer be dominated by the COPY, and will eventually be dropped.
+  /// This is easily rectified by forwarding the non-dominated debug uses
+  /// to the copy source.
+  void SalvageUnsunkDebugUsersOfCopy(MachineInstr &,
+                                     MachineBasicBlock *TargetBlock);
+  bool AllUsesDominatedByBlock(Register Reg, MachineBasicBlock *MBB,
+                               MachineBasicBlock *DefMBB, bool &BreakPHIEdge,
+                               bool &LocalUse) const;
+  MachineBasicBlock *FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
+                                      bool &BreakPHIEdge,
+                                      AllSuccsCache &AllSuccessors);
+
+  void FindCycleSinkCandidates(MachineCycle *Cycle, MachineBasicBlock *BB,
+                               SmallVectorImpl<MachineInstr *> &Candidates);
+  bool SinkIntoCycle(MachineCycle *Cycle, MachineInstr &I);
+
+  bool isProfitableToSinkTo(Register Reg, MachineInstr &MI,
+                            MachineBasicBlock *MBB,
+                            MachineBasicBlock *SuccToSinkTo,
+                            AllSuccsCache &AllSuccessors);
+
+  bool PerformTrivialForwardCoalescing(MachineInstr &MI,
+                                       MachineBasicBlock *MBB);
+
+  SmallVector<MachineBasicBlock *, 4> &
+  GetAllSortedSuccessors(MachineInstr &MI, MachineBasicBlock *MBB,
+                         AllSuccsCache &AllSuccessors) const;
+
+  std::vector<unsigned> &getBBRegisterPressure(MachineBasicBlock &MBB);
+};
 
 } // end anonymous namespace
 
@@ -259,14 +256,14 @@ char MachineSinking::ID = 0;
 
 char &llvm::MachineSinkingID = MachineSinking::ID;
 
-INITIALIZE_PASS_BEGIN(MachineSinking, DEBUG_TYPE,
-                      "Machine code sinking", false, false)
+INITIALIZE_PASS_BEGIN(MachineSinking, DEBUG_TYPE, "Machine code sinking", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(MachineCycleInfoWrapperPass)
 INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
-INITIALIZE_PASS_END(MachineSinking, DEBUG_TYPE,
-                    "Machine code sinking", false, false)
+INITIALIZE_PASS_END(MachineSinking, DEBUG_TYPE, "Machine code sinking", false,
+                    false)
 
 /// Return true if a target defined block prologue instruction interferes
 /// with a sink candidate.
@@ -386,7 +383,7 @@ bool MachineSinking::AllUsesDominatedByBlock(Register Reg,
     if (UseInst->isPHI()) {
       // PHI nodes use the operand in the predecessor block, not the block with
       // the PHI.
-      UseBlock = UseInst->getOperand(OpNo+1).getMBB();
+      UseBlock = UseInst->getOperand(OpNo + 1).getMBB();
     } else if (UseBlock == DefMBB) {
       LocalUse = true;
       return false;
@@ -480,7 +477,7 @@ bool MachineSinking::runOnMachineFunction(MachineFunction &MF) {
     // Process all basic blocks.
     CEBCandidates.clear();
     ToSplit.clear();
-    for (auto &MBB: MF)
+    for (auto &MBB : MF)
       MadeChange |= ProcessBlock(MBB);
 
     // If we have anything we marked as toSplit, split it now.
@@ -500,7 +497,8 @@ bool MachineSinking::runOnMachineFunction(MachineFunction &MF) {
         LLVM_DEBUG(dbgs() << " *** Not legal to break critical edge\n");
     }
     // If this iteration over the code changed anything, keep iterating.
-    if (!MadeChange) break;
+    if (!MadeChange)
+      break;
     EverMadeChange = true;
   }
 
@@ -548,12 +546,14 @@ bool MachineSinking::runOnMachineFunction(MachineFunction &MF) {
 
 bool MachineSinking::ProcessBlock(MachineBasicBlock &MBB) {
   // Can't sink anything out of a block that has less than two successors.
-  if (MBB.succ_size() <= 1 || MBB.empty()) return false;
+  if (MBB.succ_size() <= 1 || MBB.empty())
+    return false;
 
   // Don't bother sinking code out of unreachable blocks. In addition to being
   // unprofitable, it can also lead to infinite looping, because in an
   // unreachable cycle there may be nowhere to stop.
-  if (!DT->isReachableFromEntry(&MBB)) return false;
+  if (!DT->isReachableFromEntry(&MBB))
+    return false;
 
   bool MadeChange = false;
 
@@ -565,7 +565,7 @@ bool MachineSinking::ProcessBlock(MachineBasicBlock &MBB) {
   --I;
   bool ProcessedBegin, SawStore = false;
   do {
-    MachineInstr &MI = *I;  // The instruction to sink.
+    MachineInstr &MI = *I; // The instruction to sink.
 
     // Predecrement I (if it's not begin) so that it isn't invalidated by
     // sinking.
@@ -633,8 +633,9 @@ bool MachineSinking::isWorthBreakingCriticalEdge(MachineInstr &MI,
   if (!MI.isCopy() && !TII->isAsCheapAsAMove(MI))
     return true;
 
-  if (From->isSuccessor(To) && MBPI->getEdgeProbability(From, To) <=
-      BranchProbability(SplitEdgeProbabilityThreshold, 100))
+  if (From->isSuccessor(To) &&
+      MBPI->getEdgeProbability(From, To) <=
+          BranchProbability(SplitEdgeProbabilityThreshold, 100))
     return true;
 
   // MI is cheap, we probably don't want to break the critical edge for it.
@@ -777,7 +778,7 @@ bool MachineSinking::isProfitableToSinkTo(Register Reg, MachineInstr &MI,
                                           MachineBasicBlock *MBB,
                                           MachineBasicBlock *SuccToSinkTo,
                                           AllSuccsCache &AllSuccessors) {
-  assert (SuccToSinkTo && "Invalid SinkTo Candidate BB");
+  assert(SuccToSinkTo && "Invalid SinkTo Candidate BB");
 
   if (MBB == SuccToSinkTo)
     return false;
@@ -811,8 +812,8 @@ bool MachineSinking::isProfitableToSinkTo(Register Reg, MachineInstr &MI,
 
   MachineCycle *MCycle = CI->getCycle(MBB);
 
-  // If the instruction is not inside a cycle, it is not profitable to sink MI to
-  // a post dominate block SuccToSinkTo.
+  // If the instruction is not inside a cycle, it is not profitable to sink MI
+  // to a post-dominating block SuccToSinkTo.
   if (!MCycle)
     return false;
 
@@ -843,7 +844,8 @@ bool MachineSinking::isProfitableToSinkTo(Register Reg, MachineInstr &MI,
 
     if (Reg.isPhysical()) {
       // Don't handle non-constant and non-ignorable physical register uses.
-      if (MO.isUse() && !MRI->isConstantPhysReg(Reg) && !TII->isIgnorableUse(MO))
+      if (MO.isUse() && !MRI->isConstantPhysReg(Reg) &&
+          !TII->isIgnorableUse(MO))
         return false;
       continue;
     }
@@ -930,7 +932,7 @@ MachineBasicBlock *
 MachineSinking::FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
                                  bool &BreakPHIEdge,
                                  AllSuccsCache &AllSuccessors) {
-  assert (MBB && "Invalid MachineBasicBlock!");
+  assert(MBB && "Invalid MachineBasicBlock!");
 
   // loop over all the operands of the specified instruction.  If there is
   // anything we can't handle, bail out.
@@ -939,10 +941,12 @@ MachineSinking::FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
   // decide.
   MachineBasicBlock *SuccToSinkTo = nullptr;
   for (const MachineOperand &MO : MI.operands()) {
-    if (!MO.isReg()) continue;  // Ignore non-register operands.
+    if (!MO.isReg())
+      continue; // Ignore non-register operands.
 
     Register Reg = MO.getReg();
-    if (Reg == 0) continue;
+    if (Reg == 0)
+      continue;
 
     if (Reg.isPhysical()) {
       if (MO.isUse()) {
@@ -957,7 +961,8 @@ MachineSinking::FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
       }
     } else {
       // Virtual register uses are always safe to sink.
-      if (MO.isUse()) continue;
+      if (MO.isUse())
+        continue;
 
       // If it's not safe to move defs of the register class, then abort.
       if (!TII->isSafeToMoveRegClassDefs(MRI->getRegClass(Reg)))
@@ -969,8 +974,8 @@ MachineSinking::FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
         // If a previous operand picked a block to sink to, then this operand
         // must be sinkable to the same block.
         bool LocalUse = false;
-        if (!AllUsesDominatedByBlock(Reg, SuccToSinkTo, MBB,
-                                     BreakPHIEdge, LocalUse))
+        if (!AllUsesDominatedByBlock(Reg, SuccToSinkTo, MBB, BreakPHIEdge,
+                                     LocalUse))
           return nullptr;
 
         continue;
@@ -983,8 +988,8 @@ MachineSinking::FindSuccToSinkTo(MachineInstr &MI, MachineBasicBlock *MBB,
       for (MachineBasicBlock *SuccBlock :
            GetAllSortedSuccessors(MI, MBB, AllSuccessors)) {
         bool LocalUse = false;
-        if (AllUsesDominatedByBlock(Reg, SuccBlock, MBB,
-                                    BreakPHIEdge, LocalUse)) {
+        if (AllUsesDominatedByBlock(Reg, SuccBlock, MBB, BreakPHIEdge,
+                                    LocalUse)) {
           SuccToSinkTo = SuccBlock;
           break;
         }
@@ -1224,7 +1229,7 @@ bool MachineSinking::hasStoreBetween(MachineBasicBlock *From,
         for (auto *DomBB : HandledDomBlocks) {
           if (DomBB != BB && DT->dominates(DomBB, BB))
             HasStoreCache[std::make_pair(DomBB, To)] = true;
-          else if(DomBB != BB && DT->dominates(BB, DomBB))
+          else if (DomBB != BB && DT->dominates(BB, DomBB))
             HasStoreCache[std::make_pair(From, DomBB)] = true;
         }
         HasStoreCache[BlockPair] = true;
@@ -1238,7 +1243,7 @@ bool MachineSinking::hasStoreBetween(MachineBasicBlock *From,
           for (auto *DomBB : HandledDomBlocks) {
             if (DomBB != BB && DT->dominates(DomBB, BB))
               HasStoreCache[std::make_pair(DomBB, To)] = true;
-            else if(DomBB != BB && DT->dominates(BB, DomBB))
+            else if (DomBB != BB && DT->dominates(BB, DomBB))
               HasStoreCache[std::make_pair(From, DomBB)] = true;
           }
           HasStoreCache[BlockPair] = true;
@@ -1304,8 +1309,8 @@ bool MachineSinking::SinkIntoCycle(MachineCycle *Cycle, MachineInstr &I) {
       CanSink = false;
       break;
     }
-    LLVM_DEBUG(dbgs() << "CycleSink:   Setting nearest common dom block: " <<
-               printMBBReference(*SinkBlock) << "\n");
+    LLVM_DEBUG(dbgs() << "CycleSink:   Setting nearest common dom block: "
+                      << printMBBReference(*SinkBlock) << "\n");
   }
 
   if (!CanSink) {
@@ -1431,8 +1436,8 @@ bool MachineSinking::SinkInstruction(MachineInstr &MI, bool &SawStore,
       // Mark this edge as to be split.
       // If the edge can actually be split, the next iteration of the main loop
       // will sink MI in the newly created block.
-      bool Status =
-        PostponeSplitCriticalEdge(MI, ParentBlock, SuccToSinkTo, BreakPHIEdge);
+      bool Status = PostponeSplitCriticalEdge(MI, ParentBlock, SuccToSinkTo,
+                                              BreakPHIEdge);
       if (!Status)
         LLVM_DEBUG(dbgs() << " *** PUNTING: Not legal or profitable to "
                              "break critical edge\n");
@@ -1445,8 +1450,8 @@ bool MachineSinking::SinkInstruction(MachineInstr &MI, bool &SawStore,
     // BreakPHIEdge is true if all the uses are in the successor MBB being
     // sunken into and they are all PHI nodes. In this case, machine-sink must
     // break the critical edge first.
-    bool Status = PostponeSplitCriticalEdge(MI, ParentBlock,
-                                            SuccToSinkTo, BreakPHIEdge);
+    bool Status =
+        PostponeSplitCriticalEdge(MI, ParentBlock, SuccToSinkTo, BreakPHIEdge);
     if (!Status)
       LLVM_DEBUG(dbgs() << " *** PUNTING: Not legal or profitable to "
                            "break critical edge\n");
@@ -1857,8 +1862,7 @@ bool PostRAMachineSinking::tryToSinkCopy(MachineBasicBlock &CurBB,
     MachineBasicBlock::iterator InsertPos =
         SuccBB->SkipPHIsAndLabels(SuccBB->begin());
     if (blockPrologueInterferes(SuccBB, InsertPos, MI, TRI, TII, nullptr)) {
-      LLVM_DEBUG(
-          dbgs() << " *** Not sinking: prologue interference\n");
+      LLVM_DEBUG(dbgs() << " *** Not sinking: prologue interference\n");
       continue;
     }
 
diff --git a/llvm/lib/CodeGen/MachineSizeOpts.cpp b/llvm/lib/CodeGen/MachineSizeOpts.cpp
index 53bed7397d0992e..e09dde44919a7c9 100644
--- a/llvm/lib/CodeGen/MachineSizeOpts.cpp
+++ b/llvm/lib/CodeGen/MachineSizeOpts.cpp
@@ -12,8 +12,8 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/CodeGen/MachineSizeOpts.h"
-#include "llvm/CodeGen/MBFIWrapper.h"
 #include "llvm/Analysis/ProfileSummaryInfo.h"
+#include "llvm/CodeGen/MBFIWrapper.h"
 #include "llvm/CodeGen/MachineBlockFrequencyInfo.h"
 
 using namespace llvm;
@@ -40,8 +40,7 @@ bool llvm::shouldOptimizeForSize(const MachineBasicBlock *MBB,
 }
 
 bool llvm::shouldOptimizeForSize(const MachineBasicBlock *MBB,
-                                 ProfileSummaryInfo *PSI,
-                                 MBFIWrapper *MBFIW,
+                                 ProfileSummaryInfo *PSI, MBFIWrapper *MBFIW,
                                  PGSOQueryType QueryType) {
   assert(MBB);
   if (!PSI || !MBFIW)
diff --git a/llvm/lib/CodeGen/MachineTraceMetrics.cpp b/llvm/lib/CodeGen/MachineTraceMetrics.cpp
index 3e6f36fe936fff0..09860aa6bfc8d3d 100644
--- a/llvm/lib/CodeGen/MachineTraceMetrics.cpp
+++ b/llvm/lib/CodeGen/MachineTraceMetrics.cpp
@@ -44,12 +44,12 @@ char MachineTraceMetrics::ID = 0;
 
 char &llvm::MachineTraceMetricsID = MachineTraceMetrics::ID;
 
-INITIALIZE_PASS_BEGIN(MachineTraceMetrics, DEBUG_TYPE,
-                      "Machine Trace Metrics", false, true)
+INITIALIZE_PASS_BEGIN(MachineTraceMetrics, DEBUG_TYPE, "Machine Trace Metrics",
+                      false, true)
 INITIALIZE_PASS_DEPENDENCY(MachineBranchProbabilityInfo)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
-INITIALIZE_PASS_END(MachineTraceMetrics, DEBUG_TYPE,
-                    "Machine Trace Metrics", false, true)
+INITIALIZE_PASS_END(MachineTraceMetrics, DEBUG_TYPE, "Machine Trace Metrics",
+                    false, true)
 
 MachineTraceMetrics::MachineTraceMetrics() : MachineFunctionPass(ID) {
   std::fill(std::begin(Ensembles), std::end(Ensembles), nullptr);
@@ -72,7 +72,7 @@ bool MachineTraceMetrics::runOnMachineFunction(MachineFunction &Func) {
   SchedModel.init(&ST);
   BlockInfo.resize(MF->getNumBlockIDs());
   ProcReleaseAtCycles.resize(MF->getNumBlockIDs() *
-                            SchedModel.getNumProcResourceKinds());
+                             SchedModel.getNumProcResourceKinds());
   return false;
 }
 
@@ -93,7 +93,7 @@ void MachineTraceMetrics::releaseMemory() {
 // those instructions don't depend on any given trace strategy.
 
 /// Compute the resource usage in basic block MBB.
-const MachineTraceMetrics::FixedBlockInfo*
+const MachineTraceMetrics::FixedBlockInfo *
 MachineTraceMetrics::getResources(const MachineBasicBlock *MBB) {
   assert(MBB && "No basic block");
   FixedBlockInfo *FBI = &BlockInfo[MBB->getNumber()];
@@ -122,9 +122,9 @@ MachineTraceMetrics::getResources(const MachineBasicBlock *MBB) {
     if (!SC->isValid())
       continue;
 
-    for (TargetSchedModel::ProcResIter
-         PI = SchedModel.getWriteProcResBegin(SC),
-         PE = SchedModel.getWriteProcResEnd(SC); PI != PE; ++PI) {
+    for (TargetSchedModel::ProcResIter PI = SchedModel.getWriteProcResBegin(SC),
+                                       PE = SchedModel.getWriteProcResEnd(SC);
+         PI != PE; ++PI) {
       assert(PI->ProcResourceIdx < PRKinds && "Bad processor resource kind");
       PRCycles[PI->ProcResourceIdx] += PI->ReleaseAtCycle;
     }
@@ -135,7 +135,7 @@ MachineTraceMetrics::getResources(const MachineBasicBlock *MBB) {
   unsigned PROffset = MBB->getNumber() * PRKinds;
   for (unsigned K = 0; K != PRKinds; ++K)
     ProcReleaseAtCycles[PROffset + K] =
-      PRCycles[K] * SchedModel.getResourceFactor(K);
+        PRCycles[K] * SchedModel.getResourceFactor(K);
 
   return FBI;
 }
@@ -145,7 +145,7 @@ MachineTraceMetrics::getProcReleaseAtCycles(unsigned MBBNum) const {
   assert(BlockInfo[MBBNum].hasResources() &&
          "getResources() must be called before getProcReleaseAtCycles()");
   unsigned PRKinds = SchedModel.getNumProcResourceKinds();
-  assert((MBBNum+1) * PRKinds <= ProcReleaseAtCycles.size());
+  assert((MBBNum + 1) * PRKinds <= ProcReleaseAtCycles.size());
   return ArrayRef(ProcReleaseAtCycles.data() + MBBNum * PRKinds, PRKinds);
 }
 
@@ -153,8 +153,7 @@ MachineTraceMetrics::getProcReleaseAtCycles(unsigned MBBNum) const {
 //                         Ensemble utility functions
 //===----------------------------------------------------------------------===//
 
-MachineTraceMetrics::Ensemble::Ensemble(MachineTraceMetrics *ct)
-  : MTM(*ct) {
+MachineTraceMetrics::Ensemble::Ensemble(MachineTraceMetrics *ct) : MTM(*ct) {
   BlockInfo.resize(MTM.BlockInfo.size());
   unsigned PRKinds = MTM.SchedModel.getNumProcResourceKinds();
   ProcResourceDepths.resize(MTM.BlockInfo.size() * PRKinds);
@@ -164,15 +163,15 @@ MachineTraceMetrics::Ensemble::Ensemble(MachineTraceMetrics *ct)
 // Virtual destructor serves as an anchor.
 MachineTraceMetrics::Ensemble::~Ensemble() = default;
 
-const MachineLoop*
+const MachineLoop *
 MachineTraceMetrics::Ensemble::getLoopFor(const MachineBasicBlock *MBB) const {
   return MTM.Loops->getLoopFor(MBB);
 }
 
 // Update resource-related information in the TraceBlockInfo for MBB.
 // Only update resources related to the trace above MBB.
-void MachineTraceMetrics::Ensemble::
-computeDepthResources(const MachineBasicBlock *MBB) {
+void MachineTraceMetrics::Ensemble::computeDepthResources(
+    const MachineBasicBlock *MBB) {
   TraceBlockInfo *TBI = &BlockInfo[MBB->getNumber()];
   unsigned PRKinds = MTM.SchedModel.getNumProcResourceKinds();
   unsigned PROffset = MBB->getNumber() * PRKinds;
@@ -204,8 +203,8 @@ computeDepthResources(const MachineBasicBlock *MBB) {
 
 // Update resource-related information in the TraceBlockInfo for MBB.
 // Only update resources related to the trace below MBB.
-void MachineTraceMetrics::Ensemble::
-computeHeightResources(const MachineBasicBlock *MBB) {
+void MachineTraceMetrics::Ensemble::computeHeightResources(
+    const MachineBasicBlock *MBB) {
   TraceBlockInfo *TBI = &BlockInfo[MBB->getNumber()];
   unsigned PRKinds = MTM.SchedModel.getNumProcResourceKinds();
   unsigned PROffset = MBB->getNumber() * PRKinds;
@@ -237,18 +236,18 @@ computeHeightResources(const MachineBasicBlock *MBB) {
 
 // Check if depth resources for MBB are valid and return the TBI.
 // Return NULL if the resources have been invalidated.
-const MachineTraceMetrics::TraceBlockInfo*
-MachineTraceMetrics::Ensemble::
-getDepthResources(const MachineBasicBlock *MBB) const {
+const MachineTraceMetrics::TraceBlockInfo *
+MachineTraceMetrics::Ensemble::getDepthResources(
+    const MachineBasicBlock *MBB) const {
   const TraceBlockInfo *TBI = &BlockInfo[MBB->getNumber()];
   return TBI->hasValidDepth() ? TBI : nullptr;
 }
 
 // Check if height resources for MBB are valid and return the TBI.
 // Return NULL if the resources have been invalidated.
-const MachineTraceMetrics::TraceBlockInfo*
-MachineTraceMetrics::Ensemble::
-getHeightResources(const MachineBasicBlock *MBB) const {
+const MachineTraceMetrics::TraceBlockInfo *
+MachineTraceMetrics::Ensemble::getHeightResources(
+    const MachineBasicBlock *MBB) const {
   const TraceBlockInfo *TBI = &BlockInfo[MBB->getNumber()];
   return TBI->hasValidHeight() ? TBI : nullptr;
 }
@@ -260,10 +259,9 @@ getHeightResources(const MachineBasicBlock *MBB) const {
 ///
 /// Compare TraceBlockInfo::InstrDepth.
 ArrayRef<unsigned>
-MachineTraceMetrics::Ensemble::
-getProcResourceDepths(unsigned MBBNum) const {
+MachineTraceMetrics::Ensemble::getProcResourceDepths(unsigned MBBNum) const {
   unsigned PRKinds = MTM.SchedModel.getNumProcResourceKinds();
-  assert((MBBNum+1) * PRKinds <= ProcResourceDepths.size());
+  assert((MBBNum + 1) * PRKinds <= ProcResourceDepths.size());
   return ArrayRef(ProcResourceDepths.data() + MBBNum * PRKinds, PRKinds);
 }
 
@@ -273,10 +271,9 @@ getProcResourceDepths(unsigned MBBNum) const {
 ///
 /// Compare TraceBlockInfo::InstrHeight.
 ArrayRef<unsigned>
-MachineTraceMetrics::Ensemble::
-getProcResourceHeights(unsigned MBBNum) const {
+MachineTraceMetrics::Ensemble::getProcResourceHeights(unsigned MBBNum) const {
   unsigned PRKinds = MTM.SchedModel.getNumProcResourceKinds();
-  assert((MBBNum+1) * PRKinds <= ProcResourceHeights.size());
+  assert((MBBNum + 1) * PRKinds <= ProcResourceHeights.size());
   return ArrayRef(ProcResourceHeights.data() + MBBNum * PRKinds, PRKinds);
 }
 
@@ -310,12 +307,12 @@ namespace {
 
 class MinInstrCountEnsemble : public MachineTraceMetrics::Ensemble {
   const char *getName() const override { return "MinInstr"; }
-  const MachineBasicBlock *pickTracePred(const MachineBasicBlock*) override;
-  const MachineBasicBlock *pickTraceSucc(const MachineBasicBlock*) override;
+  const MachineBasicBlock *pickTracePred(const MachineBasicBlock *) override;
+  const MachineBasicBlock *pickTraceSucc(const MachineBasicBlock *) override;
 
 public:
   MinInstrCountEnsemble(MachineTraceMetrics *mtm)
-    : MachineTraceMetrics::Ensemble(mtm) {}
+      : MachineTraceMetrics::Ensemble(mtm) {}
 };
 
 /// Pick only the current basic block for the trace and do not choose any
@@ -336,7 +333,7 @@ class LocalEnsemble : public MachineTraceMetrics::Ensemble {
 } // end anonymous namespace
 
 // Select the preferred predecessor for MBB.
-const MachineBasicBlock*
+const MachineBasicBlock *
 MinInstrCountEnsemble::pickTracePred(const MachineBasicBlock *MBB) {
   if (MBB->pred_empty())
     return nullptr;
@@ -349,7 +346,7 @@ MinInstrCountEnsemble::pickTracePred(const MachineBasicBlock *MBB) {
   unsigned BestDepth = 0;
   for (const MachineBasicBlock *Pred : MBB->predecessors()) {
     const MachineTraceMetrics::TraceBlockInfo *PredTBI =
-      getDepthResources(Pred);
+        getDepthResources(Pred);
     // Ignore cycles that aren't natural loops.
     if (!PredTBI)
       continue;
@@ -364,7 +361,7 @@ MinInstrCountEnsemble::pickTracePred(const MachineBasicBlock *MBB) {
 }
 
 // Select the preferred successor for MBB.
-const MachineBasicBlock*
+const MachineBasicBlock *
 MinInstrCountEnsemble::pickTraceSucc(const MachineBasicBlock *MBB) {
   if (MBB->succ_empty())
     return nullptr;
@@ -379,7 +376,7 @@ MinInstrCountEnsemble::pickTraceSucc(const MachineBasicBlock *MBB) {
     if (isExitingLoop(CurLoop, getLoopFor(Succ)))
       continue;
     const MachineTraceMetrics::TraceBlockInfo *SuccTBI =
-      getHeightResources(Succ);
+        getHeightResources(Succ);
     // Ignore cycles that aren't natural loops.
     if (!SuccTBI)
       continue;
@@ -408,7 +405,8 @@ MachineTraceMetrics::getEnsemble(MachineTraceStrategy strategy) {
     return (E = new MinInstrCountEnsemble(this));
   case MachineTraceStrategy::TS_Local:
     return (E = new LocalEnsemble(this));
-  default: llvm_unreachable("Invalid trace strategy enum");
+  default:
+    llvm_unreachable("Invalid trace strategy enum");
   }
 }
 
@@ -444,12 +442,13 @@ namespace {
 
 struct LoopBounds {
   MutableArrayRef<MachineTraceMetrics::TraceBlockInfo> Blocks;
-  SmallPtrSet<const MachineBasicBlock*, 8> Visited;
+  SmallPtrSet<const MachineBasicBlock *, 8> Visited;
   const MachineLoopInfo *Loops;
   bool Downward = false;
 
   LoopBounds(MutableArrayRef<MachineTraceMetrics::TraceBlockInfo> blocks,
-             const MachineLoopInfo *loops) : Blocks(blocks), Loops(loops) {}
+             const MachineLoopInfo *loops)
+      : Blocks(blocks), Loops(loops) {}
 };
 
 } // end anonymous namespace
@@ -458,14 +457,13 @@ struct LoopBounds {
 // it is limited to the current loop and doesn't traverse the loop back edges.
 namespace llvm {
 
-template<>
-class po_iterator_storage<LoopBounds, true> {
+template <> class po_iterator_storage<LoopBounds, true> {
   LoopBounds &LB;
 
 public:
   po_iterator_storage(LoopBounds &lb) : LB(lb) {}
 
-  void finishPostorder(const MachineBasicBlock*) {}
+  void finishPostorder(const MachineBasicBlock *) {}
 
   bool insertEdge(std::optional<const MachineBasicBlock *> From,
                   const MachineBasicBlock *To) {
@@ -537,9 +535,9 @@ void MachineTraceMetrics::Ensemble::computeTrace(const MachineBasicBlock *MBB) {
 }
 
 /// Invalidate traces through BadMBB.
-void
-MachineTraceMetrics::Ensemble::invalidate(const MachineBasicBlock *BadMBB) {
-  SmallVector<const MachineBasicBlock*, 16> WorkList;
+void MachineTraceMetrics::Ensemble::invalidate(
+    const MachineBasicBlock *BadMBB) {
+  SmallVector<const MachineBasicBlock *, 16> WorkList;
   TraceBlockInfo &BadTBI = BlockInfo[BadMBB->getNumber()];
 
   // Invalidate height resources of blocks above MBB.
@@ -648,11 +646,11 @@ struct DataDep {
   unsigned UseOp;
 
   DataDep(const MachineInstr *DefMI, unsigned DefOp, unsigned UseOp)
-    : DefMI(DefMI), DefOp(DefOp), UseOp(UseOp) {}
+      : DefMI(DefMI), DefOp(DefOp), UseOp(UseOp) {}
 
   /// Create a DataDep from an SSA form virtual register.
   DataDep(const MachineRegisterInfo *MRI, unsigned VirtReg, unsigned UseOp)
-    : UseOp(UseOp) {
+      : UseOp(UseOp) {
     assert(Register::isVirtualRegister(VirtReg));
     MachineRegisterInfo::def_iterator DefI = MRI->def_begin(VirtReg);
     assert(!DefI.atEnd() && "Register has no defs");
@@ -772,8 +770,8 @@ static void updatePhysDepsDownwards(const MachineInstr *UseMI,
 ///
 /// This function computes the second number from the live-in list of the
 /// center block.
-unsigned MachineTraceMetrics::Ensemble::
-computeCrossBlockCriticalPath(const TraceBlockInfo &TBI) {
+unsigned MachineTraceMetrics::Ensemble::computeCrossBlockCriticalPath(
+    const TraceBlockInfo &TBI) {
   assert(TBI.HasValidInstrDepths && "Missing depth info");
   assert(TBI.HasValidInstrHeights && "Missing height info");
   unsigned MaxLen = 0;
@@ -791,9 +789,9 @@ computeCrossBlockCriticalPath(const TraceBlockInfo &TBI) {
   return MaxLen;
 }
 
-void MachineTraceMetrics::Ensemble::
-updateDepth(MachineTraceMetrics::TraceBlockInfo &TBI, const MachineInstr &UseMI,
-            SparseSet<LiveRegUnit> &RegUnits) {
+void MachineTraceMetrics::Ensemble::updateDepth(
+    MachineTraceMetrics::TraceBlockInfo &TBI, const MachineInstr &UseMI,
+    SparseSet<LiveRegUnit> &RegUnits) {
   SmallVector<DataDep, 8> Deps;
   // Collect all data dependencies.
   if (UseMI.isPHI())
@@ -804,8 +802,8 @@ updateDepth(MachineTraceMetrics::TraceBlockInfo &TBI, const MachineInstr &UseMI,
   // Filter and process dependencies, computing the earliest issue cycle.
   unsigned Cycle = 0;
   for (const DataDep &Dep : Deps) {
-    const TraceBlockInfo&DepTBI =
-      BlockInfo[Dep.DefMI->getParent()->getNumber()];
+    const TraceBlockInfo &DepTBI =
+        BlockInfo[Dep.DefMI->getParent()->getNumber()];
     // Ignore dependencies from outside the current trace.
     if (!DepTBI.isUsefulDominator(TBI))
       continue;
@@ -813,8 +811,8 @@ updateDepth(MachineTraceMetrics::TraceBlockInfo &TBI, const MachineInstr &UseMI,
     unsigned DepCycle = Cycles.lookup(Dep.DefMI).Depth;
     // Add latency if DefMI is a real instruction. Transients get latency 0.
     if (!Dep.DefMI->isTransient())
-      DepCycle += MTM.SchedModel
-        .computeOperandLatency(Dep.DefMI, Dep.DefOp, &UseMI, Dep.UseOp);
+      DepCycle += MTM.SchedModel.computeOperandLatency(Dep.DefMI, Dep.DefOp,
+                                                       &UseMI, Dep.UseOp);
     Cycle = std::max(Cycle, DepCycle);
   }
   // Remember the instruction depth.
@@ -830,28 +828,27 @@ updateDepth(MachineTraceMetrics::TraceBlockInfo &TBI, const MachineInstr &UseMI,
   }
 }
 
-void MachineTraceMetrics::Ensemble::
-updateDepth(const MachineBasicBlock *MBB, const MachineInstr &UseMI,
-            SparseSet<LiveRegUnit> &RegUnits) {
+void MachineTraceMetrics::Ensemble::updateDepth(
+    const MachineBasicBlock *MBB, const MachineInstr &UseMI,
+    SparseSet<LiveRegUnit> &RegUnits) {
   updateDepth(BlockInfo[MBB->getNumber()], UseMI, RegUnits);
 }
 
-void MachineTraceMetrics::Ensemble::
-updateDepths(MachineBasicBlock::iterator Start,
-             MachineBasicBlock::iterator End,
-             SparseSet<LiveRegUnit> &RegUnits) {
-    for (; Start != End; Start++)
-      updateDepth(Start->getParent(), *Start, RegUnits);
+void MachineTraceMetrics::Ensemble::updateDepths(
+    MachineBasicBlock::iterator Start, MachineBasicBlock::iterator End,
+    SparseSet<LiveRegUnit> &RegUnits) {
+  for (; Start != End; Start++)
+    updateDepth(Start->getParent(), *Start, RegUnits);
 }
 
 /// Compute instruction depths for all instructions above or in MBB in its
 /// trace. This assumes that the trace through MBB has already been computed.
-void MachineTraceMetrics::Ensemble::
-computeInstrDepths(const MachineBasicBlock *MBB) {
+void MachineTraceMetrics::Ensemble::computeInstrDepths(
+    const MachineBasicBlock *MBB) {
   // The top of the trace may already be computed, and HasValidInstrDepths
   // implies Head->HasValidInstrDepths, so we only need to start from the first
   // block in the trace that needs to be recomputed.
-  SmallVector<const MachineBasicBlock*, 8> Stack;
+  SmallVector<const MachineBasicBlock *, 8> Stack;
   do {
     TraceBlockInfo &TBI = BlockInfo[MBB->getNumber()];
     assert(TBI.hasValidDepth() && "Incomplete trace");
@@ -885,7 +882,7 @@ computeInstrDepths(const MachineBasicBlock *MBB) {
           unsigned Factor = MTM.SchedModel.getResourceFactor(K);
           dbgs() << format("%6uc @ ", MTM.getCycles(PRDepths[K]))
                  << MTM.SchedModel.getProcResource(K)->Name << " ("
-                 << PRDepths[K]/Factor << " ops x" << Factor << ")\n";
+                 << PRDepths[K] / Factor << " ops x" << Factor << ")\n";
         }
     });
 
@@ -984,9 +981,9 @@ static bool pushDepHeight(const DataDep &Dep, const MachineInstr &UseMI,
 /// Assuming that the virtual register defined by DefMI:DefOp was used by
 /// Trace.back(), add it to the live-in lists of all the blocks in Trace. Stop
 /// when reaching the block that contains DefMI.
-void MachineTraceMetrics::Ensemble::
-addLiveIns(const MachineInstr *DefMI, unsigned DefOp,
-           ArrayRef<const MachineBasicBlock*> Trace) {
+void MachineTraceMetrics::Ensemble::addLiveIns(
+    const MachineInstr *DefMI, unsigned DefOp,
+    ArrayRef<const MachineBasicBlock *> Trace) {
   assert(!Trace.empty() && "Trace should contain at least one block");
   Register Reg = DefMI->getOperand(DefOp).getReg();
   assert(Reg.isVirtual());
@@ -1005,11 +1002,11 @@ addLiveIns(const MachineInstr *DefMI, unsigned DefOp,
 /// Compute instruction heights in the trace through MBB. This updates MBB and
 /// the blocks below it in the trace. It is assumed that the trace has already
 /// been computed.
-void MachineTraceMetrics::Ensemble::
-computeInstrHeights(const MachineBasicBlock *MBB) {
+void MachineTraceMetrics::Ensemble::computeInstrHeights(
+    const MachineBasicBlock *MBB) {
   // The bottom of the trace may already be computed.
   // Find the blocks that need updating.
-  SmallVector<const MachineBasicBlock*, 8> Stack;
+  SmallVector<const MachineBasicBlock *, 8> Stack;
   do {
     TraceBlockInfo &TBI = BlockInfo[MBB->getNumber()];
     assert(TBI.hasValidHeight() && "Incomplete trace");
@@ -1050,7 +1047,7 @@ computeInstrHeights(const MachineBasicBlock *MBB) {
 
   // Go through the trace blocks in bottom-up order.
   SmallVector<DataDep, 8> Deps;
-  for (;!Stack.empty(); Stack.pop_back()) {
+  for (; !Stack.empty(); Stack.pop_back()) {
     MBB = Stack.back();
     LLVM_DEBUG(dbgs() << "Heights for " << printMBBReference(*MBB) << ":\n");
     TraceBlockInfo &TBI = BlockInfo[MBB->getNumber()];
@@ -1065,7 +1062,7 @@ computeInstrHeights(const MachineBasicBlock *MBB) {
           unsigned Factor = MTM.SchedModel.getResourceFactor(K);
           dbgs() << format("%6uc @ ", MTM.getCycles(PRHeights[K]))
                  << MTM.SchedModel.getProcResource(K)->Name << " ("
-                 << PRHeights[K]/Factor << " ops x" << Factor << ")\n";
+                 << PRHeights[K] / Factor << " ops x" << Factor << ")\n";
         }
     });
 
@@ -1154,8 +1151,8 @@ computeInstrHeights(const MachineBasicBlock *MBB) {
     if (!TBI.HasValidInstrDepths)
       continue;
     // Add live-ins to the critical path length.
-    TBI.CriticalPath = std::max(TBI.CriticalPath,
-                                computeCrossBlockCriticalPath(TBI));
+    TBI.CriticalPath =
+        std::max(TBI.CriticalPath, computeCrossBlockCriticalPath(TBI));
     LLVM_DEBUG(dbgs() << "Critical path: " << TBI.CriticalPath << '\n');
   }
 }
@@ -1236,8 +1233,7 @@ unsigned MachineTraceMetrics::Trace::getResourceLength(
 
   // Capture computing cycles from extra instructions
   auto extraCycles = [this](ArrayRef<const MCSchedClassDesc *> Instrs,
-                            unsigned ResourceIdx)
-                         ->unsigned {
+                            unsigned ResourceIdx) -> unsigned {
     unsigned Cycles = 0;
     for (const MCSchedClassDesc *SC : Instrs) {
       if (!SC->isValid())
diff --git a/llvm/lib/CodeGen/MachineVerifier.cpp b/llvm/lib/CodeGen/MachineVerifier.cpp
index cfca2d4ba6798f7..6c11e3bca7d1820 100644
--- a/llvm/lib/CodeGen/MachineVerifier.cpp
+++ b/llvm/lib/CodeGen/MachineVerifier.cpp
@@ -89,244 +89,245 @@ using namespace llvm;
 
 namespace {
 
-  struct MachineVerifier {
-    MachineVerifier(Pass *pass, const char *b) : PASS(pass), Banner(b) {}
-
-    MachineVerifier(const char *b, LiveVariables *LiveVars,
-                    LiveIntervals *LiveInts, LiveStacks *LiveStks,
-                    SlotIndexes *Indexes)
-        : Banner(b), LiveVars(LiveVars), LiveInts(LiveInts), LiveStks(LiveStks),
-          Indexes(Indexes) {}
-
-    unsigned verify(const MachineFunction &MF);
-
-    Pass *const PASS = nullptr;
-    const char *Banner;
-    const MachineFunction *MF = nullptr;
-    const TargetMachine *TM = nullptr;
-    const TargetInstrInfo *TII = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    const MachineRegisterInfo *MRI = nullptr;
-    const RegisterBankInfo *RBI = nullptr;
-
-    unsigned foundErrors = 0;
-
-    // Avoid querying the MachineFunctionProperties for each operand.
-    bool isFunctionRegBankSelected = false;
-    bool isFunctionSelected = false;
-    bool isFunctionTracksDebugUserValues = false;
-
-    using RegVector = SmallVector<Register, 16>;
-    using RegMaskVector = SmallVector<const uint32_t *, 4>;
-    using RegSet = DenseSet<Register>;
-    using RegMap = DenseMap<Register, const MachineInstr *>;
-    using BlockSet = SmallPtrSet<const MachineBasicBlock *, 8>;
-
-    const MachineInstr *FirstNonPHI = nullptr;
-    const MachineInstr *FirstTerminator = nullptr;
-    BlockSet FunctionBlocks;
-
-    BitVector regsReserved;
-    RegSet regsLive;
-    RegVector regsDefined, regsDead, regsKilled;
-    RegMaskVector regMasks;
-
-    SlotIndex lastIndex;
-
-    // Add Reg and any sub-registers to RV
-    void addRegWithSubRegs(RegVector &RV, Register Reg) {
-      RV.push_back(Reg);
-      if (Reg.isPhysical())
-        append_range(RV, TRI->subregs(Reg.asMCReg()));
-    }
-
-    struct BBInfo {
-      // Is this MBB reachable from the MF entry point?
-      bool reachable = false;
-
-      // Vregs that must be live in because they are used without being
-      // defined. Map value is the user. vregsLiveIn doesn't include regs
-      // that only are used by PHI nodes.
-      RegMap vregsLiveIn;
-
-      // Regs killed in MBB. They may be defined again, and will then be in both
-      // regsKilled and regsLiveOut.
-      RegSet regsKilled;
-
-      // Regs defined in MBB and live out. Note that vregs passing through may
-      // be live out without being mentioned here.
-      RegSet regsLiveOut;
-
-      // Vregs that pass through MBB untouched. This set is disjoint from
-      // regsKilled and regsLiveOut.
-      RegSet vregsPassed;
-
-      // Vregs that must pass through MBB because they are needed by a successor
-      // block. This set is disjoint from regsLiveOut.
-      RegSet vregsRequired;
-
-      // Set versions of block's predecessor and successor lists.
-      BlockSet Preds, Succs;
-
-      BBInfo() = default;
-
-      // Add register to vregsRequired if it belongs there. Return true if
-      // anything changed.
-      bool addRequired(Register Reg) {
-        if (!Reg.isVirtual())
-          return false;
-        if (regsLiveOut.count(Reg))
-          return false;
-        return vregsRequired.insert(Reg).second;
-      }
+struct MachineVerifier {
+  MachineVerifier(Pass *pass, const char *b) : PASS(pass), Banner(b) {}
+
+  MachineVerifier(const char *b, LiveVariables *LiveVars,
+                  LiveIntervals *LiveInts, LiveStacks *LiveStks,
+                  SlotIndexes *Indexes)
+      : Banner(b), LiveVars(LiveVars), LiveInts(LiveInts), LiveStks(LiveStks),
+        Indexes(Indexes) {}
+
+  unsigned verify(const MachineFunction &MF);
+
+  Pass *const PASS = nullptr;
+  const char *Banner;
+  const MachineFunction *MF = nullptr;
+  const TargetMachine *TM = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  const MachineRegisterInfo *MRI = nullptr;
+  const RegisterBankInfo *RBI = nullptr;
+
+  unsigned foundErrors = 0;
+
+  // Avoid querying the MachineFunctionProperties for each operand.
+  bool isFunctionRegBankSelected = false;
+  bool isFunctionSelected = false;
+  bool isFunctionTracksDebugUserValues = false;
+
+  using RegVector = SmallVector<Register, 16>;
+  using RegMaskVector = SmallVector<const uint32_t *, 4>;
+  using RegSet = DenseSet<Register>;
+  using RegMap = DenseMap<Register, const MachineInstr *>;
+  using BlockSet = SmallPtrSet<const MachineBasicBlock *, 8>;
+
+  const MachineInstr *FirstNonPHI = nullptr;
+  const MachineInstr *FirstTerminator = nullptr;
+  BlockSet FunctionBlocks;
+
+  BitVector regsReserved;
+  RegSet regsLive;
+  RegVector regsDefined, regsDead, regsKilled;
+  RegMaskVector regMasks;
+
+  SlotIndex lastIndex;
+
+  // Add Reg and any sub-registers to RV
+  void addRegWithSubRegs(RegVector &RV, Register Reg) {
+    RV.push_back(Reg);
+    if (Reg.isPhysical())
+      append_range(RV, TRI->subregs(Reg.asMCReg()));
+  }
+
+  struct BBInfo {
+    // Is this MBB reachable from the MF entry point?
+    bool reachable = false;
+
+    // Vregs that must be live in because they are used without being
+    // defined. Map value is the user. vregsLiveIn doesn't include regs
+    // that only are used by PHI nodes.
+    RegMap vregsLiveIn;
+
+    // Regs killed in MBB. They may be defined again, and will then be in both
+    // regsKilled and regsLiveOut.
+    RegSet regsKilled;
+
+    // Regs defined in MBB and live out. Note that vregs passing through may
+    // be live out without being mentioned here.
+    RegSet regsLiveOut;
+
+    // Vregs that pass through MBB untouched. This set is disjoint from
+    // regsKilled and regsLiveOut.
+    RegSet vregsPassed;
+
+    // Vregs that must pass through MBB because they are needed by a successor
+    // block. This set is disjoint from regsLiveOut.
+    RegSet vregsRequired;
+
+    // Set versions of block's predecessor and successor lists.
+    BlockSet Preds, Succs;
+
+    BBInfo() = default;
+
+    // Add register to vregsRequired if it belongs there. Return true if
+    // anything changed.
+    bool addRequired(Register Reg) {
+      if (!Reg.isVirtual())
+        return false;
+      if (regsLiveOut.count(Reg))
+        return false;
+      return vregsRequired.insert(Reg).second;
+    }
 
-      // Same for a full set.
-      bool addRequired(const RegSet &RS) {
-        bool Changed = false;
-        for (Register Reg : RS)
-          Changed |= addRequired(Reg);
-        return Changed;
-      }
+    // Same for a full set.
+    bool addRequired(const RegSet &RS) {
+      bool Changed = false;
+      for (Register Reg : RS)
+        Changed |= addRequired(Reg);
+      return Changed;
+    }
 
-      // Same for a full map.
-      bool addRequired(const RegMap &RM) {
-        bool Changed = false;
-        for (const auto &I : RM)
-          Changed |= addRequired(I.first);
-        return Changed;
-      }
+    // Same for a full map.
+    bool addRequired(const RegMap &RM) {
+      bool Changed = false;
+      for (const auto &I : RM)
+        Changed |= addRequired(I.first);
+      return Changed;
+    }
 
-      // Live-out registers are either in regsLiveOut or vregsPassed.
-      bool isLiveOut(Register Reg) const {
-        return regsLiveOut.count(Reg) || vregsPassed.count(Reg);
-      }
-    };
+    // Live-out registers are either in regsLiveOut or vregsPassed.
+    bool isLiveOut(Register Reg) const {
+      return regsLiveOut.count(Reg) || vregsPassed.count(Reg);
+    }
+  };
 
-    // Extra register info per MBB.
-    DenseMap<const MachineBasicBlock*, BBInfo> MBBInfoMap;
-
-    bool isReserved(Register Reg) {
-      return Reg.id() < regsReserved.size() && regsReserved.test(Reg.id());
-    }
-
-    bool isAllocatable(Register Reg) const {
-      return Reg.id() < TRI->getNumRegs() && TRI->isInAllocatableClass(Reg) &&
-             !regsReserved.test(Reg.id());
-    }
-
-    // Analysis information if available
-    LiveVariables *LiveVars = nullptr;
-    LiveIntervals *LiveInts = nullptr;
-    LiveStacks *LiveStks = nullptr;
-    SlotIndexes *Indexes = nullptr;
-
-    void visitMachineFunctionBefore();
-    void visitMachineBasicBlockBefore(const MachineBasicBlock *MBB);
-    void visitMachineBundleBefore(const MachineInstr *MI);
-
-    /// Verify that all of \p MI's virtual register operands are scalars.
-    /// \returns True if all virtual register operands are scalar. False
-    /// otherwise.
-    bool verifyAllRegOpsScalar(const MachineInstr &MI,
-                               const MachineRegisterInfo &MRI);
-    bool verifyVectorElementMatch(LLT Ty0, LLT Ty1, const MachineInstr *MI);
-
-    bool verifyGIntrinsicSideEffects(const MachineInstr *MI);
-    bool verifyGIntrinsicConvergence(const MachineInstr *MI);
-    void verifyPreISelGenericInstruction(const MachineInstr *MI);
-
-    void visitMachineInstrBefore(const MachineInstr *MI);
-    void visitMachineOperand(const MachineOperand *MO, unsigned MONum);
-    void visitMachineBundleAfter(const MachineInstr *MI);
-    void visitMachineBasicBlockAfter(const MachineBasicBlock *MBB);
-    void visitMachineFunctionAfter();
-
-    void report(const char *msg, const MachineFunction *MF);
-    void report(const char *msg, const MachineBasicBlock *MBB);
-    void report(const char *msg, const MachineInstr *MI);
-    void report(const char *msg, const MachineOperand *MO, unsigned MONum,
-                LLT MOVRegType = LLT{});
-    void report(const Twine &Msg, const MachineInstr *MI);
-
-    void report_context(const LiveInterval &LI) const;
-    void report_context(const LiveRange &LR, Register VRegUnit,
-                        LaneBitmask LaneMask) const;
-    void report_context(const LiveRange::Segment &S) const;
-    void report_context(const VNInfo &VNI) const;
-    void report_context(SlotIndex Pos) const;
-    void report_context(MCPhysReg PhysReg) const;
-    void report_context_liverange(const LiveRange &LR) const;
-    void report_context_lanemask(LaneBitmask LaneMask) const;
-    void report_context_vreg(Register VReg) const;
-    void report_context_vreg_regunit(Register VRegOrUnit) const;
-
-    void verifyInlineAsm(const MachineInstr *MI);
-
-    void checkLiveness(const MachineOperand *MO, unsigned MONum);
-    void checkLivenessAtUse(const MachineOperand *MO, unsigned MONum,
-                            SlotIndex UseIdx, const LiveRange &LR,
-                            Register VRegOrUnit,
-                            LaneBitmask LaneMask = LaneBitmask::getNone());
-    void checkLivenessAtDef(const MachineOperand *MO, unsigned MONum,
-                            SlotIndex DefIdx, const LiveRange &LR,
-                            Register VRegOrUnit, bool SubRangeCheck = false,
-                            LaneBitmask LaneMask = LaneBitmask::getNone());
-
-    void markReachable(const MachineBasicBlock *MBB);
-    void calcRegsPassed();
-    void checkPHIOps(const MachineBasicBlock &MBB);
-
-    void calcRegsRequired();
-    void verifyLiveVariables();
-    void verifyLiveIntervals();
-    void verifyLiveInterval(const LiveInterval&);
-    void verifyLiveRangeValue(const LiveRange &, const VNInfo *, Register,
+  // Extra register info per MBB.
+  DenseMap<const MachineBasicBlock *, BBInfo> MBBInfoMap;
+
+  bool isReserved(Register Reg) {
+    return Reg.id() < regsReserved.size() && regsReserved.test(Reg.id());
+  }
+
+  bool isAllocatable(Register Reg) const {
+    return Reg.id() < TRI->getNumRegs() && TRI->isInAllocatableClass(Reg) &&
+           !regsReserved.test(Reg.id());
+  }
+
+  // Analysis information if available
+  LiveVariables *LiveVars = nullptr;
+  LiveIntervals *LiveInts = nullptr;
+  LiveStacks *LiveStks = nullptr;
+  SlotIndexes *Indexes = nullptr;
+
+  void visitMachineFunctionBefore();
+  void visitMachineBasicBlockBefore(const MachineBasicBlock *MBB);
+  void visitMachineBundleBefore(const MachineInstr *MI);
+
+  /// Verify that all of \p MI's virtual register operands are scalars.
+  /// \returns True if all virtual register operands are scalar. False
+  /// otherwise.
+  bool verifyAllRegOpsScalar(const MachineInstr &MI,
+                             const MachineRegisterInfo &MRI);
+  bool verifyVectorElementMatch(LLT Ty0, LLT Ty1, const MachineInstr *MI);
+
+  bool verifyGIntrinsicSideEffects(const MachineInstr *MI);
+  bool verifyGIntrinsicConvergence(const MachineInstr *MI);
+  void verifyPreISelGenericInstruction(const MachineInstr *MI);
+
+  void visitMachineInstrBefore(const MachineInstr *MI);
+  void visitMachineOperand(const MachineOperand *MO, unsigned MONum);
+  void visitMachineBundleAfter(const MachineInstr *MI);
+  void visitMachineBasicBlockAfter(const MachineBasicBlock *MBB);
+  void visitMachineFunctionAfter();
+
+  void report(const char *msg, const MachineFunction *MF);
+  void report(const char *msg, const MachineBasicBlock *MBB);
+  void report(const char *msg, const MachineInstr *MI);
+  void report(const char *msg, const MachineOperand *MO, unsigned MONum,
+              LLT MOVRegType = LLT{});
+  void report(const Twine &Msg, const MachineInstr *MI);
+
+  void report_context(const LiveInterval &LI) const;
+  void report_context(const LiveRange &LR, Register VRegUnit,
+                      LaneBitmask LaneMask) const;
+  void report_context(const LiveRange::Segment &S) const;
+  void report_context(const VNInfo &VNI) const;
+  void report_context(SlotIndex Pos) const;
+  void report_context(MCPhysReg PhysReg) const;
+  void report_context_liverange(const LiveRange &LR) const;
+  void report_context_lanemask(LaneBitmask LaneMask) const;
+  void report_context_vreg(Register VReg) const;
+  void report_context_vreg_regunit(Register VRegOrUnit) const;
+
+  void verifyInlineAsm(const MachineInstr *MI);
+
+  void checkLiveness(const MachineOperand *MO, unsigned MONum);
+  void checkLivenessAtUse(const MachineOperand *MO, unsigned MONum,
+                          SlotIndex UseIdx, const LiveRange &LR,
+                          Register VRegOrUnit,
+                          LaneBitmask LaneMask = LaneBitmask::getNone());
+  void checkLivenessAtDef(const MachineOperand *MO, unsigned MONum,
+                          SlotIndex DefIdx, const LiveRange &LR,
+                          Register VRegOrUnit, bool SubRangeCheck = false,
+                          LaneBitmask LaneMask = LaneBitmask::getNone());
+
+  void markReachable(const MachineBasicBlock *MBB);
+  void calcRegsPassed();
+  void checkPHIOps(const MachineBasicBlock &MBB);
+
+  void calcRegsRequired();
+  void verifyLiveVariables();
+  void verifyLiveIntervals();
+  void verifyLiveInterval(const LiveInterval &);
+  void verifyLiveRangeValue(const LiveRange &, const VNInfo *, Register,
+                            LaneBitmask);
+  void verifyLiveRangeSegment(const LiveRange &,
+                              const LiveRange::const_iterator I, Register,
                               LaneBitmask);
-    void verifyLiveRangeSegment(const LiveRange &,
-                                const LiveRange::const_iterator I, Register,
-                                LaneBitmask);
-    void verifyLiveRange(const LiveRange &, Register,
-                         LaneBitmask LaneMask = LaneBitmask::getNone());
+  void verifyLiveRange(const LiveRange &, Register,
+                       LaneBitmask LaneMask = LaneBitmask::getNone());
 
-    void verifyStackFrame();
+  void verifyStackFrame();
 
-    void verifySlotIndexes() const;
-    void verifyProperties(const MachineFunction &MF);
-  };
+  void verifySlotIndexes() const;
+  void verifyProperties(const MachineFunction &MF);
+};
 
-  struct MachineVerifierPass : public MachineFunctionPass {
-    static char ID; // Pass ID, replacement for typeid
+struct MachineVerifierPass : public MachineFunctionPass {
+  static char ID; // Pass ID, replacement for typeid
 
-    const std::string Banner;
+  const std::string Banner;
 
-    MachineVerifierPass(std::string banner = std::string())
+  MachineVerifierPass(std::string banner = std::string())
       : MachineFunctionPass(ID), Banner(std::move(banner)) {
-        initializeMachineVerifierPassPass(*PassRegistry::getPassRegistry());
-      }
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addUsedIfAvailable<LiveStacks>();
-      AU.addUsedIfAvailable<LiveVariables>();
-      AU.addUsedIfAvailable<SlotIndexes>();
-      AU.addUsedIfAvailable<LiveIntervals>();
-      AU.setPreservesAll();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+    initializeMachineVerifierPassPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override {
-      // Skip functions that have known verification problems.
-      // FIXME: Remove this mechanism when all problematic passes have been
-      // fixed.
-      if (MF.getProperties().hasProperty(
-              MachineFunctionProperties::Property::FailsVerification))
-        return false;
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addUsedIfAvailable<LiveStacks>();
+    AU.addUsedIfAvailable<LiveVariables>();
+    AU.addUsedIfAvailable<SlotIndexes>();
+    AU.addUsedIfAvailable<LiveIntervals>();
+    AU.setPreservesAll();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-      unsigned FoundErrors = MachineVerifier(this, Banner.c_str()).verify(MF);
-      if (FoundErrors)
-        report_fatal_error("Found "+Twine(FoundErrors)+" machine code errors.");
+  bool runOnMachineFunction(MachineFunction &MF) override {
+    // Skip functions that have known verification problems.
+    // FIXME: Remove this mechanism when all problematic passes have been
+    // fixed.
+    if (MF.getProperties().hasProperty(
+            MachineFunctionProperties::Property::FailsVerification))
       return false;
-    }
-  };
+
+    unsigned FoundErrors = MachineVerifier(this, Banner.c_str()).verify(MF);
+    if (FoundErrors)
+      report_fatal_error("Found " + Twine(FoundErrors) +
+                         " machine code errors.");
+    return false;
+  }
+};
 
 } // end anonymous namespace
 
@@ -352,12 +353,12 @@ void llvm::verifyMachineFunction(MachineFunctionAnalysisManager *,
     report_fatal_error("Found " + Twine(FoundErrors) + " machine code errors.");
 }
 
-bool MachineFunction::verify(Pass *p, const char *Banner, bool AbortOnErrors)
-    const {
-  MachineFunction &MF = const_cast<MachineFunction&>(*this);
+bool MachineFunction::verify(Pass *p, const char *Banner,
+                             bool AbortOnErrors) const {
+  MachineFunction &MF = const_cast<MachineFunction &>(*this);
   unsigned FoundErrors = MachineVerifier(p, Banner).verify(MF);
   if (AbortOnErrors && FoundErrors)
-    report_fatal_error("Found "+Twine(FoundErrors)+" machine code errors.");
+    report_fatal_error("Found " + Twine(FoundErrors) + " machine code errors.");
   return FoundErrors == 0;
 }
 
@@ -378,7 +379,8 @@ void MachineVerifier::verifySlotIndexes() const {
   // Ensure the IdxMBB list is sorted by slot indexes.
   SlotIndex Last;
   for (SlotIndexes::MBBIndexIterator I = Indexes->MBBIndexBegin(),
-       E = Indexes->MBBIndexEnd(); I != E; ++I) {
+                                     E = Indexes->MBBIndexEnd();
+       I != E; ++I) {
     assert(!Last.isValid() || I->first > Last);
     Last = I->first;
   }
@@ -512,7 +514,7 @@ void MachineVerifier::report(const char *msg, const MachineFunction *MF) {
       MF->print(errs(), Indexes);
   }
   errs() << "*** Bad machine code: " << msg << " ***\n"
-      << "- function:    " << MF->getName() << "\n";
+         << "- function:    " << MF->getName() << "\n";
 }
 
 void MachineVerifier::report(const char *msg, const MachineBasicBlock *MBB) {
@@ -521,8 +523,8 @@ void MachineVerifier::report(const char *msg, const MachineBasicBlock *MBB) {
   errs() << "- basic block: " << printMBBReference(*MBB) << ' '
          << MBB->getName() << " (" << (const void *)MBB << ')';
   if (Indexes)
-    errs() << " [" << Indexes->getMBBStartIdx(MBB)
-        << ';' <<  Indexes->getMBBEndIdx(MBB) << ')';
+    errs() << " [" << Indexes->getMBBStartIdx(MBB) << ';'
+           << Indexes->getMBBEndIdx(MBB) << ')';
   errs() << '\n';
 }
 
@@ -635,13 +637,14 @@ void MachineVerifier::visitMachineFunctionBefore() {
     verifyStackFrame();
 }
 
-void
-MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
+void MachineVerifier::visitMachineBasicBlockBefore(
+    const MachineBasicBlock *MBB) {
   FirstTerminator = nullptr;
   FirstNonPHI = nullptr;
 
   if (!MF->getProperties().hasProperty(
-      MachineFunctionProperties::Property::NoPHIs) && MRI->tracksLiveness()) {
+          MachineFunctionProperties::Property::NoPHIs) &&
+      MRI->tracksLiveness()) {
     // If this block has allocatable physical registers live-in, check that
     // it is an entry block or landing pad.
     for (const auto &LI : MBB->liveins()) {
@@ -658,13 +661,14 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
 
   if (MBB->isIRBlockAddressTaken()) {
     if (!MBB->getAddressTakenIRBlock()->hasAddressTaken())
-      report("ir-block-address-taken is associated with basic block not used by "
-             "a blockaddress.",
-             MBB);
+      report(
+          "ir-block-address-taken is associated with basic block not used by "
+          "a blockaddress.",
+          MBB);
   }
 
   // Count the number of landing pad successors.
-  SmallPtrSet<const MachineBasicBlock*, 4> LandingPadSuccs;
+  SmallPtrSet<const MachineBasicBlock *, 4> LandingPadSuccs;
   for (const auto *succ : MBB->successors()) {
     if (succ->isEHPad())
       LandingPadSuccs.insert(succ);
@@ -693,8 +697,8 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
   const Function &F = MF->getFunction();
   if (LandingPadSuccs.size() > 1 &&
       !(AsmInfo &&
-        AsmInfo->getExceptionHandlingType() == ExceptionHandling::SjLj &&
-        BB && isa<SwitchInst>(BB->getTerminator())) &&
+        AsmInfo->getExceptionHandlingType() == ExceptionHandling::SjLj && BB &&
+        isa<SwitchInst>(BB->getTerminator())) &&
       !isScopedEHPersonality(classifyEHPersonality(F.getPersonalityFn())))
     report("MBB has more than one landing pad successor", MBB);
 
@@ -710,7 +714,8 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
       if (!MBB->empty() && MBB->back().isBarrier() &&
           !TII->isPredicated(MBB->back())) {
         report("MBB exits via unconditional fall-through but ends with a "
-               "barrier instruction!", MBB);
+               "barrier instruction!",
+               MBB);
       }
       if (!Cond.empty()) {
         report("MBB exits via unconditional fall-through but has a condition!",
@@ -720,42 +725,52 @@ MachineVerifier::visitMachineBasicBlockBefore(const MachineBasicBlock *MBB) {
       // Block unconditionally branches somewhere.
       if (MBB->empty()) {
         report("MBB exits via unconditional branch but doesn't contain "
-               "any instructions!", MBB);
+               "any instructions!",
+               MBB);
       } else if (!MBB->back().isBarrier()) {
         report("MBB exits via unconditional branch but doesn't end with a "
-               "barrier instruction!", MBB);
+               "barrier instruction!",
+               MBB);
       } else if (!MBB->back().isTerminator()) {
         report("MBB exits via unconditional branch but the branch isn't a "
-               "terminator instruction!", MBB);
+               "terminator instruction!",
+               MBB);
       }
     } else if (TBB && !FBB && !Cond.empty()) {
       // Block conditionally branches somewhere, otherwise falls through.
       if (MBB->empty()) {
         report("MBB exits via conditional branch/fall-through but doesn't "
-               "contain any instructions!", MBB);
+               "contain any instructions!",
+               MBB);
       } else if (MBB->back().isBarrier()) {
         report("MBB exits via conditional branch/fall-through but ends with a "
-               "barrier instruction!", MBB);
+               "barrier instruction!",
+               MBB);
       } else if (!MBB->back().isTerminator()) {
         report("MBB exits via conditional branch/fall-through but the branch "
-               "isn't a terminator instruction!", MBB);
+               "isn't a terminator instruction!",
+               MBB);
       }
     } else if (TBB && FBB) {
       // Block conditionally branches somewhere, otherwise branches
       // somewhere else.
       if (MBB->empty()) {
         report("MBB exits via conditional branch/branch but doesn't "
-               "contain any instructions!", MBB);
+               "contain any instructions!",
+               MBB);
       } else if (!MBB->back().isBarrier()) {
         report("MBB exits via conditional branch/branch but doesn't end with a "
-               "barrier instruction!", MBB);
+               "barrier instruction!",
+               MBB);
       } else if (!MBB->back().isTerminator()) {
         report("MBB exits via conditional branch/branch but the branch "
-               "isn't a terminator instruction!", MBB);
+               "isn't a terminator instruction!",
+               MBB);
       }
       if (Cond.empty()) {
         report("MBB exits via conditional branch/branch but there's no "
-               "condition!", MBB);
+               "condition!",
+               MBB);
       }
     } else {
       report("analyzeBranch returned invalid data!", MBB);
@@ -1047,8 +1062,8 @@ void MachineVerifier::verifyPreISelGenericInstruction(const MachineInstr *MI) {
 
   // Check types.
   SmallVector<LLT, 4> Types;
-  for (unsigned I = 0, E = std::min(MCID.getNumOperands(), NumOps);
-       I != E; ++I) {
+  for (unsigned I = 0, E = std::min(MCID.getNumOperands(), NumOps); I != E;
+       ++I) {
     if (!MCID.operands()[I].isGenericType())
       continue;
     // Generic instructions specify type equality constraints between some of
@@ -1723,9 +1738,13 @@ void MachineVerifier::verifyPreISelGenericInstruction(const MachineInstr *MI) {
     if (!DstTy.isScalar())
       report("Vector reduction requires a scalar destination type", MI);
     if (!Src1Ty.isScalar())
-      report("Sequential FADD/FMUL vector reduction requires a scalar 1st operand", MI);
+      report(
+          "Sequential FADD/FMUL vector reduction requires a scalar 1st operand",
+          MI);
     if (!Src2Ty.isVector())
-      report("Sequential FADD/FMUL vector reduction must have a vector 2nd operand", MI);
+      report("Sequential FADD/FMUL vector reduction must have a vector 2nd "
+             "operand",
+             MI);
     break;
   }
   case TargetOpcode::G_VECREDUCE_FADD:
@@ -2014,7 +2033,8 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
     unsigned SubRegSize = TRI->getSubRegIdxSize(MI->getOperand(3).getImm());
     if (SubRegSize < InsertedSize) {
       report("INSERT_SUBREG expected inserted value to have equal or lesser "
-             "size than the subreg it was inserted into", MI);
+             "size than the subreg it was inserted into",
+             MI);
       break;
     }
   } break;
@@ -2034,8 +2054,8 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
 
       if (!SubRegOp.isImm() || SubRegOp.getImm() == 0 ||
           SubRegOp.getImm() >= TRI->getNumSubRegIndices()) {
-        report("Invalid subregister index operand for REG_SEQUENCE",
-               &SubRegOp, I + 1);
+        report("Invalid subregister index operand for REG_SEQUENCE", &SubRegOp,
+               I + 1);
       }
     }
 
@@ -2051,8 +2071,8 @@ void MachineVerifier::visitMachineInstrBefore(const MachineInstr *MI) {
   }
 }
 
-void
-MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
+void MachineVerifier::visitMachineOperand(const MachineOperand *MO,
+                                          unsigned MONum) {
   const MachineInstr *MI = MO->getParent();
   const MCInstrDesc &MCID = MI->getDesc();
   unsigned NumDefs = MCID.getNumDefs();
@@ -2082,8 +2102,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
       }
 
       // Check that an instruction has register operands only as expected.
-      if (MCOI.OperandType == MCOI::OPERAND_REGISTER &&
-          !MO->isReg() && !MO->isFI())
+      if (MCOI.OperandType == MCOI::OPERAND_REGISTER && !MO->isReg() &&
+          !MO->isFI())
         report("Expected a register operand.", MO, MONum);
       if (MO->isReg()) {
         if (MCOI.OperandType == MCOI::OPERAND_IMMEDIATE ||
@@ -2136,7 +2156,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
 
     if (MO->isDef() && MO->isUndef() && !MO->getSubReg() &&
         MO->getReg().isVirtual()) // TODO: Apply to physregs too
-      report("Undef virtual register def operands require a subregister", MO, MONum);
+      report("Undef virtual register def operands require a subregister", MO,
+             MONum);
 
     // Verify the consistency of tied operands.
     if (MO->isTied()) {
@@ -2184,7 +2205,7 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
       }
       if (MONum < MCID.getNumOperands()) {
         if (const TargetRegisterClass *DRC =
-              TII->getRegClass(MCID, MONum, TRI, *MF)) {
+                TII->getRegClass(MCID, MONum, TRI, *MF)) {
           if (!DRC->contains(Reg)) {
             report("Illegal physical register for instruction", MO, MONum);
             errs() << printReg(Reg, TRI) << " is not a "
@@ -2258,9 +2279,9 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
           }
         }
 
-        if (SubIdx)  {
-          report("Generic virtual register does not allow subregister index", MO,
-                 MONum);
+        if (SubIdx) {
+          report("Generic virtual register does not allow subregister index",
+                 MO, MONum);
           return;
         }
 
@@ -2282,24 +2303,23 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
         break;
       }
       if (SubIdx) {
-        const TargetRegisterClass *SRC =
-          TRI->getSubClassWithSubReg(RC, SubIdx);
+        const TargetRegisterClass *SRC = TRI->getSubClassWithSubReg(RC, SubIdx);
         if (!SRC) {
           report("Invalid subregister index for virtual register", MO, MONum);
           errs() << "Register class " << TRI->getRegClassName(RC)
-              << " does not support subreg index " << SubIdx << "\n";
+                 << " does not support subreg index " << SubIdx << "\n";
           return;
         }
         if (RC != SRC) {
           report("Invalid register class for subregister index", MO, MONum);
           errs() << "Register class " << TRI->getRegClassName(RC)
-              << " does not fully support subreg index " << SubIdx << "\n";
+                 << " does not fully support subreg index " << SubIdx << "\n";
           return;
         }
       }
       if (MONum < MCID.getNumOperands()) {
         if (const TargetRegisterClass *DRC =
-              TII->getRegClass(MCID, MONum, TRI, *MF)) {
+                TII->getRegClass(MCID, MONum, TRI, *MF)) {
           if (SubIdx) {
             const TargetRegisterClass *SuperRC =
                 TRI->getLargestLegalSuperClass(RC, *MF);
@@ -2316,8 +2336,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
           if (!RC->hasSuperClassEq(DRC)) {
             report("Illegal virtual register for instruction", MO, MONum);
             errs() << "Expected a " << TRI->getRegClassName(DRC)
-                << " register, but got a " << TRI->getRegClassName(RC)
-                << " register\n";
+                   << " register, but got a " << TRI->getRegClassName(RC)
+                   << " register\n";
           }
         }
       }
@@ -2335,8 +2355,8 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
     break;
 
   case MachineOperand::MO_FrameIndex:
-    if (LiveStks && LiveStks->hasInterval(MO->getIndex()) &&
-        LiveInts && !LiveInts->isNotInMIMap(*MI)) {
+    if (LiveStks && LiveStks->hasInterval(MO->getIndex()) && LiveInts &&
+        !LiveInts->isNotInMIMap(*MI)) {
       int FI = MO->getIndex();
       LiveInterval &LI = LiveStks->getInterval(FI);
       SlotIndex Idx = LiveInts->getInstructionIndex(*MI);
@@ -2349,11 +2369,14 @@ MachineVerifier::visitMachineOperand(const MachineOperand *MO, unsigned MONum) {
       if (stores && loads) {
         for (auto *MMO : MI->memoperands()) {
           const PseudoSourceValue *PSV = MMO->getPseudoValue();
-          if (PSV == nullptr) continue;
+          if (PSV == nullptr)
+            continue;
           const FixedStackPseudoSourceValue *Value =
-            dyn_cast<FixedStackPseudoSourceValue>(PSV);
-          if (Value == nullptr) continue;
-          if (Value->getFrameIndex() != FI) continue;
+              dyn_cast<FixedStackPseudoSourceValue>(PSV);
+          if (Value == nullptr)
+            continue;
+          if (Value->getFrameIndex() != FI)
+            continue;
 
           if (MMO->isStore())
             loads = false;
@@ -2504,8 +2527,8 @@ void MachineVerifier::checkLiveness(const MachineOperand *MO, unsigned MONum) {
       SlotIndex UseIdx;
       if (MI->isPHI()) {
         // PHI use occurs on the edge, so check for live out here instead.
-        UseIdx = LiveInts->getMBBEndIdx(
-          MI->getOperand(MONum + 1).getMBB()).getPrevSlot();
+        UseIdx = LiveInts->getMBBEndIdx(MI->getOperand(MONum + 1).getMBB())
+                     .getPrevSlot();
       } else {
         UseIdx = LiveInts->getInstructionIndex(*MI);
       }
@@ -2643,7 +2666,8 @@ void MachineVerifier::checkLiveness(const MachineOperand *MO, unsigned MONum) {
 void MachineVerifier::visitMachineBundleAfter(const MachineInstr *MI) {
   BBInfo &MInfo = MBBInfoMap[MI->getParent()];
   set_union(MInfo.regsKilled, regsKilled);
-  set_subtract(regsLive, regsKilled); regsKilled.clear();
+  set_subtract(regsLive, regsKilled);
+  regsKilled.clear();
   // Kill any masked registers.
   while (!regMasks.empty()) {
     const uint32_t *Mask = regMasks.pop_back_val();
@@ -2652,12 +2676,14 @@ void MachineVerifier::visitMachineBundleAfter(const MachineInstr *MI) {
           MachineOperand::clobbersPhysReg(Mask, Reg.asMCReg()))
         regsDead.push_back(Reg);
   }
-  set_subtract(regsLive, regsDead);   regsDead.clear();
-  set_union(regsLive, regsDefined);   regsDefined.clear();
+  set_subtract(regsLive, regsDead);
+  regsDead.clear();
+  set_union(regsLive, regsDefined);
+  regsDefined.clear();
 }
 
-void
-MachineVerifier::visitMachineBasicBlockAfter(const MachineBasicBlock *MBB) {
+void MachineVerifier::visitMachineBasicBlockAfter(
+    const MachineBasicBlock *MBB) {
   MBBInfoMap[MBB].regsLiveOut = regsLive;
   regsLive.clear();
 
@@ -2665,8 +2691,8 @@ MachineVerifier::visitMachineBasicBlockAfter(const MachineBasicBlock *MBB) {
     SlotIndex stop = Indexes->getMBBEndIdx(MBB);
     if (!(stop > lastIndex)) {
       report("Block ends before last instruction index", MBB);
-      errs() << "Block ends at " << stop
-          << " last instruction was at " << lastIndex << '\n';
+      errs() << "Block ends at " << stop << " last instruction was at "
+             << lastIndex << '\n';
     }
     lastIndex = stop;
   }
@@ -2809,7 +2835,7 @@ void MachineVerifier::calcRegsPassed() {
 // similar to calcRegsPassed, only backwards.
 void MachineVerifier::calcRegsRequired() {
   // First push live-in regs to predecessors' vregsRequired.
-  SmallPtrSet<const MachineBasicBlock*, 8> todo;
+  SmallPtrSet<const MachineBasicBlock *, 8> todo;
   for (const auto &MBB : *MF) {
     BBInfo &MInfo = MBBInfoMap[&MBB];
     for (const MachineBasicBlock *Pred : MBB.predecessors()) {
@@ -2857,7 +2883,7 @@ void MachineVerifier::calcRegsRequired() {
 void MachineVerifier::checkPHIOps(const MachineBasicBlock &MBB) {
   BBInfo &MInfo = MBBInfoMap[&MBB];
 
-  SmallPtrSet<const MachineBasicBlock*, 8> seen;
+  SmallPtrSet<const MachineBasicBlock *, 8> seen;
   for (const MachineInstr &Phi : MBB) {
     if (!Phi.isPHI())
       break;
@@ -2969,8 +2995,9 @@ void MachineVerifier::visitMachineFunctionAfter() {
         for (const MachineBasicBlock *Pred : MBB.predecessors()) {
           BBInfo &PInfo = MBBInfoMap[Pred];
           if (!PInfo.regsLiveOut.count(LiveInReg)) {
-            report("Live in register not found to be live out from predecessor.",
-                   &MBB);
+            report(
+                "Live in register not found to be live out from predecessor.",
+                &MBB);
             errs() << TRI->getName(LiveInReg)
                    << " not found to be live out from "
                    << printMBBReference(*Pred) << "\n";
@@ -3177,7 +3204,7 @@ void MachineVerifier::verifyLiveRangeSegment(const LiveRange &LR,
   }
 
   const MachineBasicBlock *EndMBB =
-    LiveInts->getMBBFromIndex(S.end.getPrevSlot());
+      LiveInts->getMBBFromIndex(S.end.getPrevSlot());
   if (!EndMBB) {
     report("Bad end of live segment, no basic block", MF);
     report_context(LR, Reg, LaneMask);
@@ -3320,8 +3347,7 @@ void MachineVerifier::verifyLiveRangeSegment(const LiveRange &LR,
     }
 
     // Is VNI a PHI-def in the current block?
-    bool IsPHI = VNI->isPHIDef() &&
-      VNI->def == LiveInts->getMBBStartIdx(&*MFI);
+    bool IsPHI = VNI->isPHIDef() && VNI->def == LiveInts->getMBBStartIdx(&*MFI);
 
     // Check that VNI is live-out of all predecessors.
     for (const MachineBasicBlock *Pred : MFI->predecessors()) {
@@ -3426,22 +3452,22 @@ void MachineVerifier::verifyLiveInterval(const LiveInterval &LI) {
 
 namespace {
 
-  // FrameSetup and FrameDestroy can have zero adjustment, so using a single
-  // integer, we can't tell whether it is a FrameSetup or FrameDestroy if the
-  // value is zero.
-  // We use a bool plus an integer to capture the stack state.
-  struct StackStateOfBB {
-    StackStateOfBB() = default;
-    StackStateOfBB(int EntryVal, int ExitVal, bool EntrySetup, bool ExitSetup) :
-      EntryValue(EntryVal), ExitValue(ExitVal), EntryIsSetup(EntrySetup),
-      ExitIsSetup(ExitSetup) {}
-
-    // Can be negative, which means we are setting up a frame.
-    int EntryValue = 0;
-    int ExitValue = 0;
-    bool EntryIsSetup = false;
-    bool ExitIsSetup = false;
-  };
+// FrameSetup and FrameDestroy can have zero adjustment, so using a single
+// integer, we can't tell whether it is a FrameSetup or FrameDestroy if the
+// value is zero.
+// We use a bool plus an integer to capture the stack state.
+struct StackStateOfBB {
+  StackStateOfBB() = default;
+  StackStateOfBB(int EntryVal, int ExitVal, bool EntrySetup, bool ExitSetup)
+      : EntryValue(EntryVal), ExitValue(ExitVal), EntryIsSetup(EntrySetup),
+        ExitIsSetup(ExitSetup) {}
+
+  // Can be negative, which means we are setting up a frame.
+  int EntryValue = 0;
+  int ExitValue = 0;
+  bool EntryIsSetup = false;
+  bool ExitIsSetup = false;
+};
 
 } // end anonymous namespace
 
@@ -3449,19 +3475,20 @@ namespace {
 /// by a FrameDestroy <n>, stack adjustments are identical on all
 /// CFG edges to a merge point, and frame is destroyed at end of a return block.
 void MachineVerifier::verifyStackFrame() {
-  unsigned FrameSetupOpcode   = TII->getCallFrameSetupOpcode();
+  unsigned FrameSetupOpcode = TII->getCallFrameSetupOpcode();
   unsigned FrameDestroyOpcode = TII->getCallFrameDestroyOpcode();
   if (FrameSetupOpcode == ~0u && FrameDestroyOpcode == ~0u)
     return;
 
   SmallVector<StackStateOfBB, 8> SPState;
   SPState.resize(MF->getNumBlockIDs());
-  df_iterator_default_set<const MachineBasicBlock*> Reachable;
+  df_iterator_default_set<const MachineBasicBlock *> Reachable;
 
   // Visit the MBBs in DFS order.
   for (df_ext_iterator<const MachineFunction *,
                        df_iterator_default_set<const MachineBasicBlock *>>
-       DFI = df_ext_begin(MF, Reachable), DFE = df_ext_end(MF, Reachable);
+           DFI = df_ext_begin(MF, Reachable),
+           DFE = df_ext_end(MF, Reachable);
        DFI != DFE; ++DFI) {
     const MachineBasicBlock *MBB = *DFI;
 
@@ -3499,12 +3526,12 @@ void MachineVerifier::verifyStackFrame() {
         int Size = TII->getFrameTotalSize(I);
         if (!BBState.ExitIsSetup)
           report("FrameDestroy is not after a FrameSetup", &I);
-        int AbsSPAdj = BBState.ExitValue < 0 ? -BBState.ExitValue :
-                                               BBState.ExitValue;
+        int AbsSPAdj =
+            BBState.ExitValue < 0 ? -BBState.ExitValue : BBState.ExitValue;
         if (BBState.ExitIsSetup && AbsSPAdj != Size) {
           report("FrameDestroy <n> is after FrameSetup <m>", &I);
           errs() << "FrameDestroy <" << Size << "> is after FrameSetup <"
-              << AbsSPAdj << ">.\n";
+                 << AbsSPAdj << ">.\n";
         }
         BBState.ExitValue += Size;
         BBState.ExitIsSetup = false;
diff --git a/llvm/lib/CodeGen/MacroFusion.cpp b/llvm/lib/CodeGen/MacroFusion.cpp
index fa5df68b8abcc0f..0604d212dade37f 100644
--- a/llvm/lib/CodeGen/MacroFusion.cpp
+++ b/llvm/lib/CodeGen/MacroFusion.cpp
@@ -28,8 +28,10 @@ STATISTIC(NumFused, "Number of instr pairs fused");
 
 using namespace llvm;
 
-static cl::opt<bool> EnableMacroFusion("misched-fusion", cl::Hidden,
-  cl::desc("Enable scheduling for macro fusion."), cl::init(true));
+static cl::opt<bool>
+    EnableMacroFusion("misched-fusion", cl::Hidden,
+                      cl::desc("Enable scheduling for macro fusion."),
+                      cl::init(true));
 
 static bool isHazard(const SDep &Dep) {
   return Dep.getKind() == SDep::Anti || Dep.getKind() == SDep::Output;
@@ -46,7 +48,8 @@ static SUnit *getPredClusterSU(const SUnit &SU) {
 bool llvm::hasLessThanNumFused(const SUnit &SU, unsigned FuseLimit) {
   unsigned Num = 1;
   const SUnit *CurrentSU = &SU;
-  while ((CurrentSU = getPredClusterSU(*CurrentSU)) && Num < FuseLimit) Num ++;
+  while ((CurrentSU = getPredClusterSU(*CurrentSU)) && Num < FuseLimit)
+    Num++;
   return Num < FuseLimit;
 }
 
@@ -62,8 +65,9 @@ bool llvm::fuseInstructionPair(ScheduleDAGInstrs &DAG, SUnit &FirstSU,
     if (SI.isCluster())
       return false;
   // Though the reachability checks above could be made more generic,
-  // perhaps as part of ScheduleDAGInstrs::addEdge(), since such edges are valid,
-  // the extra computation cost makes it less interesting in general cases.
+  // perhaps as part of ScheduleDAGInstrs::addEdge(), since such edges are
+  // valid, the extra computation cost makes it less interesting in general
+  // cases.
 
   // Create a single weak edge between the adjacent instrs. The only effect is
   // to cause bottom-up scheduling to heavily prioritize the clustered instrs.
@@ -98,8 +102,8 @@ bool llvm::fuseInstructionPair(ScheduleDAGInstrs &DAG, SUnit &FirstSU,
   if (&SecondSU != &DAG.ExitSU)
     for (const SDep &SI : FirstSU.Succs) {
       SUnit *SU = SI.getSUnit();
-      if (SI.isWeak() || isHazard(SI) ||
-          SU == &DAG.ExitSU || SU == &SecondSU || SU->isPred(&SecondSU))
+      if (SI.isWeak() || isHazard(SI) || SU == &DAG.ExitSU || SU == &SecondSU ||
+          SU->isPred(&SecondSU))
         continue;
       LLVM_DEBUG(dbgs() << "  Bind "; DAG.dumpNodeName(SecondSU);
                  dbgs() << " - "; DAG.dumpNodeName(*SU); dbgs() << '\n';);
@@ -143,7 +147,7 @@ class MacroFusion : public ScheduleDAGMutation {
 
 public:
   MacroFusion(ShouldSchedulePredTy shouldScheduleAdjacent, bool FuseBlock)
-    : shouldScheduleAdjacent(shouldScheduleAdjacent), FuseBlock(FuseBlock) {}
+      : shouldScheduleAdjacent(shouldScheduleAdjacent), FuseBlock(FuseBlock) {}
 
   void apply(ScheduleDAGInstrs *DAGInstrs) override;
 };
@@ -155,7 +159,7 @@ void MacroFusion::apply(ScheduleDAGInstrs *DAG) {
     // For each of the SUnits in the scheduling block, try to fuse the instr in
     // it with one in its predecessors.
     for (SUnit &ISU : DAG->SUnits)
-        scheduleAdjacentImpl(*DAG, ISU);
+      scheduleAdjacentImpl(*DAG, ISU);
 
   if (DAG->ExitSU.getInstr())
     // Try to fuse the instr in the ExitSU with one in its predecessors.
@@ -164,7 +168,8 @@ void MacroFusion::apply(ScheduleDAGInstrs *DAG) {
 
 /// Implement the fusion of instr pairs in the scheduling DAG,
 /// anchored at the instr in AnchorSU..
-bool MacroFusion::scheduleAdjacentImpl(ScheduleDAGInstrs &DAG, SUnit &AnchorSU) {
+bool MacroFusion::scheduleAdjacentImpl(ScheduleDAGInstrs &DAG,
+                                       SUnit &AnchorSU) {
   const MachineInstr &AnchorMI = *AnchorSU.getInstr();
   const TargetInstrInfo &TII = *DAG.TII;
   const TargetSubtargetInfo &ST = DAG.MF.getSubtarget();
@@ -196,18 +201,16 @@ bool MacroFusion::scheduleAdjacentImpl(ScheduleDAGInstrs &DAG, SUnit &AnchorSU)
   return false;
 }
 
-std::unique_ptr<ScheduleDAGMutation>
-llvm::createMacroFusionDAGMutation(
-     ShouldSchedulePredTy shouldScheduleAdjacent) {
-  if(EnableMacroFusion)
+std::unique_ptr<ScheduleDAGMutation> llvm::createMacroFusionDAGMutation(
+    ShouldSchedulePredTy shouldScheduleAdjacent) {
+  if (EnableMacroFusion)
     return std::make_unique<MacroFusion>(shouldScheduleAdjacent, true);
   return nullptr;
 }
 
-std::unique_ptr<ScheduleDAGMutation>
-llvm::createBranchMacroFusionDAGMutation(
-     ShouldSchedulePredTy shouldScheduleAdjacent) {
-  if(EnableMacroFusion)
+std::unique_ptr<ScheduleDAGMutation> llvm::createBranchMacroFusionDAGMutation(
+    ShouldSchedulePredTy shouldScheduleAdjacent) {
+  if (EnableMacroFusion)
     return std::make_unique<MacroFusion>(shouldScheduleAdjacent, false);
   return nullptr;
 }
diff --git a/llvm/lib/CodeGen/ModuloSchedule.cpp b/llvm/lib/CodeGen/ModuloSchedule.cpp
index 0bef513342ff123..1595195a02781bf 100644
--- a/llvm/lib/CodeGen/ModuloSchedule.cpp
+++ b/llvm/lib/CodeGen/ModuloSchedule.cpp
@@ -1873,9 +1873,10 @@ MachineBasicBlock *PeelingModuloScheduleExpander::CreateLCSSAExitingBlock() {
     for (MachineInstr *Use : Uses)
       Use->substituteRegister(OldR, R, /*SubIdx=*/0,
                               *MRI.getTargetRegisterInfo());
-    MachineInstr *NI = BuildMI(NewBB, DebugLoc(), TII->get(TargetOpcode::PHI), R)
-        .addReg(OldR)
-        .addMBB(BB);
+    MachineInstr *NI =
+        BuildMI(NewBB, DebugLoc(), TII->get(TargetOpcode::PHI), R)
+            .addReg(OldR)
+            .addMBB(BB);
     BlockMIs[{NewBB, &MI}] = NI;
     CanonicalMIs[NI] = &MI;
   }
diff --git a/llvm/lib/CodeGen/OptimizePHIs.cpp b/llvm/lib/CodeGen/OptimizePHIs.cpp
index d997fbbed5a68e2..74e3628ff4512e5 100644
--- a/llvm/lib/CodeGen/OptimizePHIs.cpp
+++ b/llvm/lib/CodeGen/OptimizePHIs.cpp
@@ -33,33 +33,33 @@ STATISTIC(NumDeadPHICycles, "Number of dead PHI cycles");
 
 namespace {
 
-  class OptimizePHIs : public MachineFunctionPass {
-    MachineRegisterInfo *MRI = nullptr;
-    const TargetInstrInfo *TII = nullptr;
+class OptimizePHIs : public MachineFunctionPass {
+  MachineRegisterInfo *MRI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
 
-  public:
-    static char ID; // Pass identification
+public:
+  static char ID; // Pass identification
 
-    OptimizePHIs() : MachineFunctionPass(ID) {
-      initializeOptimizePHIsPass(*PassRegistry::getPassRegistry());
-    }
+  OptimizePHIs() : MachineFunctionPass(ID) {
+    initializeOptimizePHIsPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &Fn) override;
+  bool runOnMachineFunction(MachineFunction &Fn) override;
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-  private:
-    using InstrSet = SmallPtrSet<MachineInstr *, 16>;
-    using InstrSetIterator = SmallPtrSetIterator<MachineInstr *>;
+private:
+  using InstrSet = SmallPtrSet<MachineInstr *, 16>;
+  using InstrSetIterator = SmallPtrSetIterator<MachineInstr *>;
 
-    bool IsSingleValuePHICycle(MachineInstr *MI, unsigned &SingleValReg,
-                               InstrSet &PHIsInCycle);
-    bool IsDeadPHICycle(MachineInstr *MI, InstrSet &PHIsInCycle);
-    bool OptimizeBB(MachineBasicBlock &MBB);
-  };
+  bool IsSingleValuePHICycle(MachineInstr *MI, unsigned &SingleValReg,
+                             InstrSet &PHIsInCycle);
+  bool IsDeadPHICycle(MachineInstr *MI, InstrSet &PHIsInCycle);
+  bool OptimizeBB(MachineBasicBlock &MBB);
+};
 
 } // end anonymous namespace
 
@@ -67,8 +67,8 @@ char OptimizePHIs::ID = 0;
 
 char &llvm::OptimizePHIsID = OptimizePHIs::ID;
 
-INITIALIZE_PASS(OptimizePHIs, DEBUG_TYPE,
-                "Optimize machine instruction PHIs", false, false)
+INITIALIZE_PASS(OptimizePHIs, DEBUG_TYPE, "Optimize machine instruction PHIs",
+                false, false)
 
 bool OptimizePHIs::runOnMachineFunction(MachineFunction &Fn) {
   if (skipFunction(Fn.getFunction()))
@@ -164,8 +164,8 @@ bool OptimizePHIs::IsDeadPHICycle(MachineInstr *MI, InstrSet &PHIsInCycle) {
 /// a single value.
 bool OptimizePHIs::OptimizeBB(MachineBasicBlock &MBB) {
   bool Changed = false;
-  for (MachineBasicBlock::iterator
-         MII = MBB.begin(), E = MBB.end(); MII != E; ) {
+  for (MachineBasicBlock::iterator MII = MBB.begin(), E = MBB.end();
+       MII != E;) {
     MachineInstr *MI = &*MII++;
     if (!MI->isPHI())
       break;
diff --git a/llvm/lib/CodeGen/PHIElimination.cpp b/llvm/lib/CodeGen/PHIElimination.cpp
index dbb9a9ffdf60b53..3b5ca733937a043 100644
--- a/llvm/lib/CodeGen/PHIElimination.cpp
+++ b/llvm/lib/CodeGen/PHIElimination.cpp
@@ -47,14 +47,16 @@ using namespace llvm;
 #define DEBUG_TYPE "phi-node-elimination"
 
 static cl::opt<bool>
-DisableEdgeSplitting("disable-phi-elim-edge-splitting", cl::init(false),
-                     cl::Hidden, cl::desc("Disable critical edge splitting "
-                                          "during PHI elimination"));
+    DisableEdgeSplitting("disable-phi-elim-edge-splitting", cl::init(false),
+                         cl::Hidden,
+                         cl::desc("Disable critical edge splitting "
+                                  "during PHI elimination"));
 
 static cl::opt<bool>
-SplitAllCriticalEdges("phi-elim-split-all-critical-edges", cl::init(false),
-                      cl::Hidden, cl::desc("Split all critical edges during "
-                                           "PHI elimination"));
+    SplitAllCriticalEdges("phi-elim-split-all-critical-edges", cl::init(false),
+                          cl::Hidden,
+                          cl::desc("Split all critical edges during "
+                                   "PHI elimination"));
 
 static cl::opt<bool> NoPhiElimLiveOutEarlyExit(
     "no-phi-elim-live-out-early-exit", cl::init(false), cl::Hidden,
@@ -62,60 +64,60 @@ static cl::opt<bool> NoPhiElimLiveOutEarlyExit(
 
 namespace {
 
-  class PHIElimination : public MachineFunctionPass {
-    MachineRegisterInfo *MRI = nullptr; // Machine register information
-    LiveVariables *LV = nullptr;
-    LiveIntervals *LIS = nullptr;
+class PHIElimination : public MachineFunctionPass {
+  MachineRegisterInfo *MRI = nullptr; // Machine register information
+  LiveVariables *LV = nullptr;
+  LiveIntervals *LIS = nullptr;
 
-  public:
-    static char ID; // Pass identification, replacement for typeid
+public:
+  static char ID; // Pass identification, replacement for typeid
 
-    PHIElimination() : MachineFunctionPass(ID) {
-      initializePHIEliminationPass(*PassRegistry::getPassRegistry());
-    }
+  PHIElimination() : MachineFunctionPass(ID) {
+    initializePHIEliminationPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
-    void getAnalysisUsage(AnalysisUsage &AU) const override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
+  void getAnalysisUsage(AnalysisUsage &AU) const override;
 
-  private:
-    /// EliminatePHINodes - Eliminate phi nodes by inserting copy instructions
-    /// in predecessor basic blocks.
-    bool EliminatePHINodes(MachineFunction &MF, MachineBasicBlock &MBB);
+private:
+  /// EliminatePHINodes - Eliminate phi nodes by inserting copy instructions
+  /// in predecessor basic blocks.
+  bool EliminatePHINodes(MachineFunction &MF, MachineBasicBlock &MBB);
 
-    void LowerPHINode(MachineBasicBlock &MBB,
-                      MachineBasicBlock::iterator LastPHIIt);
+  void LowerPHINode(MachineBasicBlock &MBB,
+                    MachineBasicBlock::iterator LastPHIIt);
 
-    /// analyzePHINodes - Gather information about the PHI nodes in
-    /// here. In particular, we want to map the number of uses of a virtual
-    /// register which is used in a PHI node. We map that to the BB the
-    /// vreg is coming from. This is used later to determine when the vreg
-    /// is killed in the BB.
-    void analyzePHINodes(const MachineFunction& MF);
+  /// analyzePHINodes - Gather information about the PHI nodes in
+  /// here. In particular, we want to map the number of uses of a virtual
+  /// register which is used in a PHI node. We map that to the BB the
+  /// vreg is coming from. This is used later to determine when the vreg
+  /// is killed in the BB.
+  void analyzePHINodes(const MachineFunction &MF);
 
-    /// Split critical edges where necessary for good coalescer performance.
-    bool SplitPHIEdges(MachineFunction &MF, MachineBasicBlock &MBB,
-                       MachineLoopInfo *MLI,
-                       std::vector<SparseBitVector<>> *LiveInSets);
+  /// Split critical edges where necessary for good coalescer performance.
+  bool SplitPHIEdges(MachineFunction &MF, MachineBasicBlock &MBB,
+                     MachineLoopInfo *MLI,
+                     std::vector<SparseBitVector<>> *LiveInSets);
 
-    // These functions are temporary abstractions around LiveVariables and
-    // LiveIntervals, so they can go away when LiveVariables does.
-    bool isLiveIn(Register Reg, const MachineBasicBlock *MBB);
-    bool isLiveOutPastPHIs(Register Reg, const MachineBasicBlock *MBB);
+  // These functions are temporary abstractions around LiveVariables and
+  // LiveIntervals, so they can go away when LiveVariables does.
+  bool isLiveIn(Register Reg, const MachineBasicBlock *MBB);
+  bool isLiveOutPastPHIs(Register Reg, const MachineBasicBlock *MBB);
 
-    using BBVRegPair = std::pair<unsigned, Register>;
-    using VRegPHIUse = DenseMap<BBVRegPair, unsigned>;
+  using BBVRegPair = std::pair<unsigned, Register>;
+  using VRegPHIUse = DenseMap<BBVRegPair, unsigned>;
 
-    // Count the number of non-undef PHI uses of each register in each BB.
-    VRegPHIUse VRegPHIUseCount;
+  // Count the number of non-undef PHI uses of each register in each BB.
+  VRegPHIUse VRegPHIUseCount;
 
-    // Defs of PHI sources which are implicit_def.
-    SmallPtrSet<MachineInstr*, 4> ImpDefs;
+  // Defs of PHI sources which are implicit_def.
+  SmallPtrSet<MachineInstr *, 4> ImpDefs;
 
-    // Map reusable lowered PHI node -> incoming join register.
-    using LoweredPHIMap =
-        DenseMap<MachineInstr*, unsigned, MachineInstrExpressionTrait>;
-    LoweredPHIMap LoweredPHIs;
-  };
+  // Map reusable lowered PHI node -> incoming join register.
+  using LoweredPHIMap =
+      DenseMap<MachineInstr *, unsigned, MachineInstrExpressionTrait>;
+  LoweredPHIMap LoweredPHIs;
+};
 
 } // end anonymous namespace
 
@@ -125,11 +127,11 @@ STATISTIC(NumReused, "Number of reused lowered phis");
 
 char PHIElimination::ID = 0;
 
-char& llvm::PHIEliminationID = PHIElimination::ID;
+char &llvm::PHIEliminationID = PHIElimination::ID;
 
 INITIALIZE_PASS_BEGIN(PHIElimination, DEBUG_TYPE,
-                      "Eliminate PHI nodes for register allocation",
-                      false, false)
+                      "Eliminate PHI nodes for register allocation", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(LiveVariables)
 INITIALIZE_PASS_END(PHIElimination, DEBUG_TYPE,
                     "Eliminate PHI nodes for register allocation", false, false)
@@ -233,11 +235,11 @@ bool PHIElimination::runOnMachineFunction(MachineFunction &MF) {
 bool PHIElimination::EliminatePHINodes(MachineFunction &MF,
                                        MachineBasicBlock &MBB) {
   if (MBB.empty() || !MBB.front().isPHI())
-    return false;   // Quick exit for basic blocks without PHIs.
+    return false; // Quick exit for basic blocks without PHIs.
 
   // Get an iterator to the last PHI node.
   MachineBasicBlock::iterator LastPHIIt =
-    std::prev(MBB.SkipPHIsAndLabels(MBB.begin()));
+      std::prev(MBB.SkipPHIsAndLabels(MBB.begin()));
 
   while (MBB.front().isPHI())
     LowerPHINode(MBB, LastPHIIt);
@@ -283,7 +285,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
   // Create a new register for the incoming PHI arguments.
   MachineFunction &MF = *MBB.getParent();
   unsigned IncomingReg = 0;
-  bool reusedIncoming = false;  // Is IncomingReg reused from an earlier PHI?
+  bool reusedIncoming = false; // Is IncomingReg reused from an earlier PHI?
 
   // Insert a register to register copy at the top of the current block (but
   // after any remaining phi nodes) which copies the new incoming register
@@ -294,7 +296,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
     // If all sources of a PHI node are implicit_def or undef uses, just emit an
     // implicit_def instead of a copy.
     PHICopy = BuildMI(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
-            TII->get(TargetOpcode::IMPLICIT_DEF), DestReg);
+                      TII->get(TargetOpcode::IMPLICIT_DEF), DestReg);
   else {
     // Can we reuse an earlier PHI node? This only happens for critical edges,
     // typically those created by tail duplication.
@@ -311,8 +313,8 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
       entry = IncomingReg = MF.getRegInfo().createVirtualRegister(RC);
     }
     // Give the target possiblity to handle special cases fallthrough otherwise
-    PHICopy = TII->createPHIDestinationCopy(MBB, AfterPHIsIt, MPhi->getDebugLoc(),
-                                  IncomingReg, DestReg);
+    PHICopy = TII->createPHIDestinationCopy(
+        MBB, AfterPHIsIt, MPhi->getDebugLoc(), IncomingReg, DestReg);
   }
 
   if (MPhi->peekDebugInstrNum()) {
@@ -342,8 +344,8 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
         // by default, so it's before the OldKill. But some Target hooks for
         // createPHIDestinationCopy() may modify the default insert position of
         // PHICopy.
-        for (auto I = MBB.SkipPHIsAndLabels(MBB.begin()), E = MBB.end();
-             I != E; ++I) {
+        for (auto I = MBB.SkipPHIsAndLabels(MBB.begin()), E = MBB.end(); I != E;
+             ++I) {
           if (I == PHICopy)
             break;
 
@@ -395,11 +397,10 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
       LiveInterval &IncomingLI = LIS->createEmptyInterval(IncomingReg);
       VNInfo *IncomingVNI = IncomingLI.getVNInfoAt(MBBStartIndex);
       if (!IncomingVNI)
-        IncomingVNI = IncomingLI.getNextValue(MBBStartIndex,
-                                              LIS->getVNInfoAllocator());
-      IncomingLI.addSegment(LiveInterval::Segment(MBBStartIndex,
-                                                  DestCopyIndex.getRegSlot(),
-                                                  IncomingVNI));
+        IncomingVNI =
+            IncomingLI.getNextValue(MBBStartIndex, LIS->getVNInfoAllocator());
+      IncomingLI.addSegment(LiveInterval::Segment(
+          MBBStartIndex, DestCopyIndex.getRegSlot(), IncomingVNI));
     }
 
     LiveInterval &DestLI = LIS->getInterval(DestReg);
@@ -435,24 +436,24 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
 
   // Now loop over all of the incoming arguments, changing them to copy into the
   // IncomingReg register in the corresponding predecessor basic block.
-  SmallPtrSet<MachineBasicBlock*, 8> MBBsInsertedInto;
+  SmallPtrSet<MachineBasicBlock *, 8> MBBsInsertedInto;
   for (int i = NumSrcs - 1; i >= 0; --i) {
     Register SrcReg = MPhi->getOperand(i * 2 + 1).getReg();
-    unsigned SrcSubReg = MPhi->getOperand(i*2+1).getSubReg();
-    bool SrcUndef = MPhi->getOperand(i*2+1).isUndef() ||
-      isImplicitlyDefined(SrcReg, *MRI);
+    unsigned SrcSubReg = MPhi->getOperand(i * 2 + 1).getSubReg();
+    bool SrcUndef = MPhi->getOperand(i * 2 + 1).isUndef() ||
+                    isImplicitlyDefined(SrcReg, *MRI);
     assert(SrcReg.isVirtual() &&
            "Machine PHI Operands must all be virtual registers!");
 
     // Get the MachineBasicBlock equivalent of the BasicBlock that is the source
     // path the PHI.
-    MachineBasicBlock &opBlock = *MPhi->getOperand(i*2+2).getMBB();
+    MachineBasicBlock &opBlock = *MPhi->getOperand(i * 2 + 2).getMBB();
 
     // Check to make sure we haven't already emitted the copy for this block.
     // This can happen because PHI nodes may have multiple entries for the same
     // basic block.
     if (!MBBsInsertedInto.insert(&opBlock).second)
-      continue;  // If the copy has already been emitted, we're done.
+      continue; // If the copy has already been emitted, we're done.
 
     MachineInstr *SrcRegDef = MRI->getVRegDef(SrcReg);
     if (SrcRegDef && TII->isUnspillableTerminator(SrcRegDef)) {
@@ -479,7 +480,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
     // Find a safe location to insert the copy, this may be the first terminator
     // in the block (or end()).
     MachineBasicBlock::iterator InsertPos =
-      findPHICopyInsertPoint(&opBlock, &MBB, SrcReg);
+        findPHICopyInsertPoint(&opBlock, &MBB, SrcReg);
 
     // Insert the copy.
     MachineInstr *NewSrcInstr = nullptr;
@@ -488,9 +489,9 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
         // The source register is undefined, so there is no need for a real
         // COPY, but we still need to ensure joint dominance by defs.
         // Insert an IMPLICIT_DEF instruction.
-        NewSrcInstr = BuildMI(opBlock, InsertPos, MPhi->getDebugLoc(),
-                              TII->get(TargetOpcode::IMPLICIT_DEF),
-                              IncomingReg);
+        NewSrcInstr =
+            BuildMI(opBlock, InsertPos, MPhi->getDebugLoc(),
+                    TII->get(TargetOpcode::IMPLICIT_DEF), IncomingReg);
 
         // Clean up the old implicit-def, if there even was one.
         if (MachineInstr *DefMI = MRI->getVRegDef(SrcReg))
@@ -632,7 +633,7 @@ void PHIElimination::LowerPHINode(MachineBasicBlock &MBB,
 /// particular, we want to map the number of uses of a virtual register which is
 /// used in a PHI node. We map that to the BB the vreg is coming from. This is
 /// used later to determine when the vreg is killed in the BB.
-void PHIElimination::analyzePHINodes(const MachineFunction& MF) {
+void PHIElimination::analyzePHINodes(const MachineFunction &MF) {
   for (const auto &MBB : MF) {
     for (const auto &BBI : MBB) {
       if (!BBI.isPHI())
@@ -648,12 +649,11 @@ void PHIElimination::analyzePHINodes(const MachineFunction& MF) {
   }
 }
 
-bool PHIElimination::SplitPHIEdges(MachineFunction &MF,
-                                   MachineBasicBlock &MBB,
+bool PHIElimination::SplitPHIEdges(MachineFunction &MF, MachineBasicBlock &MBB,
                                    MachineLoopInfo *MLI,
                                    std::vector<SparseBitVector<>> *LiveInSets) {
   if (MBB.empty() || !MBB.front().isPHI() || MBB.isEHPad())
-    return false;   // Quick exit for basic blocks without PHIs.
+    return false; // Quick exit for basic blocks without PHIs.
 
   const MachineLoop *CurLoop = MLI ? MLI->getLoopFor(&MBB) : nullptr;
   bool IsLoopHeader = CurLoop && &MBB == CurLoop->getHeader();
@@ -663,7 +663,7 @@ bool PHIElimination::SplitPHIEdges(MachineFunction &MF,
        BBI != BBE && BBI->isPHI(); ++BBI) {
     for (unsigned i = 1, e = BBI->getNumOperands(); i != e; i += 2) {
       Register Reg = BBI->getOperand(i).getReg();
-      MachineBasicBlock *PreMBB = BBI->getOperand(i+1).getMBB();
+      MachineBasicBlock *PreMBB = BBI->getOperand(i + 1).getMBB();
       // Is there a critical edge from PreMBB to MBB?
       if (PreMBB->succ_size() == 1)
         continue;
@@ -744,9 +744,9 @@ bool PHIElimination::isLiveOutPastPHIs(Register Reg,
          "isLiveOutPastPHIs() requires either LiveVariables or LiveIntervals");
   // LiveVariables considers uses in PHIs to be in the predecessor basic block,
   // so that a register used only in a PHI is not live out of the block. In
-  // contrast, LiveIntervals considers uses in PHIs to be on the edge rather than
-  // in the predecessor basic block, so that a register used only in a PHI is live
-  // out of the block.
+  // contrast, LiveIntervals considers uses in PHIs to be on the edge rather
+  // than in the predecessor basic block, so that a register used only in a PHI
+  // is live out of the block.
   if (LIS) {
     const LiveInterval &LI = LIS->getInterval(Reg);
     for (const MachineBasicBlock *SI : MBB->successors())
diff --git a/llvm/lib/CodeGen/PHIEliminationUtils.cpp b/llvm/lib/CodeGen/PHIEliminationUtils.cpp
index 016335f420d3e68..4f54f440aa5667e 100644
--- a/llvm/lib/CodeGen/PHIEliminationUtils.cpp
+++ b/llvm/lib/CodeGen/PHIEliminationUtils.cpp
@@ -18,7 +18,7 @@ using namespace llvm;
 // SrcReg, but before any subsequent point where control flow might jump out of
 // the basic block.
 MachineBasicBlock::iterator
-llvm::findPHICopyInsertPoint(MachineBasicBlock* MBB, MachineBasicBlock* SuccMBB,
+llvm::findPHICopyInsertPoint(MachineBasicBlock *MBB, MachineBasicBlock *SuccMBB,
                              unsigned SrcReg) {
   // Handle the trivial case trivially.
   if (MBB->empty())
@@ -37,7 +37,7 @@ llvm::findPHICopyInsertPoint(MachineBasicBlock* MBB, MachineBasicBlock* SuccMBB,
 
   // Discover any defs in this basic block.
   SmallPtrSet<MachineInstr *, 8> DefsInMBB;
-  MachineRegisterInfo& MRI = MBB->getParent()->getRegInfo();
+  MachineRegisterInfo &MRI = MBB->getParent()->getRegInfo();
   for (MachineInstr &RI : MRI.def_instructions(SrcReg))
     if (RI.getParent() == MBB)
       DefsInMBB.insert(&RI);
diff --git a/llvm/lib/CodeGen/PHIEliminationUtils.h b/llvm/lib/CodeGen/PHIEliminationUtils.h
index 0ff3a41f47d30ef..13f7c764a1a88ac 100644
--- a/llvm/lib/CodeGen/PHIEliminationUtils.h
+++ b/llvm/lib/CodeGen/PHIEliminationUtils.h
@@ -12,13 +12,13 @@
 #include "llvm/CodeGen/MachineBasicBlock.h"
 
 namespace llvm {
-    /// findPHICopyInsertPoint - Find a safe place in MBB to insert a copy from
-    /// SrcReg when following the CFG edge to SuccMBB. This needs to be after
-    /// any def of SrcReg, but before any subsequent point where control flow
-    /// might jump out of the basic block.
-    MachineBasicBlock::iterator
-    findPHICopyInsertPoint(MachineBasicBlock* MBB, MachineBasicBlock* SuccMBB,
-                           unsigned SrcReg);
-}
+/// findPHICopyInsertPoint - Find a safe place in MBB to insert a copy from
+/// SrcReg when following the CFG edge to SuccMBB. This needs to be after
+/// any def of SrcReg, but before any subsequent point where control flow
+/// might jump out of the basic block.
+MachineBasicBlock::iterator findPHICopyInsertPoint(MachineBasicBlock *MBB,
+                                                   MachineBasicBlock *SuccMBB,
+                                                   unsigned SrcReg);
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/PatchableFunction.cpp b/llvm/lib/CodeGen/PatchableFunction.cpp
index 9449f143366f0fe..9c5f8c0317ff826 100644
--- a/llvm/lib/CodeGen/PatchableFunction.cpp
+++ b/llvm/lib/CodeGen/PatchableFunction.cpp
@@ -30,12 +30,12 @@ struct PatchableFunction : public MachineFunctionPass {
   }
 
   bool runOnMachineFunction(MachineFunction &F) override;
-   MachineFunctionProperties getRequiredProperties() const override {
+  MachineFunctionProperties getRequiredProperties() const override {
     return MachineFunctionProperties().set(
         MachineFunctionProperties::Property::NoVRegs);
   }
 };
-}
+} // namespace
 
 bool PatchableFunction::runOnMachineFunction(MachineFunction &MF) {
   if (MF.getFunction().hasFnAttribute("patchable-function-entry")) {
diff --git a/llvm/lib/CodeGen/PeepholeOptimizer.cpp b/llvm/lib/CodeGen/PeepholeOptimizer.cpp
index a08cc78f11b1b08..d36391d8e2c73d7 100644
--- a/llvm/lib/CodeGen/PeepholeOptimizer.cpp
+++ b/llvm/lib/CodeGen/PeepholeOptimizer.cpp
@@ -102,20 +102,19 @@ using RegSubRegPairAndIdx = TargetInstrInfo::RegSubRegPairAndIdx;
 #define DEBUG_TYPE "peephole-opt"
 
 // Optimize Extensions
-static cl::opt<bool>
-Aggressive("aggressive-ext-opt", cl::Hidden,
-           cl::desc("Aggressive extension optimization"));
+static cl::opt<bool> Aggressive("aggressive-ext-opt", cl::Hidden,
+                                cl::desc("Aggressive extension optimization"));
 
 static cl::opt<bool>
-DisablePeephole("disable-peephole", cl::Hidden, cl::init(false),
-                cl::desc("Disable the peephole optimizer"));
+    DisablePeephole("disable-peephole", cl::Hidden, cl::init(false),
+                    cl::desc("Disable the peephole optimizer"));
 
 /// Specifiy whether or not the value tracking looks through
 /// complex instructions. When this is true, the value tracker
 /// bails on everything that is not a copy or a bitcast.
 static cl::opt<bool>
-DisableAdvCopyOpt("disable-adv-copy-opt", cl::Hidden, cl::init(false),
-                  cl::desc("Disable advanced copy optimization"));
+    DisableAdvCopyOpt("disable-adv-copy-opt", cl::Hidden, cl::init(false),
+                      cl::desc("Disable advanced copy optimization"));
 
 static cl::opt<bool> DisableNAPhysCopyOpt(
     "disable-non-allocatable-phys-copy-opt", cl::Hidden, cl::init(false),
@@ -123,9 +122,9 @@ static cl::opt<bool> DisableNAPhysCopyOpt(
 
 // Limit the number of PHI instructions to process
 // in PeepholeOptimizer::getNextSource.
-static cl::opt<unsigned> RewritePHILimit(
-    "rewrite-phi-limit", cl::Hidden, cl::init(10),
-    cl::desc("Limit the length of PHI chains to lookup"));
+static cl::opt<unsigned>
+    RewritePHILimit("rewrite-phi-limit", cl::Hidden, cl::init(10),
+                    cl::desc("Limit the length of PHI chains to lookup"));
 
 // Limit the length of recurrence chain when evaluating the benefit of
 // commuting operands.
@@ -134,7 +133,6 @@ static cl::opt<unsigned> MaxRecurrenceChain(
     cl::desc("Maximum length of recurrence chain when evaluating the benefit "
              "of commuting operands"));
 
-
 STATISTIC(NumReuse, "Number of extension results reused");
 STATISTIC(NumCmps, "Number of compares eliminated");
 STATISTIC(NumImmFold, "Number of move immediate folded");
@@ -146,294 +144,289 @@ STATISTIC(NumNAPhysCopies, "Number of non-allocatable physical copies removed");
 
 namespace {
 
-  class ValueTrackerResult;
-  class RecurrenceInstr;
+class ValueTrackerResult;
+class RecurrenceInstr;
 
-  class PeepholeOptimizer : public MachineFunctionPass {
-    const TargetInstrInfo *TII = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    MachineRegisterInfo *MRI = nullptr;
-    MachineDominatorTree *DT = nullptr; // Machine dominator tree
-    MachineLoopInfo *MLI = nullptr;
+class PeepholeOptimizer : public MachineFunctionPass {
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
+  MachineDominatorTree *DT = nullptr; // Machine dominator tree
+  MachineLoopInfo *MLI = nullptr;
 
-  public:
-    static char ID; // Pass identification
+public:
+  static char ID; // Pass identification
 
-    PeepholeOptimizer() : MachineFunctionPass(ID) {
-      initializePeepholeOptimizerPass(*PassRegistry::getPassRegistry());
-    }
+  PeepholeOptimizer() : MachineFunctionPass(ID) {
+    initializePeepholeOptimizerPass(*PassRegistry::getPassRegistry());
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-      AU.addRequired<MachineLoopInfo>();
-      AU.addPreserved<MachineLoopInfo>();
-      if (Aggressive) {
-        AU.addRequired<MachineDominatorTree>();
-        AU.addPreserved<MachineDominatorTree>();
-      }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+    AU.addRequired<MachineLoopInfo>();
+    AU.addPreserved<MachineLoopInfo>();
+    if (Aggressive) {
+      AU.addRequired<MachineDominatorTree>();
+      AU.addPreserved<MachineDominatorTree>();
     }
+  }
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties()
-        .set(MachineFunctionProperties::Property::IsSSA);
-    }
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::IsSSA);
+  }
 
-    /// Track Def -> Use info used for rewriting copies.
-    using RewriteMapTy = SmallDenseMap<RegSubRegPair, ValueTrackerResult>;
+  /// Track Def -> Use info used for rewriting copies.
+  using RewriteMapTy = SmallDenseMap<RegSubRegPair, ValueTrackerResult>;
 
-    /// Sequence of instructions that formulate recurrence cycle.
-    using RecurrenceCycle = SmallVector<RecurrenceInstr, 4>;
+  /// Sequence of instructions that formulate recurrence cycle.
+  using RecurrenceCycle = SmallVector<RecurrenceInstr, 4>;
 
-  private:
-    bool optimizeCmpInstr(MachineInstr &MI);
-    bool optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
-                          SmallPtrSetImpl<MachineInstr*> &LocalMIs);
-    bool optimizeSelect(MachineInstr &MI,
+private:
+  bool optimizeCmpInstr(MachineInstr &MI);
+  bool optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
                         SmallPtrSetImpl<MachineInstr *> &LocalMIs);
-    bool optimizeCondBranch(MachineInstr &MI);
-    bool optimizeCoalescableCopy(MachineInstr &MI);
-    bool optimizeUncoalescableCopy(MachineInstr &MI,
-                                   SmallPtrSetImpl<MachineInstr *> &LocalMIs);
-    bool optimizeRecurrence(MachineInstr &PHI);
-    bool findNextSource(RegSubRegPair RegSubReg, RewriteMapTy &RewriteMap);
-    bool isMoveImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
-                         DenseMap<Register, MachineInstr *> &ImmDefMIs);
-    bool foldImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
+  bool optimizeSelect(MachineInstr &MI,
+                      SmallPtrSetImpl<MachineInstr *> &LocalMIs);
+  bool optimizeCondBranch(MachineInstr &MI);
+  bool optimizeCoalescableCopy(MachineInstr &MI);
+  bool optimizeUncoalescableCopy(MachineInstr &MI,
+                                 SmallPtrSetImpl<MachineInstr *> &LocalMIs);
+  bool optimizeRecurrence(MachineInstr &PHI);
+  bool findNextSource(RegSubRegPair RegSubReg, RewriteMapTy &RewriteMap);
+  bool isMoveImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
                        DenseMap<Register, MachineInstr *> &ImmDefMIs);
+  bool foldImmediate(MachineInstr &MI, SmallSet<Register, 4> &ImmDefRegs,
+                     DenseMap<Register, MachineInstr *> &ImmDefMIs);
+
+  /// Finds recurrence cycles, but only ones that are formulated around
+  /// a def operand and a use operand that are tied. If there is a use
+  /// operand commutable with the tied use operand, find the recurrence
+  /// cycle along that operand as well.
+  bool findTargetRecurrence(Register Reg,
+                            const SmallSet<Register, 2> &TargetReg,
+                            RecurrenceCycle &RC);
+
+  /// If copy instruction \p MI is a virtual register copy or a copy of a
+  /// constant physical register to a virtual register, track it in the
+  /// set \p CopyMIs. If this virtual register was previously seen as a
+  /// copy, replace the uses of this copy with the previously seen copy's
+  /// destination register.
+  bool foldRedundantCopy(MachineInstr &MI,
+                         DenseMap<RegSubRegPair, MachineInstr *> &CopyMIs);
+
+  /// Is the register \p Reg a non-allocatable physical register?
+  bool isNAPhysCopy(Register Reg);
+
+  /// If copy instruction \p MI is a non-allocatable virtual<->physical
+  /// register copy, track it in the \p NAPhysToVirtMIs map. If this
+  /// non-allocatable physical register was previously copied to a virtual
+  /// register and hasn't been clobbered, the virt->phys copy can be
+  /// deleted.
+  bool
+  foldRedundantNAPhysCopy(MachineInstr &MI,
+                          DenseMap<Register, MachineInstr *> &NAPhysToVirtMIs);
+
+  bool isLoadFoldable(MachineInstr &MI,
+                      SmallSet<Register, 16> &FoldAsLoadDefCandidates);
+
+  /// Check whether \p MI is understood by the register coalescer
+  /// but may require some rewriting.
+  bool isCoalescableCopy(const MachineInstr &MI) {
+    // SubregToRegs are not interesting, because they are already register
+    // coalescer friendly.
+    return MI.isCopy() ||
+           (!DisableAdvCopyOpt && (MI.isRegSequence() || MI.isInsertSubreg() ||
+                                   MI.isExtractSubreg()));
+  }
 
-    /// Finds recurrence cycles, but only ones that formulated around
-    /// a def operand and a use operand that are tied. If there is a use
-    /// operand commutable with the tied use operand, find recurrence cycle
-    /// along that operand as well.
-    bool findTargetRecurrence(Register Reg,
-                              const SmallSet<Register, 2> &TargetReg,
-                              RecurrenceCycle &RC);
-
-    /// If copy instruction \p MI is a virtual register copy or a copy of a
-    /// constant physical register to a virtual register, track it in the
-    /// set \p CopyMIs. If this virtual register was previously seen as a
-    /// copy, replace the uses of this copy with the previously seen copy's
-    /// destination register.
-    bool foldRedundantCopy(MachineInstr &MI,
-                           DenseMap<RegSubRegPair, MachineInstr *> &CopyMIs);
-
-    /// Is the register \p Reg a non-allocatable physical register?
-    bool isNAPhysCopy(Register Reg);
-
-    /// If copy instruction \p MI is a non-allocatable virtual<->physical
-    /// register copy, track it in the \p NAPhysToVirtMIs map. If this
-    /// non-allocatable physical register was previously copied to a virtual
-    /// registered and hasn't been clobbered, the virt->phys copy can be
-    /// deleted.
-    bool foldRedundantNAPhysCopy(
-        MachineInstr &MI, DenseMap<Register, MachineInstr *> &NAPhysToVirtMIs);
-
-    bool isLoadFoldable(MachineInstr &MI,
-                        SmallSet<Register, 16> &FoldAsLoadDefCandidates);
-
-    /// Check whether \p MI is understood by the register coalescer
-    /// but may require some rewriting.
-    bool isCoalescableCopy(const MachineInstr &MI) {
-      // SubregToRegs are not interesting, because they are already register
-      // coalescer friendly.
-      return MI.isCopy() || (!DisableAdvCopyOpt &&
-                             (MI.isRegSequence() || MI.isInsertSubreg() ||
-                              MI.isExtractSubreg()));
-    }
+  /// Check whether \p MI is a copy like instruction that is
+  /// not recognized by the register coalescer.
+  bool isUncoalescableCopy(const MachineInstr &MI) {
+    return MI.isBitcast() || (!DisableAdvCopyOpt && (MI.isRegSequenceLike() ||
+                                                     MI.isInsertSubregLike() ||
+                                                     MI.isExtractSubregLike()));
+  }
 
-    /// Check whether \p MI is a copy like instruction that is
-    /// not recognized by the register coalescer.
-    bool isUncoalescableCopy(const MachineInstr &MI) {
-      return MI.isBitcast() ||
-             (!DisableAdvCopyOpt &&
-              (MI.isRegSequenceLike() || MI.isInsertSubregLike() ||
-               MI.isExtractSubregLike()));
-    }
+  MachineInstr &rewriteSource(MachineInstr &CopyLike, RegSubRegPair Def,
+                              RewriteMapTy &RewriteMap);
+};
+
+/// Helper class to hold instructions that are inside recurrence cycles.
+/// The recurrence cycle is formulated around 1) a def operand and its
+/// tied use operand, or 2) a def operand and a use operand that is commutable
+/// with another use operand which is tied to the def operand. In the latter
+/// case, the indices of the tied use operand and the commutable use operand
+/// are maintained in CommutePair.
+class RecurrenceInstr {
+public:
+  using IndexPair = std::pair<unsigned, unsigned>;
 
-    MachineInstr &rewriteSource(MachineInstr &CopyLike,
-                                RegSubRegPair Def, RewriteMapTy &RewriteMap);
-  };
-
-  /// Helper class to hold instructions that are inside recurrence cycles.
-  /// The recurrence cycle is formulated around 1) a def operand and its
-  /// tied use operand, or 2) a def operand and a use operand that is commutable
-  /// with another use operand which is tied to the def operand. In the latter
-  /// case, index of the tied use operand and the commutable use operand are
-  /// maintained with CommutePair.
-  class RecurrenceInstr {
-  public:
-    using IndexPair = std::pair<unsigned, unsigned>;
-
-    RecurrenceInstr(MachineInstr *MI) : MI(MI) {}
-    RecurrenceInstr(MachineInstr *MI, unsigned Idx1, unsigned Idx2)
+  RecurrenceInstr(MachineInstr *MI) : MI(MI) {}
+  RecurrenceInstr(MachineInstr *MI, unsigned Idx1, unsigned Idx2)
       : MI(MI), CommutePair(std::make_pair(Idx1, Idx2)) {}
 
-    MachineInstr *getMI() const { return MI; }
-    std::optional<IndexPair> getCommutePair() const { return CommutePair; }
+  MachineInstr *getMI() const { return MI; }
+  std::optional<IndexPair> getCommutePair() const { return CommutePair; }
 
-  private:
-    MachineInstr *MI;
-    std::optional<IndexPair> CommutePair;
-  };
+private:
+  MachineInstr *MI;
+  std::optional<IndexPair> CommutePair;
+};
 
-  /// Helper class to hold a reply for ValueTracker queries.
-  /// Contains the returned sources for a given search and the instructions
-  /// where the sources were tracked from.
-  class ValueTrackerResult {
-  private:
-    /// Track all sources found by one ValueTracker query.
-    SmallVector<RegSubRegPair, 2> RegSrcs;
+/// Helper class to hold a reply for ValueTracker queries.
+/// Contains the returned sources for a given search and the instructions
+/// where the sources were tracked from.
+class ValueTrackerResult {
+private:
+  /// Track all sources found by one ValueTracker query.
+  SmallVector<RegSubRegPair, 2> RegSrcs;
 
-    /// Instruction using the sources in 'RegSrcs'.
-    const MachineInstr *Inst = nullptr;
+  /// Instruction using the sources in 'RegSrcs'.
+  const MachineInstr *Inst = nullptr;
 
-  public:
-    ValueTrackerResult() = default;
+public:
+  ValueTrackerResult() = default;
 
-    ValueTrackerResult(Register Reg, unsigned SubReg) {
-      addSource(Reg, SubReg);
-    }
+  ValueTrackerResult(Register Reg, unsigned SubReg) { addSource(Reg, SubReg); }
 
-    bool isValid() const { return getNumSources() > 0; }
+  bool isValid() const { return getNumSources() > 0; }
 
-    void setInst(const MachineInstr *I) { Inst = I; }
-    const MachineInstr *getInst() const { return Inst; }
+  void setInst(const MachineInstr *I) { Inst = I; }
+  const MachineInstr *getInst() const { return Inst; }
 
-    void clear() {
-      RegSrcs.clear();
-      Inst = nullptr;
-    }
+  void clear() {
+    RegSrcs.clear();
+    Inst = nullptr;
+  }
 
-    void addSource(Register SrcReg, unsigned SrcSubReg) {
-      RegSrcs.push_back(RegSubRegPair(SrcReg, SrcSubReg));
-    }
+  void addSource(Register SrcReg, unsigned SrcSubReg) {
+    RegSrcs.push_back(RegSubRegPair(SrcReg, SrcSubReg));
+  }
 
-    void setSource(int Idx, Register SrcReg, unsigned SrcSubReg) {
-      assert(Idx < getNumSources() && "Reg pair source out of index");
-      RegSrcs[Idx] = RegSubRegPair(SrcReg, SrcSubReg);
-    }
+  void setSource(int Idx, Register SrcReg, unsigned SrcSubReg) {
+    assert(Idx < getNumSources() && "Reg pair source out of index");
+    RegSrcs[Idx] = RegSubRegPair(SrcReg, SrcSubReg);
+  }
 
-    int getNumSources() const { return RegSrcs.size(); }
+  int getNumSources() const { return RegSrcs.size(); }
 
-    RegSubRegPair getSrc(int Idx) const {
-      return RegSrcs[Idx];
-    }
+  RegSubRegPair getSrc(int Idx) const { return RegSrcs[Idx]; }
 
-    Register getSrcReg(int Idx) const {
-      assert(Idx < getNumSources() && "Reg source out of index");
-      return RegSrcs[Idx].Reg;
-    }
+  Register getSrcReg(int Idx) const {
+    assert(Idx < getNumSources() && "Reg source out of index");
+    return RegSrcs[Idx].Reg;
+  }
 
-    unsigned getSrcSubReg(int Idx) const {
-      assert(Idx < getNumSources() && "SubReg source out of index");
-      return RegSrcs[Idx].SubReg;
-    }
+  unsigned getSrcSubReg(int Idx) const {
+    assert(Idx < getNumSources() && "SubReg source out of index");
+    return RegSrcs[Idx].SubReg;
+  }
 
-    bool operator==(const ValueTrackerResult &Other) const {
-      if (Other.getInst() != getInst())
-        return false;
+  bool operator==(const ValueTrackerResult &Other) const {
+    if (Other.getInst() != getInst())
+      return false;
 
-      if (Other.getNumSources() != getNumSources())
+    if (Other.getNumSources() != getNumSources())
+      return false;
+
+    for (int i = 0, e = Other.getNumSources(); i != e; ++i)
+      if (Other.getSrcReg(i) != getSrcReg(i) ||
+          Other.getSrcSubReg(i) != getSrcSubReg(i))
         return false;
+    return true;
+  }
+};
 
-      for (int i = 0, e = Other.getNumSources(); i != e; ++i)
-        if (Other.getSrcReg(i) != getSrcReg(i) ||
-            Other.getSrcSubReg(i) != getSrcSubReg(i))
-          return false;
-      return true;
-    }
-  };
-
-  /// Helper class to track the possible sources of a value defined by
-  /// a (chain of) copy related instructions.
-  /// Given a definition (instruction and definition index), this class
-  /// follows the use-def chain to find successive suitable sources.
-  /// The given source can be used to rewrite the definition into
-  /// def = COPY src.
-  ///
-  /// For instance, let us consider the following snippet:
-  /// v0 =
-  /// v2 = INSERT_SUBREG v1, v0, sub0
-  /// def = COPY v2.sub0
-  ///
-  /// Using a ValueTracker for def = COPY v2.sub0 will give the following
-  /// suitable sources:
-  /// v2.sub0 and v0.
-  /// Then, def can be rewritten into def = COPY v0.
-  class ValueTracker {
-  private:
-    /// The current point into the use-def chain.
-    const MachineInstr *Def = nullptr;
-
-    /// The index of the definition in Def.
-    unsigned DefIdx = 0;
-
-    /// The sub register index of the definition.
-    unsigned DefSubReg;
-
-    /// The register where the value can be found.
-    Register Reg;
-
-    /// MachineRegisterInfo used to perform tracking.
-    const MachineRegisterInfo &MRI;
-
-    /// Optional TargetInstrInfo used to perform some complex tracking.
-    const TargetInstrInfo *TII;
-
-    /// Dispatcher to the right underlying implementation of getNextSource.
-    ValueTrackerResult getNextSourceImpl();
-
-    /// Specialized version of getNextSource for Copy instructions.
-    ValueTrackerResult getNextSourceFromCopy();
-
-    /// Specialized version of getNextSource for Bitcast instructions.
-    ValueTrackerResult getNextSourceFromBitcast();
-
-    /// Specialized version of getNextSource for RegSequence instructions.
-    ValueTrackerResult getNextSourceFromRegSequence();
-
-    /// Specialized version of getNextSource for InsertSubreg instructions.
-    ValueTrackerResult getNextSourceFromInsertSubreg();
-
-    /// Specialized version of getNextSource for ExtractSubreg instructions.
-    ValueTrackerResult getNextSourceFromExtractSubreg();
-
-    /// Specialized version of getNextSource for SubregToReg instructions.
-    ValueTrackerResult getNextSourceFromSubregToReg();
-
-    /// Specialized version of getNextSource for PHI instructions.
-    ValueTrackerResult getNextSourceFromPHI();
-
-  public:
-    /// Create a ValueTracker instance for the value defined by \p Reg.
-    /// \p DefSubReg represents the sub register index the value tracker will
-    /// track. It does not need to match the sub register index used in the
-    /// definition of \p Reg.
-    /// If \p Reg is a physical register, a value tracker constructed with
-    /// this constructor will not find any alternative source.
-    /// Indeed, when \p Reg is a physical register that constructor does not
-    /// know which definition of \p Reg it should track.
-    /// Use the next constructor to track a physical register.
-    ValueTracker(Register Reg, unsigned DefSubReg,
-                 const MachineRegisterInfo &MRI,
-                 const TargetInstrInfo *TII = nullptr)
-        : DefSubReg(DefSubReg), Reg(Reg), MRI(MRI), TII(TII) {
-      if (!Reg.isPhysical()) {
-        Def = MRI.getVRegDef(Reg);
-        DefIdx = MRI.def_begin(Reg).getOperandNo();
-      }
+/// Helper class to track the possible sources of a value defined by
+/// a (chain of) copy related instructions.
+/// Given a definition (instruction and definition index), this class
+/// follows the use-def chain to find successive suitable sources.
+/// The given source can be used to rewrite the definition into
+/// def = COPY src.
+///
+/// For instance, let us consider the following snippet:
+/// v0 =
+/// v2 = INSERT_SUBREG v1, v0, sub0
+/// def = COPY v2.sub0
+///
+/// Using a ValueTracker for def = COPY v2.sub0 will give the following
+/// suitable sources:
+/// v2.sub0 and v0.
+/// Then, def can be rewritten into def = COPY v0.
+class ValueTracker {
+private:
+  /// The current point into the use-def chain.
+  const MachineInstr *Def = nullptr;
+
+  /// The index of the definition in Def.
+  unsigned DefIdx = 0;
+
+  /// The sub register index of the definition.
+  unsigned DefSubReg;
+
+  /// The register where the value can be found.
+  Register Reg;
+
+  /// MachineRegisterInfo used to perform tracking.
+  const MachineRegisterInfo &MRI;
+
+  /// Optional TargetInstrInfo used to perform some complex tracking.
+  const TargetInstrInfo *TII;
+
+  /// Dispatcher to the right underlying implementation of getNextSource.
+  ValueTrackerResult getNextSourceImpl();
+
+  /// Specialized version of getNextSource for Copy instructions.
+  ValueTrackerResult getNextSourceFromCopy();
+
+  /// Specialized version of getNextSource for Bitcast instructions.
+  ValueTrackerResult getNextSourceFromBitcast();
+
+  /// Specialized version of getNextSource for RegSequence instructions.
+  ValueTrackerResult getNextSourceFromRegSequence();
+
+  /// Specialized version of getNextSource for InsertSubreg instructions.
+  ValueTrackerResult getNextSourceFromInsertSubreg();
+
+  /// Specialized version of getNextSource for ExtractSubreg instructions.
+  ValueTrackerResult getNextSourceFromExtractSubreg();
+
+  /// Specialized version of getNextSource for SubregToReg instructions.
+  ValueTrackerResult getNextSourceFromSubregToReg();
+
+  /// Specialized version of getNextSource for PHI instructions.
+  ValueTrackerResult getNextSourceFromPHI();
+
+public:
+  /// Create a ValueTracker instance for the value defined by \p Reg.
+  /// \p DefSubReg represents the sub register index the value tracker will
+  /// track. It does not need to match the sub register index used in the
+  /// definition of \p Reg.
+  /// If \p Reg is a physical register, a value tracker constructed with
+  /// this constructor will not find any alternative source.
+  /// Indeed, when \p Reg is a physical register, that constructor does not
+  /// know which definition of \p Reg it should track.
+  /// Use the next constructor to track a physical register.
+  ValueTracker(Register Reg, unsigned DefSubReg, const MachineRegisterInfo &MRI,
+               const TargetInstrInfo *TII = nullptr)
+      : DefSubReg(DefSubReg), Reg(Reg), MRI(MRI), TII(TII) {
+    if (!Reg.isPhysical()) {
+      Def = MRI.getVRegDef(Reg);
+      DefIdx = MRI.def_begin(Reg).getOperandNo();
     }
+  }
 
-    /// Following the use-def chain, get the next available source
-    /// for the tracked value.
-    /// \return A ValueTrackerResult containing a set of registers
-    /// and sub registers with tracked values. A ValueTrackerResult with
-    /// an empty set of registers means no source was found.
-    ValueTrackerResult getNextSource();
-  };
+  /// Following the use-def chain, get the next available source
+  /// for the tracked value.
+  /// \return A ValueTrackerResult containing a set of registers
+  /// and sub registers with tracked values. A ValueTrackerResult with
+  /// an empty set of registers means no source was found.
+  ValueTrackerResult getNextSource();
+};
 
 } // end anonymous namespace
 
@@ -441,12 +434,12 @@ char PeepholeOptimizer::ID = 0;
 
 char &llvm::PeepholeOptimizerID = PeepholeOptimizer::ID;
 
-INITIALIZE_PASS_BEGIN(PeepholeOptimizer, DEBUG_TYPE,
-                      "Peephole Optimizations", false, false)
+INITIALIZE_PASS_BEGIN(PeepholeOptimizer, DEBUG_TYPE, "Peephole Optimizations",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(MachineDominatorTree)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
-INITIALIZE_PASS_END(PeepholeOptimizer, DEBUG_TYPE,
-                    "Peephole Optimizations", false, false)
+INITIALIZE_PASS_END(PeepholeOptimizer, DEBUG_TYPE, "Peephole Optimizations",
+                    false, false)
 
 /// If instruction is a copy-like instruction, i.e. it reads a single register
 /// and writes a single register and it does not modify the source, and if the
@@ -456,9 +449,9 @@ INITIALIZE_PASS_END(PeepholeOptimizer, DEBUG_TYPE,
 /// Do not generate an EXTRACT that is used only in a debug use, as this changes
 /// the code. Since this code does not currently share EXTRACTs, just ignore all
 /// debug uses.
-bool PeepholeOptimizer::
-optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
-                 SmallPtrSetImpl<MachineInstr*> &LocalMIs) {
+bool PeepholeOptimizer::optimizeExtInstr(
+    MachineInstr &MI, MachineBasicBlock &MBB,
+    SmallPtrSetImpl<MachineInstr *> &LocalMIs) {
   Register SrcReg, DstReg;
   unsigned SubIdx;
   if (!TII->isCoalescableExtInstr(MI, SrcReg, DstReg, SubIdx))
@@ -488,15 +481,15 @@ optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
 
   // The source has other uses. See if we can replace the other uses with use of
   // the result of the extension.
-  SmallPtrSet<MachineBasicBlock*, 4> ReachedBBs;
+  SmallPtrSet<MachineBasicBlock *, 4> ReachedBBs;
   for (MachineInstr &UI : MRI->use_nodbg_instructions(DstReg))
     ReachedBBs.insert(UI.getParent());
 
   // Uses that are in the same BB of uses of the result of the instruction.
-  SmallVector<MachineOperand*, 8> Uses;
+  SmallVector<MachineOperand *, 8> Uses;
 
   // Uses that the result of the instruction can reach.
-  SmallVector<MachineOperand*, 8> ExtendedUses;
+  SmallVector<MachineOperand *, 8> ExtendedUses;
 
   bool ExtendLife = true;
   for (MachineOperand &UseMO : MRI->use_nodbg_operands(SrcReg)) {
@@ -561,7 +554,7 @@ optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
   // Now replace all uses.
   bool Changed = false;
   if (!Uses.empty()) {
-    SmallPtrSet<MachineBasicBlock*, 4> PHIBBs;
+    SmallPtrSet<MachineBasicBlock *, 4> PHIBBs;
 
     // Look for PHI uses of the extended result, we don't want to extend the
     // liveness of a PHI input. It breaks all kinds of assumptions down
@@ -604,7 +597,7 @@ optimizeExtInstr(MachineInstr &MI, MachineBasicBlock &MBB,
       Register NewVR = MRI->createVirtualRegister(RC);
       BuildMI(*UseMBB, UseMI, UseMI->getDebugLoc(),
               TII->get(TargetOpcode::COPY), NewVR)
-        .addReg(DstReg, 0, SubIdx);
+          .addReg(DstReg, 0, SubIdx);
       if (UseSrcSubIdx)
         UseMO->setSubReg(0);
 
@@ -642,8 +635,8 @@ bool PeepholeOptimizer::optimizeCmpInstr(MachineInstr &MI) {
 }
 
 /// Optimize a select instruction.
-bool PeepholeOptimizer::optimizeSelect(MachineInstr &MI,
-                            SmallPtrSetImpl<MachineInstr *> &LocalMIs) {
+bool PeepholeOptimizer::optimizeSelect(
+    MachineInstr &MI, SmallPtrSetImpl<MachineInstr *> &LocalMIs) {
   unsigned TrueOp = 0;
   unsigned FalseOp = 0;
   bool Optimizable = false;
@@ -771,10 +764,10 @@ bool PeepholeOptimizer::findNextSource(RegSubRegPair RegSubReg,
 /// successfully traverse a PHI instruction and find suitable sources coming
 /// from its edges. By inserting a new PHI, we provide a rewritten PHI def
 /// suitable to be used in a new COPY instruction.
-static MachineInstr &
-insertPHI(MachineRegisterInfo &MRI, const TargetInstrInfo &TII,
-          const SmallVectorImpl<RegSubRegPair> &SrcRegs,
-          MachineInstr &OrigPHI) {
+static MachineInstr &insertPHI(MachineRegisterInfo &MRI,
+                               const TargetInstrInfo &TII,
+                               const SmallVectorImpl<RegSubRegPair> &SrcRegs,
+                               MachineInstr &OrigPHI) {
   assert(!SrcRegs.empty() && "No sources to create a PHI instruction?");
 
   const TargetRegisterClass *NewRC = MRI.getRegClass(SrcRegs[0].Reg);
@@ -806,7 +799,7 @@ namespace {
 class Rewriter {
 protected:
   MachineInstr &CopyLike;
-  unsigned CurrentSrcIdx = 0;   ///< The index of the source being rewritten.
+  unsigned CurrentSrcIdx = 0; ///< The index of the source being rewritten.
 public:
   Rewriter(MachineInstr &CopyLike) : CopyLike(CopyLike) {}
   virtual ~Rewriter() = default;
@@ -882,7 +875,7 @@ class CopyRewriter : public Rewriter {
 /// Helper class to rewrite uncoalescable copy like instructions
 /// into new COPY (coalescable friendly) instructions.
 class UncoalescableRewriter : public Rewriter {
-  unsigned NumDefs;  ///< Number of defs in the bitcast.
+  unsigned NumDefs; ///< Number of defs in the bitcast.
 
 public:
   UncoalescableRewriter(MachineInstr &MI) : Rewriter(MI) {
@@ -996,8 +989,8 @@ class ExtractSubregRewriter : public Rewriter {
     if (MOExtractedReg.getSubReg())
       return false;
 
-    Src = RegSubRegPair(MOExtractedReg.getReg(),
-                        CopyLike.getOperand(2).getImm());
+    Src =
+        RegSubRegPair(MOExtractedReg.getReg(), CopyLike.getOperand(2).getImm());
 
     // We want to track something that is compatible with the definition.
     const MachineOperand &MODef = CopyLike.getOperand(0);
@@ -1238,9 +1231,9 @@ bool PeepholeOptimizer::optimizeCoalescableCopy(MachineInstr &MI) {
 /// PeepholeOptimizer::findNextSource. Right now this is only used to handle
 /// Uncoalescable copies, since they are copy like instructions that aren't
 /// recognized by the register allocator.
-MachineInstr &
-PeepholeOptimizer::rewriteSource(MachineInstr &CopyLike,
-                                 RegSubRegPair Def, RewriteMapTy &RewriteMap) {
+MachineInstr &PeepholeOptimizer::rewriteSource(MachineInstr &CopyLike,
+                                               RegSubRegPair Def,
+                                               RewriteMapTy &RewriteMap) {
   assert(!Def.Reg.isPhysical() && "We do not rewrite physical registers");
 
   // Find the new source to use in the COPY rewrite.
@@ -1615,7 +1608,7 @@ bool PeepholeOptimizer::runOnMachineFunction(MachineFunction &MF) {
   TII = MF.getSubtarget().getInstrInfo();
   TRI = MF.getSubtarget().getRegisterInfo();
   MRI = &MF.getRegInfo();
-  DT  = Aggressive ? &getAnalysis<MachineDominatorTree>() : nullptr;
+  DT = Aggressive ? &getAnalysis<MachineDominatorTree>() : nullptr;
   MLI = &getAnalysis<MachineLoopInfo>();
 
   bool Changed = false;
@@ -1629,7 +1622,7 @@ bool PeepholeOptimizer::runOnMachineFunction(MachineFunction &MF) {
     // To perform this, the following set keeps track of the MIs already seen
     // during the scan, if a MI is not in the set, it is assumed to be located
     // after. Newly created MIs have to be inserted in the set as well.
-    SmallPtrSet<MachineInstr*, 16> LocalMIs;
+    SmallPtrSet<MachineInstr *, 16> LocalMIs;
     SmallSet<Register, 4> ImmDefRegs;
     DenseMap<Register, MachineInstr *> ImmDefMIs;
     SmallSet<Register, 16> FoldAsLoadDefCandidates;
@@ -1648,7 +1641,7 @@ bool PeepholeOptimizer::runOnMachineFunction(MachineFunction &MF) {
     bool IsLoopHeader = MLI->isLoopHeader(&MBB);
 
     for (MachineBasicBlock::iterator MII = MBB.begin(), MIE = MBB.end();
-         MII != MIE; ) {
+         MII != MIE;) {
       MachineInstr *MI = &*MII;
       // We may be erasing MI below, increment MII now.
       ++MII;
@@ -1766,8 +1759,7 @@ bool PeepholeOptimizer::runOnMachineFunction(MachineFunction &MF) {
         // foldable uses earlier in the argument list.  Since we don't restart
         // iteration, we'd miss such cases.
         const MCInstrDesc &MIDesc = MI->getDesc();
-        for (unsigned i = MIDesc.getNumDefs(); i != MI->getNumOperands();
-             ++i) {
+        for (unsigned i = MIDesc.getNumDefs(); i != MI->getNumOperands(); ++i) {
           const MachineOperand &MOp = MI->getOperand(i);
           if (!MOp.isReg())
             continue;
@@ -1977,9 +1969,9 @@ ValueTrackerResult ValueTracker::getNextSourceFromInsertSubreg() {
   // Get the TRI and check if the inserted sub-register overlaps with the
   // sub-register we are tracking.
   const TargetRegisterInfo *TRI = MRI.getTargetRegisterInfo();
-  if (!TRI ||
-      !(TRI->getSubRegIndexLaneMask(DefSubReg) &
-        TRI->getSubRegIndexLaneMask(InsertedReg.SubIdx)).none())
+  if (!TRI || !(TRI->getSubRegIndexLaneMask(DefSubReg) &
+                TRI->getSubRegIndexLaneMask(InsertedReg.SubIdx))
+                   .none())
     return ValueTrackerResult();
   // At this point, the value is available in v0 via the same subreg
   // we used for Def.
@@ -1987,8 +1979,8 @@ ValueTrackerResult ValueTracker::getNextSourceFromInsertSubreg() {
 }
 
 ValueTrackerResult ValueTracker::getNextSourceFromExtractSubreg() {
-  assert((Def->isExtractSubreg() ||
-          Def->isExtractSubregLike()) && "Invalid definition");
+  assert((Def->isExtractSubreg() || Def->isExtractSubregLike()) &&
+         "Invalid definition");
   // We are looking at:
   // Def = EXTRACT_SUBREG v0, sub0
 
diff --git a/llvm/lib/CodeGen/PostRAHazardRecognizer.cpp b/llvm/lib/CodeGen/PostRAHazardRecognizer.cpp
index 97b1532300b1715..a097ee9c879ce97 100644
--- a/llvm/lib/CodeGen/PostRAHazardRecognizer.cpp
+++ b/llvm/lib/CodeGen/PostRAHazardRecognizer.cpp
@@ -40,28 +40,27 @@ using namespace llvm;
 STATISTIC(NumNoops, "Number of noops inserted");
 
 namespace {
-  class PostRAHazardRecognizer : public MachineFunctionPass {
+class PostRAHazardRecognizer : public MachineFunctionPass {
 
-  public:
-    static char ID;
-    PostRAHazardRecognizer() : MachineFunctionPass(ID) {}
+public:
+  static char ID;
+  PostRAHazardRecognizer() : MachineFunctionPass(ID) {}
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
-
-    bool runOnMachineFunction(MachineFunction &Fn) override;
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-  };
-  char PostRAHazardRecognizer::ID = 0;
+  bool runOnMachineFunction(MachineFunction &Fn) override;
+};
+char PostRAHazardRecognizer::ID = 0;
 
-}
+} // namespace
 
 char &llvm::PostRAHazardRecognizerID = PostRAHazardRecognizer::ID;
 
-INITIALIZE_PASS(PostRAHazardRecognizer, DEBUG_TYPE,
-                "Post RA hazard recognizer", false, false)
+INITIALIZE_PASS(PostRAHazardRecognizer, DEBUG_TYPE, "Post RA hazard recognizer",
+                false, false)
 
 bool PostRAHazardRecognizer::runOnMachineFunction(MachineFunction &Fn) {
   const TargetInstrInfo *TII = Fn.getSubtarget().getInstrInfo();
diff --git a/llvm/lib/CodeGen/PostRASchedulerList.cpp b/llvm/lib/CodeGen/PostRASchedulerList.cpp
index 170008ab67cb693..6d4c12976b5f60f 100644
--- a/llvm/lib/CodeGen/PostRASchedulerList.cpp
+++ b/llvm/lib/CodeGen/PostRASchedulerList.cpp
@@ -50,149 +50,146 @@ STATISTIC(NumFixedAnti, "Number of fixed anti-dependencies");
 // Post-RA scheduling is enabled with
 // TargetSubtargetInfo.enablePostRAScheduler(). This flag can be used to
 // override the target.
-static cl::opt<bool>
-EnablePostRAScheduler("post-RA-scheduler",
-                       cl::desc("Enable scheduling after register allocation"),
-                       cl::init(false), cl::Hidden);
-static cl::opt<std::string>
-EnableAntiDepBreaking("break-anti-dependencies",
-                      cl::desc("Break post-RA scheduling anti-dependencies: "
-                               "\"critical\", \"all\", or \"none\""),
-                      cl::init("none"), cl::Hidden);
+static cl::opt<bool> EnablePostRAScheduler(
+    "post-RA-scheduler",
+    cl::desc("Enable scheduling after register allocation"), cl::init(false),
+    cl::Hidden);
+static cl::opt<std::string> EnableAntiDepBreaking(
+    "break-anti-dependencies",
+    cl::desc("Break post-RA scheduling anti-dependencies: "
+             "\"critical\", \"all\", or \"none\""),
+    cl::init("none"), cl::Hidden);
 
 // If DebugDiv > 0 then only schedule MBB with (ID % DebugDiv) == DebugMod
-static cl::opt<int>
-DebugDiv("postra-sched-debugdiv",
-                      cl::desc("Debug control MBBs that are scheduled"),
-                      cl::init(0), cl::Hidden);
-static cl::opt<int>
-DebugMod("postra-sched-debugmod",
-                      cl::desc("Debug control MBBs that are scheduled"),
-                      cl::init(0), cl::Hidden);
+static cl::opt<int> DebugDiv("postra-sched-debugdiv",
+                             cl::desc("Debug control MBBs that are scheduled"),
+                             cl::init(0), cl::Hidden);
+static cl::opt<int> DebugMod("postra-sched-debugmod",
+                             cl::desc("Debug control MBBs that are scheduled"),
+                             cl::init(0), cl::Hidden);
 
 AntiDepBreaker::~AntiDepBreaker() = default;
 
 namespace {
-  class PostRAScheduler : public MachineFunctionPass {
-    const TargetInstrInfo *TII = nullptr;
-    RegisterClassInfo RegClassInfo;
-
-  public:
-    static char ID;
-    PostRAScheduler() : MachineFunctionPass(ID) {}
-
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      AU.addRequired<AAResultsWrapperPass>();
-      AU.addRequired<TargetPassConfig>();
-      AU.addRequired<MachineDominatorTree>();
-      AU.addPreserved<MachineDominatorTree>();
-      AU.addRequired<MachineLoopInfo>();
-      AU.addPreserved<MachineLoopInfo>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+class PostRAScheduler : public MachineFunctionPass {
+  const TargetInstrInfo *TII = nullptr;
+  RegisterClassInfo RegClassInfo;
+
+public:
+  static char ID;
+  PostRAScheduler() : MachineFunctionPass(ID) {}
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    AU.addRequired<AAResultsWrapperPass>();
+    AU.addRequired<TargetPassConfig>();
+    AU.addRequired<MachineDominatorTree>();
+    AU.addPreserved<MachineDominatorTree>();
+    AU.addRequired<MachineLoopInfo>();
+    AU.addPreserved<MachineLoopInfo>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties().set(
-          MachineFunctionProperties::Property::NoVRegs);
-    }
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::NoVRegs);
+  }
 
-    bool runOnMachineFunction(MachineFunction &Fn) override;
-
-  private:
-    bool enablePostRAScheduler(
-        const TargetSubtargetInfo &ST, CodeGenOpt::Level OptLevel,
-        TargetSubtargetInfo::AntiDepBreakMode &Mode,
-        TargetSubtargetInfo::RegClassVector &CriticalPathRCs) const;
-  };
-  char PostRAScheduler::ID = 0;
-
-  class SchedulePostRATDList : public ScheduleDAGInstrs {
-    /// AvailableQueue - The priority queue to use for the available SUnits.
-    ///
-    LatencyPriorityQueue AvailableQueue;
-
-    /// PendingQueue - This contains all of the instructions whose operands have
-    /// been issued, but their results are not ready yet (due to the latency of
-    /// the operation).  Once the operands becomes available, the instruction is
-    /// added to the AvailableQueue.
-    std::vector<SUnit*> PendingQueue;
-
-    /// HazardRec - The hazard recognizer to use.
-    ScheduleHazardRecognizer *HazardRec;
-
-    /// AntiDepBreak - Anti-dependence breaking object, or NULL if none
-    AntiDepBreaker *AntiDepBreak;
-
-    /// AA - AliasAnalysis for making memory reference queries.
-    AliasAnalysis *AA;
-
-    /// The schedule. Null SUnit*'s represent noop instructions.
-    std::vector<SUnit*> Sequence;
-
-    /// Ordered list of DAG postprocessing steps.
-    std::vector<std::unique_ptr<ScheduleDAGMutation>> Mutations;
-
-    /// The index in BB of RegionEnd.
-    ///
-    /// This is the instruction number from the top of the current block, not
-    /// the SlotIndex. It is only used by the AntiDepBreaker.
-    unsigned EndIndex = 0;
-
-  public:
-    SchedulePostRATDList(
-        MachineFunction &MF, MachineLoopInfo &MLI, AliasAnalysis *AA,
-        const RegisterClassInfo &,
-        TargetSubtargetInfo::AntiDepBreakMode AntiDepMode,
-        SmallVectorImpl<const TargetRegisterClass *> &CriticalPathRCs);
-
-    ~SchedulePostRATDList() override;
-
-    /// startBlock - Initialize register live-range state for scheduling in
-    /// this block.
-    ///
-    void startBlock(MachineBasicBlock *BB) override;
-
-    // Set the index of RegionEnd within the current BB.
-    void setEndIndex(unsigned EndIdx) { EndIndex = EndIdx; }
-
-    /// Initialize the scheduler state for the next scheduling region.
-    void enterRegion(MachineBasicBlock *bb,
-                     MachineBasicBlock::iterator begin,
-                     MachineBasicBlock::iterator end,
-                     unsigned regioninstrs) override;
-
-    /// Notify that the scheduler has finished scheduling the current region.
-    void exitRegion() override;
-
-    /// Schedule - Schedule the instruction range using list scheduling.
-    ///
-    void schedule() override;
-
-    void EmitSchedule();
-
-    /// Observe - Update liveness information to account for the current
-    /// instruction, which will not be scheduled.
-    ///
-    void Observe(MachineInstr &MI, unsigned Count);
-
-    /// finishBlock - Clean up register live-range state.
-    ///
-    void finishBlock() override;
-
-  private:
-    /// Apply each ScheduleDAGMutation step in order.
-    void postProcessDAG();
-
-    void ReleaseSucc(SUnit *SU, SDep *SuccEdge);
-    void ReleaseSuccessors(SUnit *SU);
-    void ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle);
-    void ListScheduleTopDown();
-
-    void dumpSchedule() const;
-    void emitNoop(unsigned CurCycle);
-  };
-}
+  bool runOnMachineFunction(MachineFunction &Fn) override;
+
+private:
+  bool enablePostRAScheduler(
+      const TargetSubtargetInfo &ST, CodeGenOpt::Level OptLevel,
+      TargetSubtargetInfo::AntiDepBreakMode &Mode,
+      TargetSubtargetInfo::RegClassVector &CriticalPathRCs) const;
+};
+char PostRAScheduler::ID = 0;
+
+class SchedulePostRATDList : public ScheduleDAGInstrs {
+  /// AvailableQueue - The priority queue to use for the available SUnits.
+  ///
+  LatencyPriorityQueue AvailableQueue;
+
+  /// PendingQueue - This contains all of the instructions whose operands have
+  /// been issued, but their results are not ready yet (due to the latency of
+  /// the operation).  Once the operands become available, the instruction is
+  /// added to the AvailableQueue.
+  std::vector<SUnit *> PendingQueue;
+
+  /// HazardRec - The hazard recognizer to use.
+  ScheduleHazardRecognizer *HazardRec;
+
+  /// AntiDepBreak - Anti-dependence breaking object, or NULL if none
+  AntiDepBreaker *AntiDepBreak;
+
+  /// AA - AliasAnalysis for making memory reference queries.
+  AliasAnalysis *AA;
+
+  /// The schedule. Null SUnit*'s represent noop instructions.
+  std::vector<SUnit *> Sequence;
+
+  /// Ordered list of DAG postprocessing steps.
+  std::vector<std::unique_ptr<ScheduleDAGMutation>> Mutations;
+
+  /// The index in BB of RegionEnd.
+  ///
+  /// This is the instruction number from the top of the current block, not
+  /// the SlotIndex. It is only used by the AntiDepBreaker.
+  unsigned EndIndex = 0;
+
+public:
+  SchedulePostRATDList(
+      MachineFunction &MF, MachineLoopInfo &MLI, AliasAnalysis *AA,
+      const RegisterClassInfo &,
+      TargetSubtargetInfo::AntiDepBreakMode AntiDepMode,
+      SmallVectorImpl<const TargetRegisterClass *> &CriticalPathRCs);
+
+  ~SchedulePostRATDList() override;
+
+  /// startBlock - Initialize register live-range state for scheduling in
+  /// this block.
+  ///
+  void startBlock(MachineBasicBlock *BB) override;
+
+  // Set the index of RegionEnd within the current BB.
+  void setEndIndex(unsigned EndIdx) { EndIndex = EndIdx; }
+
+  /// Initialize the scheduler state for the next scheduling region.
+  void enterRegion(MachineBasicBlock *bb, MachineBasicBlock::iterator begin,
+                   MachineBasicBlock::iterator end,
+                   unsigned regioninstrs) override;
+
+  /// Notify that the scheduler has finished scheduling the current region.
+  void exitRegion() override;
+
+  /// Schedule - Schedule the instruction range using list scheduling.
+  ///
+  void schedule() override;
+
+  void EmitSchedule();
+
+  /// Observe - Update liveness information to account for the current
+  /// instruction, which will not be scheduled.
+  ///
+  void Observe(MachineInstr &MI, unsigned Count);
+
+  /// finishBlock - Clean up register live-range state.
+  ///
+  void finishBlock() override;
+
+private:
+  /// Apply each ScheduleDAGMutation step in order.
+  void postProcessDAG();
+
+  void ReleaseSucc(SUnit *SU, SDep *SuccEdge);
+  void ReleaseSuccessors(SUnit *SU);
+  void ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle);
+  void ListScheduleTopDown();
+
+  void dumpSchedule() const;
+  void emitNoop(unsigned CurCycle);
+};
+} // namespace
 
 char &llvm::PostRASchedulerID = PostRAScheduler::ID;
 
@@ -230,9 +227,9 @@ SchedulePostRATDList::~SchedulePostRATDList() {
 
 /// Initialize state associated with the next scheduling region.
 void SchedulePostRATDList::enterRegion(MachineBasicBlock *bb,
-                 MachineBasicBlock::iterator begin,
-                 MachineBasicBlock::iterator end,
-                 unsigned regioninstrs) {
+                                       MachineBasicBlock::iterator begin,
+                                       MachineBasicBlock::iterator end,
+                                       unsigned regioninstrs) {
   ScheduleDAGInstrs::enterRegion(bb, begin, end, regioninstrs);
   Sequence.clear();
 }
@@ -260,8 +257,7 @@ LLVM_DUMP_METHOD void SchedulePostRATDList::dumpSchedule() const {
 #endif
 
 bool PostRAScheduler::enablePostRAScheduler(
-    const TargetSubtargetInfo &ST,
-    CodeGenOpt::Level OptLevel,
+    const TargetSubtargetInfo &ST, CodeGenOpt::Level OptLevel,
     TargetSubtargetInfo::AntiDepBreakMode &Mode,
     TargetSubtargetInfo::RegClassVector &CriticalPathRCs) const {
   Mode = ST.getAntiDepBreakMode();
@@ -287,8 +283,8 @@ bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
   RegClassInfo.runOnMachineFunction(Fn);
 
   TargetSubtargetInfo::AntiDepBreakMode AntiDepMode =
-    TargetSubtargetInfo::ANTIDEP_NONE;
-  SmallVector<const TargetRegisterClass*, 4> CriticalPathRCs;
+      TargetSubtargetInfo::ANTIDEP_NONE;
+  SmallVector<const TargetRegisterClass *, 4> CriticalPathRCs;
 
   // Check that post-RA scheduling is enabled for this target.
   // This may upgrade the AntiDepMode.
@@ -299,10 +295,10 @@ bool PostRAScheduler::runOnMachineFunction(MachineFunction &Fn) {
   // Check for antidep breaking override...
   if (EnableAntiDepBreaking.getPosition() > 0) {
     AntiDepMode = (EnableAntiDepBreaking == "all")
-      ? TargetSubtargetInfo::ANTIDEP_ALL
-      : ((EnableAntiDepBreaking == "critical")
-         ? TargetSubtargetInfo::ANTIDEP_CRITICAL
-         : TargetSubtargetInfo::ANTIDEP_NONE);
+                      ? TargetSubtargetInfo::ANTIDEP_ALL
+                      : ((EnableAntiDepBreaking == "critical")
+                             ? TargetSubtargetInfo::ANTIDEP_CRITICAL
+                             : TargetSubtargetInfo::ANTIDEP_NONE);
   }
 
   LLVM_DEBUG(dbgs() << "PostRAScheduler\n");
@@ -389,9 +385,8 @@ void SchedulePostRATDList::schedule() {
   buildSchedGraph(AA);
 
   if (AntiDepBreak) {
-    unsigned Broken =
-      AntiDepBreak->BreakAntiDependencies(SUnits, RegionBegin, RegionEnd,
-                                          EndIndex, DbgValues);
+    unsigned Broken = AntiDepBreak->BreakAntiDependencies(
+        SUnits, RegionBegin, RegionEnd, EndIndex, DbgValues);
 
     if (Broken != 0) {
       // We made changes. Update the dependency graph.
@@ -483,8 +478,8 @@ void SchedulePostRATDList::ReleaseSucc(SUnit *SU, SDep *SuccEdge) {
 
 /// ReleaseSuccessors - Call ReleaseSucc on each of SU's successors.
 void SchedulePostRATDList::ReleaseSuccessors(SUnit *SU) {
-  for (SUnit::succ_iterator I = SU->Succs.begin(), E = SU->Succs.end();
-       I != E; ++I) {
+  for (SUnit::succ_iterator I = SU->Succs.begin(), E = SU->Succs.end(); I != E;
+       ++I) {
     ReleaseSucc(SU, &*I);
   }
 }
@@ -497,8 +492,7 @@ void SchedulePostRATDList::ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle) {
   LLVM_DEBUG(dumpNode(*SU));
 
   Sequence.push_back(SU);
-  assert(CurCycle >= SU->getDepth() &&
-         "Node scheduled above its depth!");
+  assert(CurCycle >= SU->getDepth() && "Node scheduled above its depth!");
   SU->setDepthToAtLeast(CurCycle);
 
   ReleaseSuccessors(SU);
@@ -510,7 +504,7 @@ void SchedulePostRATDList::ScheduleNodeTopDown(SUnit *SU, unsigned CurCycle) {
 void SchedulePostRATDList::emitNoop(unsigned CurCycle) {
   LLVM_DEBUG(dbgs() << "*** Emitting noop in cycle " << CurCycle << '\n');
   HazardRec->EmitNoop();
-  Sequence.push_back(nullptr);   // NULL here means noop
+  Sequence.push_back(nullptr); // NULL here means noop
   ++NumNoops;
 }
 
@@ -543,7 +537,7 @@ void SchedulePostRATDList::ListScheduleTopDown() {
 
   // While Available queue is not empty, grab the node with the highest
   // priority. If it is not ready put it back.  Schedule the node.
-  std::vector<SUnit*> NotReady;
+  std::vector<SUnit *> NotReady;
   Sequence.reserve(SUnits.size());
   while (!AvailableQueue.empty() || !PendingQueue.empty()) {
     // Check to see if any of the pending instructions are ready to issue.  If
@@ -555,7 +549,8 @@ void SchedulePostRATDList::ListScheduleTopDown() {
         PendingQueue[i]->isAvailable = true;
         PendingQueue[i] = PendingQueue.back();
         PendingQueue.pop_back();
-        --i; --e;
+        --i;
+        --e;
       } else if (PendingQueue[i]->getDepth() < MinDepth)
         MinDepth = PendingQueue[i]->getDepth();
     }
@@ -569,7 +564,7 @@ void SchedulePostRATDList::ListScheduleTopDown() {
       SUnit *CurSUnit = AvailableQueue.pop();
 
       ScheduleHazardRecognizer::HazardType HT =
-        HazardRec->getHazardType(CurSUnit, 0/*no stalls*/);
+          HazardRec->getHazardType(CurSUnit, 0 /*no stalls*/);
       if (HT == ScheduleHazardRecognizer::NoHazard) {
         if (HazardRec->ShouldPreferAnother(CurSUnit)) {
           if (!NotPreferredSUnit) {
@@ -684,8 +679,10 @@ void SchedulePostRATDList::EmitSchedule() {
   }
 
   // Reinsert any remaining debug_values.
-  for (std::vector<std::pair<MachineInstr *, MachineInstr *> >::iterator
-         DI = DbgValues.end(), DE = DbgValues.begin(); DI != DE; --DI) {
+  for (std::vector<std::pair<MachineInstr *, MachineInstr *>>::iterator
+           DI = DbgValues.end(),
+           DE = DbgValues.begin();
+       DI != DE; --DI) {
     std::pair<MachineInstr *, MachineInstr *> P = *std::prev(DI);
     MachineInstr *DbgValue = P.first;
     MachineBasicBlock::iterator OrigPrivMI = P.second;
diff --git a/llvm/lib/CodeGen/ProcessImplicitDefs.cpp b/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
index be81ecab9c89702..7b3b501b5994b2e 100644
--- a/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
+++ b/llvm/lib/CodeGen/ProcessImplicitDefs.cpp
@@ -31,7 +31,7 @@ class ProcessImplicitDefs : public MachineFunctionPass {
   const TargetRegisterInfo *TRI = nullptr;
   MachineRegisterInfo *MRI = nullptr;
 
-  SmallSetVector<MachineInstr*, 16> WorkList;
+  SmallSetVector<MachineInstr *, 16> WorkList;
 
   void processImplicitDef(MachineInstr *MI);
   bool canTurnIntoImplicitDef(MachineInstr *MI);
@@ -57,8 +57,8 @@ class ProcessImplicitDefs : public MachineFunctionPass {
 char ProcessImplicitDefs::ID = 0;
 char &llvm::ProcessImplicitDefsID = ProcessImplicitDefs::ID;
 
-INITIALIZE_PASS(ProcessImplicitDefs, DEBUG_TYPE,
-                "Process Implicit Definitions", false, false)
+INITIALIZE_PASS(ProcessImplicitDefs, DEBUG_TYPE, "Process Implicit Definitions",
+                false, false)
 
 void ProcessImplicitDefs::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesCFG();
@@ -67,9 +67,7 @@ void ProcessImplicitDefs::getAnalysisUsage(AnalysisUsage &AU) const {
 }
 
 bool ProcessImplicitDefs::canTurnIntoImplicitDef(MachineInstr *MI) {
-  if (!MI->isCopyLike() &&
-      !MI->isInsertSubreg() &&
-      !MI->isRegSequence() &&
+  if (!MI->isCopyLike() && !MI->isInsertSubreg() && !MI->isRegSequence() &&
       !MI->isPHI())
     return false;
   for (const MachineOperand &MO : MI->all_uses())
@@ -161,7 +159,8 @@ bool ProcessImplicitDefs::runOnMachineFunction(MachineFunction &MF) {
     Changed = true;
 
     // Drain the WorkList to recursively process any new implicit defs.
-    do processImplicitDef(WorkList.pop_back_val());
+    do
+      processImplicitDef(WorkList.pop_back_val());
     while (!WorkList.empty());
   }
   return Changed;
diff --git a/llvm/lib/CodeGen/PrologEpilogInserter.cpp b/llvm/lib/CodeGen/PrologEpilogInserter.cpp
index 6c8be5cff75360e..c666ae563694b26 100644
--- a/llvm/lib/CodeGen/PrologEpilogInserter.cpp
+++ b/llvm/lib/CodeGen/PrologEpilogInserter.cpp
@@ -77,7 +77,6 @@ using MBBVector = SmallVector<MachineBasicBlock *, 4>;
 STATISTIC(NumLeafFuncWithSpills, "Number of leaf functions with CSRs");
 STATISTIC(NumFuncSeen, "Number of functions seen in PEI");
 
-
 namespace {
 
 class PEI : public MachineFunctionPass {
@@ -373,7 +372,8 @@ void PEI::calculateCallFrameInfo(MachineFunction &MF) {
     for (MachineBasicBlock::iterator I = BB.begin(); I != BB.end(); ++I)
       if (TII.isFrameInstr(*I)) {
         unsigned Size = TII.getFrameSize(*I);
-        if (Size > MaxCallFrameSize) MaxCallFrameSize = Size;
+        if (Size > MaxCallFrameSize)
+          MaxCallFrameSize = Size;
         AdjustsStack = true;
         FrameSDOps.push_back(I);
       } else if (I->isInlineAsm()) {
@@ -515,8 +515,10 @@ static void assignCalleeSavedSpillSlots(MachineFunction &F,
         // min.
         Alignment = std::min(Alignment, TFI->getStackAlign());
         FrameIdx = MFI.CreateStackObject(Size, Alignment, true);
-        if ((unsigned)FrameIdx < MinCSFrameIndex) MinCSFrameIndex = FrameIdx;
-        if ((unsigned)FrameIdx > MaxCSFrameIndex) MaxCSFrameIndex = FrameIdx;
+        if ((unsigned)FrameIdx < MinCSFrameIndex)
+          MinCSFrameIndex = FrameIdx;
+        if ((unsigned)FrameIdx > MaxCSFrameIndex)
+          MaxCSFrameIndex = FrameIdx;
       } else {
         // Spill it to the stack where we must.
         FrameIdx = MFI.CreateFixedSpillStackObject(Size, FixedSlot->Offset);
@@ -617,9 +619,9 @@ static void insertCSRSaves(MachineBasicBlock &SaveBlock,
       unsigned Reg = CS.getReg();
 
       if (CS.isSpilledToReg()) {
-        BuildMI(SaveBlock, I, DebugLoc(),
-                TII.get(TargetOpcode::COPY), CS.getDstReg())
-          .addReg(Reg, getKillRegState(true));
+        BuildMI(SaveBlock, I, DebugLoc(), TII.get(TargetOpcode::COPY),
+                CS.getDstReg())
+            .addReg(Reg, getKillRegState(true));
       } else {
         const TargetRegisterClass *RC = TRI->getMinimalPhysRegClass(Reg);
         TII.storeRegToStackSlot(SaveBlock, I, Reg, true, CS.getFrameIdx(), RC,
@@ -646,7 +648,7 @@ static void insertCSRRestores(MachineBasicBlock &RestoreBlock,
       unsigned Reg = CI.getReg();
       if (CI.isSpilledToReg()) {
         BuildMI(RestoreBlock, I, DebugLoc(), TII.get(TargetOpcode::COPY), Reg)
-          .addReg(CI.getDstReg(), getKillRegState(true));
+            .addReg(CI.getDstReg(), getKillRegState(true));
       } else {
         const TargetRegisterClass *RC = TRI->getMinimalPhysRegClass(Reg);
         TII.loadRegFromStackSlot(RestoreBlock, I, Reg, CI.getFrameIdx(), RC,
@@ -856,7 +858,7 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &MF) {
   const TargetFrameLowering &TFI = *MF.getSubtarget().getFrameLowering();
 
   bool StackGrowsDown =
-    TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
+      TFI.getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
 
   // Loop over all of the stack objects, assigning sequential addresses...
   MachineFrameInfo &MFI = MF.getFrameInfo();
@@ -867,8 +869,8 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &MF) {
   int LocalAreaOffset = TFI.getOffsetOfLocalArea();
   if (StackGrowsDown)
     LocalAreaOffset = -LocalAreaOffset;
-  assert(LocalAreaOffset >= 0
-         && "Local area offset should be in direction of stack growth");
+  assert(LocalAreaOffset >= 0 &&
+         "Local area offset should be in direction of stack growth");
   int64_t Offset = LocalAreaOffset;
 
 #ifdef EXPENSIVE_CHECKS
@@ -899,7 +901,8 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &MF) {
       // address of the object.
       FixedOff = MFI.getObjectOffset(i) + MFI.getObjectSize(i);
     }
-    if (FixedOff > Offset) Offset = FixedOff;
+    if (FixedOff > Offset)
+      Offset = FixedOff;
   }
 
   Align MaxAlign = MFI.getMaxAlign();
@@ -934,7 +937,8 @@ void PEI::calculateFrameObjectOffsets(MachineFunction &MF) {
   // incoming stack pointer if a frame pointer is required and is closer
   // to the incoming rather than the final stack pointer.
   const TargetRegisterInfo *RegInfo = MF.getSubtarget().getRegisterInfo();
-  bool EarlyScavengingSlots = TFI.allocateScavengingFrameIndexesNearIncomingSP(MF);
+  bool EarlyScavengingSlots =
+      TFI.allocateScavengingFrameIndexesNearIncomingSP(MF);
   if (RS && EarlyScavengingSlots) {
     SmallVector<int, 2> SFIs;
     RS->getScavengingFrameIndices(SFIs);
@@ -1533,7 +1537,7 @@ void PEI::replaceFrameIndices(MachineBasicBlock *BB, MachineFunction &MF,
 
   bool InsideCallSequence = false;
 
-  for (MachineBasicBlock::iterator I = BB->begin(); I != BB->end(); ) {
+  for (MachineBasicBlock::iterator I = BB->begin(); I != BB->end();) {
     if (TII.isFrameInstr(*I)) {
       InsideCallSequence = TII.isFrameSetup(*I);
       SPAdj += TII.getSPAdjust(*I);
@@ -1559,7 +1563,8 @@ void PEI::replaceFrameIndices(MachineBasicBlock *BB, MachineFunction &MF,
       // iterator at the point before insertion so that we can
       // revisit them in full.
       bool AtBeginning = (I == BB->begin());
-      if (!AtBeginning) --I;
+      if (!AtBeginning)
+        --I;
 
       // If this instruction has a FrameIndex operand, we need to
       // use that target machine register info object to eliminate
diff --git a/llvm/lib/CodeGen/PseudoSourceValue.cpp b/llvm/lib/CodeGen/PseudoSourceValue.cpp
index 40c52b9d9707654..0ebd70ad532f96e 100644
--- a/llvm/lib/CodeGen/PseudoSourceValue.cpp
+++ b/llvm/lib/CodeGen/PseudoSourceValue.cpp
@@ -18,9 +18,13 @@
 
 using namespace llvm;
 
-static const char *const PSVNames[] = {
-    "Stack", "GOT", "JumpTable", "ConstantPool", "FixedStack",
-    "GlobalValueCallEntry", "ExternalSymbolCallEntry"};
+static const char *const PSVNames[] = {"Stack",
+                                       "GOT",
+                                       "JumpTable",
+                                       "ConstantPool",
+                                       "FixedStack",
+                                       "GlobalValueCallEntry",
+                                       "ExternalSymbolCallEntry"};
 
 PseudoSourceValue::PseudoSourceValue(unsigned Kind, const TargetMachine &TM)
     : Kind(Kind) {
@@ -119,8 +123,7 @@ const PseudoSourceValue *PseudoSourceValueManager::getJumpTable() {
   return &JumpTablePSV;
 }
 
-const PseudoSourceValue *
-PseudoSourceValueManager::getFixedStack(int FI) {
+const PseudoSourceValue *PseudoSourceValueManager::getFixedStack(int FI) {
   std::unique_ptr<FixedStackPseudoSourceValue> &V = FSValues[FI];
   if (!V)
     V = std::make_unique<FixedStackPseudoSourceValue>(FI, TM);
diff --git a/llvm/lib/CodeGen/RDFGraph.cpp b/llvm/lib/CodeGen/RDFGraph.cpp
index abf3b1e6fbb9eeb..446b9198af51833 100644
--- a/llvm/lib/CodeGen/RDFGraph.cpp
+++ b/llvm/lib/CodeGen/RDFGraph.cpp
@@ -8,6 +8,7 @@
 //
 // Target-independent, SSA-based data flow graph for register data flow (RDF).
 //
+#include "llvm/CodeGen/RDFGraph.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/STLExtras.h"
 #include "llvm/ADT/SetVector.h"
@@ -18,7 +19,6 @@
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineOperand.h"
 #include "llvm/CodeGen/MachineRegisterInfo.h"
-#include "llvm/CodeGen/RDFGraph.h"
 #include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/TargetInstrInfo.h"
 #include "llvm/CodeGen/TargetLowering.h"
diff --git a/llvm/lib/CodeGen/RDFLiveness.cpp b/llvm/lib/CodeGen/RDFLiveness.cpp
index 11f3fedaa5f9262..3069ddc18f4fc3d 100644
--- a/llvm/lib/CodeGen/RDFLiveness.cpp
+++ b/llvm/lib/CodeGen/RDFLiveness.cpp
@@ -22,6 +22,7 @@
 // and Embedded Architectures and Compilers", 8 (4),
 // <10.1145/2086696.2086706>. <hal-00647369>
 //
+#include "llvm/CodeGen/RDFLiveness.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/ADT/DenseMap.h"
 #include "llvm/ADT/STLExtras.h"
@@ -33,7 +34,6 @@
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/RDFGraph.h"
-#include "llvm/CodeGen/RDFLiveness.h"
 #include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/TargetRegisterInfo.h"
 #include "llvm/MC/LaneBitmask.h"
diff --git a/llvm/lib/CodeGen/RDFRegisters.cpp b/llvm/lib/CodeGen/RDFRegisters.cpp
index 7ce00a66b3ae6c8..84baa1956d79cd2 100644
--- a/llvm/lib/CodeGen/RDFRegisters.cpp
+++ b/llvm/lib/CodeGen/RDFRegisters.cpp
@@ -6,11 +6,11 @@
 //
 //===----------------------------------------------------------------------===//
 
+#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/ADT/BitVector.h"
 #include "llvm/CodeGen/MachineFunction.h"
 #include "llvm/CodeGen/MachineInstr.h"
 #include "llvm/CodeGen/MachineOperand.h"
-#include "llvm/CodeGen/RDFRegisters.h"
 #include "llvm/CodeGen/TargetRegisterInfo.h"
 #include "llvm/MC/LaneBitmask.h"
 #include "llvm/MC/MCRegisterInfo.h"
diff --git a/llvm/lib/CodeGen/README.txt b/llvm/lib/CodeGen/README.txt
index d8958715c6b4545..6c0cec860ac606f 100644
--- a/llvm/lib/CodeGen/README.txt
+++ b/llvm/lib/CodeGen/README.txt
@@ -1,130 +1,143 @@
 //===---------------------------------------------------------------------===//
 
-Common register allocation / spilling problem:
-
-        mul lr, r4, lr
-        str lr, [sp, #+52]
-        ldr lr, [r1, #+32]
-        sxth r3, r3
-        ldr r4, [sp, #+52]
-        mla r4, r3, lr, r4
-
-can be:
-
-        mul lr, r4, lr
-        mov r4, lr
-        str lr, [sp, #+52]
-        ldr lr, [r1, #+32]
-        sxth r3, r3
-        mla r4, r3, lr, r4
-
-and then "merge" mul and mov:
-
-        mul r4, r4, lr
-        str r4, [sp, #+52]
-        ldr lr, [r1, #+32]
-        sxth r3, r3
-        mla r4, r3, lr, r4
-
-It also increase the likelihood the store may become dead.
-
-//===---------------------------------------------------------------------===//
-
-bb27 ...
-        ...
-        %reg1037 = ADDri %reg1039, 1
-        %reg1038 = ADDrs %reg1032, %reg1039, %noreg, 10
-    Successors according to CFG: 0x8b03bf0 (#5)
-
-bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
-    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
-        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>
-
-Note ADDri is not a two-address instruction. However, its result %reg1037 is an
-operand of the PHI node in bb76 and its operand %reg1039 is the result of the
-PHI node. We should treat it as a two-address code and make sure the ADDri is
-scheduled after any node that reads %reg1039.
-
-//===---------------------------------------------------------------------===//
-
-Use local info (i.e. register scavenger) to assign it a free register to allow
-reuse:
-        ldr r3, [sp, #+4]
-        add r3, r3, #3
-        ldr r2, [sp, #+8]
-        add r2, r2, #2
-        ldr r1, [sp, #+4]  <==
-        add r1, r1, #1
-        ldr r0, [sp, #+4]
-        add r0, r0, #2
-
-//===---------------------------------------------------------------------===//
-
-LLVM aggressively lift CSE out of loop. Sometimes this can be negative side-
-effects:
-
-R1 = X + 4
-R2 = X + 7
-R3 = X + 15
-
-loop:
-load [i + R1]
-...
-load [i + R2]
-...
-load [i + R3]
-
-Suppose there is high register pressure, R1, R2, R3, can be spilled. We need
-to implement proper re-materialization to handle this:
-
-R1 = X + 4
-R2 = X + 7
-R3 = X + 15
-
-loop:
-R1 = X + 4  @ re-materialized
-load [i + R1]
-...
-R2 = X + 7 @ re-materialized
-load [i + R2]
-...
-R3 = X + 15 @ re-materialized
-load [i + R3]
-
-Furthermore, with re-association, we can enable sharing:
-
-R1 = X + 4
-R2 = X + 7
-R3 = X + 15
-
-loop:
-T = i + X
-load [T + 4]
-...
-load [T + 7]
-...
-load [T + 15]
-//===---------------------------------------------------------------------===//
-
-It's not always a good idea to choose rematerialization over spilling. If all
-the load / store instructions would be folded then spilling is cheaper because
-it won't require new live intervals / registers. See 2003-05-31-LongShifts for
-an example.
-
-//===---------------------------------------------------------------------===//
-
-With a copying garbage collector, derived pointers must not be retained across
-collector safe points; the collector could move the objects and invalidate the
-derived pointer. This is bad enough in the first place, but safe points can
-crop up unpredictably. Consider:
-
-        %array = load { i32, [0 x %obj] }** %array_addr
-        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
-        %old = load %obj** %nth_el
-        %z = div i64 %x, %y
-        store %obj* %new, %obj** %nth_el
-
-If the i64 division is lowered to a libcall, then a safe point will (must)
+Common register allocation / spilling problem:
+
+        mul lr, r4, lr
+        str lr, [sp, #+52]
+        ldr lr, [r1, #+32]
+        sxth r3, r3
+        ldr r4, [sp, #+52]
+        mla r4, r3, lr, r4
+
+can be:
+
+        mul lr, r4, lr
+        mov r4, lr
+        str lr, [sp, #+52]
+        ldr lr, [r1, #+32]
+        sxth r3, r3
+        mla r4, r3, lr, r4
+
+and then "merge" mul and mov:
+
+        mul r4, r4, lr
+        str r4, [sp, #+52]
+        ldr lr, [r1, #+32]
+        sxth r3, r3
+        mla r4, r3, lr, r4
+
+It also increases the likelihood that the store may become dead.
+
+//===---------------------------------------------------------------------===//
+
+bb27 ...
+        ...
+        %reg1037 = ADDri %reg1039, 1
+        %reg1038 = ADDrs %reg1032, %reg1039, %noreg, 10
+    Successors according to CFG: 0x8b03bf0 (#5)
+
+bb76 (0x8b03bf0, LLVM BB @0x8b032d0, ID#5):
+    Predecessors according to CFG: 0x8b0c5f0 (#3) 0x8b0a7c0 (#4)
+        %reg1039 = PHI %reg1070, mbb<bb76.outer,0x8b0c5f0>, %reg1037, mbb<bb27,0x8b0a7c0>
+
+Note ADDri is not a two-address instruction. However, its result %reg1037 is an
+operand of the PHI node in bb76 and its operand %reg1039 is the result of the
+PHI node. We should treat it as two-address code and make sure the ADDri is
+scheduled after any node that reads %reg1039.
+
+//===---------------------------------------------------------------------===//
+
+Use local info (i.e. the register scavenger) to assign it a free register to
+allow reuse:
+        ldr r3, [sp, #+4]
+        add r3, r3, #3
+        ldr r2, [sp, #+8]
+        add r2, r2, #2
+        ldr r1, [sp, #+4]  <==
+        add r1, r1, #1
+        ldr r0, [sp, #+4]
+        add r0, r0, #2
+
+//===---------------------------------------------------------------------===//
+
+LLVM aggressively lifts CSE out of loops. Sometimes this can have negative
+side effects:
+
+R1 = X + 4
+R2 = X + 7
+R3 = X + 15
+
+loop:
+load [i + R1]
+...
+load [i + R2]
+...
+load [i + R3]
+
+Suppose there is high register pressure; R1, R2, and R3 can be spilled. We need
+to implement proper rematerialization to handle this:
+
+R1 = X + 4
+R2 = X + 7
+R3 = X + 15
+
+loop:
+R1 = X + 4  @ rematerialized
+load [i + R1]
+...
+R2 = X + 7  @ rematerialized
+load [i + R2]
+...
+R3 = X + 15  @ rematerialized
+load [i + R3]
+
+Furthermore, with re-association, we can enable sharing:
+
+R1 = X + 4
+R2 = X + 7
+R3 = X + 15
+
+loop:
+T = i + X
+load [T + 4]
+...
+load [T + 7]
+...
+load [T + 15]
+
+//===---------------------------------------------------------------------===//
+
+It's not always a good idea to choose rematerialization over spilling. If all
+the load / store instructions would be folded then spilling is cheaper because
+it won't require new live intervals / registers. See 2003-05-31-LongShifts for
+an example.
+
+//===---------------------------------------------------------------------===//
+
+With a copying garbage collector, derived pointers must not be retained across
+collector safe points; the collector could move the objects and invalidate the
+derived pointer. This is bad enough in the first place, but safe points can
+crop up unpredictably. Consider:
+
+        %array = load { i32, [0 x %obj] }** %array_addr
+        %nth_el = getelementptr { i32, [0 x %obj] }* %array, i32 0, i32 %n
+        %old = load %obj** %nth_el
+        %z = div i64 %x, %y
+        store %obj* %new, %obj** %nth_el
+
+If the i64 division is lowered to a libcall, then a safe point will (must)
 appear for the call site. If a collection occurs, %array and %nth_el no longer
 point into the correct object.
 
@@ -196,4 +209,3 @@ this. It would be easy to have a rule telling isel to avoid matching MOV32mi
 if the immediate has more than some fixed number of uses. It's more involved
 to teach the register allocator how to do late folding to recover from
 excessive register pressure.
-
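As an aside to the README's re-association note above, here is a minimal C++ sketch (illustrative only, not part of the patch; the function names `sumNaive` and `sumShared` are invented) of the transformation it describes: three loop-carried values X+4, X+7, X+15 are rewritten against a single shared base T = i + X, so only one value needs to stay live across the loop and the small constant offsets can fold into the addressing mode.

```cpp
#include <cassert>
#include <vector>

// Naive form: three derived values live across the whole loop, which under
// high register pressure forces spills (or rematerialization) of R1/R2/R3.
static long sumNaive(const std::vector<long> &a, long x, int n) {
  long r1 = x + 4, r2 = x + 7, r3 = x + 15; // three live-across-loop values
  long s = 0;
  for (int i = 0; i < n; ++i)
    s += a[i + r1] + a[i + r2] + a[i + r3];
  return s;
}

// Re-associated form: one shared base per iteration, offsets stay constants.
static long sumShared(const std::vector<long> &a, long x, int n) {
  long s = 0;
  for (int i = 0; i < n; ++i) {
    long t = i + x; // single re-associated base (the README's "T = i + X")
    s += a[t + 4] + a[t + 7] + a[t + 15];
  }
  return s;
}
```

Both forms compute the same sum; the second simply shrinks the set of values that compete for registers inside the loop.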
diff --git a/llvm/lib/CodeGen/ReachingDefAnalysis.cpp b/llvm/lib/CodeGen/ReachingDefAnalysis.cpp
index 61a668907be77d5..d711f149ebe4641 100644
--- a/llvm/lib/CodeGen/ReachingDefAnalysis.cpp
+++ b/llvm/lib/CodeGen/ReachingDefAnalysis.cpp
@@ -6,10 +6,10 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "llvm/ADT/SmallSet.h"
+#include "llvm/CodeGen/ReachingDefAnalysis.h"
 #include "llvm/ADT/SetOperations.h"
+#include "llvm/ADT/SmallSet.h"
 #include "llvm/CodeGen/LivePhysRegs.h"
-#include "llvm/CodeGen/ReachingDefAnalysis.h"
 #include "llvm/CodeGen/TargetRegisterInfo.h"
 #include "llvm/CodeGen/TargetSubtargetInfo.h"
 #include "llvm/Support/Debug.h"
@@ -151,7 +151,7 @@ void ReachingDefAnalysis::reprocessBasicBlock(MachineBasicBlock *MBB) {
 
   // Count number of non-debug instructions for end of block adjustment.
   auto NonDbgInsts =
-    instructionsWithoutDebug(MBB->instr_begin(), MBB->instr_end());
+      instructionsWithoutDebug(MBB->instr_begin(), MBB->instr_end());
   int NumInsts = std::distance(NonDbgInsts.begin(), NonDbgInsts.end());
 
   // When reprocessing a block, the only thing we need to do is check whether
@@ -283,8 +283,8 @@ MachineInstr *
 ReachingDefAnalysis::getReachingLocalMIDef(MachineInstr *MI,
                                            MCRegister PhysReg) const {
   return hasLocalDefBefore(MI, PhysReg)
-    ? getInstFromId(MI->getParent(), getReachingDef(MI, PhysReg))
-    : nullptr;
+             ? getInstFromId(MI->getParent(), getReachingDef(MI, PhysReg))
+             : nullptr;
 }
 
 bool ReachingDefAnalysis::hasSameReachingDef(MachineInstr *A, MachineInstr *B,
@@ -384,7 +384,7 @@ void ReachingDefAnalysis::getGlobalUses(MachineInstr *MI, MCRegister PhysReg,
       return;
 
     SmallVector<MachineBasicBlock *, 4> ToVisit(MBB->successors());
-    SmallPtrSet<MachineBasicBlock*, 4>Visited;
+    SmallPtrSet<MachineBasicBlock *, 4> Visited;
     while (!ToVisit.empty()) {
       MachineBasicBlock *MBB = ToVisit.pop_back_val();
       if (Visited.count(MBB) || !MBB->isLiveIn(PhysReg))
@@ -410,7 +410,7 @@ void ReachingDefAnalysis::getGlobalReachingDefs(MachineInstr *MI,
 
 void ReachingDefAnalysis::getLiveOuts(MachineBasicBlock *MBB,
                                       MCRegister PhysReg, InstSet &Defs) const {
-  SmallPtrSet<MachineBasicBlock*, 2> VisitedBBs;
+  SmallPtrSet<MachineBasicBlock *, 2> VisitedBBs;
   getLiveOuts(MBB, PhysReg, Defs, VisitedBBs);
 }
 
@@ -441,7 +441,7 @@ ReachingDefAnalysis::getUniqueReachingMIDef(MachineInstr *MI,
   if (LocalDef && InstIds.lookup(LocalDef) < InstIds.lookup(MI))
     return LocalDef;
 
-  SmallPtrSet<MachineInstr*, 2> Incoming;
+  SmallPtrSet<MachineInstr *, 2> Incoming;
   MachineBasicBlock *Parent = MI->getParent();
   for (auto *Pred : Parent->predecessors())
     getLiveOuts(Pred, PhysReg, Incoming);
@@ -544,14 +544,14 @@ ReachingDefAnalysis::getLocalLiveOutMIDef(MachineBasicBlock *MBB,
 
 static bool mayHaveSideEffects(MachineInstr &MI) {
   return MI.mayLoadOrStore() || MI.mayRaiseFPException() ||
-         MI.hasUnmodeledSideEffects() || MI.isTerminator() ||
-         MI.isCall() || MI.isBarrier() || MI.isBranch() || MI.isReturn();
+         MI.hasUnmodeledSideEffects() || MI.isTerminator() || MI.isCall() ||
+         MI.isBarrier() || MI.isBranch() || MI.isReturn();
 }
 
 // Can we safely move 'From' to just before 'To'? To satisfy this, 'From' must
 // not define a register that is used by any instructions, after and including,
 // 'To'. These instructions also must not redefine any of Froms operands.
-template<typename Iterator>
+template <typename Iterator>
 bool ReachingDefAnalysis::isSafeToMove(MachineInstr *From,
                                        MachineInstr *To) const {
   if (From->getParent() != To->getParent() || From == To)
@@ -603,21 +603,20 @@ bool ReachingDefAnalysis::isSafeToMoveBackwards(MachineInstr *From,
 
 bool ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI,
                                          InstSet &ToRemove) const {
-  SmallPtrSet<MachineInstr*, 1> Ignore;
-  SmallPtrSet<MachineInstr*, 2> Visited;
+  SmallPtrSet<MachineInstr *, 1> Ignore;
+  SmallPtrSet<MachineInstr *, 2> Visited;
   return isSafeToRemove(MI, Visited, ToRemove, Ignore);
 }
 
-bool
-ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI, InstSet &ToRemove,
-                                    InstSet &Ignore) const {
-  SmallPtrSet<MachineInstr*, 2> Visited;
+bool ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI, InstSet &ToRemove,
+                                         InstSet &Ignore) const {
+  SmallPtrSet<MachineInstr *, 2> Visited;
   return isSafeToRemove(MI, Visited, ToRemove, Ignore);
 }
 
-bool
-ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI, InstSet &Visited,
-                                    InstSet &ToRemove, InstSet &Ignore) const {
+bool ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI, InstSet &Visited,
+                                         InstSet &ToRemove,
+                                         InstSet &Ignore) const {
   if (Visited.count(MI) || Ignore.count(MI))
     return true;
   else if (mayHaveSideEffects(*MI)) {
@@ -631,7 +630,7 @@ ReachingDefAnalysis::isSafeToRemove(MachineInstr *MI, InstSet &Visited,
     if (!isValidRegDef(MO))
       continue;
 
-    SmallPtrSet<MachineInstr*, 4> Uses;
+    SmallPtrSet<MachineInstr *, 4> Uses;
     getGlobalUses(MI, MO.getReg(), Uses);
 
     for (auto *I : Uses) {
@@ -663,7 +662,7 @@ void ReachingDefAnalysis::collectKilledOperands(MachineInstr *MI,
     if (LiveDefs > 1)
       return false;
 
-    SmallPtrSet<MachineInstr*, 4> Uses;
+    SmallPtrSet<MachineInstr *, 4> Uses;
     getGlobalUses(Def, PhysReg, Uses);
     return llvm::set_is_subset(Uses, Dead);
   };
@@ -679,7 +678,7 @@ void ReachingDefAnalysis::collectKilledOperands(MachineInstr *MI,
 
 bool ReachingDefAnalysis::isSafeToDefRegAt(MachineInstr *MI,
                                            MCRegister PhysReg) const {
-  SmallPtrSet<MachineInstr*, 1> Ignore;
+  SmallPtrSet<MachineInstr *, 1> Ignore;
   return isSafeToDefRegAt(MI, PhysReg, Ignore);
 }
 
@@ -688,7 +687,7 @@ bool ReachingDefAnalysis::isSafeToDefRegAt(MachineInstr *MI, MCRegister PhysReg,
   // Check for any uses of the register after MI.
   if (isRegUsedAfter(MI, PhysReg)) {
     if (auto *Def = getReachingLocalMIDef(MI, PhysReg)) {
-      SmallPtrSet<MachineInstr*, 2> Uses;
+      SmallPtrSet<MachineInstr *, 2> Uses;
       getGlobalUses(Def, PhysReg, Uses);
       if (!llvm::set_is_subset(Uses, Ignore))
         return false;
diff --git a/llvm/lib/CodeGen/RegAllocBase.h b/llvm/lib/CodeGen/RegAllocBase.h
index a8bf305a50c9814..0b0b4ecd40de99e 100644
--- a/llvm/lib/CodeGen/RegAllocBase.h
+++ b/llvm/lib/CodeGen/RegAllocBase.h
@@ -47,7 +47,7 @@ class LiveIntervals;
 class LiveRegMatrix;
 class MachineInstr;
 class MachineRegisterInfo;
-template<typename T> class SmallVectorImpl;
+template <typename T> class SmallVectorImpl;
 class Spiller;
 class TargetRegisterInfo;
 class VirtRegMap;
@@ -76,8 +76,8 @@ class RegAllocBase {
   /// always available for the remat of all the siblings of the original reg.
   SmallPtrSet<MachineInstr *, 32> DeadRemats;
 
-  RegAllocBase(const RegClassFilterFunc F = allocateAllRegClasses) :
-    ShouldAllocateClass(F) {}
+  RegAllocBase(const RegClassFilterFunc F = allocateAllRegClasses)
+      : ShouldAllocateClass(F) {}
 
   virtual ~RegAllocBase() = default;
 
diff --git a/llvm/lib/CodeGen/RegAllocBasic.cpp b/llvm/lib/CodeGen/RegAllocBasic.cpp
index 6661991396302ec..419c175cb4ede1b 100644
--- a/llvm/lib/CodeGen/RegAllocBasic.cpp
+++ b/llvm/lib/CodeGen/RegAllocBasic.cpp
@@ -41,12 +41,12 @@ static RegisterRegAlloc basicRegAlloc("basic", "basic register allocator",
                                       createBasicRegisterAllocator);
 
 namespace {
-  struct CompSpillWeight {
-    bool operator()(const LiveInterval *A, const LiveInterval *B) const {
-      return A->weight() < B->weight();
-    }
-  };
-}
+struct CompSpillWeight {
+  bool operator()(const LiveInterval *A, const LiveInterval *B) const {
+    return A->weight() < B->weight();
+  }
+};
+} // namespace
 
 namespace {
 /// RABasic provides a minimal implementation of the basic register allocation
@@ -109,7 +109,7 @@ class RABasic : public MachineFunctionPass,
 
   MachineFunctionProperties getClearedProperties() const override {
     return MachineFunctionProperties().set(
-      MachineFunctionProperties::Property::IsSSA);
+        MachineFunctionProperties::Property::IsSSA);
   }
 
   // Helper for spilling all live virtual registers currently unified under preg
@@ -168,10 +168,8 @@ void RABasic::LRE_WillShrinkVirtReg(Register VirtReg) {
   enqueue(&LI);
 }
 
-RABasic::RABasic(RegClassFilterFunc F):
-  MachineFunctionPass(ID),
-  RegAllocBase(F) {
-}
+RABasic::RABasic(RegClassFilterFunc F)
+    : MachineFunctionPass(ID), RegAllocBase(F) {}
 
 void RABasic::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesCFG();
@@ -197,10 +195,7 @@ void RABasic::getAnalysisUsage(AnalysisUsage &AU) const {
   MachineFunctionPass::getAnalysisUsage(AU);
 }
 
-void RABasic::releaseMemory() {
-  SpillerInstance.reset();
-}
-
+void RABasic::releaseMemory() { SpillerInstance.reset(); }
 
 // Spill or split all live virtual registers currently unified under PhysReg
 // that interfere with VirtReg. The newly spilled or split live intervals are
@@ -311,8 +306,7 @@ bool RABasic::runOnMachineFunction(MachineFunction &mf) {
                     << "********** Function: " << mf.getName() << '\n');
 
   MF = &mf;
-  RegAllocBase::init(getAnalysis<VirtRegMap>(),
-                     getAnalysis<LiveIntervals>(),
+  RegAllocBase::init(getAnalysis<VirtRegMap>(), getAnalysis<LiveIntervals>(),
                      getAnalysis<LiveRegMatrix>());
   VirtRegAuxInfo VRAI(*MF, *LIS, *VRM, getAnalysis<MachineLoopInfo>(),
                       getAnalysis<MachineBlockFrequencyInfo>());
@@ -330,10 +324,8 @@ bool RABasic::runOnMachineFunction(MachineFunction &mf) {
   return true;
 }
 
-FunctionPass* llvm::createBasicRegisterAllocator() {
-  return new RABasic();
-}
+FunctionPass *llvm::createBasicRegisterAllocator() { return new RABasic(); }
 
-FunctionPass* llvm::createBasicRegisterAllocator(RegClassFilterFunc F) {
+FunctionPass *llvm::createBasicRegisterAllocator(RegClassFilterFunc F) {
   return new RABasic(F);
 }
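The `CompSpillWeight` comparator reformatted above is what orders RABasic's allocation queue: live intervals with the highest spill weight come out first. A small standalone sketch of that idea (the `Interval` struct and `nextToAllocate` helper are hypothetical stand-ins, not LLVM's actual types):

```cpp
#include <cassert>
#include <queue>
#include <vector>

struct Interval {
  unsigned id;
  double weight; // analogous to LiveInterval::weight()
};

// Same shape as RABasic's comparator: "less" on weight makes
// std::priority_queue a max-heap, so the costliest-to-spill interval
// is allocated first.
struct CompSpillWeight {
  bool operator()(const Interval *a, const Interval *b) const {
    return a->weight < b->weight;
  }
};

unsigned nextToAllocate(std::vector<Interval> &ivs) {
  std::priority_queue<Interval *, std::vector<Interval *>, CompSpillWeight> q;
  for (Interval &iv : ivs)
    q.push(&iv);
  return q.top()->id; // highest spill weight wins
}
```

Allocating heavy intervals first means that when the allocator later runs out of registers, the intervals it is forced to spill tend to be the cheap ones.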
diff --git a/llvm/lib/CodeGen/RegAllocFast.cpp b/llvm/lib/CodeGen/RegAllocFast.cpp
index 9f3c17e2799ceae..b216d7296446262 100644
--- a/llvm/lib/CodeGen/RegAllocFast.cpp
+++ b/llvm/lib/CodeGen/RegAllocFast.cpp
@@ -50,251 +50,248 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 STATISTIC(NumStores, "Number of stores added");
-STATISTIC(NumLoads , "Number of loads added");
+STATISTIC(NumLoads, "Number of loads added");
 STATISTIC(NumCoalesced, "Number of copies coalesced");
 
 // FIXME: Remove this switch when all testcases are fixed!
 static cl::opt<bool> IgnoreMissingDefs("rafast-ignore-missing-defs",
                                        cl::Hidden);
 
-static RegisterRegAlloc
-  fastRegAlloc("fast", "fast register allocator", createFastRegisterAllocator);
+static RegisterRegAlloc fastRegAlloc("fast", "fast register allocator",
+                                     createFastRegisterAllocator);
 
 namespace {
 
-  class RegAllocFast : public MachineFunctionPass {
-  public:
-    static char ID;
+class RegAllocFast : public MachineFunctionPass {
+public:
+  static char ID;
 
-    RegAllocFast(const RegClassFilterFunc F = allocateAllRegClasses,
-                 bool ClearVirtRegs_ = true) :
-      MachineFunctionPass(ID),
-      ShouldAllocateClass(F),
-      StackSlotForVirtReg(-1),
-      ClearVirtRegs(ClearVirtRegs_) {
-    }
+  RegAllocFast(const RegClassFilterFunc F = allocateAllRegClasses,
+               bool ClearVirtRegs_ = true)
+      : MachineFunctionPass(ID), ShouldAllocateClass(F),
+        StackSlotForVirtReg(-1), ClearVirtRegs(ClearVirtRegs_) {}
 
-  private:
-    MachineFrameInfo *MFI = nullptr;
-    MachineRegisterInfo *MRI = nullptr;
-    const TargetRegisterInfo *TRI = nullptr;
-    const TargetInstrInfo *TII = nullptr;
-    RegisterClassInfo RegClassInfo;
-    const RegClassFilterFunc ShouldAllocateClass;
+private:
+  MachineFrameInfo *MFI = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  RegisterClassInfo RegClassInfo;
+  const RegClassFilterFunc ShouldAllocateClass;
 
-    /// Basic block currently being allocated.
-    MachineBasicBlock *MBB = nullptr;
+  /// Basic block currently being allocated.
+  MachineBasicBlock *MBB = nullptr;
 
-    /// Maps virtual regs to the frame index where these values are spilled.
-    IndexedMap<int, VirtReg2IndexFunctor> StackSlotForVirtReg;
+  /// Maps virtual regs to the frame index where these values are spilled.
+  IndexedMap<int, VirtReg2IndexFunctor> StackSlotForVirtReg;
 
-    bool ClearVirtRegs;
+  bool ClearVirtRegs;
 
-    /// Everything we know about a live virtual register.
-    struct LiveReg {
-      MachineInstr *LastUse = nullptr; ///< Last instr to use reg.
-      Register VirtReg;                ///< Virtual register number.
-      MCPhysReg PhysReg = 0;           ///< Currently held here.
-      bool LiveOut = false;            ///< Register is possibly live out.
-      bool Reloaded = false;           ///< Register was reloaded.
-      bool Error = false;              ///< Could not allocate.
+  /// Everything we know about a live virtual register.
+  struct LiveReg {
+    MachineInstr *LastUse = nullptr; ///< Last instr to use reg.
+    Register VirtReg;                ///< Virtual register number.
+    MCPhysReg PhysReg = 0;           ///< Currently held here.
+    bool LiveOut = false;            ///< Register is possibly live out.
+    bool Reloaded = false;           ///< Register was reloaded.
+    bool Error = false;              ///< Could not allocate.
 
-      explicit LiveReg(Register VirtReg) : VirtReg(VirtReg) {}
+    explicit LiveReg(Register VirtReg) : VirtReg(VirtReg) {}
 
-      unsigned getSparseSetIndex() const {
-        return Register::virtReg2Index(VirtReg);
-      }
-    };
-
-    using LiveRegMap = SparseSet<LiveReg, identity<unsigned>, uint16_t>;
-    /// This map contains entries for each virtual register that is currently
-    /// available in a physical register.
-    LiveRegMap LiveVirtRegs;
-
-    /// Stores assigned virtual registers present in the bundle MI.
-    DenseMap<Register, MCPhysReg> BundleVirtRegsMap;
-
-    DenseMap<unsigned, SmallVector<MachineOperand *, 2>> LiveDbgValueMap;
-    /// List of DBG_VALUE that we encountered without the vreg being assigned
-    /// because they were placed after the last use of the vreg.
-    DenseMap<unsigned, SmallVector<MachineInstr *, 1>> DanglingDbgValues;
-
-    /// Has a bit set for every virtual register for which it was determined
-    /// that it is alive across blocks.
-    BitVector MayLiveAcrossBlocks;
-
-    /// State of a register unit.
-    enum RegUnitState {
-      /// A free register is not currently in use and can be allocated
-      /// immediately without checking aliases.
-      regFree,
-
-      /// A pre-assigned register has been assigned before register allocation
-      /// (e.g., setting up a call parameter).
-      regPreAssigned,
-
-      /// Used temporarily in reloadAtBegin() to mark register units that are
-      /// live-in to the basic block.
-      regLiveIn,
-
-      /// A register state may also be a virtual register number, indication
-      /// that the physical register is currently allocated to a virtual
-      /// register. In that case, LiveVirtRegs contains the inverse mapping.
-    };
-
-    /// Maps each physical register to a RegUnitState enum or virtual register.
-    std::vector<unsigned> RegUnitStates;
-
-    SmallVector<MachineInstr *, 32> Coalesced;
-
-    using RegUnitSet = SparseSet<uint16_t, identity<uint16_t>>;
-    /// Set of register units that are used in the current instruction, and so
-    /// cannot be allocated.
-    RegUnitSet UsedInInstr;
-    RegUnitSet PhysRegUses;
-    SmallVector<uint16_t, 8> DefOperandIndexes;
-    // Register masks attached to the current instruction.
-    SmallVector<const uint32_t *> RegMasks;
-
-    void setPhysRegState(MCPhysReg PhysReg, unsigned NewState);
-    bool isPhysRegFree(MCPhysReg PhysReg) const;
-
-    /// Mark a physreg as used in this instruction.
-    void markRegUsedInInstr(MCPhysReg PhysReg) {
-      for (MCRegUnit Unit : TRI->regunits(PhysReg))
-        UsedInInstr.insert(Unit);
+    unsigned getSparseSetIndex() const {
+      return Register::virtReg2Index(VirtReg);
     }
+  };
 
-    // Check if physreg is clobbered by instruction's regmask(s).
-    bool isClobberedByRegMasks(MCPhysReg PhysReg) const {
-      return llvm::any_of(RegMasks, [PhysReg](const uint32_t *Mask) {
-        return MachineOperand::clobbersPhysReg(Mask, PhysReg);
-      });
-    }
+  using LiveRegMap = SparseSet<LiveReg, identity<unsigned>, uint16_t>;
+  /// This map contains entries for each virtual register that is currently
+  /// available in a physical register.
+  LiveRegMap LiveVirtRegs;
+
+  /// Stores assigned virtual registers present in the bundle MI.
+  DenseMap<Register, MCPhysReg> BundleVirtRegsMap;
+
+  DenseMap<unsigned, SmallVector<MachineOperand *, 2>> LiveDbgValueMap;
+  /// List of DBG_VALUE that we encountered without the vreg being assigned
+  /// because they were placed after the last use of the vreg.
+  DenseMap<unsigned, SmallVector<MachineInstr *, 1>> DanglingDbgValues;
+
+  /// Has a bit set for every virtual register for which it was determined
+  /// that it is alive across blocks.
+  BitVector MayLiveAcrossBlocks;
+
+  /// State of a register unit.
+  enum RegUnitState {
+    /// A free register is not currently in use and can be allocated
+    /// immediately without checking aliases.
+    regFree,
+
+    /// A pre-assigned register has been assigned before register allocation
+    /// (e.g., setting up a call parameter).
+    regPreAssigned,
+
+    /// Used temporarily in reloadAtBegin() to mark register units that are
+    /// live-in to the basic block.
+    regLiveIn,
+
+    /// A register state may also be a virtual register number, indicating
+    /// that the physical register is currently allocated to a virtual
+    /// register. In that case, LiveVirtRegs contains the inverse mapping.
+  };
+
+  /// Maps each physical register to a RegUnitState enum or virtual register.
+  std::vector<unsigned> RegUnitStates;
+
+  SmallVector<MachineInstr *, 32> Coalesced;
+
+  using RegUnitSet = SparseSet<uint16_t, identity<uint16_t>>;
+  /// Set of register units that are used in the current instruction, and so
+  /// cannot be allocated.
+  RegUnitSet UsedInInstr;
+  RegUnitSet PhysRegUses;
+  SmallVector<uint16_t, 8> DefOperandIndexes;
+  // Register masks attached to the current instruction.
+  SmallVector<const uint32_t *> RegMasks;
+
+  void setPhysRegState(MCPhysReg PhysReg, unsigned NewState);
+  bool isPhysRegFree(MCPhysReg PhysReg) const;
 
-    /// Check if a physreg or any of its aliases are used in this instruction.
-    bool isRegUsedInInstr(MCPhysReg PhysReg, bool LookAtPhysRegUses) const {
-      if (LookAtPhysRegUses && isClobberedByRegMasks(PhysReg))
+  /// Mark a physreg as used in this instruction.
+  void markRegUsedInInstr(MCPhysReg PhysReg) {
+    for (MCRegUnit Unit : TRI->regunits(PhysReg))
+      UsedInInstr.insert(Unit);
+  }
+
+  // Check if physreg is clobbered by instruction's regmask(s).
+  bool isClobberedByRegMasks(MCPhysReg PhysReg) const {
+    return llvm::any_of(RegMasks, [PhysReg](const uint32_t *Mask) {
+      return MachineOperand::clobbersPhysReg(Mask, PhysReg);
+    });
+  }
+
+  /// Check if a physreg or any of its aliases are used in this instruction.
+  bool isRegUsedInInstr(MCPhysReg PhysReg, bool LookAtPhysRegUses) const {
+    if (LookAtPhysRegUses && isClobberedByRegMasks(PhysReg))
+      return true;
+    for (MCRegUnit Unit : TRI->regunits(PhysReg)) {
+      if (UsedInInstr.count(Unit))
+        return true;
+      if (LookAtPhysRegUses && PhysRegUses.count(Unit))
         return true;
-      for (MCRegUnit Unit : TRI->regunits(PhysReg)) {
-        if (UsedInInstr.count(Unit))
-          return true;
-        if (LookAtPhysRegUses && PhysRegUses.count(Unit))
-          return true;
-      }
-      return false;
     }
+    return false;
+  }
 
-    /// Mark physical register as being used in a register use operand.
-    /// This is only used by the special livethrough handling code.
-    void markPhysRegUsedInInstr(MCPhysReg PhysReg) {
-      for (MCRegUnit Unit : TRI->regunits(PhysReg))
-        PhysRegUses.insert(Unit);
-    }
+  /// Mark physical register as being used in a register use operand.
+  /// This is only used by the special livethrough handling code.
+  void markPhysRegUsedInInstr(MCPhysReg PhysReg) {
+    for (MCRegUnit Unit : TRI->regunits(PhysReg))
+      PhysRegUses.insert(Unit);
+  }
 
-    /// Remove mark of physical register being used in the instruction.
-    void unmarkRegUsedInInstr(MCPhysReg PhysReg) {
-      for (MCRegUnit Unit : TRI->regunits(PhysReg))
-        UsedInInstr.erase(Unit);
-    }
+  /// Remove mark of physical register being used in the instruction.
+  void unmarkRegUsedInInstr(MCPhysReg PhysReg) {
+    for (MCRegUnit Unit : TRI->regunits(PhysReg))
+      UsedInInstr.erase(Unit);
+  }
 
-    enum : unsigned {
-      spillClean = 50,
-      spillDirty = 100,
-      spillPrefBonus = 20,
-      spillImpossible = ~0u
-    };
+  enum : unsigned {
+    spillClean = 50,
+    spillDirty = 100,
+    spillPrefBonus = 20,
+    spillImpossible = ~0u
+  };
 
-  public:
-    StringRef getPassName() const override { return "Fast Register Allocator"; }
+public:
+  StringRef getPassName() const override { return "Fast Register Allocator"; }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    MachineFunctionProperties getRequiredProperties() const override {
-      return MachineFunctionProperties().set(
-          MachineFunctionProperties::Property::NoPHIs);
-    }
+  MachineFunctionProperties getRequiredProperties() const override {
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::NoPHIs);
+  }
 
-    MachineFunctionProperties getSetProperties() const override {
-      if (ClearVirtRegs) {
-        return MachineFunctionProperties().set(
+  MachineFunctionProperties getSetProperties() const override {
+    if (ClearVirtRegs) {
+      return MachineFunctionProperties().set(
           MachineFunctionProperties::Property::NoVRegs);
-      }
-
-      return MachineFunctionProperties();
     }
 
-    MachineFunctionProperties getClearedProperties() const override {
-      return MachineFunctionProperties().set(
+    return MachineFunctionProperties();
+  }
+
+  MachineFunctionProperties getClearedProperties() const override {
+    return MachineFunctionProperties().set(
         MachineFunctionProperties::Property::IsSSA);
-    }
+  }
 
-  private:
-    bool runOnMachineFunction(MachineFunction &MF) override;
+private:
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-    void allocateBasicBlock(MachineBasicBlock &MBB);
+  void allocateBasicBlock(MachineBasicBlock &MBB);
 
-    void addRegClassDefCounts(std::vector<unsigned> &RegClassDefCounts,
-                              Register Reg) const;
+  void addRegClassDefCounts(std::vector<unsigned> &RegClassDefCounts,
+                            Register Reg) const;
 
-    void findAndSortDefOperandIndexes(const MachineInstr &MI);
+  void findAndSortDefOperandIndexes(const MachineInstr &MI);
 
-    void allocateInstruction(MachineInstr &MI);
-    void handleDebugValue(MachineInstr &MI);
-    void handleBundle(MachineInstr &MI);
+  void allocateInstruction(MachineInstr &MI);
+  void handleDebugValue(MachineInstr &MI);
+  void handleBundle(MachineInstr &MI);
 
-    bool usePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
-    bool definePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
-    bool displacePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
-    void freePhysReg(MCPhysReg PhysReg);
+  bool usePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
+  bool definePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
+  bool displacePhysReg(MachineInstr &MI, MCPhysReg PhysReg);
+  void freePhysReg(MCPhysReg PhysReg);
 
-    unsigned calcSpillCost(MCPhysReg PhysReg) const;
+  unsigned calcSpillCost(MCPhysReg PhysReg) const;
 
-    LiveRegMap::iterator findLiveVirtReg(Register VirtReg) {
-      return LiveVirtRegs.find(Register::virtReg2Index(VirtReg));
-    }
+  LiveRegMap::iterator findLiveVirtReg(Register VirtReg) {
+    return LiveVirtRegs.find(Register::virtReg2Index(VirtReg));
+  }
 
-    LiveRegMap::const_iterator findLiveVirtReg(Register VirtReg) const {
-      return LiveVirtRegs.find(Register::virtReg2Index(VirtReg));
-    }
+  LiveRegMap::const_iterator findLiveVirtReg(Register VirtReg) const {
+    return LiveVirtRegs.find(Register::virtReg2Index(VirtReg));
+  }
 
-    void assignVirtToPhysReg(MachineInstr &MI, LiveReg &, MCPhysReg PhysReg);
-    void allocVirtReg(MachineInstr &MI, LiveReg &LR, Register Hint,
-                      bool LookAtPhysRegUses = false);
-    void allocVirtRegUndef(MachineOperand &MO);
-    void assignDanglingDebugValues(MachineInstr &Def, Register VirtReg,
-                                   MCPhysReg Reg);
-    bool defineLiveThroughVirtReg(MachineInstr &MI, unsigned OpNum,
-                                  Register VirtReg);
-    bool defineVirtReg(MachineInstr &MI, unsigned OpNum, Register VirtReg,
-                       bool LookAtPhysRegUses = false);
-    bool useVirtReg(MachineInstr &MI, unsigned OpNum, Register VirtReg);
-
-    MachineBasicBlock::iterator
-    getMBBBeginInsertionPoint(MachineBasicBlock &MBB,
-                              SmallSet<Register, 2> &PrologLiveIns) const;
-
-    void reloadAtBegin(MachineBasicBlock &MBB);
-    bool setPhysReg(MachineInstr &MI, MachineOperand &MO, MCPhysReg PhysReg);
-
-    Register traceCopies(Register VirtReg) const;
-    Register traceCopyChain(Register Reg) const;
-
-    bool shouldAllocateRegister(const Register Reg) const;
-    int getStackSpaceFor(Register VirtReg);
-    void spill(MachineBasicBlock::iterator Before, Register VirtReg,
-               MCPhysReg AssignedReg, bool Kill, bool LiveOut);
-    void reload(MachineBasicBlock::iterator Before, Register VirtReg,
-                MCPhysReg PhysReg);
-
-    bool mayLiveOut(Register VirtReg);
-    bool mayLiveIn(Register VirtReg);
-
-    void dumpState() const;
-  };
+  void assignVirtToPhysReg(MachineInstr &MI, LiveReg &, MCPhysReg PhysReg);
+  void allocVirtReg(MachineInstr &MI, LiveReg &LR, Register Hint,
+                    bool LookAtPhysRegUses = false);
+  void allocVirtRegUndef(MachineOperand &MO);
+  void assignDanglingDebugValues(MachineInstr &Def, Register VirtReg,
+                                 MCPhysReg Reg);
+  bool defineLiveThroughVirtReg(MachineInstr &MI, unsigned OpNum,
+                                Register VirtReg);
+  bool defineVirtReg(MachineInstr &MI, unsigned OpNum, Register VirtReg,
+                     bool LookAtPhysRegUses = false);
+  bool useVirtReg(MachineInstr &MI, unsigned OpNum, Register VirtReg);
+
+  MachineBasicBlock::iterator
+  getMBBBeginInsertionPoint(MachineBasicBlock &MBB,
+                            SmallSet<Register, 2> &PrologLiveIns) const;
+
+  void reloadAtBegin(MachineBasicBlock &MBB);
+  bool setPhysReg(MachineInstr &MI, MachineOperand &MO, MCPhysReg PhysReg);
+
+  Register traceCopies(Register VirtReg) const;
+  Register traceCopyChain(Register Reg) const;
+
+  bool shouldAllocateRegister(const Register Reg) const;
+  int getStackSpaceFor(Register VirtReg);
+  void spill(MachineBasicBlock::iterator Before, Register VirtReg,
+             MCPhysReg AssignedReg, bool Kill, bool LiveOut);
+  void reload(MachineBasicBlock::iterator Before, Register VirtReg,
+              MCPhysReg PhysReg);
+
+  bool mayLiveOut(Register VirtReg);
+  bool mayLiveIn(Register VirtReg);
+
+  void dumpState() const;
+};
 
 } // end anonymous namespace
 
@@ -431,8 +428,8 @@ bool RegAllocFast::mayLiveIn(Register VirtReg) {
 /// DBG_VALUEs with \p VirtReg operands with the stack slot.
 void RegAllocFast::spill(MachineBasicBlock::iterator Before, Register VirtReg,
                          MCPhysReg AssignedReg, bool Kill, bool LiveOut) {
-  LLVM_DEBUG(dbgs() << "Spilling " << printReg(VirtReg, TRI)
-                    << " in " << printReg(AssignedReg, TRI));
+  LLVM_DEBUG(dbgs() << "Spilling " << printReg(VirtReg, TRI) << " in "
+                    << printReg(AssignedReg, TRI));
   int FI = getStackSpaceFor(VirtReg);
   LLVM_DEBUG(dbgs() << " to stack slot #" << FI << '\n');
 
@@ -503,9 +500,8 @@ void RegAllocFast::reload(MachineBasicBlock::iterator Before, Register VirtReg,
 /// This is not just MBB.begin() because surprisingly we have EH_LABEL
 /// instructions marking the begin of a basic block. This means we must insert
 /// new instructions after such labels...
-MachineBasicBlock::iterator
-RegAllocFast::getMBBBeginInsertionPoint(
-  MachineBasicBlock &MBB, SmallSet<Register, 2> &PrologLiveIns) const {
+MachineBasicBlock::iterator RegAllocFast::getMBBBeginInsertionPoint(
+    MachineBasicBlock &MBB, SmallSet<Register, 2> &PrologLiveIns) const {
   MachineBasicBlock::iterator I = MBB.begin();
   while (I != MBB.end()) {
     if (I->isLabel()) {
@@ -542,13 +538,12 @@ void RegAllocFast::reloadAtBegin(MachineBasicBlock &MBB) {
     setPhysRegState(Reg, regLiveIn);
   }
 
-
   SmallSet<Register, 2> PrologLiveIns;
 
   // The LiveRegMap is keyed by an unsigned (the virtreg number), so the order
   // of spilling here is deterministic, if arbitrary.
-  MachineBasicBlock::iterator InsertBefore
-    = getMBBBeginInsertionPoint(MBB, PrologLiveIns);
+  MachineBasicBlock::iterator InsertBefore =
+      getMBBBeginInsertionPoint(MBB, PrologLiveIns);
   for (const LiveReg &LR : LiveVirtRegs) {
     MCPhysReg PhysReg = LR.PhysReg;
     if (PhysReg == 0)
@@ -634,12 +629,12 @@ void RegAllocFast::freePhysReg(MCPhysReg PhysReg) {
     setPhysRegState(PhysReg, regFree);
     return;
   default: {
-      LiveRegMap::iterator LRI = findLiveVirtReg(VirtReg);
-      assert(LRI != LiveVirtRegs.end());
-      LLVM_DEBUG(dbgs() << ' ' << printReg(LRI->VirtReg, TRI) << '\n');
-      setPhysRegState(LRI->PhysReg, regFree);
-      LRI->PhysReg = 0;
-    }
+    LiveRegMap::iterator LRI = findLiveVirtReg(VirtReg);
+    assert(LRI != LiveVirtRegs.end());
+    LLVM_DEBUG(dbgs() << ' ' << printReg(LRI->VirtReg, TRI) << '\n');
+    setPhysRegState(LRI->PhysReg, regFree);
+    LRI->PhysReg = 0;
+  }
     return;
   }
 }
@@ -673,7 +668,7 @@ void RegAllocFast::assignDanglingDebugValues(MachineInstr &Definition,
   if (UDBGValIter == DanglingDbgValues.end())
     return;
 
-  SmallVectorImpl<MachineInstr*> &Dangling = UDBGValIter->second;
+  SmallVectorImpl<MachineInstr *> &Dangling = UDBGValIter->second;
   for (MachineInstr *DbgValue : Dangling) {
     assert(DbgValue->isDebugValue());
     if (!DbgValue->hasDebugOperandForReg(VirtReg))
@@ -683,10 +678,11 @@ void RegAllocFast::assignDanglingDebugValues(MachineInstr &Definition,
     MCPhysReg SetToReg = Reg;
     unsigned Limit = 20;
     for (MachineBasicBlock::iterator I = std::next(Definition.getIterator()),
-         E = DbgValue->getIterator(); I != E; ++I) {
+                                     E = DbgValue->getIterator();
+         I != E; ++I) {
       if (I->modifiesRegister(Reg, TRI) || --Limit == 0) {
         LLVM_DEBUG(dbgs() << "Register did not survive for " << *DbgValue
-                   << '\n');
+                          << '\n');
         SetToReg = 0;
         break;
       }
@@ -716,9 +712,7 @@ void RegAllocFast::assignVirtToPhysReg(MachineInstr &AtMI, LiveReg &LR,
   assignDanglingDebugValues(AtMI, VirtReg, PhysReg);
 }
 
-static bool isCoalescable(const MachineInstr &MI) {
-  return MI.isFullCopy();
-}
+static bool isCoalescable(const MachineInstr &MI) { return MI.isFullCopy(); }
 
 Register RegAllocFast::traceCopyChain(Register Reg) const {
   static const unsigned ChainLengthLimit = 3;
@@ -757,8 +751,8 @@ Register RegAllocFast::traceCopies(Register VirtReg) const {
 }
 
 /// Allocates a physical register for VirtReg.
-void RegAllocFast::allocVirtReg(MachineInstr &MI, LiveReg &LR,
-                                Register Hint0, bool LookAtPhysRegUses) {
+void RegAllocFast::allocVirtReg(MachineInstr &MI, LiveReg &LR, Register Hint0,
+                                bool LookAtPhysRegUses) {
   const Register VirtReg = LR.VirtReg;
   assert(LR.PhysReg == 0);
 
@@ -784,7 +778,6 @@ void RegAllocFast::allocVirtReg(MachineInstr &MI, LiveReg &LR,
     Hint0 = Register();
   }
 
-
   // Try other hint.
   Register Hint1 = traceCopies(VirtReg);
   if (Hint1.isPhysical() && MRI->isAllocatable(Hint1) && RC.contains(Hint1) &&
@@ -792,12 +785,12 @@ void RegAllocFast::allocVirtReg(MachineInstr &MI, LiveReg &LR,
     // Take hint if the register is currently free.
     if (isPhysRegFree(Hint1)) {
       LLVM_DEBUG(dbgs() << "\tPreferred Register 0: " << printReg(Hint1, TRI)
-                 << '\n');
+                        << '\n');
       assignVirtToPhysReg(MI, LR, Hint1);
       return;
     } else {
       LLVM_DEBUG(dbgs() << "\tPreferred Register 1: " << printReg(Hint1, TRI)
-                 << " occupied\n");
+                        << " occupied\n");
     }
   } else {
     Hint1 = Register();
@@ -891,12 +884,12 @@ bool RegAllocFast::defineLiveThroughVirtReg(MachineInstr &MI, unsigned OpNum,
       LRI->PhysReg = 0;
       allocVirtReg(MI, *LRI, 0, true);
       MachineBasicBlock::iterator InsertBefore =
-        std::next((MachineBasicBlock::iterator)MI.getIterator());
+          std::next((MachineBasicBlock::iterator)MI.getIterator());
       LLVM_DEBUG(dbgs() << "Copy " << printReg(LRI->PhysReg, TRI) << " to "
                         << printReg(PrevReg, TRI) << '\n');
       BuildMI(*MBB, InsertBefore, MI.getDebugLoc(),
               TII->get(TargetOpcode::COPY), PrevReg)
-        .addReg(LRI->PhysReg, llvm::RegState::Kill);
+          .addReg(LRI->PhysReg, llvm::RegState::Kill);
     }
     MachineOperand &MO = MI.getOperand(OpNum);
     if (MO.getSubReg() && !MO.isUndef()) {
@@ -956,8 +949,8 @@ bool RegAllocFast::defineVirtReg(MachineInstr &MI, unsigned OpNum,
     if (!MI.isImplicitDef()) {
       MachineBasicBlock::iterator SpillBefore =
           std::next((MachineBasicBlock::iterator)MI.getIterator());
-      LLVM_DEBUG(dbgs() << "Spill Reason: LO: " << LRI->LiveOut << " RL: "
-                        << LRI->Reloaded << '\n');
+      LLVM_DEBUG(dbgs() << "Spill Reason: LO: " << LRI->LiveOut
+                        << " RL: " << LRI->Reloaded << '\n');
       bool Kill = LRI->LastUse == nullptr;
       spill(SpillBefore, VirtReg, PhysReg, Kill, LRI->LiveOut);
 
@@ -969,8 +962,8 @@ bool RegAllocFast::defineVirtReg(MachineInstr &MI, unsigned OpNum,
         for (MachineOperand &MO : MI.operands()) {
           if (MO.isMBB()) {
             MachineBasicBlock *Succ = MO.getMBB();
-            TII->storeRegToStackSlot(*Succ, Succ->begin(), PhysReg, Kill,
-                FI, &RC, TRI, VirtReg);
+            TII->storeRegToStackSlot(*Succ, Succ->begin(), PhysReg, Kill, FI,
+                                     &RC, TRI, VirtReg);
             ++NumStores;
             Succ->addLiveIn(PhysReg);
           }
@@ -1106,8 +1099,10 @@ void RegAllocFast::dumpState() const {
       assert(I != LiveVirtRegs.end() && "have LiveVirtRegs entry");
       if (I->LiveOut || I->Reloaded) {
         dbgs() << '[';
-        if (I->LiveOut) dbgs() << 'O';
-        if (I->Reloaded) dbgs() << 'R';
+        if (I->LiveOut)
+          dbgs() << 'O';
+        if (I->Reloaded)
+          dbgs() << 'R';
         dbgs() << ']';
       }
       assert(TRI->hasRegUnit(I->PhysReg, Unit) && "inverse mapping present");
@@ -1122,8 +1117,7 @@ void RegAllocFast::dumpState() const {
     assert(VirtReg.isVirtual() && "Bad map key");
     MCPhysReg PhysReg = LR.PhysReg;
     if (PhysReg != 0) {
-      assert(Register::isPhysicalRegister(PhysReg) &&
-             "mapped to physreg");
+      assert(Register::isPhysicalRegister(PhysReg) && "mapped to physreg");
       for (MCRegUnit Unit : TRI->regunits(PhysReg)) {
         assert(RegUnitStates[Unit] == VirtReg && "inverse map valid");
       }
@@ -1133,8 +1127,8 @@ void RegAllocFast::dumpState() const {
 #endif
 
 /// Count number of defs consumed from each register class by \p Reg
-void RegAllocFast::addRegClassDefCounts(std::vector<unsigned> &RegClassDefCounts,
-                                        Register Reg) const {
+void RegAllocFast::addRegClassDefCounts(
+    std::vector<unsigned> &RegClassDefCounts, Register Reg) const {
   assert(RegClassDefCounts.size() == TRI->getNumRegClasses());
 
   if (Reg.isVirtual()) {
@@ -1586,10 +1580,7 @@ void RegAllocFast::allocateBasicBlock(MachineBasicBlock &MBB) {
 
   // Traverse block in reverse order allocating instructions one by one.
   for (MachineInstr &MI : reverse(MBB)) {
-    LLVM_DEBUG(
-      dbgs() << "\n>> " << MI << "Regs:";
-      dumpState()
-    );
+    LLVM_DEBUG(dbgs() << "\n>> " << MI << "Regs:"; dumpState());
 
     // Special handling for debug values. Note that they are not allowed to
     // affect codegen of the other instructions in any way.
@@ -1607,10 +1598,7 @@ void RegAllocFast::allocateBasicBlock(MachineBasicBlock &MBB) {
     }
   }
 
-  LLVM_DEBUG(
-    dbgs() << "Begin Regs:";
-    dumpState()
-  );
+  LLVM_DEBUG(dbgs() << "Begin Regs:"; dumpState());
 
   // Spill all physical registers holding virtual registers now.
   LLVM_DEBUG(dbgs() << "Loading live registers at begin of block.\n");
@@ -1629,7 +1617,7 @@ void RegAllocFast::allocateBasicBlock(MachineBasicBlock &MBB) {
       if (!DbgValue->hasDebugOperandForReg(UDBGPair.first))
         continue;
       LLVM_DEBUG(dbgs() << "Register did not survive for " << *DbgValue
-                 << '\n');
+                        << '\n');
       DbgValue->setDebugValueUndef();
     }
   }
@@ -1677,9 +1665,7 @@ bool RegAllocFast::runOnMachineFunction(MachineFunction &MF) {
   return true;
 }
 
-FunctionPass *llvm::createFastRegisterAllocator() {
-  return new RegAllocFast();
-}
+FunctionPass *llvm::createFastRegisterAllocator() { return new RegAllocFast(); }
 
 FunctionPass *llvm::createFastRegisterAllocator(RegClassFilterFunc Ftor,
                                                 bool ClearVirtRegs) {
diff --git a/llvm/lib/CodeGen/RegAllocGreedy.cpp b/llvm/lib/CodeGen/RegAllocGreedy.cpp
index 68f6ea3268a9ae8..6634683fe2a10d2 100644
--- a/llvm/lib/CodeGen/RegAllocGreedy.cpp
+++ b/llvm/lib/CodeGen/RegAllocGreedy.cpp
@@ -80,21 +80,22 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 STATISTIC(NumGlobalSplits, "Number of split global live ranges");
-STATISTIC(NumLocalSplits,  "Number of split local live ranges");
-STATISTIC(NumEvicted,      "Number of interferences evicted");
+STATISTIC(NumLocalSplits, "Number of split local live ranges");
+STATISTIC(NumEvicted, "Number of interferences evicted");
 
 static cl::opt<SplitEditor::ComplementSpillMode> SplitSpillMode(
     "split-spill-mode", cl::Hidden,
     cl::desc("Spill mode for splitting live ranges"),
     cl::values(clEnumValN(SplitEditor::SM_Partition, "default", "Default"),
                clEnumValN(SplitEditor::SM_Size, "size", "Optimize for size"),
-               clEnumValN(SplitEditor::SM_Speed, "speed", "Optimize for speed")),
+               clEnumValN(SplitEditor::SM_Speed, "speed",
+                          "Optimize for speed")),
     cl::init(SplitEditor::SM_Speed));
 
 static cl::opt<unsigned>
-LastChanceRecoloringMaxDepth("lcr-max-depth", cl::Hidden,
-                             cl::desc("Last chance recoloring max depth"),
-                             cl::init(5));
+    LastChanceRecoloringMaxDepth("lcr-max-depth", cl::Hidden,
+                                 cl::desc("Last chance recoloring max depth"),
+                                 cl::init(5));
 
 static cl::opt<unsigned> LastChanceRecoloringMaxInterference(
     "lcr-max-interf", cl::Hidden,
@@ -117,10 +118,10 @@ static cl::opt<bool> EnableDeferredSpilling(
     cl::init(false));
 
 // FIXME: Find a good default for this flag and remove the flag.
-static cl::opt<unsigned>
-CSRFirstTimeCost("regalloc-csr-first-time-cost",
-              cl::desc("Cost for first time use of callee-saved register."),
-              cl::init(0), cl::Hidden);
+static cl::opt<unsigned> CSRFirstTimeCost(
+    "regalloc-csr-first-time-cost",
+    cl::desc("Cost for first time use of callee-saved register."), cl::init(0),
+    cl::Hidden);
 
 static cl::opt<unsigned long> GrowRegionComplexityBudget(
     "grow-region-complexity-budget",
@@ -147,8 +148,8 @@ static RegisterRegAlloc greedyRegAlloc("greedy", "greedy register allocator",
 char RAGreedy::ID = 0;
 char &llvm::RAGreedyID = RAGreedy::ID;
 
-INITIALIZE_PASS_BEGIN(RAGreedy, "greedy",
-                "Greedy Register Allocator", false, false)
+INITIALIZE_PASS_BEGIN(RAGreedy, "greedy", "Greedy Register Allocator", false,
+                      false)
 INITIALIZE_PASS_DEPENDENCY(LiveDebugVariables)
 INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
 INITIALIZE_PASS_DEPENDENCY(LiveIntervals)
@@ -164,37 +165,27 @@ INITIALIZE_PASS_DEPENDENCY(SpillPlacement)
 INITIALIZE_PASS_DEPENDENCY(MachineOptimizationRemarkEmitterPass)
 INITIALIZE_PASS_DEPENDENCY(RegAllocEvictionAdvisorAnalysis)
 INITIALIZE_PASS_DEPENDENCY(RegAllocPriorityAdvisorAnalysis)
-INITIALIZE_PASS_END(RAGreedy, "greedy",
-                "Greedy Register Allocator", false, false)
+INITIALIZE_PASS_END(RAGreedy, "greedy", "Greedy Register Allocator", false,
+                    false)
 
 #ifndef NDEBUG
 const char *const RAGreedy::StageName[] = {
-    "RS_New",
-    "RS_Assign",
-    "RS_Split",
-    "RS_Split2",
-    "RS_Spill",
-    "RS_Memory",
-    "RS_Done"
-};
+    "RS_New",   "RS_Assign", "RS_Split", "RS_Split2",
+    "RS_Spill", "RS_Memory", "RS_Done"};
 #endif
 
 // Hysteresis to use when comparing floats.
 // This helps stabilize decisions based on float comparisons.
 const float Hysteresis = (2007 / 2048.0f); // 0.97998046875
 
-FunctionPass* llvm::createGreedyRegisterAllocator() {
-  return new RAGreedy();
-}
+FunctionPass *llvm::createGreedyRegisterAllocator() { return new RAGreedy(); }
 
 FunctionPass *llvm::createGreedyRegisterAllocator(RegClassFilterFunc Ftor) {
   return new RAGreedy(Ftor);
 }
 
-RAGreedy::RAGreedy(RegClassFilterFunc F):
-  MachineFunctionPass(ID),
-  RegAllocBase(F) {
-}
+RAGreedy::RAGreedy(RegClassFilterFunc F)
+    : MachineFunctionPass(ID), RegAllocBase(F) {}
 
 void RAGreedy::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesCFG();
@@ -436,7 +427,8 @@ MCRegister RAGreedy::tryAssign(const LiveInterval &VirtReg,
 
   LLVM_DEBUG(dbgs() << printReg(PhysReg, TRI) << " is available at cost "
                     << (unsigned)Cost << '\n');
-  MCRegister CheapReg = tryEvict(VirtReg, Order, NewVRegs, Cost, FixedRegisters);
+  MCRegister CheapReg =
+      tryEvict(VirtReg, Order, NewVRegs, Cost, FixedRegisters);
   return CheapReg ? CheapReg : PhysReg;
 }
 
@@ -847,7 +839,7 @@ BlockFrequency RAGreedy::calcGlobalSplitCost(GlobalSplitCandidate &Cand,
   for (unsigned I = 0; I != UseBlocks.size(); ++I) {
     const SplitAnalysis::BlockInfo &BI = UseBlocks[I];
     SpillPlacement::BlockConstraint &BC = SplitConstraints[I];
-    bool RegIn  = LiveBundles[Bundles->getBundle(BC.Number, false)];
+    bool RegIn = LiveBundles[Bundles->getBundle(BC.Number, false)];
     bool RegOut = LiveBundles[Bundles->getBundle(BC.Number, true)];
     unsigned Ins = 0;
 
@@ -862,7 +854,7 @@ BlockFrequency RAGreedy::calcGlobalSplitCost(GlobalSplitCandidate &Cand,
   }
 
   for (unsigned Number : Cand.ActiveBlocks) {
-    bool RegIn  = LiveBundles[Bundles->getBundle(Number, false)];
+    bool RegIn = LiveBundles[Bundles->getBundle(Number, false)];
     bool RegOut = LiveBundles[Bundles->getBundle(Number, true)];
     if (!RegIn && !RegOut)
       continue;
@@ -1096,7 +1088,7 @@ unsigned RAGreedy::calculateRegionSplitCost(const LiveInterval &VirtReg,
     }
 
     if (GlobalCand.size() <= NumCands)
-      GlobalCand.resize(NumCands+1);
+      GlobalCand.resize(NumCands + 1);
     GlobalSplitCandidate &Cand = GlobalCand[NumCands];
     Cand.reset(IntfCache, PhysReg);
 
@@ -1391,13 +1383,12 @@ void RAGreedy::calcGapWeights(MCRegister PhysReg,
   assert(SA->getUseBlocks().size() == 1 && "Not a local interval");
   const SplitAnalysis::BlockInfo &BI = SA->getUseBlocks().front();
   ArrayRef<SlotIndex> Uses = SA->getUseSlots();
-  const unsigned NumGaps = Uses.size()-1;
+  const unsigned NumGaps = Uses.size() - 1;
 
   // Start and end points for the interference check.
-  SlotIndex StartIdx =
-    BI.LiveIn ? BI.FirstInstr.getBaseIndex() : BI.FirstInstr;
+  SlotIndex StartIdx = BI.LiveIn ? BI.FirstInstr.getBaseIndex() : BI.FirstInstr;
   SlotIndex StopIdx =
-    BI.LiveOut ? BI.LastInstr.getBoundaryIndex() : BI.LastInstr;
+      BI.LiveOut ? BI.LastInstr.getBoundaryIndex() : BI.LastInstr;
 
   GapWeight.assign(NumGaps, 0.0f);
 
@@ -1418,7 +1409,7 @@ void RAGreedy::calcGapWeights(MCRegister PhysReg,
         Matrix->getLiveUnions()[Unit].find(StartIdx);
     for (unsigned Gap = 0; IntI.valid() && IntI.start() < StopIdx; ++IntI) {
       // Skip the gaps before IntI.
-      while (Uses[Gap+1].getBoundaryIndex() < IntI.start())
+      while (Uses[Gap + 1].getBoundaryIndex() < IntI.start())
         if (++Gap == NumGaps)
           break;
       if (Gap == NumGaps)
@@ -1428,7 +1419,7 @@ void RAGreedy::calcGapWeights(MCRegister PhysReg,
       const float weight = IntI.value()->weight();
       for (; Gap != NumGaps; ++Gap) {
         GapWeight[Gap] = std::max(GapWeight[Gap], weight);
-        if (Uses[Gap+1].getBaseIndex() >= IntI.stop())
+        if (Uses[Gap + 1].getBaseIndex() >= IntI.stop())
           break;
       }
       if (Gap == NumGaps)
@@ -1444,7 +1435,7 @@ void RAGreedy::calcGapWeights(MCRegister PhysReg,
 
     // Same loop as above. Mark any overlapped gaps as HUGE_VALF.
     for (unsigned Gap = 0; I != E && I->start < StopIdx; ++I) {
-      while (Uses[Gap+1].getBoundaryIndex() < I->start)
+      while (Uses[Gap + 1].getBoundaryIndex() < I->start)
         if (++Gap == NumGaps)
           break;
       if (Gap == NumGaps)
@@ -1452,7 +1443,7 @@ void RAGreedy::calcGapWeights(MCRegister PhysReg,
 
       for (; Gap != NumGaps; ++Gap) {
         GapWeight[Gap] = huge_valf;
-        if (Uses[Gap+1].getBaseIndex() >= I->end)
+        if (Uses[Gap + 1].getBaseIndex() >= I->end)
           break;
       }
       if (Gap == NumGaps)
@@ -1484,7 +1475,7 @@ unsigned RAGreedy::tryLocalSplit(const LiveInterval &VirtReg,
   ArrayRef<SlotIndex> Uses = SA->getUseSlots();
   if (Uses.size() <= 2)
     return 0;
-  const unsigned NumGaps = Uses.size()-1;
+  const unsigned NumGaps = Uses.size() - 1;
 
   LLVM_DEBUG({
     dbgs() << "tryLocalSplit: ";
@@ -1550,8 +1541,8 @@ unsigned RAGreedy::tryLocalSplit(const LiveInterval &VirtReg,
   float BestDiff = 0;
 
   const float blockFreq =
-    SpillPlacer->getBlockFrequency(BI.MBB->getNumber()).getFrequency() *
-    (1.0f / MBFI->getEntryFreq());
+      SpillPlacer->getBlockFrequency(BI.MBB->getNumber()).getFrequency() *
+      (1.0f / MBFI->getEntryFreq());
   SmallVector<float, 8> GapWeight;
 
   for (MCPhysReg PhysReg : Order) {
@@ -1662,7 +1653,7 @@ unsigned RAGreedy::tryLocalSplit(const LiveInterval &VirtReg,
 
   SE->openIntv();
   SlotIndex SegStart = SE->enterIntvBefore(Uses[BestBefore]);
-  SlotIndex SegStop  = SE->leaveIntvAfter(Uses[BestAfter]);
+  SlotIndex SegStop = SE->leaveIntvAfter(Uses[BestAfter]);
   SE->useIntv(SegStart, SegStop);
   SmallVector<unsigned, 8> IntvMap;
   SE->finish(&IntvMap);
@@ -2099,7 +2090,7 @@ MCRegister RAGreedy::tryAssignCSRFirstTime(
       return PhysReg;
 
     // Perform the actual pre-splitting.
-    doRegionSplit(VirtReg, BestCand, false/*HasCompact*/, NewVRegs);
+    doRegionSplit(VirtReg, BestCand, false /*HasCompact*/, NewVRegs);
     return 0;
   }
   return PhysReg;
@@ -2338,9 +2329,8 @@ MCRegister RAGreedy::selectOrSplitImpl(const LiveInterval &VirtReg,
   // queue. The RS_Split ranges already failed to do this, and they should not
   // get a second chance until they have been split.
   if (Stage != RS_Split)
-    if (Register PhysReg =
-            tryEvict(VirtReg, Order, NewVRegs, CostPerUseLimit,
-                     FixedRegisters)) {
+    if (Register PhysReg = tryEvict(VirtReg, Order, NewVRegs, CostPerUseLimit,
+                                    FixedRegisters)) {
       Register Hint = MRI->getSimpleHint(VirtReg.reg());
       // If VirtReg has a hint and that hint is broken record this
       // virtual register as a recoloring candidate for broken hint.
@@ -2446,8 +2436,9 @@ RAGreedy::RAGreedyStats RAGreedy::computeStats(MachineBasicBlock &MBB) {
   int FI;
 
   auto isSpillSlotAccess = [&MFI](const MachineMemOperand *A) {
-    return MFI.isSpillSlotObjectIndex(cast<FixedStackPseudoSourceValue>(
-        A->getPseudoValue())->getFrameIndex());
+    return MFI.isSpillSlotObjectIndex(
+        cast<FixedStackPseudoSourceValue>(A->getPseudoValue())
+            ->getFrameIndex());
   };
   auto isPatchpointInstr = [](const MachineInstr &MI) {
     return MI.getOpcode() == TargetOpcode::PATCHPOINT ||
@@ -2609,8 +2600,7 @@ bool RAGreedy::runOnMachineFunction(MachineFunction &mf) {
   if (VerifyEnabled)
     MF->verify(this, "Before greedy register allocator");
 
-  RegAllocBase::init(getAnalysis<VirtRegMap>(),
-                     getAnalysis<LiveIntervals>(),
+  RegAllocBase::init(getAnalysis<VirtRegMap>(), getAnalysis<LiveIntervals>(),
                      getAnalysis<LiveRegMatrix>());
 
   // Early return if there is no virtual register to be allocated to a
@@ -2656,7 +2646,7 @@ bool RAGreedy::runOnMachineFunction(MachineFunction &mf) {
   SE.reset(new SplitEditor(*SA, *LIS, *VRM, *DomTree, *MBFI, *VRAI));
 
   IntfCache.init(MF, Matrix->getLiveUnions(), Indexes, LIS, TRI);
-  GlobalCand.resize(32);  // This will grow as needed.
+  GlobalCand.resize(32); // This will grow as needed.
   SetOfBrokenHints.clear();
 
   allocatePhysRegs();
diff --git a/llvm/lib/CodeGen/RegAllocPBQP.cpp b/llvm/lib/CodeGen/RegAllocPBQP.cpp
index b8ee5dc0f8494b6..30b1c8f8296d2d6 100644
--- a/llvm/lib/CodeGen/RegAllocPBQP.cpp
+++ b/llvm/lib/CodeGen/RegAllocPBQP.cpp
@@ -92,19 +92,19 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 static RegisterRegAlloc
-RegisterPBQPRepAlloc("pbqp", "PBQP register allocator",
-                       createDefaultPBQPRegisterAllocator);
+    RegisterPBQPRepAlloc("pbqp", "PBQP register allocator",
+                         createDefaultPBQPRegisterAllocator);
 
-static cl::opt<bool>
-PBQPCoalescing("pbqp-coalescing",
-                cl::desc("Attempt coalescing during PBQP register allocation."),
-                cl::init(false), cl::Hidden);
+static cl::opt<bool> PBQPCoalescing(
+    "pbqp-coalescing",
+    cl::desc("Attempt coalescing during PBQP register allocation."),
+    cl::init(false), cl::Hidden);
 
 #ifndef NDEBUG
-static cl::opt<bool>
-PBQPDumpGraphs("pbqp-dump-graphs",
-               cl::desc("Dump graphs for each function/round in the compilation unit."),
-               cl::init(false), cl::Hidden);
+static cl::opt<bool> PBQPDumpGraphs(
+    "pbqp-dump-graphs",
+    cl::desc("Dump graphs for each function/round in the compilation unit."),
+    cl::init(false), cl::Hidden);
 #endif
 
 namespace {
@@ -142,7 +142,7 @@ class RegAllocPBQP : public MachineFunctionPass {
 
   MachineFunctionProperties getClearedProperties() const override {
     return MachineFunctionProperties().set(
-      MachineFunctionProperties::Property::IsSSA);
+        MachineFunctionProperties::Property::IsSSA);
   }
 
 private:
@@ -171,10 +171,8 @@ class RegAllocPBQP : public MachineFunctionPass {
 
   /// Given a solved PBQP problem maps this solution back to a register
   /// assignment.
-  bool mapPBQPToRegAlloc(const PBQPRAGraph &G,
-                         const PBQP::Solution &Solution,
-                         VirtRegMap &VRM,
-                         Spiller &VRegSpiller);
+  bool mapPBQPToRegAlloc(const PBQPRAGraph &G, const PBQP::Solution &Solution,
+                         VirtRegMap &VRM, Spiller &VRegSpiller);
 
   /// Postprocessing before final spilling. Sets basic block "live in"
   /// variables.
@@ -254,7 +252,7 @@ class Interference : public PBQPRAConstraint {
   // to save us from looking up node ids via the VRegToNode map in the graph
   // metadata.
   using IntervalInfo =
-      std::tuple<LiveInterval*, size_t, PBQP::GraphBase::NodeId>;
+      std::tuple<LiveInterval *, size_t, PBQP::GraphBase::NodeId>;
 
   static SlotIndex getStartPoint(const IntervalInfo &I) {
     return std::get<0>(I)->segments[std::get<1>(I)].start;
@@ -268,15 +266,13 @@ class Interference : public PBQPRAConstraint {
     return std::get<2>(I);
   }
 
-  static bool lowestStartPoint(const IntervalInfo &I1,
-                               const IntervalInfo &I2) {
+  static bool lowestStartPoint(const IntervalInfo &I1, const IntervalInfo &I2) {
     // Condition reversed because priority queue has the *highest* element at
     // the front, rather than the lowest.
     return getStartPoint(I1) > getStartPoint(I2);
   }
 
-  static bool lowestEndPoint(const IntervalInfo &I1,
-                             const IntervalInfo &I2) {
+  static bool lowestEndPoint(const IntervalInfo &I1, const IntervalInfo &I2) {
     SlotIndex E1 = getEndPoint(I1);
     SlotIndex E2 = getEndPoint(I2);
 
@@ -393,9 +389,8 @@ class Interference : public PBQPRAConstraint {
   // interference. This case occurs frequently between integer and floating
   // point registers for example.
   // return true iff both nodes interferes.
-  bool createInterferenceEdge(PBQPRAGraph &G,
-                              PBQPRAGraph::NodeId NId, PBQPRAGraph::NodeId MId,
-                              IMatrixCache &C) {
+  bool createInterferenceEdge(PBQPRAGraph &G, PBQPRAGraph::NodeId NId,
+                              PBQPRAGraph::NodeId MId, IMatrixCache &C) {
     const TargetRegisterInfo &TRI =
         *G.getMetadata().MF.getSubtarget().getRegisterInfo();
     const auto &NRegs = G.getNodeMetadata(NId).getAllowedRegs();
@@ -439,8 +434,8 @@ class Coalescing : public PBQPRAConstraint {
     MachineBlockFrequencyInfo &MBFI = G.getMetadata().MBFI;
     CoalescerPair CP(*MF.getSubtarget().getRegisterInfo());
 
-    // Scan the machine function and add a coalescing cost whenever CoalescerPair
-    // gives the Ok.
+    // Scan the machine function and add a coalescing cost whenever
+    // CoalescerPair gives the Ok.
     for (const auto &MBB : MF) {
       for (const auto &MI : MBB) {
         // Skip not-coalescable or already coalesced copies.
@@ -459,7 +454,7 @@ class Coalescing : public PBQPRAConstraint {
           PBQPRAGraph::NodeId NId = G.getMetadata().getNodeIdForVReg(SrcReg);
 
           const PBQPRAGraph::NodeMetadata::AllowedRegVector &Allowed =
-            G.getNodeMetadata(NId).getAllowedRegs();
+              G.getNodeMetadata(NId).getAllowedRegs();
 
           unsigned PRegOpt = 0;
           while (PRegOpt < Allowed.size() && Allowed[PRegOpt].id() != DstReg)
@@ -474,9 +469,9 @@ class Coalescing : public PBQPRAConstraint {
           PBQPRAGraph::NodeId N1Id = G.getMetadata().getNodeIdForVReg(DstReg);
           PBQPRAGraph::NodeId N2Id = G.getMetadata().getNodeIdForVReg(SrcReg);
           const PBQPRAGraph::NodeMetadata::AllowedRegVector *Allowed1 =
-            &G.getNodeMetadata(N1Id).getAllowedRegs();
+              &G.getNodeMetadata(N1Id).getAllowedRegs();
           const PBQPRAGraph::NodeMetadata::AllowedRegVector *Allowed2 =
-            &G.getNodeMetadata(N2Id).getAllowedRegs();
+              &G.getNodeMetadata(N2Id).getAllowedRegs();
 
           PBQPRAGraph::EdgeId EId = G.findEdge(N1Id, N2Id);
           if (EId == G.invalidEdgeId()) {
@@ -500,10 +495,10 @@ class Coalescing : public PBQPRAConstraint {
 
 private:
   void addVirtRegCoalesce(
-                    PBQPRAGraph::RawMatrix &CostMat,
-                    const PBQPRAGraph::NodeMetadata::AllowedRegVector &Allowed1,
-                    const PBQPRAGraph::NodeMetadata::AllowedRegVector &Allowed2,
-                    PBQP::PBQPNum Benefit) {
+      PBQPRAGraph::RawMatrix &CostMat,
+      const PBQPRAGraph::NodeMetadata::AllowedRegVector &Allowed1,
+      const PBQPRAGraph::NodeMetadata::AllowedRegVector &Allowed2,
+      PBQP::PBQPNum Benefit) {
     assert(CostMat.getRows() == Allowed1.size() + 1 && "Size mismatch.");
     assert(CostMat.getCols() == Allowed2.size() + 1 && "Size mismatch.");
     for (unsigned I = 0; I != Allowed1.size(); ++I) {
@@ -548,7 +543,7 @@ void RegAllocPBQP::getAnalysisUsage(AnalysisUsage &au) const {
   au.addPreserved<SlotIndexes>();
   au.addRequired<LiveIntervals>();
   au.addPreserved<LiveIntervals>();
-  //au.addRequiredID(SplitCriticalEdgesID);
+  // au.addRequiredID(SplitCriticalEdgesID);
   if (customPassID)
     au.addRequiredID(*customPassID);
   au.addRequired<LiveStacks>();
@@ -682,7 +677,7 @@ void RegAllocPBQP::initializeGraph(PBQPRAGraph &G, VirtRegMap &VRM,
     PBQPRAGraph::NodeId NId = G.addNode(std::move(NodeCosts));
     G.getNodeMetadata(NId).setVReg(VReg);
     G.getNodeMetadata(NId).setAllowedRegs(
-      G.getMetadata().getAllowedRegs(std::move(VRegAllowed)));
+        G.getMetadata().getAllowedRegs(std::move(VRegAllowed)));
     G.getMetadata().setNodeIdForVReg(VReg, NId);
   }
 }
@@ -715,8 +710,7 @@ void RegAllocPBQP::spillVReg(Register VReg,
 
 bool RegAllocPBQP::mapPBQPToRegAlloc(const PBQPRAGraph &G,
                                      const PBQP::Solution &Solution,
-                                     VirtRegMap &VRM,
-                                     Spiller &VRegSpiller) {
+                                     VirtRegMap &VRM, Spiller &VRegSpiller) {
   MachineFunction &MF = G.getMetadata().MF;
   LiveIntervals &LIS = G.getMetadata().LIS;
   const TargetRegisterInfo &TRI = *MF.getSubtarget().getRegisterInfo();
@@ -752,8 +746,7 @@ bool RegAllocPBQP::mapPBQPToRegAlloc(const PBQPRAGraph &G,
   return !AnotherRoundNeeded;
 }
 
-void RegAllocPBQP::finalizeAlloc(MachineFunction &MF,
-                                 LiveIntervals &LIS,
+void RegAllocPBQP::finalizeAlloc(MachineFunction &MF, LiveIntervals &LIS,
                                  VirtRegMap &VRM) const {
   MachineRegisterInfo &MRI = MF.getRegInfo();
 
@@ -792,8 +785,7 @@ void RegAllocPBQP::postOptimization(Spiller &VRegSpiller, LiveIntervals &LIS) {
 
 bool RegAllocPBQP::runOnMachineFunction(MachineFunction &MF) {
   LiveIntervals &LIS = getAnalysis<LiveIntervals>();
-  MachineBlockFrequencyInfo &MBFI =
-    getAnalysis<MachineBlockFrequencyInfo>();
+  MachineBlockFrequencyInfo &MBFI = getAnalysis<MachineBlockFrequencyInfo>();
 
   VirtRegMap &VRM = getAnalysis<VirtRegMap>();
 
@@ -828,14 +820,14 @@ bool RegAllocPBQP::runOnMachineFunction(MachineFunction &MF) {
 #ifndef NDEBUG
   const Function &F = MF.getFunction();
   std::string FullyQualifiedName =
-    F.getParent()->getModuleIdentifier() + "." + F.getName().str();
+      F.getParent()->getModuleIdentifier() + "." + F.getName().str();
 #endif
 
   // If there are non-empty intervals allocate them using pbqp.
   if (!VRegsToAlloc.empty()) {
     const TargetSubtargetInfo &Subtarget = MF.getSubtarget();
     std::unique_ptr<PBQPRAConstraintList> ConstraintsRoot =
-      std::make_unique<PBQPRAConstraintList>();
+        std::make_unique<PBQPRAConstraintList>();
     ConstraintsRoot->addConstraint(std::make_unique<SpillCosts>());
     ConstraintsRoot->addConstraint(std::make_unique<Interference>());
     if (PBQPCoalescing)
@@ -847,7 +839,7 @@ bool RegAllocPBQP::runOnMachineFunction(MachineFunction &MF) {
 
     while (!PBQPAllocComplete) {
       LLVM_DEBUG(dbgs() << "  PBQP Regalloc round " << Round << ":\n");
-      (void) Round;
+      (void)Round;
 
       PBQPRAGraph G(PBQPRAGraph::GraphMetadata(MF, LIS, MBFI));
       initializeGraph(G, VRM, *VRegSpiller);
@@ -857,8 +849,8 @@ bool RegAllocPBQP::runOnMachineFunction(MachineFunction &MF) {
       if (PBQPDumpGraphs) {
         std::ostringstream RS;
         RS << Round;
-        std::string GraphFileName = FullyQualifiedName + "." + RS.str() +
-                                    ".pbqpgraph";
+        std::string GraphFileName =
+            FullyQualifiedName + "." + RS.str() + ".pbqpgraph";
         std::error_code EC;
         raw_fd_ostream OS(GraphFileName, EC, sys::fs::OF_TextWithCRLF);
         LLVM_DEBUG(dbgs() << "Dumping graph for round " << Round << " to \""
@@ -926,15 +918,13 @@ LLVM_DUMP_METHOD void PBQP::RegAlloc::PBQPRAGraph::dump() const {
 void PBQP::RegAlloc::PBQPRAGraph::printDot(raw_ostream &OS) const {
   OS << "graph {\n";
   for (auto NId : nodeIds()) {
-    OS << "  node" << NId << " [ label=\""
-       << PrintNodeInfo(NId, *this) << "\\n"
+    OS << "  node" << NId << " [ label=\"" << PrintNodeInfo(NId, *this) << "\\n"
        << getNodeCosts(NId) << "\" ]\n";
   }
 
   OS << "  edge [ len=" << nodeIds().size() << " ]\n";
   for (auto EId : edgeIds()) {
-    OS << "  node" << getEdgeNode1Id(EId)
-       << " -- node" << getEdgeNode2Id(EId)
+    OS << "  node" << getEdgeNode1Id(EId) << " -- node" << getEdgeNode2Id(EId)
        << " [ label=\"";
     const Matrix &EdgeCosts = getEdgeCosts(EId);
     for (unsigned i = 0; i < EdgeCosts.getRows(); ++i) {
@@ -949,6 +939,6 @@ FunctionPass *llvm::createPBQPRegisterAllocator(char *customPassID) {
   return new RegAllocPBQP(customPassID);
 }
 
-FunctionPass* llvm::createDefaultPBQPRegisterAllocator() {
+FunctionPass *llvm::createDefaultPBQPRegisterAllocator() {
   return createPBQPRegisterAllocator();
 }
diff --git a/llvm/lib/CodeGen/RegUsageInfoCollector.cpp b/llvm/lib/CodeGen/RegUsageInfoCollector.cpp
index 6657cf3c1ef4abc..e82aab36f331651 100644
--- a/llvm/lib/CodeGen/RegUsageInfoCollector.cpp
+++ b/llvm/lib/CodeGen/RegUsageInfoCollector.cpp
@@ -138,7 +138,7 @@ bool RegUsageInfoCollector::runOnMachineFunction(MachineFunction &MF) {
   computeCalleeSavedRegs(SavedRegs, MF);
 
   const BitVector &UsedPhysRegsMask = MRI->getUsedPhysRegsMask();
-  auto SetRegAsDefined = [&RegMask] (unsigned Reg) {
+  auto SetRegAsDefined = [&RegMask](unsigned Reg) {
     RegMask[Reg / 32] &= ~(1u << Reg % 32);
   };
 
@@ -178,21 +178,21 @@ bool RegUsageInfoCollector::runOnMachineFunction(MachineFunction &MF) {
   }
 
   LLVM_DEBUG(
-    for (unsigned PReg = 1, PRegE = TRI->getNumRegs(); PReg < PRegE; ++PReg) {
-      if (MachineOperand::clobbersPhysReg(&(RegMask[0]), PReg))
-        dbgs() << printReg(PReg, TRI) << " ";
-    }
+      for (unsigned PReg = 1, PRegE = TRI->getNumRegs(); PReg < PRegE; ++PReg) {
+        if (MachineOperand::clobbersPhysReg(&(RegMask[0]), PReg))
+          dbgs() << printReg(PReg, TRI) << " ";
+      }
 
-    dbgs() << " \n----------------------------------------\n";
-  );
+          dbgs()
+          << " \n----------------------------------------\n";);
 
   PRUI.storeUpdateRegUsageInfo(F, RegMask);
 
   return false;
 }
 
-void RegUsageInfoCollector::
-computeCalleeSavedRegs(BitVector &SavedRegs, MachineFunction &MF) {
+void RegUsageInfoCollector::computeCalleeSavedRegs(BitVector &SavedRegs,
+                                                   MachineFunction &MF) {
   const TargetFrameLowering &TFI = *MF.getSubtarget().getFrameLowering();
   const TargetRegisterInfo &TRI = *MF.getSubtarget().getRegisterInfo();
 
diff --git a/llvm/lib/CodeGen/RegUsageInfoPropagate.cpp b/llvm/lib/CodeGen/RegUsageInfoPropagate.cpp
index d356962e0d78a24..2a99d7d9027c066 100644
--- a/llvm/lib/CodeGen/RegUsageInfoPropagate.cpp
+++ b/llvm/lib/CodeGen/RegUsageInfoPropagate.cpp
@@ -60,10 +60,12 @@ class RegUsageInfoPropagation : public MachineFunctionPass {
 private:
   static void setRegMask(MachineInstr &MI, ArrayRef<uint32_t> RegMask) {
     assert(RegMask.size() ==
-           MachineOperand::getRegMaskSize(MI.getParent()->getParent()
-                                          ->getRegInfo().getTargetRegisterInfo()
-                                          ->getNumRegs())
-           && "expected register mask size");
+               MachineOperand::getRegMaskSize(MI.getParent()
+                                                  ->getParent()
+                                                  ->getRegInfo()
+                                                  .getTargetRegisterInfo()
+                                                  ->getNumRegs()) &&
+           "expected register mask size");
     for (MachineOperand &MO : MI.operands()) {
       if (MO.isRegMask())
         MO.setRegMask(RegMask.data());
@@ -76,8 +78,8 @@ class RegUsageInfoPropagation : public MachineFunctionPass {
 INITIALIZE_PASS_BEGIN(RegUsageInfoPropagation, "reg-usage-propagation",
                       RUIP_NAME, false, false)
 INITIALIZE_PASS_DEPENDENCY(PhysicalRegisterUsageInfo)
-INITIALIZE_PASS_END(RegUsageInfoPropagation, "reg-usage-propagation",
-                    RUIP_NAME, false, false)
+INITIALIZE_PASS_END(RegUsageInfoPropagation, "reg-usage-propagation", RUIP_NAME,
+                    false, false)
 
 char RegUsageInfoPropagation::ID = 0;
 
diff --git a/llvm/lib/CodeGen/RegisterBankInfo.cpp b/llvm/lib/CodeGen/RegisterBankInfo.cpp
index 658a09fd870094e..372d51e748cf8fa 100644
--- a/llvm/lib/CodeGen/RegisterBankInfo.cpp
+++ b/llvm/lib/CodeGen/RegisterBankInfo.cpp
@@ -220,8 +220,8 @@ RegisterBankInfo::getInstrMappingImpl(const MachineInstr &MI) const {
       if (!OperandsMapping[0]) {
         if (MI.isRegSequence()) {
           // For reg_sequence, the result size does not match the input.
-          unsigned ResultSize = getSizeInBits(MI.getOperand(0).getReg(),
-                                              MRI, TRI);
+          unsigned ResultSize =
+              getSizeInBits(MI.getOperand(0).getReg(), MRI, TRI);
           OperandsMapping[0] = &getValueMapping(0, ResultSize, *CurRegBank);
         } else {
           OperandsMapping[0] = ValMapping;
@@ -398,8 +398,8 @@ RegisterBankInfo::getInstructionMappingImpl(
   ++NumInstructionMappingsCreated;
 
   auto &InstrMapping = MapOfInstructionMappings[Hash];
-  InstrMapping = std::make_unique<InstructionMapping>(
-      ID, Cost, OperandsMapping, NumOperands);
+  InstrMapping = std::make_unique<InstructionMapping>(ID, Cost, OperandsMapping,
+                                                      NumOperands);
   return *InstrMapping;
 }
 
diff --git a/llvm/lib/CodeGen/RegisterClassInfo.cpp b/llvm/lib/CodeGen/RegisterClassInfo.cpp
index fba8c35ecec260b..79efdc13d0ab59a 100644
--- a/llvm/lib/CodeGen/RegisterClassInfo.cpp
+++ b/llvm/lib/CodeGen/RegisterClassInfo.cpp
@@ -34,8 +34,8 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 static cl::opt<unsigned>
-StressRA("stress-regalloc", cl::Hidden, cl::init(0), cl::value_desc("N"),
-         cl::desc("Limit all regclasses to N registers"));
+    StressRA("stress-regalloc", cl::Hidden, cl::init(0), cl::value_desc("N"),
+             cl::desc("Limit all regclasses to N registers"));
 
 RegisterClassInfo::RegisterClassInfo() = default;
 
diff --git a/llvm/lib/CodeGen/RegisterCoalescer.cpp b/llvm/lib/CodeGen/RegisterCoalescer.cpp
index 826fc916ec083ba..847688446f71f12 100644
--- a/llvm/lib/CodeGen/RegisterCoalescer.cpp
+++ b/llvm/lib/CodeGen/RegisterCoalescer.cpp
@@ -62,15 +62,15 @@ using namespace llvm;
 
 #define DEBUG_TYPE "regalloc"
 
-STATISTIC(numJoins    , "Number of interval joins performed");
-STATISTIC(numCrossRCs , "Number of cross class joins performed");
-STATISTIC(numCommutes , "Number of instruction commuting performed");
-STATISTIC(numExtends  , "Number of copies extended");
-STATISTIC(NumReMats   , "Number of instructions re-materialized");
-STATISTIC(NumInflated , "Number of register classes inflated");
+STATISTIC(numJoins, "Number of interval joins performed");
+STATISTIC(numCrossRCs, "Number of cross class joins performed");
+STATISTIC(numCommutes, "Number of instruction commuting performed");
+STATISTIC(numExtends, "Number of copies extended");
+STATISTIC(NumReMats, "Number of instructions re-materialized");
+STATISTIC(NumInflated, "Number of register classes inflated");
 STATISTIC(NumLaneConflicts, "Number of dead lane conflicts tested");
-STATISTIC(NumLaneResolves,  "Number of dead lane conflicts resolved");
-STATISTIC(NumShrinkToUses,  "Number of shrinkToUses called");
+STATISTIC(NumLaneResolves, "Number of dead lane conflicts resolved");
+STATISTIC(NumShrinkToUses, "Number of shrinkToUses called");
 
 static cl::opt<bool> EnableJoining("join-liveintervals",
                                    cl::desc("Coalesce copies (default=true)"),
@@ -81,20 +81,20 @@ static cl::opt<bool> UseTerminalRule("terminal-rule",
                                      cl::init(false), cl::Hidden);
 
 /// Temporary flag to test critical edge unsplitting.
-static cl::opt<bool>
-EnableJoinSplits("join-splitedges",
-  cl::desc("Coalesce copies on split edges (default=subtarget)"), cl::Hidden);
+static cl::opt<bool> EnableJoinSplits(
+    "join-splitedges",
+    cl::desc("Coalesce copies on split edges (default=subtarget)"), cl::Hidden);
 
 /// Temporary flag to test global copy optimization.
-static cl::opt<cl::boolOrDefault>
-EnableGlobalCopies("join-globalcopies",
-  cl::desc("Coalesce copies that span blocks (default=subtarget)"),
-  cl::init(cl::BOU_UNSET), cl::Hidden);
+static cl::opt<cl::boolOrDefault> EnableGlobalCopies(
+    "join-globalcopies",
+    cl::desc("Coalesce copies that span blocks (default=subtarget)"),
+    cl::init(cl::BOU_UNSET), cl::Hidden);
 
-static cl::opt<bool>
-VerifyCoalescing("verify-coalescing",
-         cl::desc("Verify machine instrs before and after register coalescing"),
-         cl::Hidden);
+static cl::opt<bool> VerifyCoalescing(
+    "verify-coalescing",
+    cl::desc("Verify machine instrs before and after register coalescing"),
+    cl::Hidden);
 
 static cl::opt<unsigned> LateRematUpdateThreshold(
     "late-remat-update-threshold", cl::Hidden,
@@ -120,277 +120,276 @@ static cl::opt<unsigned> LargeIntervalFreqThreshold(
 
 namespace {
 
-  class JoinVals;
-
-  class RegisterCoalescer : public MachineFunctionPass,
-                            private LiveRangeEdit::Delegate {
-    MachineFunction* MF = nullptr;
-    MachineRegisterInfo* MRI = nullptr;
-    const TargetRegisterInfo* TRI = nullptr;
-    const TargetInstrInfo* TII = nullptr;
-    LiveIntervals *LIS = nullptr;
-    const MachineLoopInfo* Loops = nullptr;
-    AliasAnalysis *AA = nullptr;
-    RegisterClassInfo RegClassInfo;
-
-    /// Position and VReg of a PHI instruction during coalescing.
-    struct PHIValPos {
-      SlotIndex SI;    ///< Slot where this PHI occurs.
-      Register Reg;    ///< VReg the PHI occurs in.
-      unsigned SubReg; ///< Qualifying subregister for Reg.
-    };
-
-    /// Map from debug instruction number to PHI position during coalescing.
-    DenseMap<unsigned, PHIValPos> PHIValToPos;
-    /// Index of, for each VReg, which debug instruction numbers and
-    /// corresponding PHIs are sensitive to coalescing. Each VReg may have
-    /// multiple PHI defs, at different positions.
-    DenseMap<Register, SmallVector<unsigned, 2>> RegToPHIIdx;
-
-    /// Debug variable location tracking -- for each VReg, maintain an
-    /// ordered-by-slot-index set of DBG_VALUEs, to help quick
-    /// identification of whether coalescing may change location validity.
-    using DbgValueLoc = std::pair<SlotIndex, MachineInstr*>;
-    DenseMap<Register, std::vector<DbgValueLoc>> DbgVRegToValues;
-
-    /// A LaneMask to remember on which subregister live ranges we need to call
-    /// shrinkToUses() later.
-    LaneBitmask ShrinkMask;
-
-    /// True if the main range of the currently coalesced intervals should be
-    /// checked for smaller live intervals.
-    bool ShrinkMainRange = false;
-
-    /// True if the coalescer should aggressively coalesce global copies
-    /// in favor of keeping local copies.
-    bool JoinGlobalCopies = false;
-
-    /// True if the coalescer should aggressively coalesce fall-thru
-    /// blocks exclusively containing copies.
-    bool JoinSplitEdges = false;
-
-    /// Copy instructions yet to be coalesced.
-    SmallVector<MachineInstr*, 8> WorkList;
-    SmallVector<MachineInstr*, 8> LocalWorkList;
-
-    /// Set of instruction pointers that have been erased, and
-    /// that may be present in WorkList.
-    SmallPtrSet<MachineInstr*, 8> ErasedInstrs;
-
-    /// Dead instructions that are about to be deleted.
-    SmallVector<MachineInstr*, 8> DeadDefs;
-
-    /// Virtual registers to be considered for register class inflation.
-    SmallVector<Register, 8> InflateRegs;
-
-    /// The collection of live intervals which should have been updated
-    /// immediately after rematerialiation but delayed until
-    /// lateLiveIntervalUpdate is called.
-    DenseSet<Register> ToBeUpdated;
-
-    /// Record how many times the large live interval with many valnos
-    /// has been tried to join with other live interval.
-    DenseMap<Register, unsigned long> LargeLIVisitCounter;
-
-    /// Recursively eliminate dead defs in DeadDefs.
-    void eliminateDeadDefs(LiveRangeEdit *Edit = nullptr);
-
-    /// LiveRangeEdit callback for eliminateDeadDefs().
-    void LRE_WillEraseInstruction(MachineInstr *MI) override;
-
-    /// Coalesce the LocalWorkList.
-    void coalesceLocals();
-
-    /// Join compatible live intervals
-    void joinAllIntervals();
-
-    /// Coalesce copies in the specified MBB, putting
-    /// copies that cannot yet be coalesced into WorkList.
-    void copyCoalesceInMBB(MachineBasicBlock *MBB);
-
-    /// Tries to coalesce all copies in CurrList. Returns true if any progress
-    /// was made.
-    bool copyCoalesceWorkList(MutableArrayRef<MachineInstr*> CurrList);
-
-    /// If one def has many copy like uses, and those copy uses are all
-    /// rematerialized, the live interval update needed for those
-    /// rematerializations will be delayed and done all at once instead
-    /// of being done multiple times. This is to save compile cost because
-    /// live interval update is costly.
-    void lateLiveIntervalUpdate();
-
-    /// Check if the incoming value defined by a COPY at \p SLRQ in the subrange
-    /// has no value defined in the predecessors. If the incoming value is the
-    /// same as defined by the copy itself, the value is considered undefined.
-    bool copyValueUndefInPredecessors(LiveRange &S,
-                                      const MachineBasicBlock *MBB,
-                                      LiveQueryResult SLRQ);
-
-    /// Set necessary undef flags on subregister uses after pruning out undef
-    /// lane segments from the subrange.
-    void setUndefOnPrunedSubRegUses(LiveInterval &LI, Register Reg,
-                                    LaneBitmask PrunedLanes);
-
-    /// Attempt to join intervals corresponding to SrcReg/DstReg, which are the
-    /// src/dst of the copy instruction CopyMI.  This returns true if the copy
-    /// was successfully coalesced away. If it is not currently possible to
-    /// coalesce this interval, but it may be possible if other things get
-    /// coalesced, then it returns true by reference in 'Again'.
-    bool joinCopy(MachineInstr *CopyMI, bool &Again);
-
-    /// Attempt to join these two intervals.  On failure, this
-    /// returns false.  The output "SrcInt" will not have been modified, so we
-    /// can use this information below to update aliases.
-    bool joinIntervals(CoalescerPair &CP);
-
-    /// Attempt joining two virtual registers. Return true on success.
-    bool joinVirtRegs(CoalescerPair &CP);
-
-    /// If a live interval has many valnos and is coalesced with other
-    /// live intervals many times, we regard such live interval as having
-    /// high compile time cost.
-    bool isHighCostLiveInterval(LiveInterval &LI);
-
-    /// Attempt joining with a reserved physreg.
-    bool joinReservedPhysReg(CoalescerPair &CP);
-
-    /// Add the LiveRange @p ToMerge as a subregister liverange of @p LI.
-    /// Subranges in @p LI which only partially interfere with the desired
-    /// LaneMask are split as necessary. @p LaneMask are the lanes that
-    /// @p ToMerge will occupy in the coalescer register. @p LI has its subrange
-    /// lanemasks already adjusted to the coalesced register.
-    void mergeSubRangeInto(LiveInterval &LI, const LiveRange &ToMerge,
-                           LaneBitmask LaneMask, CoalescerPair &CP,
-                           unsigned DstIdx);
-
-    /// Join the liveranges of two subregisters. Joins @p RRange into
-    /// @p LRange, @p RRange may be invalid afterwards.
-    void joinSubRegRanges(LiveRange &LRange, LiveRange &RRange,
-                          LaneBitmask LaneMask, const CoalescerPair &CP);
-
-    /// We found a non-trivially-coalescable copy. If the source value number is
-    /// defined by a copy from the destination reg see if we can merge these two
-    /// destination reg valno# into a single value number, eliminating a copy.
-    /// This returns true if an interval was modified.
-    bool adjustCopiesBackFrom(const CoalescerPair &CP, MachineInstr *CopyMI);
-
-    /// Return true if there are definitions of IntB
-    /// other than BValNo val# that can reach uses of AValno val# of IntA.
-    bool hasOtherReachingDefs(LiveInterval &IntA, LiveInterval &IntB,
-                              VNInfo *AValNo, VNInfo *BValNo);
-
-    /// We found a non-trivially-coalescable copy.
-    /// If the source value number is defined by a commutable instruction and
-    /// its other operand is coalesced to the copy dest register, see if we
-    /// can transform the copy into a noop by commuting the definition.
-    /// This returns a pair of two flags:
-    /// - the first element is true if an interval was modified,
-    /// - the second element is true if the destination interval needs
-    ///   to be shrunk after deleting the copy.
-    std::pair<bool,bool> removeCopyByCommutingDef(const CoalescerPair &CP,
-                                                  MachineInstr *CopyMI);
-
-    /// We found a copy which can be moved to its less frequent predecessor.
-    bool removePartialRedundancy(const CoalescerPair &CP, MachineInstr &CopyMI);
-
-    /// If the source of a copy is defined by a
-    /// trivial computation, replace the copy by rematerialize the definition.
-    bool reMaterializeTrivialDef(const CoalescerPair &CP, MachineInstr *CopyMI,
-                                 bool &IsDefCopy);
-
-    /// Return true if a copy involving a physreg should be joined.
-    bool canJoinPhys(const CoalescerPair &CP);
-
-    /// Replace all defs and uses of SrcReg to DstReg and update the subregister
-    /// number if it is not zero. If DstReg is a physical register and the
-    /// existing subregister number of the def / use being updated is not zero,
-    /// make sure to set it to the correct physical subregister.
-    void updateRegDefsUses(Register SrcReg, Register DstReg, unsigned SubIdx);
-
-    /// If the given machine operand reads only undefined lanes add an undef
-    /// flag.
-    /// This can happen when undef uses were previously concealed by a copy
-    /// which we coalesced. Example:
-    ///    %0:sub0<def,read-undef> = ...
-    ///    %1 = COPY %0           <-- Coalescing COPY reveals undef
-    ///       = use %1:sub1       <-- hidden undef use
-    void addUndefFlag(const LiveInterval &Int, SlotIndex UseIdx,
-                      MachineOperand &MO, unsigned SubRegIdx);
-
-    /// Handle copies of undef values. If the undef value is an incoming
-    /// PHI value, it will convert @p CopyMI to an IMPLICIT_DEF.
-    /// Returns nullptr if @p CopyMI was not in any way eliminable. Otherwise,
-    /// it returns @p CopyMI (which could be an IMPLICIT_DEF at this point).
-    MachineInstr *eliminateUndefCopy(MachineInstr *CopyMI);
-
-    /// Check whether or not we should apply the terminal rule on the
-    /// destination (Dst) of \p Copy.
-    /// When the terminal rule applies, Copy is not profitable to
-    /// coalesce.
-    /// Dst is terminal if it has exactly one affinity (Dst, Src) and
-    /// at least one interference (Dst, Dst2). If Dst is terminal, the
-    /// terminal rule consists in checking that at least one of
-    /// interfering node, say Dst2, has an affinity of equal or greater
-    /// weight with Src.
-    /// In that case, Dst2 and Dst will not be able to be both coalesced
-    /// with Src. Since Dst2 exposes more coalescing opportunities than
-    /// Dst, we can drop \p Copy.
-    bool applyTerminalRule(const MachineInstr &Copy) const;
-
-    /// Wrapper method for \see LiveIntervals::shrinkToUses.
-    /// This method does the proper fixing of the live-ranges when the afore
-    /// mentioned method returns true.
-    void shrinkToUses(LiveInterval *LI,
-                      SmallVectorImpl<MachineInstr * > *Dead = nullptr) {
-      NumShrinkToUses++;
-      if (LIS->shrinkToUses(LI, Dead)) {
-        /// Check whether or not \p LI is composed by multiple connected
-        /// components and if that is the case, fix that.
-        SmallVector<LiveInterval*, 8> SplitLIs;
-        LIS->splitSeparateComponents(*LI, SplitLIs);
-      }
-    }
+class JoinVals;
+
+class RegisterCoalescer : public MachineFunctionPass,
+                          private LiveRangeEdit::Delegate {
+  MachineFunction *MF = nullptr;
+  MachineRegisterInfo *MRI = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  LiveIntervals *LIS = nullptr;
+  const MachineLoopInfo *Loops = nullptr;
+  AliasAnalysis *AA = nullptr;
+  RegisterClassInfo RegClassInfo;
+
+  /// Position and VReg of a PHI instruction during coalescing.
+  struct PHIValPos {
+    SlotIndex SI;    ///< Slot where this PHI occurs.
+    Register Reg;    ///< VReg the PHI occurs in.
+    unsigned SubReg; ///< Qualifying subregister for Reg.
+  };
 
-    /// Wrapper Method to do all the necessary work when an Instruction is
-    /// deleted.
-    /// Optimizations should use this to make sure that deleted instructions
-    /// are always accounted for.
-    void deleteInstr(MachineInstr* MI) {
-      ErasedInstrs.insert(MI);
-      LIS->RemoveMachineInstrFromMaps(*MI);
-      MI->eraseFromParent();
+  /// Map from debug instruction number to PHI position during coalescing.
+  DenseMap<unsigned, PHIValPos> PHIValToPos;
+  /// Index of, for each VReg, which debug instruction numbers and
+  /// corresponding PHIs are sensitive to coalescing. Each VReg may have
+  /// multiple PHI defs, at different positions.
+  DenseMap<Register, SmallVector<unsigned, 2>> RegToPHIIdx;
+
+  /// Debug variable location tracking -- for each VReg, maintain an
+  /// ordered-by-slot-index set of DBG_VALUEs, to help quick
+  /// identification of whether coalescing may change location validity.
+  using DbgValueLoc = std::pair<SlotIndex, MachineInstr *>;
+  DenseMap<Register, std::vector<DbgValueLoc>> DbgVRegToValues;
+
+  /// A LaneMask to remember on which subregister live ranges we need to call
+  /// shrinkToUses() later.
+  LaneBitmask ShrinkMask;
+
+  /// True if the main range of the currently coalesced intervals should be
+  /// checked for smaller live intervals.
+  bool ShrinkMainRange = false;
+
+  /// True if the coalescer should aggressively coalesce global copies
+  /// in favor of keeping local copies.
+  bool JoinGlobalCopies = false;
+
+  /// True if the coalescer should aggressively coalesce fall-thru
+  /// blocks exclusively containing copies.
+  bool JoinSplitEdges = false;
+
+  /// Copy instructions yet to be coalesced.
+  SmallVector<MachineInstr *, 8> WorkList;
+  SmallVector<MachineInstr *, 8> LocalWorkList;
+
+  /// Set of instruction pointers that have been erased, and
+  /// that may be present in WorkList.
+  SmallPtrSet<MachineInstr *, 8> ErasedInstrs;
+
+  /// Dead instructions that are about to be deleted.
+  SmallVector<MachineInstr *, 8> DeadDefs;
+
+  /// Virtual registers to be considered for register class inflation.
+  SmallVector<Register, 8> InflateRegs;
+
+  /// The collection of live intervals which should have been updated
+  /// immediately after rematerialization but delayed until
+  /// lateLiveIntervalUpdate is called.
+  DenseSet<Register> ToBeUpdated;
+
+  /// Record how many times the large live interval with many valnos
+  /// has been tried to join with other live intervals.
+  DenseMap<Register, unsigned long> LargeLIVisitCounter;
+
+  /// Recursively eliminate dead defs in DeadDefs.
+  void eliminateDeadDefs(LiveRangeEdit *Edit = nullptr);
+
+  /// LiveRangeEdit callback for eliminateDeadDefs().
+  void LRE_WillEraseInstruction(MachineInstr *MI) override;
+
+  /// Coalesce the LocalWorkList.
+  void coalesceLocals();
+
+  /// Join compatible live intervals
+  void joinAllIntervals();
+
+  /// Coalesce copies in the specified MBB, putting
+  /// copies that cannot yet be coalesced into WorkList.
+  void copyCoalesceInMBB(MachineBasicBlock *MBB);
+
+  /// Tries to coalesce all copies in CurrList. Returns true if any progress
+  /// was made.
+  bool copyCoalesceWorkList(MutableArrayRef<MachineInstr *> CurrList);
+
+  /// If one def has many copy like uses, and those copy uses are all
+  /// rematerialized, the live interval update needed for those
+  /// rematerializations will be delayed and done all at once instead
+  /// of being done multiple times. This is to save compile cost because
+  /// live interval update is costly.
+  void lateLiveIntervalUpdate();
+
+  /// Check if the incoming value defined by a COPY at \p SLRQ in the subrange
+  /// has no value defined in the predecessors. If the incoming value is the
+  /// same as defined by the copy itself, the value is considered undefined.
+  bool copyValueUndefInPredecessors(LiveRange &S, const MachineBasicBlock *MBB,
+                                    LiveQueryResult SLRQ);
+
+  /// Set necessary undef flags on subregister uses after pruning out undef
+  /// lane segments from the subrange.
+  void setUndefOnPrunedSubRegUses(LiveInterval &LI, Register Reg,
+                                  LaneBitmask PrunedLanes);
+
+  /// Attempt to join intervals corresponding to SrcReg/DstReg, which are the
+  /// src/dst of the copy instruction CopyMI.  This returns true if the copy
+  /// was successfully coalesced away. If it is not currently possible to
+  /// coalesce this interval, but it may be possible if other things get
+  /// coalesced, then it returns true by reference in 'Again'.
+  bool joinCopy(MachineInstr *CopyMI, bool &Again);
+
+  /// Attempt to join these two intervals.  On failure, this
+  /// returns false.  The output "SrcInt" will not have been modified, so we
+  /// can use this information below to update aliases.
+  bool joinIntervals(CoalescerPair &CP);
+
+  /// Attempt joining two virtual registers. Return true on success.
+  bool joinVirtRegs(CoalescerPair &CP);
+
+  /// If a live interval has many valnos and is coalesced with other
+  /// live intervals many times, we regard such a live interval as having
+  /// high compile time cost.
+  bool isHighCostLiveInterval(LiveInterval &LI);
+
+  /// Attempt joining with a reserved physreg.
+  bool joinReservedPhysReg(CoalescerPair &CP);
+
+  /// Add the LiveRange @p ToMerge as a subregister liverange of @p LI.
+  /// Subranges in @p LI which only partially interfere with the desired
+  /// LaneMask are split as necessary. @p LaneMask are the lanes that
+  /// @p ToMerge will occupy in the coalescer register. @p LI has its subrange
+  /// lanemasks already adjusted to the coalesced register.
+  void mergeSubRangeInto(LiveInterval &LI, const LiveRange &ToMerge,
+                         LaneBitmask LaneMask, CoalescerPair &CP,
+                         unsigned DstIdx);
+
+  /// Join the liveranges of two subregisters. Joins @p RRange into
+  /// @p LRange, @p RRange may be invalid afterwards.
+  void joinSubRegRanges(LiveRange &LRange, LiveRange &RRange,
+                        LaneBitmask LaneMask, const CoalescerPair &CP);
+
+  /// We found a non-trivially-coalescable copy. If the source value number is
+  /// defined by a copy from the destination reg see if we can merge these two
+  /// destination reg valno# into a single value number, eliminating a copy.
+  /// This returns true if an interval was modified.
+  bool adjustCopiesBackFrom(const CoalescerPair &CP, MachineInstr *CopyMI);
+
+  /// Return true if there are definitions of IntB
+  /// other than BValNo val# that can reach uses of AValno val# of IntA.
+  bool hasOtherReachingDefs(LiveInterval &IntA, LiveInterval &IntB,
+                            VNInfo *AValNo, VNInfo *BValNo);
+
+  /// We found a non-trivially-coalescable copy.
+  /// If the source value number is defined by a commutable instruction and
+  /// its other operand is coalesced to the copy dest register, see if we
+  /// can transform the copy into a noop by commuting the definition.
+  /// This returns a pair of two flags:
+  /// - the first element is true if an interval was modified,
+  /// - the second element is true if the destination interval needs
+  ///   to be shrunk after deleting the copy.
+  std::pair<bool, bool> removeCopyByCommutingDef(const CoalescerPair &CP,
+                                                 MachineInstr *CopyMI);
+
+  /// We found a copy which can be moved to its less frequent predecessor.
+  bool removePartialRedundancy(const CoalescerPair &CP, MachineInstr &CopyMI);
+
+  /// If the source of a copy is defined by a
+  /// trivial computation, replace the copy by rematerializing the definition.
+  bool reMaterializeTrivialDef(const CoalescerPair &CP, MachineInstr *CopyMI,
+                               bool &IsDefCopy);
+
+  /// Return true if a copy involving a physreg should be joined.
+  bool canJoinPhys(const CoalescerPair &CP);
+
+  /// Replace all defs and uses of SrcReg to DstReg and update the subregister
+  /// number if it is not zero. If DstReg is a physical register and the
+  /// existing subregister number of the def / use being updated is not zero,
+  /// make sure to set it to the correct physical subregister.
+  void updateRegDefsUses(Register SrcReg, Register DstReg, unsigned SubIdx);
+
+  /// If the given machine operand reads only undefined lanes add an undef
+  /// flag.
+  /// This can happen when undef uses were previously concealed by a copy
+  /// which we coalesced. Example:
+  ///    %0:sub0<def,read-undef> = ...
+  ///    %1 = COPY %0           <-- Coalescing COPY reveals undef
+  ///       = use %1:sub1       <-- hidden undef use
+  void addUndefFlag(const LiveInterval &Int, SlotIndex UseIdx,
+                    MachineOperand &MO, unsigned SubRegIdx);
+
+  /// Handle copies of undef values. If the undef value is an incoming
+  /// PHI value, it will convert @p CopyMI to an IMPLICIT_DEF.
+  /// Returns nullptr if @p CopyMI was not in any way eliminable. Otherwise,
+  /// it returns @p CopyMI (which could be an IMPLICIT_DEF at this point).
+  MachineInstr *eliminateUndefCopy(MachineInstr *CopyMI);
+
+  /// Check whether or not we should apply the terminal rule on the
+  /// destination (Dst) of \p Copy.
+  /// When the terminal rule applies, Copy is not profitable to
+  /// coalesce.
+  /// Dst is terminal if it has exactly one affinity (Dst, Src) and
+  /// at least one interference (Dst, Dst2). If Dst is terminal, the
+  /// terminal rule consists in checking that at least one of the
+  /// interfering nodes, say Dst2, has an affinity of equal or greater
+  /// weight with Src.
+  /// In that case, Dst2 and Dst will not be able to be both coalesced
+  /// with Src. Since Dst2 exposes more coalescing opportunities than
+  /// Dst, we can drop \p Copy.
+  bool applyTerminalRule(const MachineInstr &Copy) const;
+
+  /// Wrapper method for \see LiveIntervals::shrinkToUses.
+  /// This method does the proper fixing of the live-ranges when the
+  /// aforementioned method returns true.
+  void shrinkToUses(LiveInterval *LI,
+                    SmallVectorImpl<MachineInstr *> *Dead = nullptr) {
+    NumShrinkToUses++;
+    if (LIS->shrinkToUses(LI, Dead)) {
+      /// Check whether or not \p LI is composed by multiple connected
+      /// components and if that is the case, fix that.
+      SmallVector<LiveInterval *, 8> SplitLIs;
+      LIS->splitSeparateComponents(*LI, SplitLIs);
     }
+  }
 
-    /// Walk over function and initialize the DbgVRegToValues map.
-    void buildVRegToDbgValueMap(MachineFunction &MF);
+  /// Wrapper Method to do all the necessary work when an Instruction is
+  /// deleted.
+  /// Optimizations should use this to make sure that deleted instructions
+  /// are always accounted for.
+  void deleteInstr(MachineInstr *MI) {
+    ErasedInstrs.insert(MI);
+    LIS->RemoveMachineInstrFromMaps(*MI);
+    MI->eraseFromParent();
+  }
 
-    /// Test whether, after merging, any DBG_VALUEs would refer to a
-    /// different value number than before merging, and whether this can
-    /// be resolved. If not, mark the DBG_VALUE as being undef.
-    void checkMergingChangesDbgValues(CoalescerPair &CP, LiveRange &LHS,
-                                      JoinVals &LHSVals, LiveRange &RHS,
-                                      JoinVals &RHSVals);
+  /// Walk over function and initialize the DbgVRegToValues map.
+  void buildVRegToDbgValueMap(MachineFunction &MF);
 
-    void checkMergingChangesDbgValuesImpl(Register Reg, LiveRange &OtherRange,
-                                          LiveRange &RegRange, JoinVals &Vals2);
+  /// Test whether, after merging, any DBG_VALUEs would refer to a
+  /// different value number than before merging, and whether this can
+  /// be resolved. If not, mark the DBG_VALUE as being undef.
+  void checkMergingChangesDbgValues(CoalescerPair &CP, LiveRange &LHS,
+                                    JoinVals &LHSVals, LiveRange &RHS,
+                                    JoinVals &RHSVals);
 
-  public:
-    static char ID; ///< Class identification, replacement for typeinfo
+  void checkMergingChangesDbgValuesImpl(Register Reg, LiveRange &OtherRange,
+                                        LiveRange &RegRange, JoinVals &Vals2);
 
-    RegisterCoalescer() : MachineFunctionPass(ID) {
-      initializeRegisterCoalescerPass(*PassRegistry::getPassRegistry());
-    }
+public:
+  static char ID; ///< Class identification, replacement for typeinfo
+
+  RegisterCoalescer() : MachineFunctionPass(ID) {
+    initializeRegisterCoalescerPass(*PassRegistry::getPassRegistry());
+  }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override;
+  void getAnalysisUsage(AnalysisUsage &AU) const override;
 
-    void releaseMemory() override;
+  void releaseMemory() override;
 
-    /// This is the pass entry point.
-    bool runOnMachineFunction(MachineFunction&) override;
+  /// This is the pass entry point.
+  bool runOnMachineFunction(MachineFunction &) override;
 
-    /// Implement the dump method.
-    void print(raw_ostream &O, const Module* = nullptr) const override;
-  };
+  /// Implement the dump method.
+  void print(raw_ostream &O, const Module * = nullptr) const override;
+};
 
 } // end anonymous namespace
 
@@ -411,20 +410,20 @@ INITIALIZE_PASS_END(RegisterCoalescer, "register-coalescer",
                                       const MachineInstr *MI, Register &Src,
                                       Register &Dst, unsigned &SrcSub,
                                       unsigned &DstSub) {
-    if (MI->isCopy()) {
-      Dst = MI->getOperand(0).getReg();
-      DstSub = MI->getOperand(0).getSubReg();
-      Src = MI->getOperand(1).getReg();
-      SrcSub = MI->getOperand(1).getSubReg();
-    } else if (MI->isSubregToReg()) {
-      Dst = MI->getOperand(0).getReg();
-      DstSub = tri.composeSubRegIndices(MI->getOperand(0).getSubReg(),
-                                        MI->getOperand(3).getImm());
-      Src = MI->getOperand(2).getReg();
-      SrcSub = MI->getOperand(2).getSubReg();
-    } else
-      return false;
-    return true;
+  if (MI->isCopy()) {
+    Dst = MI->getOperand(0).getReg();
+    DstSub = MI->getOperand(0).getSubReg();
+    Src = MI->getOperand(1).getReg();
+    SrcSub = MI->getOperand(1).getSubReg();
+  } else if (MI->isSubregToReg()) {
+    Dst = MI->getOperand(0).getReg();
+    DstSub = tri.composeSubRegIndices(MI->getOperand(0).getSubReg(),
+                                      MI->getOperand(3).getImm());
+    Src = MI->getOperand(2).getReg();
+    SrcSub = MI->getOperand(2).getSubReg();
+  } else
+    return false;
+  return true;
 }
 
 /// Return true if this block should be vacated by the coalescer to eliminate
@@ -470,14 +469,16 @@ bool CoalescerPair::setRegisters(const MachineInstr *MI) {
     // Eliminate DstSub on a physreg.
     if (DstSub) {
       Dst = TRI.getSubReg(Dst, DstSub);
-      if (!Dst) return false;
+      if (!Dst)
+        return false;
       DstSub = 0;
     }
 
     // Eliminate SrcSub by picking a corresponding Dst superregister.
     if (SrcSub) {
       Dst = TRI.getMatchingSuperReg(Dst, SrcSub, MRI.getRegClass(Src));
-      if (!Dst) return false;
+      if (!Dst)
+        return false;
     } else if (!MRI.getRegClass(Src)->contains(Dst)) {
       return false;
     }
@@ -492,8 +493,8 @@ bool CoalescerPair::setRegisters(const MachineInstr *MI) {
       if (Src == Dst && SrcSub != DstSub)
         return false;
 
-      NewRC = TRI.getCommonSuperRegClass(SrcRC, SrcSub, DstRC, DstSub,
-                                         SrcIdx, DstIdx);
+      NewRC = TRI.getCommonSuperRegClass(SrcRC, SrcSub, DstRC, DstSub, SrcIdx,
+                                         DstIdx);
       if (!NewRC)
         return false;
     } else if (DstSub) {
@@ -597,8 +598,8 @@ void RegisterCoalescer::eliminateDeadDefs(LiveRangeEdit *Edit) {
     return;
   }
   SmallVector<Register, 8> NewRegs;
-  LiveRangeEdit(nullptr, NewRegs, *MF, *LIS,
-                nullptr, this).eliminateDeadDefs(DeadDefs);
+  LiveRangeEdit(nullptr, NewRegs, *MF, *LIS, nullptr, this)
+      .eliminateDeadDefs(DeadDefs);
 }
 
 void RegisterCoalescer::LRE_WillEraseInstruction(MachineInstr *MI) {
@@ -612,9 +613,9 @@ bool RegisterCoalescer::adjustCopiesBackFrom(const CoalescerPair &CP,
   assert(!CP.isPhys() && "This doesn't work for physreg copies.");
 
   LiveInterval &IntA =
-    LIS->getInterval(CP.isFlipped() ? CP.getDstReg() : CP.getSrcReg());
+      LIS->getInterval(CP.isFlipped() ? CP.getDstReg() : CP.getSrcReg());
   LiveInterval &IntB =
-    LIS->getInterval(CP.isFlipped() ? CP.getSrcReg() : CP.getDstReg());
+      LIS->getInterval(CP.isFlipped() ? CP.getSrcReg() : CP.getDstReg());
   SlotIndex CopyIdx = LIS->getInstructionIndex(*CopyMI).getRegSlot();
 
   // We have a non-trivially-coalescable copy with IntA being the source and
@@ -634,19 +635,22 @@ bool RegisterCoalescer::adjustCopiesBackFrom(const CoalescerPair &CP,
   // BValNo is a value number in B that is defined by a copy from A.  'B1' in
   // the example above.
   LiveInterval::iterator BS = IntB.FindSegmentContaining(CopyIdx);
-  if (BS == IntB.end()) return false;
+  if (BS == IntB.end())
+    return false;
   VNInfo *BValNo = BS->valno;
 
   // Get the location that B is defined at.  Two options: either this value has
   // an unknown definition point or it is defined at CopyIdx.  If unknown, we
   // can't process it.
-  if (BValNo->def != CopyIdx) return false;
+  if (BValNo->def != CopyIdx)
+    return false;
 
   // AValNo is the value number in A that defines the copy, A3 in the example.
   SlotIndex CopyUseIdx = CopyIdx.getRegSlot(true);
   LiveInterval::iterator AS = IntA.FindSegmentContaining(CopyUseIdx);
   // The live segment might not exist after fun with physreg coalescing.
-  if (AS == IntA.end()) return false;
+  if (AS == IntA.end())
+    return false;
   VNInfo *AValNo = AS->valno;
 
   // If AValNo is defined as a copy from IntB, we can potentially process this.
@@ -658,21 +662,22 @@ bool RegisterCoalescer::adjustCopiesBackFrom(const CoalescerPair &CP,
 
   // Get the Segment in IntB that this value number starts with.
   LiveInterval::iterator ValS =
-    IntB.FindSegmentContaining(AValNo->def.getPrevSlot());
+      IntB.FindSegmentContaining(AValNo->def.getPrevSlot());
   if (ValS == IntB.end())
     return false;
 
   // Make sure that the end of the live segment is inside the same block as
   // CopyMI.
   MachineInstr *ValSEndInst =
-    LIS->getInstructionFromIndex(ValS->end.getPrevSlot());
+      LIS->getInstructionFromIndex(ValS->end.getPrevSlot());
   if (!ValSEndInst || ValSEndInst->getParent() != CopyMI->getParent())
     return false;
 
   // Okay, we now know that ValS ends in the same block that the CopyMI
   // live-range starts.  If there are no intervening live segments between them
   // in IntB, we can merge them.
-  if (ValS+1 != BS) return false;
+  if (ValS + 1 != BS)
+    return false;
 
   LLVM_DEBUG(dbgs() << "Extending: " << printReg(IntB.reg(), TRI));
 
@@ -744,8 +749,7 @@ bool RegisterCoalescer::adjustCopiesBackFrom(const CoalescerPair &CP,
 }
 
 bool RegisterCoalescer::hasOtherReachingDefs(LiveInterval &IntA,
-                                             LiveInterval &IntB,
-                                             VNInfo *AValNo,
+                                             LiveInterval &IntB, VNInfo *AValNo,
                                              VNInfo *BValNo) {
   // If AValNo has PHI kills, conservatively assume that IntB defs can reach
   // the PHI values.
@@ -753,7 +757,8 @@ bool RegisterCoalescer::hasOtherReachingDefs(LiveInterval &IntA,
     return true;
 
   for (LiveRange::Segment &ASeg : IntA.segments) {
-    if (ASeg.valno != AValNo) continue;
+    if (ASeg.valno != AValNo)
+      continue;
     LiveInterval::iterator BI = llvm::upper_bound(IntB, ASeg.start);
     if (BI != IntB.begin())
       --BI;
@@ -771,9 +776,10 @@ bool RegisterCoalescer::hasOtherReachingDefs(LiveInterval &IntA,
 
 /// Copy segments with value number @p SrcValNo from liverange @p Src to live
 /// range @Dst and use value number @p DstValNo there.
-static std::pair<bool,bool>
-addSegmentsWithValNo(LiveRange &Dst, VNInfo *DstValNo, const LiveRange &Src,
-                     const VNInfo *SrcValNo) {
+static std::pair<bool, bool> addSegmentsWithValNo(LiveRange &Dst,
+                                                  VNInfo *DstValNo,
+                                                  const LiveRange &Src,
+                                                  const VNInfo *SrcValNo) {
   bool Changed = false;
   bool MergedWithDead = false;
   for (const LiveRange::Segment &S : Src.segments) {
@@ -794,7 +800,7 @@ addSegmentsWithValNo(LiveRange &Dst, VNInfo *DstValNo, const LiveRange &Src,
   return std::make_pair(Changed, MergedWithDead);
 }
 
-std::pair<bool,bool>
+std::pair<bool, bool>
 RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
                                             MachineInstr *CopyMI) {
   assert(!CP.isPhys());
@@ -834,19 +840,19 @@ RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
   VNInfo *AValNo = IntA.getVNInfoAt(CopyIdx.getRegSlot(true));
   assert(AValNo && !AValNo->isUnused() && "COPY source not live");
   if (AValNo->isPHIDef())
-    return { false, false };
+    return {false, false};
   MachineInstr *DefMI = LIS->getInstructionFromIndex(AValNo->def);
   if (!DefMI)
-    return { false, false };
+    return {false, false};
   if (!DefMI->isCommutable())
-    return { false, false };
+    return {false, false};
   // If DefMI is a two-address instruction then commuting it will change the
   // destination register.
   int DefIdx = DefMI->findRegisterDefOperandIdx(IntA.reg());
   assert(DefIdx != -1);
   unsigned UseOpIdx;
   if (!DefMI->isRegTiedToUseOperand(DefIdx, &UseOpIdx))
-    return { false, false };
+    return {false, false};
 
   // FIXME: The code below tries to commute 'UseOpIdx' operand with some other
   // commutable operand which is expressed by 'CommuteAnyOperandIndex'value
@@ -859,17 +865,17 @@ RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
   // op#2<->op#3) of commute transformation should be considered/tried here.
   unsigned NewDstIdx = TargetInstrInfo::CommuteAnyOperandIndex;
   if (!TII->findCommutedOpIndices(*DefMI, UseOpIdx, NewDstIdx))
-    return { false, false };
+    return {false, false};
 
   MachineOperand &NewDstMO = DefMI->getOperand(NewDstIdx);
   Register NewReg = NewDstMO.getReg();
   if (NewReg != IntB.reg() || !IntB.Query(AValNo->def).isKill())
-    return { false, false };
+    return {false, false};
 
   // Make sure there are no other definitions of IntB that would reach the
   // uses which the new definition can reach.
   if (hasOtherReachingDefs(IntA, IntB, AValNo, BValNo))
-    return { false, false };
+    return {false, false};
 
   // If some of the uses of IntA.reg is already coalesced away, return false.
   // It's not possible to determine whether it's safe to perform the coalescing.
@@ -882,7 +888,7 @@ RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
       continue;
     // If this use is tied to a def, we can't rewrite the register.
     if (UseMI->isRegTiedToDefOperand(OpNo))
-      return { false, false };
+      return {false, false};
   }
 
   LLVM_DEBUG(dbgs() << "\tremoveCopyByCommutingDef: " << AValNo->def << '\t'
@@ -894,10 +900,10 @@ RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
   MachineInstr *NewMI =
       TII->commuteInstruction(*DefMI, false, UseOpIdx, NewDstIdx);
   if (!NewMI)
-    return { false, false };
+    return {false, false};
   if (IntA.reg().isVirtual() && IntB.reg().isVirtual() &&
       !MRI->constrainRegClass(IntB.reg(), MRI->getRegClass(IntA.reg())))
-    return { false, false };
+    return {false, false};
   if (NewMI != DefMI) {
     LIS->ReplaceMachineInstrInMaps(*DefMI, *NewMI);
     MachineBasicBlock::iterator Pos = DefMI;
@@ -1028,7 +1034,7 @@ RegisterCoalescer::removeCopyByCommutingDef(const CoalescerPair &CP,
 
   LLVM_DEBUG(dbgs() << "\t\ttrimmed:  " << IntA << '\n');
   ++numCommutes;
-  return { true, ShrinkB };
+  return {true, ShrinkB};
 }
 
 /// For copy B = A in BB2, if A is defined by A = B in BB0 which is a
@@ -1187,9 +1193,9 @@ bool RegisterCoalescer::removePartialRedundancy(const CoalescerPair &CP,
     for (LiveInterval::SubRange &SR : IntB.subranges())
       SR.createDeadDef(NewCopyIdx, LIS->getVNInfoAllocator());
 
-    // If the newly created Instruction has an address of an instruction that was
-    // deleted before (object recycled by the allocator) it needs to be removed from
-    // the deleted list.
+    // If the newly created Instruction has an address of an instruction that
+    // was deleted before (object recycled by the allocator) it needs to be
+    // removed from the deleted list.
     ErasedInstrs.erase(NewCopyMI);
   } else {
     LLVM_DEBUG(dbgs() << "\tremovePartialRedundancy: Remove the copy from "
@@ -1225,7 +1231,7 @@ bool RegisterCoalescer::removePartialRedundancy(const CoalescerPair &CP,
     // to because the copy has been removed.  We can go ahead and remove that
     // endpoint; there is no other situation here that there could be a use at
     // the same place as we know that the copy is a full copy.
-    for (unsigned I = 0; I != EndPoints.size(); ) {
+    for (unsigned I = 0; I != EndPoints.size();) {
       if (SlotIndex::isSameInstr(EndPoints[I], CopyIdx)) {
         EndPoints[I] = EndPoints.back();
         EndPoints.pop_back();
@@ -1322,8 +1328,8 @@ bool RegisterCoalescer::reMaterializeTrivialDef(const CoalescerPair &CP,
     if (DstReg.isPhysical()) {
       Register NewDstReg = DstReg;
 
-      unsigned NewDstIdx = TRI->composeSubRegIndices(CP.getSrcIdx(),
-                                              DefMI->getOperand(0).getSubReg());
+      unsigned NewDstIdx = TRI->composeSubRegIndices(
+          CP.getSrcIdx(), DefMI->getOperand(0).getSubReg());
       if (NewDstIdx)
         NewDstReg = TRI->getSubReg(DstReg, NewDstIdx);
 
@@ -1347,7 +1353,7 @@ bool RegisterCoalescer::reMaterializeTrivialDef(const CoalescerPair &CP,
   DebugLoc DL = CopyMI->getDebugLoc();
   MachineBasicBlock *MBB = CopyMI->getParent();
   MachineBasicBlock::iterator MII =
-    std::next(MachineBasicBlock::iterator(CopyMI));
+      std::next(MachineBasicBlock::iterator(CopyMI));
   Edit.rematerializeAt(*MBB, MII, DstReg, RM, *TRI, false, SrcIdx, CopyMI);
   MachineInstr &NewMI = *std::prev(MII);
   NewMI.setDebugLoc(DL);
@@ -1361,11 +1367,11 @@ bool RegisterCoalescer::reMaterializeTrivialDef(const CoalescerPair &CP,
   if (DstIdx != 0) {
     MachineOperand &DefMO = NewMI.getOperand(0);
     if (DefMO.getSubReg() == DstIdx) {
-      assert(SrcIdx == 0 && CP.isFlipped()
-             && "Shouldn't have SrcIdx+DstIdx at this point");
+      assert(SrcIdx == 0 && CP.isFlipped() &&
+             "Shouldn't have SrcIdx+DstIdx at this point");
       const TargetRegisterClass *DstRC = MRI->getRegClass(DstReg);
       const TargetRegisterClass *CommonRC =
-        TRI->getCommonSubClass(DefRC, DstRC);
+          TRI->getCommonSubClass(DefRC, DstRC);
       if (CommonRC != nullptr) {
         NewRC = CommonRC;
 
@@ -1395,8 +1401,10 @@ bool RegisterCoalescer::reMaterializeTrivialDef(const CoalescerPair &CP,
        I != E; ++I) {
     MachineOperand &MO = CopyMI->getOperand(I);
     if (MO.isReg()) {
-      assert(MO.isImplicit() && "No explicit operands after implicit operands.");
-      assert(MO.getReg().isPhysical() && "unexpected implicit virtual register def");
+      assert(MO.isImplicit() &&
+             "No explicit operands after implicit operands.");
+      assert(MO.getReg().isPhysical() &&
+             "unexpected implicit virtual register def");
       ImplicitOps.push_back(MO);
     }
   }
@@ -1462,7 +1470,7 @@ bool RegisterCoalescer::reMaterializeTrivialDef(const CoalescerPair &CP,
       SlotIndex DefIndex =
           CurrIdx.getRegSlot(NewMI.getOperand(0).isEarlyClobber());
       LaneBitmask MaxMask = MRI->getMaxLaneMaskForVReg(DstReg);
-      VNInfo::Allocator& Alloc = LIS->getVNInfoAllocator();
+      VNInfo::Allocator &Alloc = LIS->getVNInfoAllocator();
       for (LiveInterval::SubRange &SR : DstInt.subranges()) {
         if (!SR.liveAt(DefIndex))
           SR.createDeadDef(DefIndex, Alloc);
@@ -1620,7 +1628,7 @@ MachineInstr *RegisterCoalescer::eliminateUndefCopy(MachineInstr *CopyMI) {
   // at this point.
   Register SrcReg, DstReg;
   unsigned SrcSubIdx = 0, DstSubIdx = 0;
-  if(!isMoveInstr(*TRI, CopyMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
+  if (!isMoveInstr(*TRI, CopyMI, SrcReg, DstReg, SrcSubIdx, DstSubIdx))
     return nullptr;
 
   SlotIndex Idx = LIS->getInstructionIndex(*CopyMI);
@@ -1650,12 +1658,12 @@ MachineInstr *RegisterCoalescer::eliminateUndefCopy(MachineInstr *CopyMI) {
   if (((V && V->isPHIDef()) || (!V && !DstLI.liveAt(Idx)))) {
     CopyMI->setDesc(TII->get(TargetOpcode::IMPLICIT_DEF));
     for (unsigned i = CopyMI->getNumOperands(); i != 0; --i) {
-      MachineOperand &MO = CopyMI->getOperand(i-1);
+      MachineOperand &MO = CopyMI->getOperand(i - 1);
       if (MO.isReg() && MO.isUse())
-        CopyMI->removeOperand(i-1);
+        CopyMI->removeOperand(i - 1);
     }
     LLVM_DEBUG(dbgs() << "\tReplaced copy of <undef> value with an "
-               "implicit def\n");
+                         "implicit def\n");
     return CopyMI;
   }
 
@@ -1764,10 +1772,10 @@ void RegisterCoalescer::updateRegDefsUses(Register SrcReg, Register DstReg,
     }
   }
 
-  SmallPtrSet<MachineInstr*, 8> Visited;
-  for (MachineRegisterInfo::reg_instr_iterator
-       I = MRI->reg_instr_begin(SrcReg), E = MRI->reg_instr_end();
-       I != E; ) {
+  SmallPtrSet<MachineInstr *, 8> Visited;
+  for (MachineRegisterInfo::reg_instr_iterator I = MRI->reg_instr_begin(SrcReg),
+                                               E = MRI->reg_instr_end();
+       I != E;) {
     MachineInstr *UseMI = &*(I++);
 
     // Each instruction can only be rewritten once because sub-register
@@ -1778,7 +1786,7 @@ void RegisterCoalescer::updateRegDefsUses(Register SrcReg, Register DstReg,
     if (SrcReg == DstReg && !Visited.insert(UseMI).second)
       continue;
 
-    SmallVector<unsigned,8> Ops;
+    SmallVector<unsigned, 8> Ops;
     bool Reads, Writes;
     std::tie(Reads, Writes) = UseMI->readsWritesVirtualRegister(SrcReg, &Ops);
 
@@ -1815,8 +1823,8 @@ void RegisterCoalescer::updateRegDefsUses(Register SrcReg, Register DstReg,
             DstInt->createSubRange(Allocator, UnusedLanes);
           }
           SlotIndex MIIdx = UseMI->isDebugInstr()
-            ? LIS->getSlotIndexes()->getIndexBefore(*UseMI)
-            : LIS->getInstructionIndex(*UseMI);
+                                ? LIS->getSlotIndexes()->getIndexBefore(*UseMI)
+                                : LIS->getInstructionIndex(*UseMI);
           SlotIndex UseIdx = MIIdx.getRegSlot(true);
           addUndefFlag(*DstInt, UseIdx, MO, SubUseIdx);
         }
@@ -1941,7 +1949,7 @@ bool RegisterCoalescer::joinCopy(MachineInstr *CopyMI, bool &Again) {
       if (UndefMI->isImplicitDef())
         return false;
       deleteInstr(CopyMI);
-      return false;  // Not coalescable.
+      return false; // Not coalescable.
     }
   }
 
@@ -1981,8 +1989,8 @@ bool RegisterCoalescer::joinCopy(MachineInstr *CopyMI, bool &Again) {
 
       LI.MergeValueNumberInto(DefVNI, ReadVNI);
       if (PrunedLanes.any()) {
-        LLVM_DEBUG(dbgs() << "Pruning undef incoming lanes: "
-                          << PrunedLanes << '\n');
+        LLVM_DEBUG(dbgs() << "Pruning undef incoming lanes: " << PrunedLanes
+                          << '\n');
         setUndefOnPrunedSubRegUses(LI, CP.getSrcReg(), PrunedLanes);
       }
 
@@ -2004,13 +2012,13 @@ bool RegisterCoalescer::joinCopy(MachineInstr *CopyMI, bool &Again) {
       if (reMaterializeTrivialDef(CP, CopyMI, IsDefCopy))
         return true;
       if (IsDefCopy)
-        Again = true;  // May be possible to coalesce later.
+        Again = true; // May be possible to coalesce later.
       return false;
     }
   } else {
     // When possible, let DstReg be the larger interval.
     if (!CP.isPartial() && LIS->getInterval(CP.getSrcReg()).size() >
-                           LIS->getInterval(CP.getDstReg()).size())
+                               LIS->getInterval(CP.getDstReg()).size())
       CP.flip();
 
     LLVM_DEBUG({
@@ -2071,7 +2079,7 @@ bool RegisterCoalescer::joinCopy(MachineInstr *CopyMI, bool &Again) {
 
     // Otherwise, we are unable to join the intervals.
     LLVM_DEBUG(dbgs() << "\tInterference!\n");
-    Again = true;  // May be possible to coalesce later.
+    Again = true; // May be possible to coalesce later.
     return false;
   }
 
@@ -2355,7 +2363,7 @@ class JoinVals {
   const bool TrackSubRegLiveness;
 
   /// Values that will be present in the final live range.
-  SmallVectorImpl<VNInfo*> &NewVNInfo;
+  SmallVectorImpl<VNInfo *> &NewVNInfo;
 
   const CoalescerPair &CP;
   LiveIntervals *LIS;
@@ -2366,7 +2374,7 @@ class JoinVals {
   /// NewVNInfo. This is suitable for passing to LiveInterval::join().
   SmallVector<int, 8> Assignments;
 
-  public:
+public:
   /// Conflict resolution for overlapping values.
   enum ConflictResolution {
     /// No overlap, simply keep this value.
@@ -2395,7 +2403,7 @@ class JoinVals {
     CR_Impossible
   };
 
-  private:
+private:
   /// Per-value info for LI. The lane bit masks are all relative to the final
   /// joined register, so they can be compared directly between SrcReg and
   /// DstReg.
@@ -2458,7 +2466,8 @@ class JoinVals {
   /// Find the ultimate value that VNI was copied from.
   std::pair<const VNInfo *, Register> followCopyChain(const VNInfo *VNI) const;
 
-  bool valuesIdentical(VNInfo *Value0, VNInfo *Value1, const JoinVals &Other) const;
+  bool valuesIdentical(VNInfo *Value0, VNInfo *Value1,
+                       const JoinVals &Other) const;
 
   /// Analyze ValNo in this live range, and set all fields of Vals[ValNo].
   /// Return a conflict resolution when possible, but leave the hard cases as
@@ -2548,7 +2557,7 @@ class JoinVals {
   /// Add erased instructions to ErasedInstrs.
   /// Add foreign virtual registers to ShrinkRegs if their live range ended at
   /// the erased instrs.
-  void eraseInstrs(SmallPtrSetImpl<MachineInstr*> &ErasedInstrs,
+  void eraseInstrs(SmallPtrSetImpl<MachineInstr *> &ErasedInstrs,
                    SmallVectorImpl<Register> &ShrinkRegs,
                    LiveInterval *LI = nullptr);
 
@@ -2566,14 +2575,14 @@ class JoinVals {
 
 } // end anonymous namespace
 
-LaneBitmask JoinVals::computeWriteLanes(const MachineInstr *DefMI, bool &Redef)
-  const {
+LaneBitmask JoinVals::computeWriteLanes(const MachineInstr *DefMI,
+                                        bool &Redef) const {
   LaneBitmask L;
   for (const MachineOperand &MO : DefMI->all_defs()) {
     if (MO.getReg() != Reg)
       continue;
     L |= TRI->getSubRegIndexLaneMask(
-           TRI->composeSubRegIndices(SubIdx, MO.getSubReg()));
+        TRI->composeSubRegIndices(SubIdx, MO.getSubReg()));
     if (MO.readsReg())
       Redef = true;
   }
@@ -2657,8 +2666,8 @@ bool JoinVals::valuesIdentical(VNInfo *Value0, VNInfo *Value1,
   return Orig0->def == Orig1->def && Reg0 == Reg1;
 }
 
-JoinVals::ConflictResolution
-JoinVals::analyzeValue(unsigned ValNo, JoinVals &Other) {
+JoinVals::ConflictResolution JoinVals::analyzeValue(unsigned ValNo,
+                                                    JoinVals &Other) {
   Val &V = Vals[ValNo];
   assert(!V.isAnalyzed() && "Value has already been analyzed!");
   VNInfo *VNI = LR.getValNumInfo(ValNo);
@@ -2789,9 +2798,9 @@ JoinVals::analyzeValue(unsigned ValNo, JoinVals &Other) {
     MachineBasicBlock *OtherMBB = Indexes->getMBBFromIndex(V.OtherVNI->def);
     if (DefMI && DefMI->getParent() != OtherMBB) {
       LLVM_DEBUG(dbgs() << "IMPLICIT_DEF defined at " << V.OtherVNI->def
-                 << " extends into "
-                 << printMBBReference(*DefMI->getParent())
-                 << ", keeping it.\n");
+                        << " extends into "
+                        << printMBBReference(*DefMI->getParent())
+                        << ", keeping it.\n");
       OtherV.ErasableImplicitDef = false;
     } else if (OtherMBB->hasEHPadSuccessor()) {
       // If OtherV is defined in a basic block that has EH pad successors then
@@ -2959,8 +2968,7 @@ void JoinVals::computeAssignment(unsigned ValNo, JoinVals &Other) {
     Val &OtherV = Other.Vals[V.OtherVNI->id];
     // We cannot erase an IMPLICIT_DEF if we don't have valid values for all
     // its lanes.
-    if (OtherV.ErasableImplicitDef &&
-        TrackSubRegLiveness &&
+    if (OtherV.ErasableImplicitDef && TrackSubRegLiveness &&
         (OtherV.ValidLanes & ~V.ValidLanes).any()) {
       LLVM_DEBUG(dbgs() << "Cannot erase implicit_def with missing values\n");
 
@@ -2994,9 +3002,9 @@ bool JoinVals::mapValues(JoinVals &Other) {
   return true;
 }
 
-bool JoinVals::
-taintExtent(unsigned ValNo, LaneBitmask TaintedLanes, JoinVals &Other,
-            SmallVectorImpl<std::pair<SlotIndex, LaneBitmask>> &TaintExtent) {
+bool JoinVals::taintExtent(
+    unsigned ValNo, LaneBitmask TaintedLanes, JoinVals &Other,
+    SmallVectorImpl<std::pair<SlotIndex, LaneBitmask>> &TaintExtent) {
   VNInfo *VNI = LR.getValNumInfo(ValNo);
   MachineBasicBlock *MBB = Indexes->getMBBFromIndex(VNI->def);
   SlotIndex MBBEnd = Indexes->getMBBEndIdx(MBB);
@@ -3057,8 +3065,8 @@ bool JoinVals::resolveConflicts(JoinVals &Other) {
     if (V.Resolution != CR_Unresolved)
       continue;
     LLVM_DEBUG(dbgs() << "\t\tconflict at " << printReg(Reg) << ':' << i << '@'
-                      << LR.getValNumInfo(i)->def
-                      << ' ' << PrintLaneMask(LaneMask) << '\n');
+                      << LR.getValNumInfo(i)->def << ' '
+                      << PrintLaneMask(LaneMask) << '\n');
     if (SubRangeJoin)
       return false;
 
@@ -3091,7 +3099,7 @@ bool JoinVals::resolveConflicts(JoinVals &Other) {
     assert(!SlotIndex::isSameInstr(VNI->def, TaintExtent.front().first) &&
            "Interference ends on VNI->def. Should have been handled earlier");
     MachineInstr *LastMI =
-      Indexes->getInstructionFromIndex(TaintExtent.front().first);
+        Indexes->getInstructionFromIndex(TaintExtent.front().first);
     assert(LastMI && "Range must end at a proper instruction");
     unsigned TaintNum = 0;
     while (true) {
@@ -3149,8 +3157,8 @@ void JoinVals::pruneValues(JoinVals &Other,
       // predecessors, so the instruction should simply go away once its value
       // has been replaced.
       Val &OtherV = Other.Vals[Vals[i].OtherVNI->id];
-      bool EraseImpDef = OtherV.ErasableImplicitDef &&
-                         OtherV.Resolution == CR_Keep;
+      bool EraseImpDef =
+          OtherV.ErasableImplicitDef && OtherV.Resolution == CR_Keep;
       if (!Def.isBlock()) {
         if (changeInstrs) {
           // Remove <def,read-undef> flags. This def is now a partial redef.
@@ -3265,12 +3273,12 @@ void JoinVals::pruneSubRegValues(LiveInterval &LI, LaneBitmask &ShrinkMask) {
       // If a subrange starts at the copy then an undefined value has been
       // copied and we must remove that subrange value as well.
       VNInfo *ValueOut = Q.valueOutOrDead();
-      if (ValueOut != nullptr && (Q.valueIn() == nullptr ||
-                                  (V.Identical && V.Resolution == CR_Erase &&
-                                   ValueOut->def == Def))) {
+      if (ValueOut != nullptr &&
+          (Q.valueIn() == nullptr ||
+           (V.Identical && V.Resolution == CR_Erase && ValueOut->def == Def))) {
         LLVM_DEBUG(dbgs() << "\t\tPrune sublane " << PrintLaneMask(S.LaneMask)
                           << " at " << Def << "\n");
-        SmallVector<SlotIndex,8> EndPoints;
+        SmallVector<SlotIndex, 8> EndPoints;
         LIS->pruneValue(S, Def, &EndPoints);
         DidPrune = true;
         // Mark value number as unused.
@@ -3319,7 +3327,7 @@ static bool isDefInSubRange(LiveInterval &LI, SlotIndex Def) {
 }
 
 void JoinVals::pruneMainSegments(LiveInterval &LI, bool &ShrinkMainRange) {
-  assert(&static_cast<LiveRange&>(LI) == &LR);
+  assert(&static_cast<LiveRange &>(LI) == &LR);
 
   for (unsigned i = 0, e = LR.getNumValNums(); i != e; ++i) {
     if (Vals[i].Resolution != CR_Keep)
@@ -3344,7 +3352,7 @@ void JoinVals::removeImplicitDefs() {
   }
 }
 
-void JoinVals::eraseInstrs(SmallPtrSetImpl<MachineInstr*> &ErasedInstrs,
+void JoinVals::eraseInstrs(SmallPtrSetImpl<MachineInstr *> &ErasedInstrs,
                            SmallVectorImpl<Register> &ShrinkRegs,
                            LiveInterval *LI) {
   for (unsigned i = 0, e = LR.getNumValNums(); i != e; ++i) {
@@ -3383,7 +3391,7 @@ void JoinVals::eraseInstrs(SmallPtrSetImpl<MachineInstr*> &ErasedInstrs,
       VNI->markUnused();
 
       if (LI != nullptr && LI->hasSubRanges()) {
-        assert(static_cast<LiveRange*>(LI) == &LR);
+        assert(static_cast<LiveRange *>(LI) == &LR);
         // Determine the end point based on the subrange information:
         // minimum of (earliest def of next segment,
         //             latest end point of containing segment)
@@ -3441,11 +3449,11 @@ void JoinVals::eraseInstrs(SmallPtrSetImpl<MachineInstr*> &ErasedInstrs,
 void RegisterCoalescer::joinSubRegRanges(LiveRange &LRange, LiveRange &RRange,
                                          LaneBitmask LaneMask,
                                          const CoalescerPair &CP) {
-  SmallVector<VNInfo*, 16> NewVNInfo;
-  JoinVals RHSVals(RRange, CP.getSrcReg(), CP.getSrcIdx(), LaneMask,
-                   NewVNInfo, CP, LIS, TRI, true, true);
-  JoinVals LHSVals(LRange, CP.getDstReg(), CP.getDstIdx(), LaneMask,
-                   NewVNInfo, CP, LIS, TRI, true, true);
+  SmallVector<VNInfo *, 16> NewVNInfo;
+  JoinVals RHSVals(RRange, CP.getSrcReg(), CP.getSrcIdx(), LaneMask, NewVNInfo,
+                   CP, LIS, TRI, true, true);
+  JoinVals LHSVals(LRange, CP.getDstReg(), CP.getDstIdx(), LaneMask, NewVNInfo,
+                   CP, LIS, TRI, true, true);
 
   // Compute NewVNInfo and resolve conflicts (see also joinVirtRegs())
   // We should be able to resolve all conflicts here as we could successfully do
@@ -3482,8 +3490,8 @@ void RegisterCoalescer::joinSubRegRanges(LiveRange &LRange, LiveRange &RRange,
   LRange.join(RRange, LHSVals.getAssignments(), RHSVals.getAssignments(),
               NewVNInfo);
 
-  LLVM_DEBUG(dbgs() << "\t\tjoined lanes: " << PrintLaneMask(LaneMask)
-                    << ' ' << LRange << "\n");
+  LLVM_DEBUG(dbgs() << "\t\tjoined lanes: " << PrintLaneMask(LaneMask) << ' '
+                    << LRange << "\n");
   if (EndPoints.empty())
     return;
 
@@ -3493,7 +3501,7 @@ void RegisterCoalescer::joinSubRegRanges(LiveRange &LRange, LiveRange &RRange,
     dbgs() << "\t\trestoring liveness to " << EndPoints.size() << " points: ";
     for (unsigned i = 0, n = EndPoints.size(); i != n; ++i) {
       dbgs() << EndPoints[i];
-      if (i != n-1)
+      if (i != n - 1)
         dbgs() << ',';
     }
     dbgs() << ":  " << LRange << '\n';
@@ -3533,7 +3541,7 @@ bool RegisterCoalescer::isHighCostLiveInterval(LiveInterval &LI) {
 }
 
 bool RegisterCoalescer::joinVirtRegs(CoalescerPair &CP) {
-  SmallVector<VNInfo*, 16> NewVNInfo;
+  SmallVector<VNInfo *, 16> NewVNInfo;
   LiveInterval &RHS = LIS->getInterval(CP.getSrcReg());
   LiveInterval &LHS = LIS->getInterval(CP.getDstReg());
   bool TrackSubRegLiveness = MRI->shouldTrackSubRegLiveness(*CP.getNewRC());
@@ -3695,12 +3703,12 @@ bool RegisterCoalescer::joinVirtRegs(CoalescerPair &CP) {
       dbgs() << "\t\trestoring liveness to " << EndPoints.size() << " points: ";
       for (unsigned i = 0, n = EndPoints.size(); i != n; ++i) {
         dbgs() << EndPoints[i];
-        if (i != n-1)
+        if (i != n - 1)
           dbgs() << ',';
       }
       dbgs() << ":  " << LHS << '\n';
     });
-    LIS->extendToIndices((LiveRange&)LHS, EndPoints);
+    LIS->extendToIndices((LiveRange &)LHS, EndPoints);
   }
 
   return true;
@@ -3710,8 +3718,7 @@ bool RegisterCoalescer::joinIntervals(CoalescerPair &CP) {
   return CP.isPhys() ? joinReservedPhysReg(CP) : joinVirtRegs(CP);
 }
 
-void RegisterCoalescer::buildVRegToDbgValueMap(MachineFunction &MF)
-{
+void RegisterCoalescer::buildVRegToDbgValueMap(MachineFunction &MF) {
   const SlotIndexes &Slots = *LIS->getSlotIndexes();
   SmallVector<MachineInstr *, 8> ToInsert;
 
@@ -3814,8 +3821,8 @@ void RegisterCoalescer::checkMergingChangesDbgValuesImpl(Register Reg,
     // was coalesced and Reg deleted. It's safe to refer to the other register
     // (which will be the source of the copy).
     auto Resolution = RegVals.getResolution(OtherIt->valno->id);
-    LastUndefResult = Resolution != JoinVals::CR_Keep &&
-                      Resolution != JoinVals::CR_Erase;
+    LastUndefResult =
+        Resolution != JoinVals::CR_Keep && Resolution != JoinVals::CR_Erase;
     LastUndefIdx = Idx;
     return LastUndefResult;
   };
@@ -3852,7 +3859,7 @@ struct MBBPriorityInfo {
   bool IsSplit;
 
   MBBPriorityInfo(MachineBasicBlock *mbb, unsigned depth, bool issplit)
-    : MBB(mbb), Depth(depth), IsSplit(issplit) {}
+      : MBB(mbb), Depth(depth), IsSplit(issplit) {}
 };
 
 } // end anonymous namespace
@@ -3895,8 +3902,8 @@ static bool isLocalCopy(MachineInstr *Copy, const LiveIntervals *LIS) {
   if (SrcReg.isPhysical() || DstReg.isPhysical())
     return false;
 
-  return LIS->intervalIsInOneMBB(LIS->getInterval(SrcReg))
-    || LIS->intervalIsInOneMBB(LIS->getInterval(DstReg));
+  return LIS->intervalIsInOneMBB(LIS->getInterval(SrcReg)) ||
+         LIS->intervalIsInOneMBB(LIS->getInterval(DstReg));
 }
 
 void RegisterCoalescer::lateLiveIntervalUpdate() {
@@ -3911,8 +3918,8 @@ void RegisterCoalescer::lateLiveIntervalUpdate() {
   ToBeUpdated.clear();
 }
 
-bool RegisterCoalescer::
-copyCoalesceWorkList(MutableArrayRef<MachineInstr*> CurrList) {
+bool RegisterCoalescer::copyCoalesceWorkList(
+    MutableArrayRef<MachineInstr *> CurrList) {
   bool Progress = false;
   for (MachineInstr *&MI : CurrList) {
     if (!MI)
@@ -3976,7 +3983,7 @@ bool RegisterCoalescer::applyTerminalRule(const MachineInstr &Copy) const {
     Register OtherSrcReg, OtherReg;
     unsigned OtherSrcSubReg = 0, OtherSubReg = 0;
     if (!isMoveInstr(*TRI, &Copy, OtherSrcReg, OtherReg, OtherSrcSubReg,
-                OtherSubReg))
+                     OtherSubReg))
       return false;
     if (OtherReg == SrcReg)
       OtherReg = OtherSrcReg;
@@ -3993,16 +4000,15 @@ bool RegisterCoalescer::applyTerminalRule(const MachineInstr &Copy) const {
   return false;
 }
 
-void
-RegisterCoalescer::copyCoalesceInMBB(MachineBasicBlock *MBB) {
+void RegisterCoalescer::copyCoalesceInMBB(MachineBasicBlock *MBB) {
   LLVM_DEBUG(dbgs() << MBB->getName() << ":\n");
 
   // Collect all copy-like instructions in MBB. Don't start coalescing anything
   // yet, it might invalidate the iterator.
   const unsigned PrevSize = WorkList.size();
   if (JoinGlobalCopies) {
-    SmallVector<MachineInstr*, 2> LocalTerminals;
-    SmallVector<MachineInstr*, 2> GlobalTerminals;
+    SmallVector<MachineInstr *, 2> LocalTerminals;
+    SmallVector<MachineInstr *, 2> GlobalTerminals;
     // Coalesce copies bottom-up to coalesce local defs before local uses. They
     // are not inherently easier to resolve, but slightly preferable until we
     // have local live range splitting. In particular this is required by
@@ -4026,9 +4032,8 @@ RegisterCoalescer::copyCoalesceInMBB(MachineBasicBlock *MBB) {
     // Append the copies evicted by the terminal rule at the end of the list.
     LocalWorkList.append(LocalTerminals.begin(), LocalTerminals.end());
     WorkList.append(GlobalTerminals.begin(), GlobalTerminals.end());
-  }
-  else {
-    SmallVector<MachineInstr*, 2> Terminals;
+  } else {
+    SmallVector<MachineInstr *, 2> Terminals;
     for (MachineInstr &MII : *MBB)
       if (MII.isCopyLike()) {
         if (applyTerminalRule(MII))
@@ -4042,11 +4047,12 @@ RegisterCoalescer::copyCoalesceInMBB(MachineBasicBlock *MBB) {
   // Try coalescing the collected copies immediately, and remove the nulls.
   // This prevents the WorkList from getting too large since most copies are
   // joinable on the first attempt.
-  MutableArrayRef<MachineInstr*>
-    CurrList(WorkList.begin() + PrevSize, WorkList.end());
+  MutableArrayRef<MachineInstr *> CurrList(WorkList.begin() + PrevSize,
+                                           WorkList.end());
   if (copyCoalesceWorkList(CurrList))
-    WorkList.erase(std::remove(WorkList.begin() + PrevSize, WorkList.end(),
-                               nullptr), WorkList.end());
+    WorkList.erase(
+        std::remove(WorkList.begin() + PrevSize, WorkList.end(), nullptr),
+        WorkList.end());
 }
 
 void RegisterCoalescer::coalesceLocals() {
@@ -4086,7 +4092,7 @@ void RegisterCoalescer::joinAllIntervals() {
   // Joining intervals can allow other intervals to be joined.  Iteratively join
   // until we make no progress.
   while (copyCoalesceWorkList(WorkList))
-    /* empty */ ;
+    /* empty */;
   lateLiveIntervalUpdate();
 }
 
@@ -4214,6 +4220,6 @@ bool RegisterCoalescer::runOnMachineFunction(MachineFunction &fn) {
   return true;
 }
 
-void RegisterCoalescer::print(raw_ostream &O, const Module* m) const {
-   LIS->print(O, m);
+void RegisterCoalescer::print(raw_ostream &O, const Module *m) const {
+  LIS->print(O, m);
 }
diff --git a/llvm/lib/CodeGen/RegisterCoalescer.h b/llvm/lib/CodeGen/RegisterCoalescer.h
index f265d93fb0d63d5..6926e9b5d188f0e 100644
--- a/llvm/lib/CodeGen/RegisterCoalescer.h
+++ b/llvm/lib/CodeGen/RegisterCoalescer.h
@@ -22,92 +22,92 @@ class MachineInstr;
 class TargetRegisterClass;
 class TargetRegisterInfo;
 
-  /// A helper class for register coalescers. When deciding if
-  /// two registers can be coalesced, CoalescerPair can determine if a copy
-  /// instruction would become an identity copy after coalescing.
-  class CoalescerPair {
-    const TargetRegisterInfo &TRI;
+/// A helper class for register coalescers. When deciding if
+/// two registers can be coalesced, CoalescerPair can determine if a copy
+/// instruction would become an identity copy after coalescing.
+class CoalescerPair {
+  const TargetRegisterInfo &TRI;
 
-    /// The register that will be left after coalescing. It can be a
-    /// virtual or physical register.
-    Register DstReg;
+  /// The register that will be left after coalescing. It can be a
+  /// virtual or physical register.
+  Register DstReg;
 
-    /// The virtual register that will be coalesced into dstReg.
-    Register SrcReg;
+  /// The virtual register that will be coalesced into dstReg.
+  Register SrcReg;
 
-    /// The sub-register index of the old DstReg in the new coalesced register.
-    unsigned DstIdx = 0;
+  /// The sub-register index of the old DstReg in the new coalesced register.
+  unsigned DstIdx = 0;
 
-    /// The sub-register index of the old SrcReg in the new coalesced register.
-    unsigned SrcIdx = 0;
+  /// The sub-register index of the old SrcReg in the new coalesced register.
+  unsigned SrcIdx = 0;
 
-    /// True when the original copy was a partial subregister copy.
-    bool Partial = false;
+  /// True when the original copy was a partial subregister copy.
+  bool Partial = false;
 
-    /// True when both regs are virtual and newRC is constrained.
-    bool CrossClass = false;
+  /// True when both regs are virtual and newRC is constrained.
+  bool CrossClass = false;
 
-    /// True when DstReg and SrcReg are reversed from the original
-    /// copy instruction.
-    bool Flipped = false;
+  /// True when DstReg and SrcReg are reversed from the original
+  /// copy instruction.
+  bool Flipped = false;
 
-    /// The register class of the coalesced register, or NULL if DstReg
-    /// is a physreg. This register class may be a super-register of both
-    /// SrcReg and DstReg.
-    const TargetRegisterClass *NewRC = nullptr;
+  /// The register class of the coalesced register, or NULL if DstReg
+  /// is a physreg. This register class may be a super-register of both
+  /// SrcReg and DstReg.
+  const TargetRegisterClass *NewRC = nullptr;
 
-  public:
-    CoalescerPair(const TargetRegisterInfo &tri) : TRI(tri) {}
+public:
+  CoalescerPair(const TargetRegisterInfo &tri) : TRI(tri) {}
 
-    /// Create a CoalescerPair representing a virtreg-to-physreg copy.
-    /// No need to call setRegisters().
-    CoalescerPair(Register VirtReg, MCRegister PhysReg,
-                  const TargetRegisterInfo &tri)
-        : TRI(tri), DstReg(PhysReg), SrcReg(VirtReg) {}
+  /// Create a CoalescerPair representing a virtreg-to-physreg copy.
+  /// No need to call setRegisters().
+  CoalescerPair(Register VirtReg, MCRegister PhysReg,
+                const TargetRegisterInfo &tri)
+      : TRI(tri), DstReg(PhysReg), SrcReg(VirtReg) {}
 
-    /// Set registers to match the copy instruction MI. Return
-    /// false if MI is not a coalescable copy instruction.
-    bool setRegisters(const MachineInstr*);
+  /// Set registers to match the copy instruction MI. Return
+  /// false if MI is not a coalescable copy instruction.
+  bool setRegisters(const MachineInstr *);
 
-    /// Swap SrcReg and DstReg. Return false if swapping is impossible
-    /// because DstReg is a physical register, or SubIdx is set.
-    bool flip();
+  /// Swap SrcReg and DstReg. Return false if swapping is impossible
+  /// because DstReg is a physical register, or SubIdx is set.
+  bool flip();
 
-    /// Return true if MI is a copy instruction that will become
-    /// an identity copy after coalescing.
-    bool isCoalescable(const MachineInstr*) const;
+  /// Return true if MI is a copy instruction that will become
+  /// an identity copy after coalescing.
+  bool isCoalescable(const MachineInstr *) const;
 
-    /// Return true if DstReg is a physical register.
-    bool isPhys() const { return !NewRC; }
+  /// Return true if DstReg is a physical register.
+  bool isPhys() const { return !NewRC; }
 
-    /// Return true if the original copy instruction did not copy
-    /// the full register, but was a subreg operation.
-    bool isPartial() const { return Partial; }
+  /// Return true if the original copy instruction did not copy
+  /// the full register, but was a subreg operation.
+  bool isPartial() const { return Partial; }
 
-    /// Return true if DstReg is virtual and NewRC is a smaller
-    /// register class than DstReg's.
-    bool isCrossClass() const { return CrossClass; }
+  /// Return true if DstReg is virtual and NewRC is a smaller
+  /// register class than DstReg's.
+  bool isCrossClass() const { return CrossClass; }
 
-    /// Return true when getSrcReg is the register being defined by
-    /// the original copy instruction.
-    bool isFlipped() const { return Flipped; }
+  /// Return true when getSrcReg is the register being defined by
+  /// the original copy instruction.
+  bool isFlipped() const { return Flipped; }
 
-    /// Return the register (virtual or physical) that will remain
-    /// after coalescing.
-    Register getDstReg() const { return DstReg; }
+  /// Return the register (virtual or physical) that will remain
+  /// after coalescing.
+  Register getDstReg() const { return DstReg; }
 
-    /// Return the virtual register that will be coalesced away.
-    Register getSrcReg() const { return SrcReg; }
+  /// Return the virtual register that will be coalesced away.
+  Register getSrcReg() const { return SrcReg; }
 
-    /// Return the subregister index that DstReg will be coalesced into, or 0.
-    unsigned getDstIdx() const { return DstIdx; }
+  /// Return the subregister index that DstReg will be coalesced into, or 0.
+  unsigned getDstIdx() const { return DstIdx; }
 
-    /// Return the subregister index that SrcReg will be coalesced into, or 0.
-    unsigned getSrcIdx() const { return SrcIdx; }
+  /// Return the subregister index that SrcReg will be coalesced into, or 0.
+  unsigned getSrcIdx() const { return SrcIdx; }
 
-    /// Return the register class of the coalesced register.
-    const TargetRegisterClass *getNewRC() const { return NewRC; }
-  };
+  /// Return the register class of the coalesced register.
+  const TargetRegisterClass *getNewRC() const { return NewRC; }
+};
 
 } // end namespace llvm
 
diff --git a/llvm/lib/CodeGen/RegisterPressure.cpp b/llvm/lib/CodeGen/RegisterPressure.cpp
index f86aa3a167202f4..663194a647f1d8a 100644
--- a/llvm/lib/CodeGen/RegisterPressure.cpp
+++ b/llvm/lib/CodeGen/RegisterPressure.cpp
@@ -64,7 +64,7 @@ static void increaseSetPressure(std::vector<unsigned> &CurrSetPressure,
 static void decreaseSetPressure(std::vector<unsigned> &CurrSetPressure,
                                 const MachineRegisterInfo &MRI, Register Reg,
                                 LaneBitmask PrevMask, LaneBitmask NewMask) {
-  //assert((NewMask & !PrevMask) == 0 && "Must not add bits");
+  // assert((NewMask & !PrevMask) == 0 && "Must not add bits");
   if (NewMask.any() || PrevMask.none())
     return;
 
@@ -128,8 +128,8 @@ void PressureDiff::dump(const TargetRegisterInfo &TRI) const {
   for (const PressureChange &Change : *this) {
     if (!Change.isValid())
       break;
-    dbgs() << sep << TRI.getRegPressureSetName(Change.getPSet())
-           << " " << Change.getUnitInc();
+    dbgs() << sep << TRI.getRegPressureSetName(Change.getPSet()) << " "
+           << Change.getUnitInc();
     sep = "    ";
   }
   dbgs() << '\n';
@@ -230,9 +230,7 @@ void LiveRegSet::init(const MachineRegisterInfo &MRI) {
   this->NumRegUnits = NumRegUnits;
 }
 
-void LiveRegSet::clear() {
-  Regs.clear();
-}
+void LiveRegSet::clear() { Regs.clear(); }
 
 static const LiveRange *getLiveRange(const LiveIntervals &LIS, unsigned Reg) {
   if (Register::isVirtualRegister(Reg))
@@ -249,9 +247,9 @@ void RegPressureTracker::reset() {
   P.MaxSetPressure.clear();
 
   if (RequireIntervals)
-    static_cast<IntervalPressure&>(P).reset();
+    static_cast<IntervalPressure &>(P).reset();
   else
-    static_cast<RegionPressure&>(P).reset();
+    static_cast<RegionPressure &>(P).reset();
 
   LiveRegs.clear();
   UntiedDefs.clear();
@@ -294,22 +292,22 @@ void RegPressureTracker::init(const MachineFunction *mf,
 /// Does this pressure result have a valid top position and live ins.
 bool RegPressureTracker::isTopClosed() const {
   if (RequireIntervals)
-    return static_cast<IntervalPressure&>(P).TopIdx.isValid();
-  return (static_cast<RegionPressure&>(P).TopPos ==
+    return static_cast<IntervalPressure &>(P).TopIdx.isValid();
+  return (static_cast<RegionPressure &>(P).TopPos ==
           MachineBasicBlock::const_iterator());
 }
 
 /// Does this pressure result have a valid bottom position and live outs.
 bool RegPressureTracker::isBottomClosed() const {
   if (RequireIntervals)
-    return static_cast<IntervalPressure&>(P).BottomIdx.isValid();
-  return (static_cast<RegionPressure&>(P).BottomPos ==
+    return static_cast<IntervalPressure &>(P).BottomIdx.isValid();
+  return (static_cast<RegionPressure &>(P).BottomPos ==
           MachineBasicBlock::const_iterator());
 }
 
 SlotIndex RegPressureTracker::getCurrSlot() const {
   MachineBasicBlock::const_iterator IdxPos =
-    skipDebugInstructionsForward(CurrPos, MBB->end());
+      skipDebugInstructionsForward(CurrPos, MBB->end());
   if (IdxPos == MBB->end())
     return LIS->getMBBEndIdx(MBB);
   return LIS->getInstructionIndex(*IdxPos).getRegSlot();
@@ -318,9 +316,9 @@ SlotIndex RegPressureTracker::getCurrSlot() const {
 /// Set the boundary for the top of the region and summarize live ins.
 void RegPressureTracker::closeTop() {
   if (RequireIntervals)
-    static_cast<IntervalPressure&>(P).TopIdx = getCurrSlot();
+    static_cast<IntervalPressure &>(P).TopIdx = getCurrSlot();
   else
-    static_cast<RegionPressure&>(P).TopPos = CurrPos;
+    static_cast<RegionPressure &>(P).TopPos = CurrPos;
 
   assert(P.LiveInRegs.empty() && "inconsistent max pressure result");
   P.LiveInRegs.reserve(LiveRegs.size());
@@ -330,9 +328,9 @@ void RegPressureTracker::closeTop() {
 /// Set the boundary for the bottom of the region and summarize live outs.
 void RegPressureTracker::closeBottom() {
   if (RequireIntervals)
-    static_cast<IntervalPressure&>(P).BottomIdx = getCurrSlot();
+    static_cast<IntervalPressure &>(P).BottomIdx = getCurrSlot();
   else
-    static_cast<RegionPressure&>(P).BottomPos = CurrPos;
+    static_cast<RegionPressure &>(P).BottomPos = CurrPos;
 
   assert(P.LiveOutRegs.empty() && "inconsistent max pressure result");
   P.LiveOutRegs.reserve(LiveRegs.size());
@@ -426,10 +424,10 @@ getLanesWithProperty(const LiveIntervals &LIS, const MachineRegisterInfo &MRI,
     const LiveInterval &LI = LIS.getInterval(RegUnit);
     LaneBitmask Result;
     if (TrackLaneMasks && LI.hasSubRanges()) {
-        for (const LiveInterval::SubRange &SR : LI.subranges()) {
-          if (Property(SR, Pos))
-            Result |= SR.LaneMask;
-        }
+      for (const LiveInterval::SubRange &SR : LI.subranges()) {
+        if (Property(SR, Pos))
+          Result |= SR.LaneMask;
+      }
     } else if (Property(LI, Pos)) {
       Result = TrackLaneMasks ? MRI.getMaxLaneMaskForVReg(RegUnit)
                               : LaneBitmask::getAll();
@@ -450,11 +448,9 @@ static LaneBitmask getLiveLanesAt(const LiveIntervals &LIS,
                                   const MachineRegisterInfo &MRI,
                                   bool TrackLaneMasks, Register RegUnit,
                                   SlotIndex Pos) {
-  return getLanesWithProperty(LIS, MRI, TrackLaneMasks, RegUnit, Pos,
-                              LaneBitmask::getAll(),
-                              [](const LiveRange &LR, SlotIndex Pos) {
-                                return LR.liveAt(Pos);
-                              });
+  return getLanesWithProperty(
+      LIS, MRI, TrackLaneMasks, RegUnit, Pos, LaneBitmask::getAll(),
+      [](const LiveRange &LR, SlotIndex Pos) { return LR.liveAt(Pos); });
 }
 
 namespace {
@@ -474,7 +470,7 @@ class RegisterOperandsCollector {
   RegisterOperandsCollector(RegisterOperands &RegOpers,
                             const TargetRegisterInfo &TRI,
                             const MachineRegisterInfo &MRI, bool IgnoreDead)
-    : RegOpers(RegOpers), TRI(TRI), MRI(MRI), IgnoreDead(IgnoreDead) {}
+      : RegOpers(RegOpers), TRI(TRI), MRI(MRI), IgnoreDead(IgnoreDead) {}
 
   void collectInstr(const MachineInstr &MI) const {
     for (ConstMIBundleOperands OperI(MI); OperI.isValid(); ++OperI)
@@ -552,8 +548,8 @@ class RegisterOperandsCollector {
                     SmallVectorImpl<RegisterMaskPair> &RegUnits) const {
     if (Reg.isVirtual()) {
       LaneBitmask LaneMask = SubRegIdx != 0
-                             ? TRI.getSubRegIndexLaneMask(SubRegIdx)
-                             : MRI.getMaxLaneMaskForVReg(Reg);
+                                 ? TRI.getSubRegIndexLaneMask(SubRegIdx)
+                                 : MRI.getMaxLaneMaskForVReg(Reg);
       addRegLanes(RegUnits, RegisterMaskPair(Reg, LaneMask));
     } else if (MRI.isAllocatable(Reg)) {
       for (MCRegUnit Unit : TRI.regunits(Reg.asMCReg()))
@@ -600,8 +596,8 @@ void RegisterOperands::adjustLaneLiveness(const LiveIntervals &LIS,
                                           SlotIndex Pos,
                                           MachineInstr *AddFlagsMI) {
   for (auto *I = Defs.begin(); I != Defs.end();) {
-    LaneBitmask LiveAfter = getLiveLanesAt(LIS, MRI, true, I->RegUnit,
-                                           Pos.getDeadSlot());
+    LaneBitmask LiveAfter =
+        getLiveLanesAt(LIS, MRI, true, I->RegUnit, Pos.getDeadSlot());
     // If the def is all that is live after the instruction, then in case
     // of a subregister def we need a read-undef flag.
     Register RegUnit = I->RegUnit;
@@ -618,8 +614,8 @@ void RegisterOperands::adjustLaneLiveness(const LiveIntervals &LIS,
     }
   }
   for (auto *I = Uses.begin(); I != Uses.end();) {
-    LaneBitmask LiveBefore = getLiveLanesAt(LIS, MRI, true, I->RegUnit,
-                                            Pos.getBaseIndex());
+    LaneBitmask LiveBefore =
+        getLiveLanesAt(LIS, MRI, true, I->RegUnit, Pos.getBaseIndex());
     LaneBitmask LaneMask = I->LaneMask & LiveBefore;
     if (LaneMask.none()) {
       I = Uses.erase(I);
@@ -633,8 +629,8 @@ void RegisterOperands::adjustLaneLiveness(const LiveIntervals &LIS,
       Register RegUnit = P.RegUnit;
       if (!RegUnit.isVirtual())
         continue;
-      LaneBitmask LiveAfter = getLiveLanesAt(LIS, MRI, true, RegUnit,
-                                             Pos.getDeadSlot());
+      LaneBitmask LiveAfter =
+          getLiveLanesAt(LIS, MRI, true, RegUnit, Pos.getDeadSlot());
       if (LiveAfter.none())
         AddFlagsMI->setRegisterDefReadUndef(RegUnit);
     }
@@ -650,7 +646,8 @@ void PressureDiffs::init(unsigned N) {
   }
   Max = Size;
   free(PDiffArray);
-  PDiffArray = static_cast<PressureDiff*>(safe_calloc(N, sizeof(PressureDiff)));
+  PDiffArray =
+      static_cast<PressureDiff *>(safe_calloc(N, sizeof(PressureDiff)));
 }
 
 void PressureDiffs::addInstruction(unsigned Idx,
@@ -709,8 +706,8 @@ void RegPressureTracker::addLiveRegs(ArrayRef<RegisterMaskPair> Regs) {
   }
 }
 
-void RegPressureTracker::discoverLiveInOrOut(RegisterMaskPair Pair,
-    SmallVectorImpl<RegisterMaskPair> &LiveInOrOut) {
+void RegPressureTracker::discoverLiveInOrOut(
+    RegisterMaskPair Pair, SmallVectorImpl<RegisterMaskPair> &LiveInOrOut) {
   assert(Pair.LaneMask.any());
 
   Register RegUnit = Pair.RegUnit;
@@ -854,7 +851,7 @@ void RegPressureTracker::recedeSkipDebugValues() {
 
   // Open the top of the region using block iterators.
   if (!RequireIntervals && isTopClosed())
-    static_cast<RegionPressure&>(P).openTop(CurrPos);
+    static_cast<RegionPressure &>(P).openTop(CurrPos);
 
   // Find the previous instruction.
   CurrPos = prev_nodbg(CurrPos, MBB->begin());
@@ -865,7 +862,7 @@ void RegPressureTracker::recedeSkipDebugValues() {
 
   // Open the top of the region using slot indexes.
   if (RequireIntervals && isTopClosed())
-    static_cast<IntervalPressure&>(P).openTop(SlotIdx);
+    static_cast<IntervalPressure &>(P).openTop(SlotIdx);
 }
 
 void RegPressureTracker::recede(SmallVectorImpl<RegisterMaskPair> *LiveUses) {
@@ -904,9 +901,9 @@ void RegPressureTracker::advance(const RegisterOperands &RegOpers) {
   // Open the bottom of the region using slot indexes.
   if (isBottomClosed()) {
     if (RequireIntervals)
-      static_cast<IntervalPressure&>(P).openBottom(SlotIdx);
+      static_cast<IntervalPressure &>(P).openBottom(SlotIdx);
     else
-      static_cast<RegionPressure&>(P).openBottom(CurrPos);
+      static_cast<RegionPressure &>(P).openBottom(CurrPos);
   }
 
   for (const RegisterMaskPair &Use : RegOpers.Uses) {
@@ -973,11 +970,11 @@ static void computeExcessPressureDelta(ArrayRef<unsigned> OldPressureVec,
 
     if (Limit > POld) {
       if (Limit > PNew)
-        PDiff = 0;            // Under the limit
+        PDiff = 0; // Under the limit
       else
         PDiff = PNew - Limit; // Just exceeded limit.
     } else if (Limit > PNew)
-      PDiff = Limit - POld;   // Just obeyed limit.
+      PDiff = Limit - POld; // Just obeyed limit.
 
     if (PDiff) {
       Delta.Excess = PressureChange(i);
@@ -1088,11 +1085,10 @@ void RegPressureTracker::bumpUpwardPressure(const MachineInstr *MI) {
 /// liveness. This mainly exists to verify correctness, e.g. with
 /// -verify-misched. getUpwardPressureDelta is the fast version of this query
 /// that uses the per-SUnit cache of the PressureDiff.
-void RegPressureTracker::
-getMaxUpwardPressureDelta(const MachineInstr *MI, PressureDiff *PDiff,
-                          RegPressureDelta &Delta,
-                          ArrayRef<PressureChange> CriticalPSets,
-                          ArrayRef<unsigned> MaxPressureLimit) {
+void RegPressureTracker::getMaxUpwardPressureDelta(
+    const MachineInstr *MI, PressureDiff *PDiff, RegPressureDelta &Delta,
+    ArrayRef<PressureChange> CriticalPSets,
+    ArrayRef<unsigned> MaxPressureLimit) {
   // Snapshot Pressure.
   // FIXME: The snapshot heap space should persist. But I'm planning to
   // summarize the pressure effect so we don't need to snapshot at all.
@@ -1127,20 +1123,25 @@ getMaxUpwardPressureDelta(const MachineInstr *MI, PressureDiff *PDiff,
       dbgs() << "Excess1 " << TRI->getRegPressureSetName(Delta.Excess.getPSet())
              << " " << Delta.Excess.getUnitInc() << "\n";
     if (Delta.CriticalMax.isValid())
-      dbgs() << "Critic1 " << TRI->getRegPressureSetName(Delta.CriticalMax.getPSet())
-             << " " << Delta.CriticalMax.getUnitInc() << "\n";
+      dbgs() << "Critic1 "
+             << TRI->getRegPressureSetName(Delta.CriticalMax.getPSet()) << " "
+             << Delta.CriticalMax.getUnitInc() << "\n";
     if (Delta.CurrentMax.isValid())
-      dbgs() << "CurrMx1 " << TRI->getRegPressureSetName(Delta.CurrentMax.getPSet())
-             << " " << Delta.CurrentMax.getUnitInc() << "\n";
+      dbgs() << "CurrMx1 "
+             << TRI->getRegPressureSetName(Delta.CurrentMax.getPSet()) << " "
+             << Delta.CurrentMax.getUnitInc() << "\n";
     if (Delta2.Excess.isValid())
-      dbgs() << "Excess2 " << TRI->getRegPressureSetName(Delta2.Excess.getPSet())
-             << " " << Delta2.Excess.getUnitInc() << "\n";
+      dbgs() << "Excess2 "
+             << TRI->getRegPressureSetName(Delta2.Excess.getPSet()) << " "
+             << Delta2.Excess.getUnitInc() << "\n";
     if (Delta2.CriticalMax.isValid())
-      dbgs() << "Critic2 " << TRI->getRegPressureSetName(Delta2.CriticalMax.getPSet())
-             << " " << Delta2.CriticalMax.getUnitInc() << "\n";
+      dbgs() << "Critic2 "
+             << TRI->getRegPressureSetName(Delta2.CriticalMax.getPSet()) << " "
+             << Delta2.CriticalMax.getUnitInc() << "\n";
     if (Delta2.CurrentMax.isValid())
-      dbgs() << "CurrMx2 " << TRI->getRegPressureSetName(Delta2.CurrentMax.getPSet())
-             << " " << Delta2.CurrentMax.getUnitInc() << "\n";
+      dbgs() << "CurrMx2 "
+             << TRI->getRegPressureSetName(Delta2.CurrentMax.getPSet()) << " "
+             << Delta2.CurrentMax.getUnitInc() << "\n";
     llvm_unreachable("RegP Delta Mismatch");
   }
 #endif
@@ -1156,14 +1157,13 @@ getMaxUpwardPressureDelta(const MachineInstr *MI, PressureDiff *PDiff,
 ///
 /// @param MaxPressureLimit Is the max pressure within the region, not
 /// necessarily at the current position.
-void RegPressureTracker::
-getUpwardPressureDelta(const MachineInstr *MI, /*const*/ PressureDiff &PDiff,
-                       RegPressureDelta &Delta,
-                       ArrayRef<PressureChange> CriticalPSets,
-                       ArrayRef<unsigned> MaxPressureLimit) const {
+void RegPressureTracker::getUpwardPressureDelta(
+    const MachineInstr *MI, /*const*/ PressureDiff &PDiff,
+    RegPressureDelta &Delta, ArrayRef<PressureChange> CriticalPSets,
+    ArrayRef<unsigned> MaxPressureLimit) const {
   unsigned CritIdx = 0, CritEnd = CriticalPSets.size();
-  for (PressureDiff::const_iterator
-         PDiffI = PDiff.begin(), PDiffE = PDiff.end();
+  for (PressureDiff::const_iterator PDiffI = PDiff.begin(),
+                                    PDiffE = PDiff.end();
        PDiffI != PDiffE && PDiffI->isValid(); ++PDiffI) {
 
     unsigned PSetID = PDiffI->getPSet();
@@ -1176,8 +1176,8 @@ getUpwardPressureDelta(const MachineInstr *MI, /*const*/ PressureDiff &PDiff,
     unsigned MNew = MOld;
     // Ignore DeadDefs here because they aren't captured by PressureChange.
     unsigned PNew = POld + PDiffI->getUnitInc();
-    assert((PDiffI->getUnitInc() >= 0) == (PNew >= POld)
-           && "PSet overflow/underflow");
+    assert((PDiffI->getUnitInc() >= 0) == (PNew >= POld) &&
+           "PSet overflow/underflow");
     if (PNew > MOld)
       MNew = PNew;
     // Check if current pressure has exceeded the limit.
@@ -1242,19 +1242,17 @@ static LaneBitmask findUseBetween(unsigned Reg, LaneBitmask LastUseMask,
 LaneBitmask RegPressureTracker::getLiveLanesAt(Register RegUnit,
                                                SlotIndex Pos) const {
   assert(RequireIntervals);
-  return getLanesWithProperty(*LIS, *MRI, TrackLaneMasks, RegUnit, Pos,
-                              LaneBitmask::getAll(),
-      [](const LiveRange &LR, SlotIndex Pos) {
-        return LR.liveAt(Pos);
-      });
+  return getLanesWithProperty(
+      *LIS, *MRI, TrackLaneMasks, RegUnit, Pos, LaneBitmask::getAll(),
+      [](const LiveRange &LR, SlotIndex Pos) { return LR.liveAt(Pos); });
 }
 
 LaneBitmask RegPressureTracker::getLastUsedLanes(Register RegUnit,
                                                  SlotIndex Pos) const {
   assert(RequireIntervals);
-  return getLanesWithProperty(*LIS, *MRI, TrackLaneMasks, RegUnit,
-                              Pos.getBaseIndex(), LaneBitmask::getNone(),
-      [](const LiveRange &LR, SlotIndex Pos) {
+  return getLanesWithProperty(
+      *LIS, *MRI, TrackLaneMasks, RegUnit, Pos.getBaseIndex(),
+      LaneBitmask::getNone(), [](const LiveRange &LR, SlotIndex Pos) {
         const LiveRange::Segment *S = LR.getSegmentContaining(Pos);
         return S != nullptr && S->end == Pos.getRegSlot();
       });
@@ -1263,8 +1261,8 @@ LaneBitmask RegPressureTracker::getLastUsedLanes(Register RegUnit,
 LaneBitmask RegPressureTracker::getLiveThroughAt(Register RegUnit,
                                                  SlotIndex Pos) const {
   assert(RequireIntervals);
-  return getLanesWithProperty(*LIS, *MRI, TrackLaneMasks, RegUnit, Pos,
-                              LaneBitmask::getNone(),
+  return getLanesWithProperty(
+      *LIS, *MRI, TrackLaneMasks, RegUnit, Pos, LaneBitmask::getNone(),
       [](const LiveRange &LR, SlotIndex Pos) {
         const LiveRange::Segment *S = LR.getSegmentContaining(Pos);
         return S != nullptr && S->start < Pos.getRegSlot(true) &&
@@ -1303,8 +1301,8 @@ void RegPressureTracker::bumpDownwardPressure(const MachineInstr *MI) {
       // FIXME: allow the caller to pass in the list of vreg uses that remain
       // to be bottom-scheduled to avoid searching uses at each query.
       SlotIndex CurrIdx = getCurrSlot();
-      LastUseMask
-        = findUseBetween(Reg, LastUseMask, CurrIdx, SlotIdx, *MRI, LIS);
+      LastUseMask =
+          findUseBetween(Reg, LastUseMask, CurrIdx, SlotIdx, *MRI, LIS);
       if (LastUseMask.none())
         continue;
 
@@ -1337,10 +1335,10 @@ void RegPressureTracker::bumpDownwardPressure(const MachineInstr *MI) {
 /// bumpDownwardPressure to recompute the pressure sets based on current
 /// liveness. We don't yet have a fast version of downward pressure tracking
 /// analogous to getUpwardPressureDelta.
-void RegPressureTracker::
-getMaxDownwardPressureDelta(const MachineInstr *MI, RegPressureDelta &Delta,
-                            ArrayRef<PressureChange> CriticalPSets,
-                            ArrayRef<unsigned> MaxPressureLimit) {
+void RegPressureTracker::getMaxDownwardPressureDelta(
+    const MachineInstr *MI, RegPressureDelta &Delta,
+    ArrayRef<PressureChange> CriticalPSets,
+    ArrayRef<unsigned> MaxPressureLimit) {
   // Snapshot Pressure.
   std::vector<unsigned> SavedPressure = CurrSetPressure;
   std::vector<unsigned> SavedMaxPressure = P.MaxSetPressure;
@@ -1360,10 +1358,9 @@ getMaxDownwardPressureDelta(const MachineInstr *MI, RegPressureDelta &Delta,
 }
 
 /// Get the pressure of each PSet after traversing this instruction bottom-up.
-void RegPressureTracker::
-getUpwardPressure(const MachineInstr *MI,
-                  std::vector<unsigned> &PressureResult,
-                  std::vector<unsigned> &MaxPressureResult) {
+void RegPressureTracker::getUpwardPressure(
+    const MachineInstr *MI, std::vector<unsigned> &PressureResult,
+    std::vector<unsigned> &MaxPressureResult) {
   // Snapshot pressure.
   PressureResult = CurrSetPressure;
   MaxPressureResult = P.MaxSetPressure;
@@ -1376,10 +1373,9 @@ getUpwardPressure(const MachineInstr *MI,
 }
 
 /// Get the pressure of each PSet after traversing this instruction top-down.
-void RegPressureTracker::
-getDownwardPressure(const MachineInstr *MI,
-                    std::vector<unsigned> &PressureResult,
-                    std::vector<unsigned> &MaxPressureResult) {
+void RegPressureTracker::getDownwardPressure(
+    const MachineInstr *MI, std::vector<unsigned> &PressureResult,
+    std::vector<unsigned> &MaxPressureResult) {
   // Snapshot pressure.
   PressureResult = CurrSetPressure;
   MaxPressureResult = P.MaxSetPressure;
diff --git a/llvm/lib/CodeGen/RegisterScavenging.cpp b/llvm/lib/CodeGen/RegisterScavenging.cpp
index e6ff5701bc4bd20..fb68eb01f9ec9f1 100644
--- a/llvm/lib/CodeGen/RegisterScavenging.cpp
+++ b/llvm/lib/CodeGen/RegisterScavenging.cpp
@@ -138,11 +138,10 @@ BitVector RegScavenger::getRegsAvailable(const TargetRegisterClass *RC) {
 /// clobbered for the longest time.
 /// Returns the register and the earliest position we know it to be free or
 /// the position MBB.end() if no register is available.
-static std::pair<MCPhysReg, MachineBasicBlock::iterator>
-findSurvivorBackwards(const MachineRegisterInfo &MRI,
-    MachineBasicBlock::iterator From, MachineBasicBlock::iterator To,
-    const LiveRegUnits &LiveOut, ArrayRef<MCPhysReg> AllocationOrder,
-    bool RestoreAfter) {
+static std::pair<MCPhysReg, MachineBasicBlock::iterator> findSurvivorBackwards(
+    const MachineRegisterInfo &MRI, MachineBasicBlock::iterator From,
+    MachineBasicBlock::iterator To, const LiveRegUnits &LiveOut,
+    ArrayRef<MCPhysReg> AllocationOrder, bool RestoreAfter) {
   bool FoundTo = false;
   MCPhysReg Survivor = 0;
   MachineBasicBlock::iterator Pos;
@@ -317,15 +316,14 @@ Register RegScavenger::scavengeRegisterBackwards(const TargetRegisterClass &RC,
   // Find the register whose use is furthest away.
   MachineBasicBlock::iterator UseMI;
   ArrayRef<MCPhysReg> AllocationOrder = RC.getRawAllocationOrder(MF);
-  std::pair<MCPhysReg, MachineBasicBlock::iterator> P =
-      findSurvivorBackwards(*MRI, MBBI, To, LiveUnits, AllocationOrder,
-                            RestoreAfter);
+  std::pair<MCPhysReg, MachineBasicBlock::iterator> P = findSurvivorBackwards(
+      *MRI, MBBI, To, LiveUnits, AllocationOrder, RestoreAfter);
   MCPhysReg Reg = P.first;
   MachineBasicBlock::iterator SpillBefore = P.second;
   // Found an available register?
   if (Reg != 0 && SpillBefore == MBB.end()) {
     LLVM_DEBUG(dbgs() << "Scavenged free register: " << printReg(Reg, TRI)
-               << '\n');
+                      << '\n');
     return Reg;
   }
 
@@ -335,7 +333,7 @@ Register RegScavenger::scavengeRegisterBackwards(const TargetRegisterClass &RC,
   assert(Reg != 0 && "No register left to scavenge!");
 
   MachineBasicBlock::iterator ReloadAfter =
-    RestoreAfter ? std::next(MBBI) : MBBI;
+      RestoreAfter ? std::next(MBBI) : MBBI;
   MachineBasicBlock::iterator ReloadBefore = std::next(ReloadAfter);
   if (ReloadBefore != MBB.end())
     LLVM_DEBUG(dbgs() << "Reload before: " << *ReloadBefore << '\n');
@@ -343,7 +341,7 @@ Register RegScavenger::scavengeRegisterBackwards(const TargetRegisterClass &RC,
   Scavenged.Restore = &*std::prev(SpillBefore);
   LiveUnits.removeReg(Reg);
   LLVM_DEBUG(dbgs() << "Scavenged register with spill: " << printReg(Reg, TRI)
-             << " until " << *SpillBefore);
+                    << " until " << *SpillBefore);
   return Reg;
 }
 
@@ -413,7 +411,7 @@ static bool scavengeFrameVirtualRegsInBlock(MachineRegisterInfo &MRI,
 
   unsigned InitialNumVirtRegs = MRI.getNumVirtRegs();
   bool NextInstructionReadsVReg = false;
-  for (MachineBasicBlock::iterator I = MBB.end(); I != MBB.begin(); ) {
+  for (MachineBasicBlock::iterator I = MBB.end(); I != MBB.begin();) {
     --I;
     // Move RegScavenger to the position between *I and *std::next(I).
     RS.backward(I);
diff --git a/llvm/lib/CodeGen/RegisterUsageInfo.cpp b/llvm/lib/CodeGen/RegisterUsageInfo.cpp
index 51bac3fc0a233ad..b28945d4c21e66a 100644
--- a/llvm/lib/CodeGen/RegisterUsageInfo.cpp
+++ b/llvm/lib/CodeGen/RegisterUsageInfo.cpp
@@ -86,9 +86,9 @@ void PhysicalRegisterUsageInfo::print(raw_ostream &OS, const Module *M) const {
   for (const FuncPtrRegMaskPair *FPRMPair : FPRMPairVector) {
     OS << FPRMPair->first->getName() << " "
        << "Clobbered Registers: ";
-    const TargetRegisterInfo *TRI
-        = TM->getSubtarget<TargetSubtargetInfo>(*(FPRMPair->first))
-          .getRegisterInfo();
+    const TargetRegisterInfo *TRI =
+        TM->getSubtarget<TargetSubtargetInfo>(*(FPRMPair->first))
+            .getRegisterInfo();
 
     for (unsigned PReg = 1, PRegE = TRI->getNumRegs(); PReg < PRegE; ++PReg) {
       if (MachineOperand::clobbersPhysReg(&(FPRMPair->second[0]), PReg))
diff --git a/llvm/lib/CodeGen/RenameIndependentSubregs.cpp b/llvm/lib/CodeGen/RenameIndependentSubregs.cpp
index bc3ef1c0329a983..c45391ab31297d3 100644
--- a/llvm/lib/CodeGen/RenameIndependentSubregs.cpp
+++ b/llvm/lib/CodeGen/RenameIndependentSubregs.cpp
@@ -69,9 +69,8 @@ class RenameIndependentSubregs : public MachineFunctionPass {
     LiveInterval::SubRange *SR;
     unsigned Index;
 
-    SubRangeInfo(LiveIntervals &LIS, LiveInterval::SubRange &SR,
-                 unsigned Index)
-      : ConEQ(LIS), SR(&SR), Index(Index) {}
+    SubRangeInfo(LiveIntervals &LIS, LiveInterval::SubRange &SR, unsigned Index)
+        : ConEQ(LIS), SR(&SR), Index(Index) {}
   };
 
   /// Split unrelated subregister components and rename them to new vregs.
@@ -88,18 +87,18 @@ class RenameIndependentSubregs : public MachineFunctionPass {
   /// belonging to their class.
   void distribute(const IntEqClasses &Classes,
                   const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-                  const SmallVectorImpl<LiveInterval*> &Intervals) const;
+                  const SmallVectorImpl<LiveInterval *> &Intervals) const;
 
   /// Constructs main liverange and add missing undef+dead flags.
-  void computeMainRangesFixFlags(const IntEqClasses &Classes,
+  void computeMainRangesFixFlags(
+      const IntEqClasses &Classes,
       const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-      const SmallVectorImpl<LiveInterval*> &Intervals) const;
+      const SmallVectorImpl<LiveInterval *> &Intervals) const;
 
   /// Rewrite Machine Operands to use the new vreg belonging to their class.
   void rewriteOperands(const IntEqClasses &Classes,
                        const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-                       const SmallVectorImpl<LiveInterval*> &Intervals) const;
-
+                       const SmallVectorImpl<LiveInterval *> &Intervals) const;
 
   LiveIntervals *LIS = nullptr;
   MachineRegisterInfo *MRI = nullptr;
@@ -132,7 +131,7 @@ bool RenameIndependentSubregs::renameComponents(LiveInterval &LI) const {
   // Create a new VReg for each class.
   Register Reg = LI.reg();
   const TargetRegisterClass *RegClass = MRI->getRegClass(Reg);
-  SmallVector<LiveInterval*, 4> Intervals;
+  SmallVector<LiveInterval *, 4> Intervals;
   Intervals.push_back(&LI);
   LLVM_DEBUG(dbgs() << printReg(Reg) << ": Found " << Classes.getNumClasses()
                     << " equivalence classes.\n");
@@ -152,7 +151,8 @@ bool RenameIndependentSubregs::renameComponents(LiveInterval &LI) const {
   return true;
 }
 
-bool RenameIndependentSubregs::findComponents(IntEqClasses &Classes,
+bool RenameIndependentSubregs::findComponents(
+    IntEqClasses &Classes,
     SmallVectorImpl<RenameIndependentSubregs::SubRangeInfo> &SubRangeInfos,
     LiveInterval &LI) const {
   // First step: Create connected components for the VNInfos inside the
@@ -187,8 +187,8 @@ bool RenameIndependentSubregs::findComponents(IntEqClasses &Classes,
       if ((SR.LaneMask & LaneMask).none())
         continue;
       SlotIndex Pos = LIS->getInstructionIndex(*MO.getParent());
-      Pos = MO.isDef() ? Pos.getRegSlot(MO.isEarlyClobber())
-                       : Pos.getBaseIndex();
+      Pos =
+          MO.isDef() ? Pos.getRegSlot(MO.isEarlyClobber()) : Pos.getBaseIndex();
       const VNInfo *VNI = SR.getVNInfoAt(Pos);
       if (VNI == nullptr)
         continue;
@@ -208,21 +208,22 @@ bool RenameIndependentSubregs::findComponents(IntEqClasses &Classes,
   return NumClasses > 1;
 }
 
-void RenameIndependentSubregs::rewriteOperands(const IntEqClasses &Classes,
+void RenameIndependentSubregs::rewriteOperands(
+    const IntEqClasses &Classes,
     const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-    const SmallVectorImpl<LiveInterval*> &Intervals) const {
+    const SmallVectorImpl<LiveInterval *> &Intervals) const {
   const TargetRegisterInfo &TRI = *MRI->getTargetRegisterInfo();
   unsigned Reg = Intervals[0]->reg();
   for (MachineRegisterInfo::reg_nodbg_iterator I = MRI->reg_nodbg_begin(Reg),
-       E = MRI->reg_nodbg_end(); I != E; ) {
+                                               E = MRI->reg_nodbg_end();
+       I != E;) {
     MachineOperand &MO = *I++;
     if (!MO.isDef() && !MO.readsReg())
       continue;
 
     auto *MI = MO.getParent();
     SlotIndex Pos = LIS->getInstructionIndex(*MI);
-    Pos = MO.isDef() ? Pos.getRegSlot(MO.isEarlyClobber())
-                     : Pos.getBaseIndex();
+    Pos = MO.isDef() ? Pos.getRegSlot(MO.isEarlyClobber()) : Pos.getBaseIndex();
     unsigned SubRegIdx = MO.getSubReg();
     LaneBitmask LaneMask = TRI.getSubRegIndexLaneMask(SubRegIdx);
 
@@ -262,12 +263,13 @@ void RenameIndependentSubregs::rewriteOperands(const IntEqClasses &Classes,
   // classes than the original vreg.
 }
 
-void RenameIndependentSubregs::distribute(const IntEqClasses &Classes,
+void RenameIndependentSubregs::distribute(
+    const IntEqClasses &Classes,
     const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-    const SmallVectorImpl<LiveInterval*> &Intervals) const {
+    const SmallVectorImpl<LiveInterval *> &Intervals) const {
   unsigned NumClasses = Classes.getNumClasses();
   SmallVector<unsigned, 8> VNIMapping;
-  SmallVector<LiveInterval::SubRange*, 8> SubRanges;
+  SmallVector<LiveInterval::SubRange *, 8> SubRanges;
   BumpPtrAllocator &Allocator = LIS->getVNInfoAllocator();
   for (const SubRangeInfo &SRInfo : SubRangeInfos) {
     LiveInterval::SubRange &SR = *SRInfo.SR;
@@ -275,14 +277,15 @@ void RenameIndependentSubregs::distribute(const IntEqClasses &Classes,
     VNIMapping.clear();
     VNIMapping.reserve(NumValNos);
     SubRanges.clear();
-    SubRanges.resize(NumClasses-1, nullptr);
+    SubRanges.resize(NumClasses - 1, nullptr);
     for (unsigned I = 0; I < NumValNos; ++I) {
       const VNInfo &VNI = *SR.valnos[I];
       unsigned LocalID = SRInfo.ConEQ.getEqClass(&VNI);
       unsigned ID = Classes[LocalID + SRInfo.Index];
       VNIMapping.push_back(ID);
-      if (ID > 0 && SubRanges[ID-1] == nullptr)
-        SubRanges[ID-1] = Intervals[ID]->createSubRange(Allocator, SR.LaneMask);
+      if (ID > 0 && SubRanges[ID - 1] == nullptr)
+        SubRanges[ID - 1] =
+            Intervals[ID]->createSubRange(Allocator, SR.LaneMask);
     }
     DistributeRange(SR, SubRanges.data(), VNIMapping);
   }
@@ -299,7 +302,7 @@ static bool subRangeLiveAt(const LiveInterval &LI, SlotIndex Pos) {
 void RenameIndependentSubregs::computeMainRangesFixFlags(
     const IntEqClasses &Classes,
     const SmallVectorImpl<SubRangeInfo> &SubRangeInfos,
-    const SmallVectorImpl<LiveInterval*> &Intervals) const {
+    const SmallVectorImpl<LiveInterval *> &Intervals) const {
   BumpPtrAllocator &Allocator = LIS->getVNInfoAllocator();
   const SlotIndexes &Indexes = *LIS->getSlotIndexes();
   for (size_t I = 0, E = Intervals.size(); I < E; ++I) {
@@ -328,10 +331,10 @@ void RenameIndependentSubregs::computeMainRangesFixFlags(
             continue;
 
           MachineBasicBlock::iterator InsertPos =
-            llvm::findPHICopyInsertPoint(PredMBB, &MBB, Reg);
+              llvm::findPHICopyInsertPoint(PredMBB, &MBB, Reg);
           const MCInstrDesc &MCDesc = TII->get(TargetOpcode::IMPLICIT_DEF);
-          MachineInstrBuilder ImpDef = BuildMI(*PredMBB, InsertPos,
-                                               DebugLoc(), MCDesc, Reg);
+          MachineInstrBuilder ImpDef =
+              BuildMI(*PredMBB, InsertPos, DebugLoc(), MCDesc, Reg);
           SlotIndex DefIdx = LIS->InsertMachineInstrInMaps(*ImpDef);
           SlotIndex RegDefIdx = DefIdx.getRegSlot();
           for (LiveInterval::SubRange &SR : LI.subranges()) {
diff --git a/llvm/lib/CodeGen/ResetMachineFunctionPass.cpp b/llvm/lib/CodeGen/ResetMachineFunctionPass.cpp
index 11bdf3bb2ba8c28..95552b00d5a8dba 100644
--- a/llvm/lib/CodeGen/ResetMachineFunctionPass.cpp
+++ b/llvm/lib/CodeGen/ResetMachineFunctionPass.cpp
@@ -31,59 +31,58 @@ STATISTIC(NumFunctionsReset, "Number of functions reset");
 STATISTIC(NumFunctionsVisited, "Number of functions visited");
 
 namespace {
-  class ResetMachineFunction : public MachineFunctionPass {
-    /// Tells whether or not this pass should emit a fallback
-    /// diagnostic when it resets a function.
-    bool EmitFallbackDiag;
-    /// Whether we should abort immediately instead of resetting the function.
-    bool AbortOnFailedISel;
+class ResetMachineFunction : public MachineFunctionPass {
+  /// Tells whether or not this pass should emit a fallback
+  /// diagnostic when it resets a function.
+  bool EmitFallbackDiag;
+  /// Whether we should abort immediately instead of resetting the function.
+  bool AbortOnFailedISel;
 
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    ResetMachineFunction(bool EmitFallbackDiag = false,
-                         bool AbortOnFailedISel = false)
-        : MachineFunctionPass(ID), EmitFallbackDiag(EmitFallbackDiag),
-          AbortOnFailedISel(AbortOnFailedISel) {}
+public:
+  static char ID; // Pass identification, replacement for typeid
+  ResetMachineFunction(bool EmitFallbackDiag = false,
+                       bool AbortOnFailedISel = false)
+      : MachineFunctionPass(ID), EmitFallbackDiag(EmitFallbackDiag),
+        AbortOnFailedISel(AbortOnFailedISel) {}
 
-    StringRef getPassName() const override { return "ResetMachineFunction"; }
+  StringRef getPassName() const override { return "ResetMachineFunction"; }
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.addPreserved<StackProtector>();
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.addPreserved<StackProtector>();
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override {
-      ++NumFunctionsVisited;
-      // No matter what happened, whether we successfully selected the function
-      // or not, nothing is going to use the vreg types after us. Make sure they
-      // disappear.
-      auto ClearVRegTypesOnReturn =
-          make_scope_exit([&MF]() { MF.getRegInfo().clearVirtRegTypes(); });
+  bool runOnMachineFunction(MachineFunction &MF) override {
+    ++NumFunctionsVisited;
+    // No matter what happened, whether we successfully selected the function
+    // or not, nothing is going to use the vreg types after us. Make sure they
+    // disappear.
+    auto ClearVRegTypesOnReturn =
+        make_scope_exit([&MF]() { MF.getRegInfo().clearVirtRegTypes(); });
 
-      if (MF.getProperties().hasProperty(
-              MachineFunctionProperties::Property::FailedISel)) {
-        if (AbortOnFailedISel)
-          report_fatal_error("Instruction selection failed");
-        LLVM_DEBUG(dbgs() << "Resetting: " << MF.getName() << '\n');
-        ++NumFunctionsReset;
-        MF.reset();
-        MF.initTargetMachineFunctionInfo(MF.getSubtarget());
+    if (MF.getProperties().hasProperty(
+            MachineFunctionProperties::Property::FailedISel)) {
+      if (AbortOnFailedISel)
+        report_fatal_error("Instruction selection failed");
+      LLVM_DEBUG(dbgs() << "Resetting: " << MF.getName() << '\n');
+      ++NumFunctionsReset;
+      MF.reset();
+      MF.initTargetMachineFunctionInfo(MF.getSubtarget());
 
-        const LLVMTargetMachine &TM = MF.getTarget();
-        // MRI callback for target specific initializations.
-        TM.registerMachineRegisterInfoCallback(MF);
+      const LLVMTargetMachine &TM = MF.getTarget();
+      // MRI callback for target specific initializations.
+      TM.registerMachineRegisterInfoCallback(MF);
 
-        if (EmitFallbackDiag) {
-          const Function &F = MF.getFunction();
-          DiagnosticInfoISelFallback DiagFallback(F);
-          F.getContext().diagnose(DiagFallback);
-        }
-        return true;
+      if (EmitFallbackDiag) {
+        const Function &F = MF.getFunction();
+        DiagnosticInfoISelFallback DiagFallback(F);
+        F.getContext().diagnose(DiagFallback);
       }
-      return false;
+      return true;
     }
-
-  };
+    return false;
+  }
+};
 } // end anonymous namespace
 
 char ResetMachineFunction::ID = 0;
diff --git a/llvm/lib/CodeGen/SafeStack.cpp b/llvm/lib/CodeGen/SafeStack.cpp
index bcad7a3f24dabe8..9c5a68c6d97be5f 100644
--- a/llvm/lib/CodeGen/SafeStack.cpp
+++ b/llvm/lib/CodeGen/SafeStack.cpp
@@ -93,9 +93,8 @@ STATISTIC(NumUnsafeStackRestorePoints, "Number of setjmps and landingpads");
 
 /// Use __safestack_pointer_address even if the platform has a faster way of
 /// access safe stack pointer.
-static cl::opt<bool>
-    SafeStackUsePointerAddress("safestack-use-pointer-address",
-                                  cl::init(false), cl::Hidden);
+static cl::opt<bool> SafeStackUsePointerAddress("safestack-use-pointer-address",
+                                                cl::init(false), cl::Hidden);
 
 static cl::opt<bool> ClColoring("safe-stack-coloring",
                                 cl::desc("enable safe stack coloring"),
@@ -148,7 +147,7 @@ class SafeStack {
 
   /// Calculate the allocation size of a given alloca. Returns 0 if the
   /// size can not be statically determined.
-  uint64_t getStaticAllocaAllocationSize(const AllocaInst* AI);
+  uint64_t getStaticAllocaAllocationSize(const AllocaInst *AI);
 
   /// Allocate space for all static allocas in \p StaticAllocas,
   /// replace allocas with pointers into the unsafe stack.
@@ -204,7 +203,7 @@ class SafeStack {
 
 constexpr Align SafeStack::StackAlignment;
 
-uint64_t SafeStack::getStaticAllocaAllocationSize(const AllocaInst* AI) {
+uint64_t SafeStack::getStaticAllocaAllocationSize(const AllocaInst *AI) {
   uint64_t Size = DL.getTypeAllocSize(AI->getAllocatedType());
   if (AI->isArrayAllocation()) {
     auto C = dyn_cast<ConstantInt>(AI->getArraySize());
@@ -266,7 +265,8 @@ bool SafeStack::IsMemIntrinsicSafe(const MemIntrinsic *MI, const Use &U,
 
   const auto *Len = dyn_cast<ConstantInt>(MI->getLength());
   // Non-constant size => unsafe. FIXME: try SCEV getRange.
-  if (!Len) return false;
+  if (!Len)
+    return false;
   return IsAccessSafe(U, Len->getZExtValue(), AllocaPtr, AllocaSize);
 }
 
@@ -583,7 +583,7 @@ Value *SafeStack::moveStaticAllocasToUnsafeStack(
     Value *Off = IRB.CreateGEP(Int8Ty, BasePointer, // BasePointer is i8*
                                ConstantInt::get(Int32Ty, -Offset));
     Value *NewArg = IRB.CreateBitCast(Off, Arg->getType(),
-                                     Arg->getName() + ".unsafe-byval");
+                                      Arg->getName() + ".unsafe-byval");
 
     // Replace alloc with the new location.
     replaceDbgDeclare(Arg, BasePointer, DIB, DIExpression::ApplyOffset,
@@ -736,7 +736,7 @@ void SafeStack::TryInlinePointerAddress() {
   if (!CI)
     return;
 
-  if(F.hasOptNone())
+  if (F.hasOptNone())
     return;
 
   Function *Callee = CI->getCalledFunction();
diff --git a/llvm/lib/CodeGen/ScheduleDAG.cpp b/llvm/lib/CodeGen/ScheduleDAG.cpp
index 14ec41920e3e6bb..a23e91283f81f66 100644
--- a/llvm/lib/CodeGen/ScheduleDAG.cpp
+++ b/llvm/lib/CodeGen/ScheduleDAG.cpp
@@ -43,17 +43,16 @@ STATISTIC(NumTopoInits,
           "Number of times the topological order has been recomputed");
 
 #ifndef NDEBUG
-static cl::opt<bool> StressSchedOpt(
-  "stress-sched", cl::Hidden, cl::init(false),
-  cl::desc("Stress test instruction scheduling"));
+static cl::opt<bool>
+    StressSchedOpt("stress-sched", cl::Hidden, cl::init(false),
+                   cl::desc("Stress test instruction scheduling"));
 #endif
 
 void SchedulingPriorityQueue::anchor() {}
 
 ScheduleDAG::ScheduleDAG(MachineFunction &mf)
     : TM(mf.getTarget()), TII(mf.getSubtarget().getInstrInfo()),
-      TRI(mf.getSubtarget().getRegisterInfo()), MF(mf),
-      MRI(mf.getRegInfo()) {
+      TRI(mf.getSubtarget().getRegisterInfo()), MF(mf), MRI(mf.getRegInfo()) {
 #ifndef NDEBUG
   StressSched = StressSchedOpt;
 #endif
@@ -68,16 +67,25 @@ void ScheduleDAG::clearDAG() {
 }
 
 const MCInstrDesc *ScheduleDAG::getNodeDesc(const SDNode *Node) const {
-  if (!Node || !Node->isMachineOpcode()) return nullptr;
+  if (!Node || !Node->isMachineOpcode())
+    return nullptr;
   return &TII->get(Node->getMachineOpcode());
 }
 
 LLVM_DUMP_METHOD void SDep::dump(const TargetRegisterInfo *TRI) const {
   switch (getKind()) {
-  case Data:   dbgs() << "Data"; break;
-  case Anti:   dbgs() << "Anti"; break;
-  case Output: dbgs() << "Out "; break;
-  case Order:  dbgs() << "Ord "; break;
+  case Data:
+    dbgs() << "Data";
+    break;
+  case Anti:
+    dbgs() << "Anti";
+    break;
+  case Output:
+    dbgs() << "Out ";
+    break;
+  case Order:
+    dbgs() << "Ord ";
+    break;
   }
 
   switch (getKind()) {
@@ -92,13 +100,23 @@ LLVM_DUMP_METHOD void SDep::dump(const TargetRegisterInfo *TRI) const {
     break;
   case Order:
     dbgs() << " Latency=" << getLatency();
-    switch(Contents.OrdKind) {
-    case Barrier:      dbgs() << " Barrier"; break;
+    switch (Contents.OrdKind) {
+    case Barrier:
+      dbgs() << " Barrier";
+      break;
     case MayAliasMem:
-    case MustAliasMem: dbgs() << " Memory"; break;
-    case Artificial:   dbgs() << " Artificial"; break;
-    case Weak:         dbgs() << " Weak"; break;
-    case Cluster:      dbgs() << " Cluster"; break;
+    case MustAliasMem:
+      dbgs() << " Memory";
+      break;
+    case Artificial:
+      dbgs() << " Artificial";
+      break;
+    case Weak:
+      dbgs() << " Weak";
+      break;
+    case Cluster:
+      dbgs() << " Cluster";
+      break;
     }
     break;
   }
@@ -146,8 +164,7 @@ bool SUnit::addPred(const SDep &D, bool Required) {
   if (!N->isScheduled) {
     if (D.isWeak()) {
       ++WeakPredsLeft;
-    }
-    else {
+    } else {
       assert(NumPredsLeft < std::numeric_limits<unsigned>::max() &&
              "NumPredsLeft will overflow!");
       ++NumPredsLeft;
@@ -156,8 +173,7 @@ bool SUnit::addPred(const SDep &D, bool Required) {
   if (!isScheduled) {
     if (D.isWeak()) {
       ++N->WeakSuccsLeft;
-    }
-    else {
+    } else {
       assert(N->NumSuccsLeft < std::numeric_limits<unsigned>::max() &&
              "NumSuccsLeft will overflow!");
       ++N->NumSuccsLeft;
@@ -217,8 +233,9 @@ void SUnit::removePred(const SDep &D) {
 }
 
 void SUnit::setDepthDirty() {
-  if (!isDepthCurrent) return;
-  SmallVector<SUnit*, 8> WorkList;
+  if (!isDepthCurrent)
+    return;
+  SmallVector<SUnit *, 8> WorkList;
   WorkList.push_back(this);
   do {
     SUnit *SU = WorkList.pop_back_val();
@@ -232,8 +249,9 @@ void SUnit::setDepthDirty() {
 }
 
 void SUnit::setHeightDirty() {
-  if (!isHeightCurrent) return;
-  SmallVector<SUnit*, 8> WorkList;
+  if (!isHeightCurrent)
+    return;
+  SmallVector<SUnit *, 8> WorkList;
   WorkList.push_back(this);
   do {
     SUnit *SU = WorkList.pop_back_val();
@@ -264,7 +282,7 @@ void SUnit::setHeightToAtLeast(unsigned NewHeight) {
 
 /// Calculates the maximal path from the node to the exit.
 void SUnit::ComputeDepth() {
-  SmallVector<SUnit*, 8> WorkList;
+  SmallVector<SUnit *, 8> WorkList;
   WorkList.push_back(this);
   do {
     SUnit *Cur = WorkList.back();
@@ -274,8 +292,8 @@ void SUnit::ComputeDepth() {
     for (const SDep &PredDep : Cur->Preds) {
       SUnit *PredSU = PredDep.getSUnit();
       if (PredSU->isDepthCurrent)
-        MaxPredDepth = std::max(MaxPredDepth,
-                                PredSU->Depth + PredDep.getLatency());
+        MaxPredDepth =
+            std::max(MaxPredDepth, PredSU->Depth + PredDep.getLatency());
       else {
         Done = false;
         WorkList.push_back(PredSU);
@@ -295,7 +313,7 @@ void SUnit::ComputeDepth() {
 
 /// Calculates the maximal path from the node to the entry.
 void SUnit::ComputeHeight() {
-  SmallVector<SUnit*, 8> WorkList;
+  SmallVector<SUnit *, 8> WorkList;
   WorkList.push_back(this);
   do {
     SUnit *Cur = WorkList.back();
@@ -305,8 +323,8 @@ void SUnit::ComputeHeight() {
     for (const SDep &SuccDep : Cur->Succs) {
       SUnit *SuccSU = SuccDep.getSUnit();
       if (SuccSU->isHeightCurrent)
-        MaxSuccHeight = std::max(MaxSuccHeight,
-                                 SuccSU->Height + SuccDep.getLatency());
+        MaxSuccHeight =
+            std::max(MaxSuccHeight, SuccSU->Height + SuccDep.getLatency());
       else {
         Done = false;
         WorkList.push_back(SuccSU);
@@ -406,12 +424,12 @@ unsigned ScheduleDAG::VerifyScheduledDAG(bool isBottomUp) {
     }
     if (SUnit.isScheduled &&
         (isBottomUp ? SUnit.getHeight() : SUnit.getDepth()) >
-          unsigned(std::numeric_limits<int>::max())) {
+            unsigned(std::numeric_limits<int>::max())) {
       if (!AnyNotSched)
         dbgs() << "*** Scheduling failed! ***\n";
       dumpNode(SUnit);
-      dbgs() << "has an unexpected "
-           << (isBottomUp ? "Height" : "Depth") << " value!\n";
+      dbgs() << "has an unexpected " << (isBottomUp ? "Height" : "Depth")
+             << " value!\n";
       AnyNotSched = true;
     }
     if (isBottomUp) {
@@ -470,7 +488,7 @@ void ScheduleDAGTopologicalSort::InitDAGTopologicalSorting() {
   Updates.clear();
 
   unsigned DAGSize = SUnits.size();
-  std::vector<SUnit*> WorkList;
+  std::vector<SUnit *> WorkList;
   WorkList.reserve(DAGSize);
 
   Index2Node.resize(DAGSize);
@@ -513,10 +531,10 @@ void ScheduleDAGTopologicalSort::InitDAGTopologicalSorting() {
 
 #ifndef NDEBUG
   // Check correctness of the ordering
-  for (SUnit &SU : SUnits)  {
+  for (SUnit &SU : SUnits) {
     for (const SDep &PD : SU.Preds) {
       assert(Node2Index[SU.NodeNum] > Node2Index[PD.getSUnit()->NodeNum] &&
-      "Wrong topological sorting");
+             "Wrong topological sorting");
     }
   }
 #endif
@@ -571,7 +589,7 @@ void ScheduleDAGTopologicalSort::RemovePred(SUnit *M, SUnit *N) {
 
 void ScheduleDAGTopologicalSort::DFS(const SUnit *SU, int UpperBound,
                                      bool &HasLoop) {
-  std::vector<const SUnit*> WorkList;
+  std::vector<const SUnit *> WorkList;
   WorkList.reserve(SUnits.size());
 
   WorkList.push_back(SU);
@@ -599,7 +617,7 @@ void ScheduleDAGTopologicalSort::DFS(const SUnit *SU, int UpperBound,
 std::vector<int> ScheduleDAGTopologicalSort::GetSubGraph(const SUnit &StartSU,
                                                          const SUnit &TargetSU,
                                                          bool &Success) {
-  std::vector<const SUnit*> WorkList;
+  std::vector<const SUnit *> WorkList;
   int LowerBound = Node2Index[StartSU.NodeNum];
   int UpperBound = Node2Index[TargetSU.NodeNum];
   bool Found = false;
@@ -677,7 +695,7 @@ std::vector<int> ScheduleDAGTopologicalSort::GetSubGraph(const SUnit &StartSU,
   return Nodes;
 }
 
-void ScheduleDAGTopologicalSort::Shift(BitVector& Visited, int LowerBound,
+void ScheduleDAGTopologicalSort::Shift(BitVector &Visited, int LowerBound,
                                        int UpperBound) {
   std::vector<int> L;
   int shift = 0;
@@ -708,8 +726,7 @@ bool ScheduleDAGTopologicalSort::WillCreateCycle(SUnit *TargetSU, SUnit *SU) {
   if (IsReachable(SU, TargetSU))
     return true;
   for (const SDep &PredDep : TargetSU->Preds)
-    if (PredDep.isAssignedRegDep() &&
-        IsReachable(SU, PredDep.getSUnit()))
+    if (PredDep.isAssignedRegDep() && IsReachable(SU, PredDep.getSUnit()))
       return true;
   return false;
 }
@@ -747,8 +764,8 @@ void ScheduleDAGTopologicalSort::Allocate(int n, int index) {
   Index2Node[index] = n;
 }
 
-ScheduleDAGTopologicalSort::
-ScheduleDAGTopologicalSort(std::vector<SUnit> &sunits, SUnit *exitsu)
-  : SUnits(sunits), ExitSU(exitsu) {}
+ScheduleDAGTopologicalSort::ScheduleDAGTopologicalSort(
+    std::vector<SUnit> &sunits, SUnit *exitsu)
+    : SUnits(sunits), ExitSU(exitsu) {}
 
 ScheduleHazardRecognizer::~ScheduleHazardRecognizer() = default;
diff --git a/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp b/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
index 0190fa345eb3632..90058514868feb7 100644
--- a/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
+++ b/llvm/lib/CodeGen/ScheduleDAGInstrs.cpp
@@ -65,8 +65,9 @@ static cl::opt<bool>
     EnableAASchedMI("enable-aa-sched-mi", cl::Hidden,
                     cl::desc("Enable use of AA during MI DAG construction"));
 
-static cl::opt<bool> UseTBAA("use-tbaa-in-sched-mi", cl::Hidden,
-    cl::init(true), cl::desc("Enable use of TBAA during MI DAG construction"));
+static cl::opt<bool>
+    UseTBAA("use-tbaa-in-sched-mi", cl::Hidden, cl::init(true),
+            cl::desc("Enable use of TBAA during MI DAG construction"));
 
 // Note: the two options below might be used in tuning compile time vs
 // output quality. Setting HugeRegion so large that it will never be
@@ -74,10 +75,11 @@ static cl::opt<bool> UseTBAA("use-tbaa-in-sched-mi", cl::Hidden,
 
 // When Stores and Loads maps (or NonAliasStores and NonAliasLoads)
 // together hold this many SUs, a reduction of maps will be done.
-static cl::opt<unsigned> HugeRegion("dag-maps-huge-region", cl::Hidden,
-    cl::init(1000), cl::desc("The limit to use while constructing the DAG "
-                             "prior to scheduling, at which point a trade-off "
-                             "is made to avoid excessive compile time."));
+static cl::opt<unsigned>
+    HugeRegion("dag-maps-huge-region", cl::Hidden, cl::init(1000),
+               cl::desc("The limit to use while constructing the DAG "
+                        "prior to scheduling, at which point a trade-off "
+                        "is made to avoid excessive compile time."));
 
 static cl::opt<unsigned> ReductionSize(
     "dag-maps-reduction-size", cl::Hidden,
@@ -115,8 +117,9 @@ ScheduleDAGInstrs::ScheduleDAGInstrs(MachineFunction &mf,
                                      bool RemoveKillFlags)
     : ScheduleDAG(mf), MLI(mli), MFI(mf.getFrameInfo()),
       RemoveKillFlags(RemoveKillFlags),
-      UnknownValue(UndefValue::get(
-                             Type::getVoidTy(mf.getFunction().getContext()))), Topo(SUnits, &ExitSU) {
+      UnknownValue(
+          UndefValue::get(Type::getVoidTy(mf.getFunction().getContext()))),
+      Topo(SUnits, &ExitSU) {
   DbgValues.clear();
 
   const TargetSubtargetInfo &ST = mf.getSubtarget();
@@ -177,9 +180,7 @@ static bool getUnderlyingObjectsForInstr(const MachineInstr *MI,
   return true;
 }
 
-void ScheduleDAGInstrs::startBlock(MachineBasicBlock *bb) {
-  BB = bb;
-}
+void ScheduleDAGInstrs::startBlock(MachineBasicBlock *bb) { BB = bb; }
 
 void ScheduleDAGInstrs::finishBlock() {
   // Subclasses should no longer refer to the old block.
@@ -373,8 +374,8 @@ void ScheduleDAGInstrs::addPhysRegDeps(SUnit *SU, unsigned OperIdx) {
   }
 }
 
-LaneBitmask ScheduleDAGInstrs::getLaneMaskForMO(const MachineOperand &MO) const
-{
+LaneBitmask
+ScheduleDAGInstrs::getLaneMaskForMO(const MachineOperand &MO) const {
   Register Reg = MO.getReg();
   // No point in tracking lanemasks if we don't have interesting subregisters.
   const TargetRegisterClass &RC = *MRI.getRegClass(Reg);
@@ -439,7 +440,9 @@ void ScheduleDAGInstrs::addVRegDefDeps(SUnit *SU, unsigned OperIdx) {
     // Add data dependence to all uses we found so far.
     const TargetSubtargetInfo &ST = MF.getSubtarget();
     for (VReg2SUnitOperIdxMultiMap::iterator I = CurrentVRegUses.find(Reg),
-         E = CurrentVRegUses.end(); I != E; /*empty*/) {
+                                             E = CurrentVRegUses.end();
+         I != E;
+         /*empty*/) {
       LaneBitmask LaneMask = I->LaneMask;
       // Ignore uses of other lanes.
       if ((LaneMask & KillLaneMask).none()) {
@@ -479,8 +482,8 @@ void ScheduleDAGInstrs::addVRegDefDeps(SUnit *SU, unsigned OperIdx) {
   // are not eliminated sometime during scheduling. The output dependence edge
   // is also useful if output latency exceeds def-use latency.
   LaneBitmask LaneMask = DefLaneMask;
-  for (VReg2SUnit &V2SU : make_range(CurrentVRegDefs.find(Reg),
-                                     CurrentVRegDefs.end())) {
+  for (VReg2SUnit &V2SU :
+       make_range(CurrentVRegDefs.find(Reg), CurrentVRegDefs.end())) {
     // Ignore defs for other lanes.
     if ((V2SU.LaneMask & LaneMask).none())
       continue;
@@ -495,7 +498,7 @@ void ScheduleDAGInstrs::addVRegDefDeps(SUnit *SU, unsigned OperIdx) {
       continue;
     SDep Dep(SU, SDep::Output, Reg);
     Dep.setLatency(
-      SchedModel.computeOutputLatency(MI, OperIdx, DefSU->getInstr()));
+        SchedModel.computeOutputLatency(MI, OperIdx, DefSU->getInstr()));
     DefSU->addPred(Dep);
 
     // Update current definition. This can get tricky if the def was about a
@@ -527,13 +530,13 @@ void ScheduleDAGInstrs::addVRegUseDeps(SUnit *SU, unsigned OperIdx) {
   Register Reg = MO.getReg();
 
   // Remember the use. Data dependencies will be added when we find the def.
-  LaneBitmask LaneMask = TrackLaneMasks ? getLaneMaskForMO(MO)
-                                        : LaneBitmask::getAll();
+  LaneBitmask LaneMask =
+      TrackLaneMasks ? getLaneMaskForMO(MO) : LaneBitmask::getAll();
   CurrentVRegUses.insert(VReg2SUnitOperIdx(Reg, LaneMask, OperIdx, SU));
 
   // Add antidependences to the following defs of the vreg.
-  for (VReg2SUnit &V2SU : make_range(CurrentVRegDefs.find(Reg),
-                                     CurrentVRegDefs.end())) {
+  for (VReg2SUnit &V2SU :
+       make_range(CurrentVRegDefs.find(Reg), CurrentVRegDefs.end())) {
     // Ignore defs for unrelated lanes.
     LaneBitmask PrevDefLaneMask = V2SU.LaneMask;
     if ((PrevDefLaneMask & LaneMask).none())
@@ -552,8 +555,8 @@ static inline bool isGlobalMemoryObject(MachineInstr *MI) {
          (MI->hasOrderedMemoryRef() && !MI->isDereferenceableInvariantLoad());
 }
 
-void ScheduleDAGInstrs::addChainDependency (SUnit *SUa, SUnit *SUb,
-                                            unsigned Latency) {
+void ScheduleDAGInstrs::addChainDependency(SUnit *SUa, SUnit *SUb,
+                                           unsigned Latency) {
   if (SUa->getInstr()->mayAlias(AAForDep, *SUb->getInstr(), UseTBAA)) {
     SDep Dep(SUa, SDep::MayAliasMem);
     Dep.setLatency(Latency);
@@ -632,7 +635,8 @@ class ScheduleDAGInstrs::Value2SUsMap : public MapVector<ValueType, SUList> {
   /// To keep NumNodes up to date, insert() is used instead of
   /// this operator w/ push_back().
   ValueType &operator[](const SUList &Key) {
-    llvm_unreachable("Don't use. Use insert() instead."); };
+    llvm_unreachable("Don't use. Use insert() instead.");
+  };
 
   /// Adds SU to the SUList of V. If Map grows huge, reduce its size by calling
   /// reduce().
@@ -667,9 +671,7 @@ class ScheduleDAGInstrs::Value2SUsMap : public MapVector<ValueType, SUList> {
       NumNodes += I.second.size();
   }
 
-  unsigned inline getTrueMemOrderLatency() const {
-    return TrueMemOrderLatency;
-  }
+  unsigned inline getTrueMemOrderLatency() const { return TrueMemOrderLatency; }
 
   void dump();
 };
@@ -677,8 +679,7 @@ class ScheduleDAGInstrs::Value2SUsMap : public MapVector<ValueType, SUList> {
 void ScheduleDAGInstrs::addChainDependencies(SUnit *SU,
                                              Value2SUsMap &Val2SUsMap) {
   for (auto &I : Val2SUsMap)
-    addChainDependencies(SU, I.second,
-                         Val2SUsMap.getTrueMemOrderLatency());
+    addChainDependencies(SU, I.second, Val2SUsMap.getTrueMemOrderLatency());
 }
 
 void ScheduleDAGInstrs::addChainDependencies(SUnit *SU,
@@ -686,8 +687,7 @@ void ScheduleDAGInstrs::addChainDependencies(SUnit *SU,
                                              ValueType V) {
   Value2SUsMap::iterator Itr = Val2SUsMap.find(V);
   if (Itr != Val2SUsMap.end())
-    addChainDependencies(SU, Itr->second,
-                         Val2SUsMap.getTrueMemOrderLatency());
+    addChainDependencies(SU, Itr->second, Val2SUsMap.getTrueMemOrderLatency());
 }
 
 void ScheduleDAGInstrs::addBarrierChain(Value2SUsMap &map) {
@@ -728,7 +728,8 @@ void ScheduleDAGInstrs::insertBarrierChain(Value2SUsMap &map) {
 
   // Remove all entries with empty su lists.
   map.remove_if([&](std::pair<ValueType, SUList> &mapEntry) {
-      return (mapEntry.second.empty()); });
+    return (mapEntry.second.empty());
+  });
 
   // Recompute the size of the map (NumNodes).
   map.reComputeSize();
@@ -740,8 +741,8 @@ void ScheduleDAGInstrs::buildSchedGraph(AAResults *AA,
                                         LiveIntervals *LIS,
                                         bool TrackLaneMasks) {
   const TargetSubtargetInfo &ST = MF.getSubtarget();
-  bool UseAA = EnableAASchedMI.getNumOccurrences() > 0 ? EnableAASchedMI
-                                                       : ST.useAA();
+  bool UseAA =
+      EnableAASchedMI.getNumOccurrences() > 0 ? EnableAASchedMI : ST.useAA();
   AAForDep = UseAA ? AA : nullptr;
 
   BarrierChain = nullptr;
@@ -841,9 +842,8 @@ void ScheduleDAGInstrs::buildSchedGraph(AAResults *AA,
       RPTracker->recede(RegOpers);
     }
 
-    assert(
-        (CanHandleTerminators || (!MI.isTerminator() && !MI.isPosition())) &&
-        "Cannot schedule terminators or labels!");
+    assert((CanHandleTerminators || (!MI.isTerminator() && !MI.isPosition())) &&
+           "Cannot schedule terminators or labels!");
 
     // Add register-based dependencies (data, anti, and output).
     // For some instructions (calls, returns, inline-asm, etc.) there can
@@ -945,8 +945,8 @@ void ScheduleDAGInstrs::buildSchedGraph(AAResults *AA,
     // empty, or filled with the Values of memory locations which this
     // SU depends on.
     UnderlyingObjectsVector Objs;
-    bool ObjsFound = getUnderlyingObjectsForInstr(&MI, MFI, Objs,
-                                                  MF.getDataLayout());
+    bool ObjsFound =
+        getUnderlyingObjectsForInstr(&MI, MFI, Objs, MF.getDataLayout());
 
     if (MI.mayStore()) {
       if (!ObjsFound) {
@@ -1030,7 +1030,7 @@ void ScheduleDAGInstrs::buildSchedGraph(AAResults *AA,
   Topo.MarkDirty();
 }
 
-raw_ostream &llvm::operator<<(raw_ostream &OS, const PseudoSourceValue* PSV) {
+raw_ostream &llvm::operator<<(raw_ostream &OS, const PseudoSourceValue *PSV) {
   PSV->printCustom(OS);
   return OS;
 }
@@ -1088,12 +1088,10 @@ void ScheduleDAGInstrs::reduceHugeMemNodeMaps(Value2SUsMap &stores,
       BarrierChain = newBarrierChain;
       LLVM_DEBUG(dbgs() << "Inserting new barrier chain: SU("
                         << BarrierChain->NodeNum << ").\n";);
-    }
-    else
+    } else
       LLVM_DEBUG(dbgs() << "Keeping old barrier chain: SU("
                         << BarrierChain->NodeNum << ").\n";);
-  }
-  else
+  } else
     BarrierChain = newBarrierChain;
 
   insertBarrierChain(stores);
@@ -1241,16 +1239,16 @@ class SchedDFSImpl {
   /// Join DAG nodes into equivalence classes by their subtree.
   IntEqClasses SubtreeClasses;
   /// List PredSU, SuccSU pairs that represent data edges between subtrees.
-  std::vector<std::pair<const SUnit *, const SUnit*>> ConnectionPairs;
+  std::vector<std::pair<const SUnit *, const SUnit *>> ConnectionPairs;
 
   struct RootData {
     unsigned NodeID;
-    unsigned ParentNodeID;  ///< Parent node (member of the parent subtree).
+    unsigned ParentNodeID;      ///< Parent node (member of the parent subtree).
     unsigned SubInstrCount = 0; ///< Instr count in this tree only, not
                                 /// children.
 
-    RootData(unsigned id): NodeID(id),
-                           ParentNodeID(SchedDFSResult::InvalidSubtreeID) {}
+    RootData(unsigned id)
+        : NodeID(id), ParentNodeID(SchedDFSResult::InvalidSubtreeID) {}
 
     unsigned getSparseSetIndex() const { return NodeID; }
   };
@@ -1258,7 +1256,7 @@ class SchedDFSImpl {
   SparseSet<RootData> RootSet;
 
 public:
-  SchedDFSImpl(SchedDFSResult &r): R(r), SubtreeClasses(R.DFSNodeData.size()) {
+  SchedDFSImpl(SchedDFSResult &r) : R(r), SubtreeClasses(R.DFSNodeData.size()) {
     RootSet.setUniverse(R.DFSNodeData.size());
   }
 
@@ -1267,15 +1265,15 @@ class SchedDFSImpl {
   /// During visitPostorderNode the Node's SubtreeID is assigned to the Node
   /// ID. Later, SubtreeID is updated but remains valid.
   bool isVisited(const SUnit *SU) const {
-    return R.DFSNodeData[SU->NodeNum].SubtreeID
-      != SchedDFSResult::InvalidSubtreeID;
+    return R.DFSNodeData[SU->NodeNum].SubtreeID !=
+           SchedDFSResult::InvalidSubtreeID;
   }
 
   /// Initializes this node's instruction count. We don't need to flag the node
   /// visited until visitPostorder because the DAG cannot have cycles.
   void visitPreorder(const SUnit *SU) {
     R.DFSNodeData[SU->NodeNum].InstrCount =
-      SU->getInstr()->isTransient() ? 0 : 1;
+        SU->getInstr()->isTransient() ? 0 : 1;
   }
 
   /// Called once for each node after all predecessors are visited. Revisit this
@@ -1307,8 +1305,7 @@ class SchedDFSImpl {
         // current node is the parent.
         if (RootSet[PredNum].ParentNodeID == SchedDFSResult::InvalidSubtreeID)
           RootSet[PredNum].ParentNodeID = SU->NodeNum;
-      }
-      else if (RootSet.count(PredNum)) {
+      } else if (RootSet.count(PredNum)) {
         // The predecessor is not a root, but is still in the root set. This
         // must be the new parent that it was just joined to. Note that
         // RootSet[PredNum].ParentNodeID may either be invalid or may still be
@@ -1324,8 +1321,8 @@ class SchedDFSImpl {
   /// the predecessor. Increment the parent node's instruction count and
   /// preemptively join this subtree to its parent's if it is small enough.
   void visitPostorderEdge(const SDep &PredDep, const SUnit *Succ) {
-    R.DFSNodeData[Succ->NodeNum].InstrCount
-      += R.DFSNodeData[PredDep.getSUnit()->NodeNum].InstrCount;
+    R.DFSNodeData[Succ->NodeNum].InstrCount +=
+        R.DFSNodeData[PredDep.getSUnit()->NodeNum].InstrCount;
     joinPredSubtree(PredDep, Succ);
   }
 
@@ -1339,8 +1336,8 @@ class SchedDFSImpl {
   void finalize() {
     SubtreeClasses.compress();
     R.DFSTreeData.resize(SubtreeClasses.getNumClasses());
-    assert(SubtreeClasses.getNumClasses() == RootSet.size()
-           && "number of roots should match trees");
+    assert(SubtreeClasses.getNumClasses() == RootSet.size() &&
+           "number of roots should match trees");
     for (const RootData &Root : RootSet) {
       unsigned TreeID = SubtreeClasses[Root.NodeID];
       if (Root.ParentNodeID != SchedDFSResult::InvalidSubtreeID)
@@ -1406,7 +1403,7 @@ class SchedDFSImpl {
 
     do {
       SmallVectorImpl<SchedDFSResult::Connection> &Connections =
-        R.SubtreeConnections[FromTree];
+          R.SubtreeConnections[FromTree];
       for (SchedDFSResult::Connection &C : Connections) {
         if (C.TreeID == ToTree) {
           C.Level = std::max(C.Level, Depth);
@@ -1430,9 +1427,7 @@ class SchedDAGReverseDFS {
 public:
   bool isComplete() const { return DFSStack.empty(); }
 
-  void follow(const SUnit *SU) {
-    DFSStack.emplace_back(SU, SU->Preds.begin());
-  }
+  void follow(const SUnit *SU) { DFSStack.emplace_back(SU, SU->Preds.begin()); }
   void advance() { ++DFSStack.back().second; }
 
   const SDep *backtrack() {
@@ -1480,8 +1475,8 @@ void SchedDFSResult::compute(ArrayRef<SUnit> SUnits) {
         const SDep &PredDep = *DFS.getPred();
         DFS.advance();
         // Ignore non-data edges.
-        if (PredDep.getKind() != SDep::Data
-            || PredDep.getSUnit()->isBoundaryNode()) {
+        if (PredDep.getKind() != SDep::Data ||
+            PredDep.getSUnit()->isBoundaryNode()) {
           continue;
         }
         // An already visited edge is a cross edge, assuming an acyclic DAG.
@@ -1511,7 +1506,7 @@ void SchedDFSResult::compute(ArrayRef<SUnit> SUnits) {
 void SchedDFSResult::scheduleTree(unsigned SubtreeID) {
   for (const Connection &C : SubtreeConnections[SubtreeID]) {
     SubtreeConnectLevels[C.TreeID] =
-      std::max(SubtreeConnectLevels[C.TreeID], C.Level);
+        std::max(SubtreeConnectLevels[C.TreeID], C.Level);
     LLVM_DEBUG(dbgs() << "  Tree: " << C.TreeID << " @"
                       << SubtreeConnectLevels[C.TreeID] << '\n');
   }
@@ -1526,9 +1521,7 @@ LLVM_DUMP_METHOD void ILPValue::print(raw_ostream &OS) const {
     OS << format("%g", ((double)InstrCount / Length));
 }
 
-LLVM_DUMP_METHOD void ILPValue::dump() const {
-  dbgs() << *this << '\n';
-}
+LLVM_DUMP_METHOD void ILPValue::dump() const { dbgs() << *this << '\n'; }
 
 namespace llvm {
 
diff --git a/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp b/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
index e7b14944acfeec2..74dc81f07a08c3f 100644
--- a/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
+++ b/llvm/lib/CodeGen/ScheduleDAGPrinter.cpp
@@ -17,59 +17,55 @@
 using namespace llvm;
 
 namespace llvm {
-  template<>
-  struct DOTGraphTraits<ScheduleDAG*> : public DefaultDOTGraphTraits {
+template <>
+struct DOTGraphTraits<ScheduleDAG *> : public DefaultDOTGraphTraits {
 
-  DOTGraphTraits (bool isSimple=false) : DefaultDOTGraphTraits(isSimple) {}
+  DOTGraphTraits(bool isSimple = false) : DefaultDOTGraphTraits(isSimple) {}
 
-    static std::string getGraphName(const ScheduleDAG *G) {
-      return std::string(G->MF.getName());
-    }
+  static std::string getGraphName(const ScheduleDAG *G) {
+    return std::string(G->MF.getName());
+  }
 
-    static bool renderGraphFromBottomUp() {
-      return true;
-    }
+  static bool renderGraphFromBottomUp() { return true; }
 
-    static bool isNodeHidden(const SUnit *Node, const ScheduleDAG *G) {
-      return (Node->NumPreds > 10 || Node->NumSuccs > 10);
-    }
+  static bool isNodeHidden(const SUnit *Node, const ScheduleDAG *G) {
+    return (Node->NumPreds > 10 || Node->NumSuccs > 10);
+  }
 
-    static std::string getNodeIdentifierLabel(const SUnit *Node,
-                                              const ScheduleDAG *Graph) {
-      std::string R;
-      raw_string_ostream OS(R);
-      OS << static_cast<const void *>(Node);
-      return R;
-    }
+  static std::string getNodeIdentifierLabel(const SUnit *Node,
+                                            const ScheduleDAG *Graph) {
+    std::string R;
+    raw_string_ostream OS(R);
+    OS << static_cast<const void *>(Node);
+    return R;
+  }
 
-    /// If you want to override the dot attributes printed for a particular
-    /// edge, override this method.
-    static std::string getEdgeAttributes(const SUnit *Node,
-                                         SUnitIterator EI,
-                                         const ScheduleDAG *Graph) {
-      if (EI.isArtificialDep())
-        return "color=cyan,style=dashed";
-      if (EI.isCtrlDep())
-        return "color=blue,style=dashed";
-      return "";
-    }
+  /// If you want to override the dot attributes printed for a particular
+  /// edge, override this method.
+  static std::string getEdgeAttributes(const SUnit *Node, SUnitIterator EI,
+                                       const ScheduleDAG *Graph) {
+    if (EI.isArtificialDep())
+      return "color=cyan,style=dashed";
+    if (EI.isCtrlDep())
+      return "color=blue,style=dashed";
+    return "";
+  }
 
+  std::string getNodeLabel(const SUnit *SU, const ScheduleDAG *Graph);
+  static std::string getNodeAttributes(const SUnit *N,
+                                       const ScheduleDAG *Graph) {
+    return "shape=Mrecord";
+  }
 
-    std::string getNodeLabel(const SUnit *SU, const ScheduleDAG *Graph);
-    static std::string getNodeAttributes(const SUnit *N,
-                                         const ScheduleDAG *Graph) {
-      return "shape=Mrecord";
-    }
+  static void addCustomGraphFeatures(ScheduleDAG *G,
+                                     GraphWriter<ScheduleDAG *> &GW) {
+    return G->addCustomGraphFeatures(GW);
+  }
+};
+} // namespace llvm
 
-    static void addCustomGraphFeatures(ScheduleDAG *G,
-                                       GraphWriter<ScheduleDAG*> &GW) {
-      return G->addCustomGraphFeatures(GW);
-    }
-  };
-}
-
-std::string DOTGraphTraits<ScheduleDAG*>::getNodeLabel(const SUnit *SU,
-                                                       const ScheduleDAG *G) {
+std::string DOTGraphTraits<ScheduleDAG *>::getNodeLabel(const SUnit *SU,
+                                                        const ScheduleDAG *G) {
   return G->getGraphNodeLabel(SU);
 }
 
@@ -83,7 +79,7 @@ void ScheduleDAG::viewGraph(const Twine &Name, const Twine &Title) {
 #else
   errs() << "ScheduleDAG::viewGraph is only available in debug builds on "
          << "systems with Graphviz or gv!\n";
-#endif  // NDEBUG
+#endif // NDEBUG
 }
 
 /// Out-of-line implementation with no arguments is handy for gdb.
diff --git a/llvm/lib/CodeGen/ScoreboardHazardRecognizer.cpp b/llvm/lib/CodeGen/ScoreboardHazardRecognizer.cpp
index 209c6d81f602030..9e0121f6d1ca5ba 100644
--- a/llvm/lib/CodeGen/ScoreboardHazardRecognizer.cpp
+++ b/llvm/lib/CodeGen/ScoreboardHazardRecognizer.cpp
@@ -37,7 +37,7 @@ ScoreboardHazardRecognizer::ScoreboardHazardRecognizer(
   // avoid dealing with the boundary condition.
   unsigned ScoreboardDepth = 1;
   if (ItinData && !ItinData->isEmpty()) {
-    for (unsigned idx = 0; ; ++idx) {
+    for (unsigned idx = 0;; ++idx) {
       if (ItinData->isEndMarker(idx))
         break;
 
@@ -47,7 +47,8 @@ ScoreboardHazardRecognizer::ScoreboardHazardRecognizer(
       unsigned ItinDepth = 0;
       for (; IS != E; ++IS) {
         unsigned StageDepth = CurCycle + IS->getCycles();
-        if (ItinDepth < StageDepth) ItinDepth = StageDepth;
+        if (ItinDepth < StageDepth)
+          ItinDepth = StageDepth;
         CurCycle += IS->getNextCycles();
       }
 
@@ -93,8 +94,8 @@ LLVM_DUMP_METHOD void ScoreboardHazardRecognizer::Scoreboard::dump() const {
   for (unsigned i = 0; i <= last; i++) {
     InstrStage::FuncUnits FUs = (*this)[i];
     dbgs() << "\t";
-    for (int j = std::numeric_limits<InstrStage::FuncUnits>::digits - 1;
-         j >= 0; j--)
+    for (int j = std::numeric_limits<InstrStage::FuncUnits>::digits - 1; j >= 0;
+         j--)
       dbgs() << ((FUs & (1ULL << j)) ? '1' : '0');
     dbgs() << '\n';
   }
@@ -126,7 +127,8 @@ ScoreboardHazardRecognizer::getHazardType(SUnit *SU, int Stalls) {
   }
   unsigned idx = MCID->getSchedClass();
   for (const InstrStage *IS = ItinData->beginStage(idx),
-         *E = ItinData->endStage(idx); IS != E; ++IS) {
+                        *E = ItinData->endStage(idx);
+       IS != E; ++IS) {
     // We must find one of the stage's units free for every cycle the
     // stage is occupied. FIXME it would be more accurate to find the
     // same unit free in all the cycles.
@@ -185,7 +187,8 @@ void ScoreboardHazardRecognizer::EmitInstruction(SUnit *SU) {
 
   unsigned idx = MCID->getSchedClass();
   for (const InstrStage *IS = ItinData->beginStage(idx),
-         *E = ItinData->endStage(idx); IS != E; ++IS) {
+                        *E = ItinData->endStage(idx);
+       IS != E; ++IS) {
     // We must reserve one of the stage's units for every cycle the
     // stage is occupied. FIXME it would be more accurate to reserve
     // the same unit free in all the cycles.
@@ -228,14 +231,16 @@ void ScoreboardHazardRecognizer::EmitInstruction(SUnit *SU) {
 
 void ScoreboardHazardRecognizer::AdvanceCycle() {
   IssueCount = 0;
-  ReservedScoreboard[0] = 0; ReservedScoreboard.advance();
-  RequiredScoreboard[0] = 0; RequiredScoreboard.advance();
+  ReservedScoreboard[0] = 0;
+  ReservedScoreboard.advance();
+  RequiredScoreboard[0] = 0;
+  RequiredScoreboard.advance();
 }
 
 void ScoreboardHazardRecognizer::RecedeCycle() {
   IssueCount = 0;
-  ReservedScoreboard[ReservedScoreboard.getDepth()-1] = 0;
+  ReservedScoreboard[ReservedScoreboard.getDepth() - 1] = 0;
   ReservedScoreboard.recede();
-  RequiredScoreboard[RequiredScoreboard.getDepth()-1] = 0;
+  RequiredScoreboard[RequiredScoreboard.getDepth() - 1] = 0;
   RequiredScoreboard.recede();
 }
diff --git a/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt b/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
index cbfbfa3a321bcf5..f9931ac363f4c0e 100644
--- a/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
+++ b/llvm/lib/CodeGen/SelectionDAG/CMakeLists.txt
@@ -1,41 +1,15 @@
-add_llvm_component_library(LLVMSelectionDAG
-  DAGCombiner.cpp
-  FastISel.cpp
-  FunctionLoweringInfo.cpp
-  InstrEmitter.cpp
-  LegalizeDAG.cpp
-  LegalizeFloatTypes.cpp
-  LegalizeIntegerTypes.cpp
-  LegalizeTypes.cpp
-  LegalizeTypesGeneric.cpp
-  LegalizeVectorOps.cpp
-  LegalizeVectorTypes.cpp
-  ResourcePriorityQueue.cpp
-  ScheduleDAGFast.cpp
-  ScheduleDAGRRList.cpp
-  ScheduleDAGSDNodes.cpp
-  ScheduleDAGVLIW.cpp
-  SelectionDAGBuilder.cpp
-  SelectionDAG.cpp
-  SelectionDAGAddressAnalysis.cpp
-  SelectionDAGDumper.cpp
-  SelectionDAGISel.cpp
-  SelectionDAGPrinter.cpp
-  SelectionDAGTargetInfo.cpp
-  StatepointLowering.cpp
-  TargetLowering.cpp
+add_llvm_component_library(LLVMSelectionDAG
+  DAGCombiner.cpp FastISel.cpp FunctionLoweringInfo.cpp InstrEmitter.cpp
+  LegalizeDAG.cpp LegalizeFloatTypes.cpp LegalizeIntegerTypes.cpp
+  LegalizeTypes.cpp LegalizeTypesGeneric.cpp LegalizeVectorOps.cpp
+  LegalizeVectorTypes.cpp ResourcePriorityQueue.cpp ScheduleDAGFast.cpp
+  ScheduleDAGRRList.cpp ScheduleDAGSDNodes.cpp ScheduleDAGVLIW.cpp
+  SelectionDAGBuilder.cpp SelectionDAG.cpp SelectionDAGAddressAnalysis.cpp
+  SelectionDAGDumper.cpp SelectionDAGISel.cpp SelectionDAGPrinter.cpp
+  SelectionDAGTargetInfo.cpp StatepointLowering.cpp
+  TargetLowering.cpp
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  CodeGen
-  CodeGenTypes
-  Core
-  MC
-  Support
-  Target
-  TargetParser
-  TransformUtils
-  )
+  LINK_COMPONENTS Analysis CodeGen CodeGenTypes Core MC Support
+    Target TargetParser TransformUtils)
diff --git a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
index d917e8c00c4f92a..30af2d2c47da1c0 100644
--- a/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/DAGCombiner.cpp
@@ -79,39 +79,38 @@ using namespace llvm;
 
 #define DEBUG_TYPE "dagcombine"
 
-STATISTIC(NodesCombined   , "Number of dag nodes combined");
-STATISTIC(PreIndexedNodes , "Number of pre-indexed nodes created");
+STATISTIC(NodesCombined, "Number of dag nodes combined");
+STATISTIC(PreIndexedNodes, "Number of pre-indexed nodes created");
 STATISTIC(PostIndexedNodes, "Number of post-indexed nodes created");
-STATISTIC(OpsNarrowed     , "Number of load/op/store narrowed");
-STATISTIC(LdStFP2Int      , "Number of fp load/store pairs transformed to int");
+STATISTIC(OpsNarrowed, "Number of load/op/store narrowed");
+STATISTIC(LdStFP2Int, "Number of fp load/store pairs transformed to int");
 STATISTIC(SlicedLoads, "Number of load sliced");
 STATISTIC(NumFPLogicOpsConv, "Number of logic ops converted to fp ops");
 
-static cl::opt<bool>
-CombinerGlobalAA("combiner-global-alias-analysis", cl::Hidden,
-                 cl::desc("Enable DAG combiner's use of IR alias analysis"));
+static cl::opt<bool> CombinerGlobalAA(
+    "combiner-global-alias-analysis", cl::Hidden,
+    cl::desc("Enable DAG combiner's use of IR alias analysis"));
 
-static cl::opt<bool>
-UseTBAA("combiner-use-tbaa", cl::Hidden, cl::init(true),
-        cl::desc("Enable DAG combiner's use of TBAA"));
+static cl::opt<bool> UseTBAA("combiner-use-tbaa", cl::Hidden, cl::init(true),
+                             cl::desc("Enable DAG combiner's use of TBAA"));
 
 #ifndef NDEBUG
 static cl::opt<std::string>
-CombinerAAOnlyFunc("combiner-aa-only-func", cl::Hidden,
-                   cl::desc("Only use DAG-combiner alias analysis in this"
-                            " function"));
+    CombinerAAOnlyFunc("combiner-aa-only-func", cl::Hidden,
+                       cl::desc("Only use DAG-combiner alias analysis in this"
+                                " function"));
 #endif
 
 /// Hidden option to stress test load slicing, i.e., when this option
 /// is enabled, load slicing bypasses most of its profitability guards.
-static cl::opt<bool>
-StressLoadSlicing("combiner-stress-load-slicing", cl::Hidden,
-                  cl::desc("Bypass the profitability model of load slicing"),
-                  cl::init(false));
+static cl::opt<bool> StressLoadSlicing(
+    "combiner-stress-load-slicing", cl::Hidden,
+    cl::desc("Bypass the profitability model of load slicing"),
+    cl::init(false));
 
 static cl::opt<bool>
-  MaySplitLoadIndex("combiner-split-load-index", cl::Hidden, cl::init(true),
-                    cl::desc("DAG combiner may split indexing from loads"));
+    MaySplitLoadIndex("combiner-split-load-index", cl::Hidden, cl::init(true),
+                      cl::desc("DAG combiner may split indexing from loads"));
 
 static cl::opt<bool>
     EnableStoreMerging("combiner-store-merging", cl::Hidden, cl::init(true),
@@ -144,712 +143,704 @@ static cl::opt<bool> EnableVectorFCopySignExtendRound(
 
 namespace {
 
-  class DAGCombiner {
-    SelectionDAG &DAG;
-    const TargetLowering &TLI;
-    const SelectionDAGTargetInfo *STI;
-    CombineLevel Level = BeforeLegalizeTypes;
-    CodeGenOpt::Level OptLevel;
-    bool LegalDAG = false;
-    bool LegalOperations = false;
-    bool LegalTypes = false;
-    bool ForCodeSize;
-    bool DisableGenericCombines;
-
-    /// Worklist of all of the nodes that need to be simplified.
-    ///
-    /// This must behave as a stack -- new nodes to process are pushed onto the
-    /// back and when processing we pop off of the back.
-    ///
-    /// The worklist will not contain duplicates but may contain null entries
-    /// due to nodes being deleted from the underlying DAG.
-    SmallVector<SDNode *, 64> Worklist;
-
-    /// Mapping from an SDNode to its position on the worklist.
-    ///
-    /// This is used to find and remove nodes from the worklist (by nulling
-    /// them) when they are deleted from the underlying DAG. It relies on
-    /// stable indices of nodes within the worklist.
-    DenseMap<SDNode *, unsigned> WorklistMap;
-
-    /// This records all nodes attempted to be added to the worklist since we
-    /// considered a new worklist entry. As we keep do not add duplicate nodes
-    /// in the worklist, this is different from the tail of the worklist.
-    SmallSetVector<SDNode *, 32> PruningList;
-
-    /// Set of nodes which have been combined (at least once).
-    ///
-    /// This is used to allow us to reliably add any operands of a DAG node
-    /// which have not yet been combined to the worklist.
-    SmallPtrSet<SDNode *, 32> CombinedNodes;
-
-    /// Map from candidate StoreNode to the pair of RootNode and count.
-    /// The count is used to track how many times we have seen the StoreNode
-    /// with the same RootNode bail out in dependence check. If we have seen
-    /// the bail out for the same pair many times over a limit, we won't
-    /// consider the StoreNode with the same RootNode as store merging
-    /// candidate again.
-    DenseMap<SDNode *, std::pair<SDNode *, unsigned>> StoreRootCountMap;
-
-    // AA - Used for DAG load/store alias analysis.
-    AliasAnalysis *AA;
-
-    /// When an instruction is simplified, add all users of the instruction to
-    /// the work lists because they might get more simplified now.
-    void AddUsersToWorklist(SDNode *N) {
-      for (SDNode *Node : N->uses())
-        AddToWorklist(Node);
-    }
-
-    /// Convenient shorthand to add a node and all of its user to the worklist.
-    void AddToWorklistWithUsers(SDNode *N) {
-      AddUsersToWorklist(N);
-      AddToWorklist(N);
+class DAGCombiner {
+  SelectionDAG &DAG;
+  const TargetLowering &TLI;
+  const SelectionDAGTargetInfo *STI;
+  CombineLevel Level = BeforeLegalizeTypes;
+  CodeGenOpt::Level OptLevel;
+  bool LegalDAG = false;
+  bool LegalOperations = false;
+  bool LegalTypes = false;
+  bool ForCodeSize;
+  bool DisableGenericCombines;
+
+  /// Worklist of all of the nodes that need to be simplified.
+  ///
+  /// This must behave as a stack -- new nodes to process are pushed onto the
+  /// back and when processing we pop off of the back.
+  ///
+  /// The worklist will not contain duplicates but may contain null entries
+  /// due to nodes being deleted from the underlying DAG.
+  SmallVector<SDNode *, 64> Worklist;
+
+  /// Mapping from an SDNode to its position on the worklist.
+  ///
+  /// This is used to find and remove nodes from the worklist (by nulling
+  /// them) when they are deleted from the underlying DAG. It relies on
+  /// stable indices of nodes within the worklist.
+  DenseMap<SDNode *, unsigned> WorklistMap;
+
+  /// This records all nodes attempted to be added to the worklist since we
+  /// considered a new worklist entry. As we do not add duplicate nodes
+  /// in the worklist, this is different from the tail of the worklist.
+  SmallSetVector<SDNode *, 32> PruningList;
+
+  /// Set of nodes which have been combined (at least once).
+  ///
+  /// This is used to allow us to reliably add any operands of a DAG node
+  /// which have not yet been combined to the worklist.
+  SmallPtrSet<SDNode *, 32> CombinedNodes;
+
+  /// Map from candidate StoreNode to the pair of RootNode and count.
+  /// The count is used to track how many times we have seen the StoreNode
+  /// with the same RootNode bail out in dependence check. If we have seen
+  /// the bail out for the same pair many times over a limit, we won't
+  /// consider the StoreNode with the same RootNode as store merging
+  /// candidate again.
+  DenseMap<SDNode *, std::pair<SDNode *, unsigned>> StoreRootCountMap;
+
+  // AA - Used for DAG load/store alias analysis.
+  AliasAnalysis *AA;
+
+  /// When an instruction is simplified, add all users of the instruction to
+  /// the work lists because they might get more simplified now.
+  void AddUsersToWorklist(SDNode *N) {
+    for (SDNode *Node : N->uses())
+      AddToWorklist(Node);
+  }
+
+  /// Convenient shorthand to add a node and all of its users to the worklist.
+  void AddToWorklistWithUsers(SDNode *N) {
+    AddUsersToWorklist(N);
+    AddToWorklist(N);
+  }
+
+  // Prune potentially dangling nodes. This is called after
+  // any visit to a node, but should also be called during a visit after any
+  // failed combine which may have created a DAG node.
+  void clearAddedDanglingWorklistEntries() {
+    // Check any nodes added to the worklist to see if they are prunable.
+    while (!PruningList.empty()) {
+      auto *N = PruningList.pop_back_val();
+      if (N->use_empty())
+        recursivelyDeleteUnusedNodes(N);
+    }
+  }
+
+  SDNode *getNextWorklistEntry() {
+    // Before we do any work, remove nodes that are not in use.
+    clearAddedDanglingWorklistEntries();
+    SDNode *N = nullptr;
+    // The Worklist holds the SDNodes in order, but it may contain null
+    // entries.
+    while (!N && !Worklist.empty()) {
+      N = Worklist.pop_back_val();
+    }
+
+    if (N) {
+      bool GoodWorklistEntry = WorklistMap.erase(N);
+      (void)GoodWorklistEntry;
+      assert(GoodWorklistEntry &&
+             "Found a worklist entry without a corresponding map entry!");
     }
+    return N;
+  }
 
-    // Prune potentially dangling nodes. This is called after
-    // any visit to a node, but should also be called during a visit after any
-    // failed combine which may have created a DAG node.
-    void clearAddedDanglingWorklistEntries() {
-      // Check any nodes added to the worklist to see if they are prunable.
-      while (!PruningList.empty()) {
-        auto *N = PruningList.pop_back_val();
-        if (N->use_empty())
-          recursivelyDeleteUnusedNodes(N);
-      }
-    }
+  /// Call the node-specific routine that folds each particular type of node.
+  SDValue visit(SDNode *N);
 
-    SDNode *getNextWorklistEntry() {
-      // Before we do any work, remove nodes that are not in use.
-      clearAddedDanglingWorklistEntries();
-      SDNode *N = nullptr;
-      // The Worklist holds the SDNodes in order, but it may contain null
-      // entries.
-      while (!N && !Worklist.empty()) {
-        N = Worklist.pop_back_val();
-      }
+public:
+  DAGCombiner(SelectionDAG &D, AliasAnalysis *AA, CodeGenOpt::Level OL)
+      : DAG(D), TLI(D.getTargetLoweringInfo()),
+        STI(D.getSubtarget().getSelectionDAGInfo()), OptLevel(OL), AA(AA) {
+    ForCodeSize = DAG.shouldOptForSize();
+    DisableGenericCombines = STI && STI->disableGenericCombines(OptLevel);
 
-      if (N) {
-        bool GoodWorklistEntry = WorklistMap.erase(N);
-        (void)GoodWorklistEntry;
-        assert(GoodWorklistEntry &&
-               "Found a worklist entry without a corresponding map entry!");
-      }
-      return N;
-    }
+    MaximumLegalStoreInBits = 0;
+    // We use the minimum store size here, since that's all we can guarantee
+    // for the scalable vector types.
+    for (MVT VT : MVT::all_valuetypes())
+      if (EVT(VT).isSimple() && VT != MVT::Other && TLI.isTypeLegal(EVT(VT)) &&
+          VT.getSizeInBits().getKnownMinValue() >= MaximumLegalStoreInBits)
+        MaximumLegalStoreInBits = VT.getSizeInBits().getKnownMinValue();
+  }
 
-    /// Call the node-specific routine that folds each particular type of node.
-    SDValue visit(SDNode *N);
+  void ConsiderForPruning(SDNode *N) {
+    // Mark this for potential pruning.
+    PruningList.insert(N);
+  }
 
-  public:
-    DAGCombiner(SelectionDAG &D, AliasAnalysis *AA, CodeGenOpt::Level OL)
-        : DAG(D), TLI(D.getTargetLoweringInfo()),
-          STI(D.getSubtarget().getSelectionDAGInfo()), OptLevel(OL), AA(AA) {
-      ForCodeSize = DAG.shouldOptForSize();
-      DisableGenericCombines = STI && STI->disableGenericCombines(OptLevel);
+  /// Add to the worklist, making sure its instance is at the back (next to
+  /// be processed).
+  void AddToWorklist(SDNode *N, bool IsCandidateForPruning = true) {
+    assert(N->getOpcode() != ISD::DELETED_NODE &&
+           "Deleted Node added to Worklist");
 
-      MaximumLegalStoreInBits = 0;
-      // We use the minimum store size here, since that's all we can guarantee
-      // for the scalable vector types.
-      for (MVT VT : MVT::all_valuetypes())
-        if (EVT(VT).isSimple() && VT != MVT::Other &&
-            TLI.isTypeLegal(EVT(VT)) &&
-            VT.getSizeInBits().getKnownMinValue() >= MaximumLegalStoreInBits)
-          MaximumLegalStoreInBits = VT.getSizeInBits().getKnownMinValue();
-    }
+    // Skip handle nodes as they can't usefully be combined and confuse the
+    // zero-use deletion strategy.
+    if (N->getOpcode() == ISD::HANDLENODE)
+      return;
 
-    void ConsiderForPruning(SDNode *N) {
-      // Mark this for potential pruning.
-      PruningList.insert(N);
-    }
+    if (IsCandidateForPruning)
+      ConsiderForPruning(N);
 
-    /// Add to the worklist making sure its instance is at the back (next to be
-    /// processed.)
-    void AddToWorklist(SDNode *N, bool IsCandidateForPruning = true) {
-      assert(N->getOpcode() != ISD::DELETED_NODE &&
-             "Deleted Node added to Worklist");
+    if (WorklistMap.insert(std::make_pair(N, Worklist.size())).second)
+      Worklist.push_back(N);
+  }
 
-      // Skip handle nodes as they can't usefully be combined and confuse the
-      // zero-use deletion strategy.
-      if (N->getOpcode() == ISD::HANDLENODE)
-        return;
+  /// Remove all instances of N from the worklist.
+  void removeFromWorklist(SDNode *N) {
+    CombinedNodes.erase(N);
+    PruningList.remove(N);
+    StoreRootCountMap.erase(N);
 
-      if (IsCandidateForPruning)
-        ConsiderForPruning(N);
+    auto It = WorklistMap.find(N);
+    if (It == WorklistMap.end())
+      return; // Not in the worklist.
 
-      if (WorklistMap.insert(std::make_pair(N, Worklist.size())).second)
-        Worklist.push_back(N);
-    }
+    // Null out the entry rather than erasing it to avoid a linear operation.
+    Worklist[It->second] = nullptr;
+    WorklistMap.erase(It);
+  }
 
-    /// Remove all instances of N from the worklist.
-    void removeFromWorklist(SDNode *N) {
-      CombinedNodes.erase(N);
-      PruningList.remove(N);
-      StoreRootCountMap.erase(N);
+  void deleteAndRecombine(SDNode *N);
+  bool recursivelyDeleteUnusedNodes(SDNode *N);
 
-      auto It = WorklistMap.find(N);
-      if (It == WorklistMap.end())
-        return; // Not in the worklist.
+  /// Replaces all uses of the results of one DAG node with new values.
+  SDValue CombineTo(SDNode *N, const SDValue *To, unsigned NumTo,
+                    bool AddTo = true);
 
-      // Null out the entry rather than erasing it to avoid a linear operation.
-      Worklist[It->second] = nullptr;
-      WorklistMap.erase(It);
-    }
+  /// Replaces all uses of the results of one DAG node with new values.
+  SDValue CombineTo(SDNode *N, SDValue Res, bool AddTo = true) {
+    return CombineTo(N, &Res, 1, AddTo);
+  }
 
-    void deleteAndRecombine(SDNode *N);
-    bool recursivelyDeleteUnusedNodes(SDNode *N);
+  /// Replaces all uses of the results of one DAG node with new values.
+  SDValue CombineTo(SDNode *N, SDValue Res0, SDValue Res1, bool AddTo = true) {
+    SDValue To[] = {Res0, Res1};
+    return CombineTo(N, To, 2, AddTo);
+  }
 
-    /// Replaces all uses of the results of one DAG node with new values.
-    SDValue CombineTo(SDNode *N, const SDValue *To, unsigned NumTo,
-                      bool AddTo = true);
+  void CommitTargetLoweringOpt(const TargetLowering::TargetLoweringOpt &TLO);
 
-    /// Replaces all uses of the results of one DAG node with new values.
-    SDValue CombineTo(SDNode *N, SDValue Res, bool AddTo = true) {
-      return CombineTo(N, &Res, 1, AddTo);
-    }
+private:
+  unsigned MaximumLegalStoreInBits;
 
-    /// Replaces all uses of the results of one DAG node with new values.
-    SDValue CombineTo(SDNode *N, SDValue Res0, SDValue Res1,
-                      bool AddTo = true) {
-      SDValue To[] = { Res0, Res1 };
-      return CombineTo(N, To, 2, AddTo);
-    }
+  /// Check the specified integer node value to see if it can be simplified or
+  /// if things it uses can be simplified by bit propagation.
+  /// If so, return true.
+  bool SimplifyDemandedBits(SDValue Op) {
+    unsigned BitWidth = Op.getScalarValueSizeInBits();
+    APInt DemandedBits = APInt::getAllOnes(BitWidth);
+    return SimplifyDemandedBits(Op, DemandedBits);
+  }
 
-    void CommitTargetLoweringOpt(const TargetLowering::TargetLoweringOpt &TLO);
+  bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedBits) {
+    TargetLowering::TargetLoweringOpt TLO(DAG, LegalTypes, LegalOperations);
+    KnownBits Known;
+    if (!TLI.SimplifyDemandedBits(Op, DemandedBits, Known, TLO, 0, false))
+      return false;
 
-  private:
-    unsigned MaximumLegalStoreInBits;
+    // Revisit the node.
+    AddToWorklist(Op.getNode());
 
-    /// Check the specified integer node value to see if it can be simplified or
-    /// if things it uses can be simplified by bit propagation.
-    /// If so, return true.
-    bool SimplifyDemandedBits(SDValue Op) {
-      unsigned BitWidth = Op.getScalarValueSizeInBits();
-      APInt DemandedBits = APInt::getAllOnes(BitWidth);
-      return SimplifyDemandedBits(Op, DemandedBits);
-    }
+    CommitTargetLoweringOpt(TLO);
+    return true;
+  }
 
-    bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedBits) {
-      TargetLowering::TargetLoweringOpt TLO(DAG, LegalTypes, LegalOperations);
-      KnownBits Known;
-      if (!TLI.SimplifyDemandedBits(Op, DemandedBits, Known, TLO, 0, false))
-        return false;
+  /// Check the specified vector node value to see if it can be simplified or
+  /// if things it uses can be simplified as it only uses some of the
+  /// elements. If so, return true.
+  bool SimplifyDemandedVectorElts(SDValue Op) {
+    // TODO: For now just pretend it cannot be simplified.
+    if (Op.getValueType().isScalableVector())
+      return false;
 
-      // Revisit the node.
-      AddToWorklist(Op.getNode());
+    unsigned NumElts = Op.getValueType().getVectorNumElements();
+    APInt DemandedElts = APInt::getAllOnes(NumElts);
+    return SimplifyDemandedVectorElts(Op, DemandedElts);
+  }
+
+  bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedBits,
+                            const APInt &DemandedElts,
+                            bool AssumeSingleUse = false);
+  bool SimplifyDemandedVectorElts(SDValue Op, const APInt &DemandedElts,
+                                  bool AssumeSingleUse = false);
+
+  bool CombineToPreIndexedLoadStore(SDNode *N);
+  bool CombineToPostIndexedLoadStore(SDNode *N);
+  SDValue SplitIndexingFromLoad(LoadSDNode *LD);
+  bool SliceUpLoad(SDNode *N);
+
+  // Looks up the chain to find a unique (unaliased) store feeding the passed
+  // load. If no such store is found, returns a nullptr.
+  // Note: This will look past a CALLSEQ_START if the load is chained to it
+  //       so that it can find stack stores for byval params.
+  StoreSDNode *getUniqueStoreFeeding(LoadSDNode *LD, int64_t &Offset);
+  // Scalars have size 0 to distinguish from singleton vectors.
+  SDValue ForwardStoreValueToDirectLoad(LoadSDNode *LD);
+  bool getTruncatedStoreValue(StoreSDNode *ST, SDValue &Val);
+  bool extendLoadedValueToExtension(LoadSDNode *LD, SDValue &Val);
+
+  /// Replace an ISD::EXTRACT_VECTOR_ELT of a load with a narrowed
+  ///   load.
+  ///
+  /// \param EVE ISD::EXTRACT_VECTOR_ELT to be replaced.
+  /// \param InVecVT type of the input vector to EVE with bitcasts resolved.
+  /// \param EltNo index of the vector element to load.
+  /// \param OriginalLoad load that EVE came from to be replaced.
+  /// \returns EVE on success, SDValue() on failure.
+  SDValue scalarizeExtractedVectorLoad(SDNode *EVE, EVT InVecVT, SDValue EltNo,
+                                       LoadSDNode *OriginalLoad);
+  void ReplaceLoadWithPromotedLoad(SDNode *Load, SDNode *ExtLoad);
+  SDValue PromoteOperand(SDValue Op, EVT PVT, bool &Replace);
+  SDValue SExtPromoteOperand(SDValue Op, EVT PVT);
+  SDValue ZExtPromoteOperand(SDValue Op, EVT PVT);
+  SDValue PromoteIntBinOp(SDValue Op);
+  SDValue PromoteIntShiftOp(SDValue Op);
+  SDValue PromoteExtend(SDValue Op);
+  bool PromoteLoad(SDValue Op);
+
+  SDValue combineMinNumMaxNum(const SDLoc &DL, EVT VT, SDValue LHS, SDValue RHS,
+                              SDValue True, SDValue False, ISD::CondCode CC);
+
+  /// Call the node-specific routine that knows how to fold each
+  /// particular type of node. If that doesn't do anything, try the
+  /// target-specific DAG combines.
+  SDValue combine(SDNode *N);
+
+  // Visitation implementation - Implement dag node combining for different
+  // node types.  The semantics are as follows:
+  // Return Value:
+  //   SDValue.getNode() == 0 - No change was made
+  //   SDValue.getNode() == N - N was replaced, is dead and has been handled.
+  //   otherwise              - N should be replaced by the returned Operand.
+  //
+  SDValue visitTokenFactor(SDNode *N);
+  SDValue visitMERGE_VALUES(SDNode *N);
+  SDValue visitADD(SDNode *N);
+  SDValue visitADDLike(SDNode *N);
+  SDValue visitADDLikeCommutative(SDValue N0, SDValue N1, SDNode *LocReference);
+  SDValue visitSUB(SDNode *N);
+  SDValue visitADDSAT(SDNode *N);
+  SDValue visitSUBSAT(SDNode *N);
+  SDValue visitADDC(SDNode *N);
+  SDValue visitADDO(SDNode *N);
+  SDValue visitUADDOLike(SDValue N0, SDValue N1, SDNode *N);
+  SDValue visitSUBC(SDNode *N);
+  SDValue visitSUBO(SDNode *N);
+  SDValue visitADDE(SDNode *N);
+  SDValue visitUADDO_CARRY(SDNode *N);
+  SDValue visitSADDO_CARRY(SDNode *N);
+  SDValue visitUADDO_CARRYLike(SDValue N0, SDValue N1, SDValue CarryIn,
+                               SDNode *N);
+  SDValue visitSUBE(SDNode *N);
+  SDValue visitUSUBO_CARRY(SDNode *N);
+  SDValue visitSSUBO_CARRY(SDNode *N);
+  SDValue visitMUL(SDNode *N);
+  SDValue visitMULFIX(SDNode *N);
+  SDValue useDivRem(SDNode *N);
+  SDValue visitSDIV(SDNode *N);
+  SDValue visitSDIVLike(SDValue N0, SDValue N1, SDNode *N);
+  SDValue visitUDIV(SDNode *N);
+  SDValue visitUDIVLike(SDValue N0, SDValue N1, SDNode *N);
+  SDValue visitREM(SDNode *N);
+  SDValue visitMULHU(SDNode *N);
+  SDValue visitMULHS(SDNode *N);
+  SDValue visitAVG(SDNode *N);
+  SDValue visitABD(SDNode *N);
+  SDValue visitSMUL_LOHI(SDNode *N);
+  SDValue visitUMUL_LOHI(SDNode *N);
+  SDValue visitMULO(SDNode *N);
+  SDValue visitIMINMAX(SDNode *N);
+  SDValue visitAND(SDNode *N);
+  SDValue visitANDLike(SDValue N0, SDValue N1, SDNode *N);
+  SDValue visitOR(SDNode *N);
+  SDValue visitORLike(SDValue N0, SDValue N1, SDNode *N);
+  SDValue visitXOR(SDNode *N);
+  SDValue SimplifyVCastOp(SDNode *N, const SDLoc &DL);
+  SDValue SimplifyVBinOp(SDNode *N, const SDLoc &DL);
+  SDValue visitSHL(SDNode *N);
+  SDValue visitSRA(SDNode *N);
+  SDValue visitSRL(SDNode *N);
+  SDValue visitFunnelShift(SDNode *N);
+  SDValue visitSHLSAT(SDNode *N);
+  SDValue visitRotate(SDNode *N);
+  SDValue visitABS(SDNode *N);
+  SDValue visitBSWAP(SDNode *N);
+  SDValue visitBITREVERSE(SDNode *N);
+  SDValue visitCTLZ(SDNode *N);
+  SDValue visitCTLZ_ZERO_UNDEF(SDNode *N);
+  SDValue visitCTTZ(SDNode *N);
+  SDValue visitCTTZ_ZERO_UNDEF(SDNode *N);
+  SDValue visitCTPOP(SDNode *N);
+  SDValue visitSELECT(SDNode *N);
+  SDValue visitVSELECT(SDNode *N);
+  SDValue visitSELECT_CC(SDNode *N);
+  SDValue visitSETCC(SDNode *N);
+  SDValue visitSETCCCARRY(SDNode *N);
+  SDValue visitSIGN_EXTEND(SDNode *N);
+  SDValue visitZERO_EXTEND(SDNode *N);
+  SDValue visitANY_EXTEND(SDNode *N);
+  SDValue visitAssertExt(SDNode *N);
+  SDValue visitAssertAlign(SDNode *N);
+  SDValue visitSIGN_EXTEND_INREG(SDNode *N);
+  SDValue visitEXTEND_VECTOR_INREG(SDNode *N);
+  SDValue visitTRUNCATE(SDNode *N);
+  SDValue visitBITCAST(SDNode *N);
+  SDValue visitFREEZE(SDNode *N);
+  SDValue visitBUILD_PAIR(SDNode *N);
+  SDValue visitFADD(SDNode *N);
+  SDValue visitVP_FADD(SDNode *N);
+  SDValue visitVP_FSUB(SDNode *N);
+  SDValue visitSTRICT_FADD(SDNode *N);
+  SDValue visitFSUB(SDNode *N);
+  SDValue visitFMUL(SDNode *N);
+  template <class MatchContextClass> SDValue visitFMA(SDNode *N);
+  SDValue visitFDIV(SDNode *N);
+  SDValue visitFREM(SDNode *N);
+  SDValue visitFSQRT(SDNode *N);
+  SDValue visitFCOPYSIGN(SDNode *N);
+  SDValue visitFPOW(SDNode *N);
+  SDValue visitSINT_TO_FP(SDNode *N);
+  SDValue visitUINT_TO_FP(SDNode *N);
+  SDValue visitFP_TO_SINT(SDNode *N);
+  SDValue visitFP_TO_UINT(SDNode *N);
+  SDValue visitFP_ROUND(SDNode *N);
+  SDValue visitFP_EXTEND(SDNode *N);
+  SDValue visitFNEG(SDNode *N);
+  SDValue visitFABS(SDNode *N);
+  SDValue visitFCEIL(SDNode *N);
+  SDValue visitFTRUNC(SDNode *N);
+  SDValue visitFFREXP(SDNode *N);
+  SDValue visitFFLOOR(SDNode *N);
+  SDValue visitFMinMax(SDNode *N);
+  SDValue visitBRCOND(SDNode *N);
+  SDValue visitBR_CC(SDNode *N);
+  SDValue visitLOAD(SDNode *N);
+
+  SDValue replaceStoreChain(StoreSDNode *ST, SDValue BetterChain);
+  SDValue replaceStoreOfFPConstant(StoreSDNode *ST);
+  SDValue replaceStoreOfInsertLoad(StoreSDNode *ST);
+
+  bool refineExtractVectorEltIntoMultipleNarrowExtractVectorElts(SDNode *N);
+
+  SDValue visitSTORE(SDNode *N);
+  SDValue visitLIFETIME_END(SDNode *N);
+  SDValue visitINSERT_VECTOR_ELT(SDNode *N);
+  SDValue visitEXTRACT_VECTOR_ELT(SDNode *N);
+  SDValue visitBUILD_VECTOR(SDNode *N);
+  SDValue visitCONCAT_VECTORS(SDNode *N);
+  SDValue visitEXTRACT_SUBVECTOR(SDNode *N);
+  SDValue visitVECTOR_SHUFFLE(SDNode *N);
+  SDValue visitSCALAR_TO_VECTOR(SDNode *N);
+  SDValue visitINSERT_SUBVECTOR(SDNode *N);
+  SDValue visitMLOAD(SDNode *N);
+  SDValue visitMSTORE(SDNode *N);
+  SDValue visitMGATHER(SDNode *N);
+  SDValue visitMSCATTER(SDNode *N);
+  SDValue visitVPGATHER(SDNode *N);
+  SDValue visitVPSCATTER(SDNode *N);
+  SDValue visitFP_TO_FP16(SDNode *N);
+  SDValue visitFP16_TO_FP(SDNode *N);
+  SDValue visitFP_TO_BF16(SDNode *N);
+  SDValue visitVECREDUCE(SDNode *N);
+  SDValue visitVPOp(SDNode *N);
+  SDValue visitGET_FPENV_MEM(SDNode *N);
+  SDValue visitSET_FPENV_MEM(SDNode *N);
+
+  template <class MatchContextClass> SDValue visitFADDForFMACombine(SDNode *N);
+  template <class MatchContextClass> SDValue visitFSUBForFMACombine(SDNode *N);
+  SDValue visitFMULForFMADistributiveCombine(SDNode *N);
+
+  SDValue XformToShuffleWithZero(SDNode *N);
+  bool reassociationCanBreakAddressingModePattern(unsigned Opc, const SDLoc &DL,
+                                                  SDNode *N, SDValue N0,
+                                                  SDValue N1);
+  SDValue reassociateOpsCommutative(unsigned Opc, const SDLoc &DL, SDValue N0,
+                                    SDValue N1, SDNodeFlags Flags);
+  SDValue reassociateOps(unsigned Opc, const SDLoc &DL, SDValue N0, SDValue N1,
+                         SDNodeFlags Flags);
+  SDValue reassociateReduction(unsigned RedOpc, unsigned Opc, const SDLoc &DL,
+                               EVT VT, SDValue N0, SDValue N1,
+                               SDNodeFlags Flags = SDNodeFlags());
+
+  SDValue visitShiftByConstant(SDNode *N);
+
+  SDValue foldSelectOfConstants(SDNode *N);
+  SDValue foldVSelectOfConstants(SDNode *N);
+  SDValue foldBinOpIntoSelect(SDNode *BO);
+  bool SimplifySelectOps(SDNode *SELECT, SDValue LHS, SDValue RHS);
+  SDValue hoistLogicOpWithSameOpcodeHands(SDNode *N);
+  SDValue SimplifySelect(const SDLoc &DL, SDValue N0, SDValue N1, SDValue N2);
+  SDValue SimplifySelectCC(const SDLoc &DL, SDValue N0, SDValue N1, SDValue N2,
+                           SDValue N3, ISD::CondCode CC,
+                           bool NotExtCompare = false);
+  SDValue convertSelectOfFPConstantsToLoadOffset(const SDLoc &DL, SDValue N0,
+                                                 SDValue N1, SDValue N2,
+                                                 SDValue N3, ISD::CondCode CC);
+  SDValue foldSignChangeInBitcast(SDNode *N);
+  SDValue foldSelectCCToShiftAnd(const SDLoc &DL, SDValue N0, SDValue N1,
+                                 SDValue N2, SDValue N3, ISD::CondCode CC);
+  SDValue foldSelectOfBinops(SDNode *N);
+  SDValue foldSextSetcc(SDNode *N);
+  SDValue foldLogicOfSetCCs(bool IsAnd, SDValue N0, SDValue N1,
+                            const SDLoc &DL);
+  SDValue foldSubToUSubSat(EVT DstVT, SDNode *N);
+  SDValue foldABSToABD(SDNode *N);
+  SDValue unfoldMaskedMerge(SDNode *N);
+  SDValue unfoldExtremeBitClearingToShifts(SDNode *N);
+  SDValue SimplifySetCC(EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond,
+                        const SDLoc &DL, bool foldBooleans);
+  SDValue rebuildSetCC(SDValue N);
+
+  bool isSetCCEquivalent(SDValue N, SDValue &LHS, SDValue &RHS, SDValue &CC,
+                         bool MatchStrict = false) const;
+  bool isOneUseSetCC(SDValue N) const;
+
+  SDValue SimplifyNodeWithTwoResults(SDNode *N, unsigned LoOp, unsigned HiOp);
+  SDValue CombineConsecutiveLoads(SDNode *N, EVT VT);
+  SDValue foldBitcastedFPLogic(SDNode *N, SelectionDAG &DAG,
+                               const TargetLowering &TLI);
+
+  SDValue CombineExtLoad(SDNode *N);
+  SDValue CombineZExtLogicopShiftLoad(SDNode *N);
+  SDValue combineRepeatedFPDivisors(SDNode *N);
+  SDValue mergeInsertEltWithShuffle(SDNode *N, unsigned InsIndex);
+  SDValue combineInsertEltToShuffle(SDNode *N, unsigned InsIndex);
+  SDValue combineInsertEltToLoad(SDNode *N, unsigned InsIndex);
+  SDValue ConstantFoldBITCASTofBUILD_VECTOR(SDNode *, EVT);
+  SDValue BuildSDIV(SDNode *N);
+  SDValue BuildSDIVPow2(SDNode *N);
+  SDValue BuildUDIV(SDNode *N);
+  SDValue BuildSREMPow2(SDNode *N);
+  SDValue buildOptimizedSREM(SDValue N0, SDValue N1, SDNode *N);
+  SDValue BuildLogBase2(SDValue V, const SDLoc &DL);
+  SDValue BuildDivEstimate(SDValue N, SDValue Op, SDNodeFlags Flags);
+  SDValue buildRsqrtEstimate(SDValue Op, SDNodeFlags Flags);
+  SDValue buildSqrtEstimate(SDValue Op, SDNodeFlags Flags);
+  SDValue buildSqrtEstimateImpl(SDValue Op, SDNodeFlags Flags, bool Recip);
+  SDValue buildSqrtNROneConst(SDValue Arg, SDValue Est, unsigned Iterations,
+                              SDNodeFlags Flags, bool Reciprocal);
+  SDValue buildSqrtNRTwoConst(SDValue Arg, SDValue Est, unsigned Iterations,
+                              SDNodeFlags Flags, bool Reciprocal);
+  SDValue MatchBSwapHWordLow(SDNode *N, SDValue N0, SDValue N1,
+                             bool DemandHighBits = true);
+  SDValue MatchBSwapHWord(SDNode *N, SDValue N0, SDValue N1);
+  SDValue MatchRotatePosNeg(SDValue Shifted, SDValue Pos, SDValue Neg,
+                            SDValue InnerPos, SDValue InnerNeg, bool HasPos,
+                            unsigned PosOpcode, unsigned NegOpcode,
+                            const SDLoc &DL);
+  SDValue MatchFunnelPosNeg(SDValue N0, SDValue N1, SDValue Pos, SDValue Neg,
+                            SDValue InnerPos, SDValue InnerNeg, bool HasPos,
+                            unsigned PosOpcode, unsigned NegOpcode,
+                            const SDLoc &DL);
+  SDValue MatchRotate(SDValue LHS, SDValue RHS, const SDLoc &DL);
+  SDValue MatchLoadCombine(SDNode *N);
+  SDValue mergeTruncStores(StoreSDNode *N);
+  SDValue reduceLoadWidth(SDNode *N);
+  SDValue ReduceLoadOpStoreWidth(SDNode *N);
+  SDValue splitMergedValStore(StoreSDNode *ST);
+  SDValue TransformFPLoadStorePair(SDNode *N);
+  SDValue convertBuildVecZextToZext(SDNode *N);
+  SDValue convertBuildVecZextToBuildVecWithZeros(SDNode *N);
+  SDValue reduceBuildVecExtToExtBuildVec(SDNode *N);
+  SDValue reduceBuildVecTruncToBitCast(SDNode *N);
+  SDValue reduceBuildVecToShuffle(SDNode *N);
+  SDValue createBuildVecShuffle(const SDLoc &DL, SDNode *N,
+                                ArrayRef<int> VectorMask, SDValue VecIn1,
+                                SDValue VecIn2, unsigned LeftIdx,
+                                bool DidSplitVec);
+  SDValue matchVSelectOpSizesWithSetCC(SDNode *Cast);
+
+  /// Walk up chain skipping non-aliasing memory nodes,
+  /// looking for aliasing nodes and adding them to the Aliases vector.
+  void GatherAllAliases(SDNode *N, SDValue OriginalChain,
+                        SmallVectorImpl<SDValue> &Aliases);
+
+  /// Return true if there is any possibility that the two addresses overlap.
+  bool mayAlias(SDNode *Op0, SDNode *Op1) const;
+
+  /// Walk up the chain skipping non-aliasing memory nodes, looking for a
+  /// better chain (aliasing node).
+  SDValue FindBetterChain(SDNode *N, SDValue Chain);
+
+  /// Try to replace a store and any possibly adjacent stores on
+  /// consecutive chains with better chains. Return true only if St is
+  /// replaced.
+  ///
+  /// Notice that other chains may still be replaced even if the function
+  /// returns false.
+  bool findBetterNeighborChains(StoreSDNode *St);
+
+  // Helper for findBetterNeighborChains. Walk up the store chain to add
+  // additional chained stores that do not overlap and can be parallelized.
+  bool parallelizeChainedStores(StoreSDNode *St);
+
+  /// Holds a pointer to an LSBaseSDNode as well as information on where it
+  /// is located in a sequence of memory operations connected by a chain.
+  struct MemOpLink {
+    // Ptr to the mem node.
+    LSBaseSDNode *MemNode;
+
+    // Offset from the base ptr.
+    int64_t OffsetFromBase;
+
+    MemOpLink(LSBaseSDNode *N, int64_t Offset)
+        : MemNode(N), OffsetFromBase(Offset) {}
+  };
 
-      CommitTargetLoweringOpt(TLO);
-      return true;
-    }
+  // Classify the origin of a stored value.
+  enum class StoreSource { Unknown, Constant, Extract, Load };
+  StoreSource getStoreSource(SDValue StoreVal) {
+    switch (StoreVal.getOpcode()) {
+    case ISD::Constant:
+    case ISD::ConstantFP:
+      return StoreSource::Constant;
+    case ISD::BUILD_VECTOR:
+      if (ISD::isBuildVectorOfConstantSDNodes(StoreVal.getNode()) ||
+          ISD::isBuildVectorOfConstantFPSDNodes(StoreVal.getNode()))
+        return StoreSource::Constant;
+      return StoreSource::Unknown;
+    case ISD::EXTRACT_VECTOR_ELT:
+    case ISD::EXTRACT_SUBVECTOR:
+      return StoreSource::Extract;
+    case ISD::LOAD:
+      return StoreSource::Load;
+    default:
+      return StoreSource::Unknown;
+    }
+  }
+
+  /// This is a helper function for visitMUL to check the profitability
+  /// of folding (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).
+  /// MulNode is the original multiply, AddNode is (add x, c1),
+  /// and ConstNode is c2.
+  bool isMulAddWithConstProfitable(SDNode *MulNode, SDValue AddNode,
+                                   SDValue ConstNode);
+
+  /// This is a helper function for visitAND and visitZERO_EXTEND.  Returns
+  /// true if the (and (load x) c) pattern matches an extload.  ExtVT returns
+  /// the type of the loaded value to be extended.
+  bool isAndLoadExtLoad(ConstantSDNode *AndC, LoadSDNode *LoadN,
+                        EVT LoadResultTy, EVT &ExtVT);
+
+  /// Helper function to calculate whether the given Load/Store can have its
+  /// width reduced to ExtVT.
+  bool isLegalNarrowLdSt(LSBaseSDNode *LDSTN, ISD::LoadExtType ExtType,
+                         EVT &MemVT, unsigned ShAmt = 0);
+
+  /// Used by BackwardsPropagateMask to find suitable loads.
+  bool SearchForAndLoads(SDNode *N, SmallVectorImpl<LoadSDNode *> &Loads,
+                         SmallPtrSetImpl<SDNode *> &NodesWithConsts,
+                         ConstantSDNode *Mask, SDNode *&NodeToMask);
+  /// Attempt to propagate a given AND node back to load leaves so that they
+  /// can be combined into narrow loads.
+  bool BackwardsPropagateMask(SDNode *N);
+
+  /// Helper function for mergeConsecutiveStores which merges the component
+  /// store chains.
+  SDValue getMergeStoreChains(SmallVectorImpl<MemOpLink> &StoreNodes,
+                              unsigned NumStores);
+
+  /// Helper function for mergeConsecutiveStores which checks if all the store
+  /// nodes have the same underlying object. We can still reuse the first
+  /// store's pointer info if all the stores are from the same object.
+  bool hasSameUnderlyingObj(ArrayRef<MemOpLink> StoreNodes);
+
+  /// This is a helper function for mergeConsecutiveStores. When the source
+  /// elements of the consecutive stores are all constants or all extracted
+  /// vector elements, try to merge them into one larger store introducing
+  /// bitcasts if necessary.  \return True if a merged store was created.
+  bool mergeStoresOfConstantsOrVecElts(SmallVectorImpl<MemOpLink> &StoreNodes,
+                                       EVT MemVT, unsigned NumStores,
+                                       bool IsConstantSrc, bool UseVector,
+                                       bool UseTrunc);
+
+  /// This is a helper function for mergeConsecutiveStores. Stores that
+  /// potentially may be merged with St are placed in StoreNodes. RootNode is
+  /// a chain predecessor to all store candidates.
+  void getStoreMergeCandidates(StoreSDNode *St,
+                               SmallVectorImpl<MemOpLink> &StoreNodes,
+                               SDNode *&Root);
+
+  /// Helper function for mergeConsecutiveStores. Checks if candidate stores
+  /// have indirect dependency through their operands. RootNode is the
+  /// predecessor to all stores calculated by getStoreMergeCandidates and is
+  /// used to prune the dependency check. \return True if safe to merge.
+  bool checkMergeStoreCandidatesForDependencies(
+      SmallVectorImpl<MemOpLink> &StoreNodes, unsigned NumStores,
+      SDNode *RootNode);
+
+  /// This is a helper function for mergeConsecutiveStores. Given a list of
+  /// store candidates, find the first N that are consecutive in memory.
+  /// Returns 0 if there are not at least 2 consecutive stores to try merging.
+  unsigned getConsecutiveStores(SmallVectorImpl<MemOpLink> &StoreNodes,
+                                int64_t ElementSizeBytes) const;
+
+  /// This is a helper function for mergeConsecutiveStores. It is used for
+  /// store chains that are composed entirely of constant values.
+  bool tryStoreMergeOfConstants(SmallVectorImpl<MemOpLink> &StoreNodes,
+                                unsigned NumConsecutiveStores, EVT MemVT,
+                                SDNode *Root, bool AllowVectors);
+
+  /// This is a helper function for mergeConsecutiveStores. It is used for
+  /// store chains that are composed entirely of extracted vector elements.
+  /// When extracting multiple vector elements, try to store them in one
+  /// vector store rather than a sequence of scalar stores.
+  bool tryStoreMergeOfExtracts(SmallVectorImpl<MemOpLink> &StoreNodes,
+                               unsigned NumConsecutiveStores, EVT MemVT,
+                               SDNode *Root);
+
+  /// This is a helper function for mergeConsecutiveStores. It is used for
+  /// store chains that are composed entirely of loaded values.
+  bool tryStoreMergeOfLoads(SmallVectorImpl<MemOpLink> &StoreNodes,
+                            unsigned NumConsecutiveStores, EVT MemVT,
+                            SDNode *Root, bool AllowVectors,
+                            bool IsNonTemporalStore, bool IsNonTemporalLoad);
+
+  /// Merge consecutive store operations into a wide store.
+  /// This optimization uses wide integers or vectors when possible.
+  /// \return true if stores were merged.
+  bool mergeConsecutiveStores(StoreSDNode *St);
+
+  /// Try to transform a truncation where C is a constant:
+  ///     (trunc (and X, C)) -> (and (trunc X), (trunc C))
+  ///
+  /// \p N needs to be a truncation and its first operand an AND. Other
+  /// requirements are checked by the function (e.g. that trunc is
+  /// single-use); if they are not met, an empty SDValue is returned.
+  SDValue distributeTruncateThroughAnd(SDNode *N);
+
+  /// Helper function to determine whether the target supports the operation
+  /// given by \p Opcode for type \p VT, that is, whether the operation is
+  /// legal or custom before legalizing operations, and whether it is legal
+  /// (but not custom) after legalization.
+  bool hasOperation(unsigned Opcode, EVT VT) {
+    return TLI.isOperationLegalOrCustom(Opcode, VT, LegalOperations);
+  }
 
-    /// Check the specified vector node value to see if it can be simplified or
-    /// if things it uses can be simplified as it only uses some of the
-    /// elements. If so, return true.
-    bool SimplifyDemandedVectorElts(SDValue Op) {
-      // TODO: For now just pretend it cannot be simplified.
-      if (Op.getValueType().isScalableVector())
-        return false;
+public:
+  /// Runs the DAG combiner on all nodes in the worklist.
+  void Run(CombineLevel AtLevel);
 
-      unsigned NumElts = Op.getValueType().getVectorNumElements();
-      APInt DemandedElts = APInt::getAllOnes(NumElts);
-      return SimplifyDemandedVectorElts(Op, DemandedElts);
-    }
-
-    bool SimplifyDemandedBits(SDValue Op, const APInt &DemandedBits,
-                              const APInt &DemandedElts,
-                              bool AssumeSingleUse = false);
-    bool SimplifyDemandedVectorElts(SDValue Op, const APInt &DemandedElts,
-                                    bool AssumeSingleUse = false);
-
-    bool CombineToPreIndexedLoadStore(SDNode *N);
-    bool CombineToPostIndexedLoadStore(SDNode *N);
-    SDValue SplitIndexingFromLoad(LoadSDNode *LD);
-    bool SliceUpLoad(SDNode *N);
-
-    // Looks up the chain to find a unique (unaliased) store feeding the passed
-    // load. If no such store is found, returns a nullptr.
-    // Note: This will look past a CALLSEQ_START if the load is chained to it so
-    //       so that it can find stack stores for byval params.
-    StoreSDNode *getUniqueStoreFeeding(LoadSDNode *LD, int64_t &Offset);
-    // Scalars have size 0 to distinguish from singleton vectors.
-    SDValue ForwardStoreValueToDirectLoad(LoadSDNode *LD);
-    bool getTruncatedStoreValue(StoreSDNode *ST, SDValue &Val);
-    bool extendLoadedValueToExtension(LoadSDNode *LD, SDValue &Val);
-
-    /// Replace an ISD::EXTRACT_VECTOR_ELT of a load with a narrowed
-    ///   load.
-    ///
-    /// \param EVE ISD::EXTRACT_VECTOR_ELT to be replaced.
-    /// \param InVecVT type of the input vector to EVE with bitcasts resolved.
-    /// \param EltNo index of the vector element to load.
-    /// \param OriginalLoad load that EVE came from to be replaced.
-    /// \returns EVE on success SDValue() on failure.
-    SDValue scalarizeExtractedVectorLoad(SDNode *EVE, EVT InVecVT,
-                                         SDValue EltNo,
-                                         LoadSDNode *OriginalLoad);
-    void ReplaceLoadWithPromotedLoad(SDNode *Load, SDNode *ExtLoad);
-    SDValue PromoteOperand(SDValue Op, EVT PVT, bool &Replace);
-    SDValue SExtPromoteOperand(SDValue Op, EVT PVT);
-    SDValue ZExtPromoteOperand(SDValue Op, EVT PVT);
-    SDValue PromoteIntBinOp(SDValue Op);
-    SDValue PromoteIntShiftOp(SDValue Op);
-    SDValue PromoteExtend(SDValue Op);
-    bool PromoteLoad(SDValue Op);
-
-    SDValue combineMinNumMaxNum(const SDLoc &DL, EVT VT, SDValue LHS,
-                                SDValue RHS, SDValue True, SDValue False,
-                                ISD::CondCode CC);
-
-    /// Call the node-specific routine that knows how to fold each
-    /// particular type of node. If that doesn't do anything, try the
-    /// target-specific DAG combines.
-    SDValue combine(SDNode *N);
-
-    // Visitation implementation - Implement dag node combining for different
-    // node types.  The semantics are as follows:
-    // Return Value:
-    //   SDValue.getNode() == 0 - No change was made
-    //   SDValue.getNode() == N - N was replaced, is dead and has been handled.
-    //   otherwise              - N should be replaced by the returned Operand.
-    //
-    SDValue visitTokenFactor(SDNode *N);
-    SDValue visitMERGE_VALUES(SDNode *N);
-    SDValue visitADD(SDNode *N);
-    SDValue visitADDLike(SDNode *N);
-    SDValue visitADDLikeCommutative(SDValue N0, SDValue N1, SDNode *LocReference);
-    SDValue visitSUB(SDNode *N);
-    SDValue visitADDSAT(SDNode *N);
-    SDValue visitSUBSAT(SDNode *N);
-    SDValue visitADDC(SDNode *N);
-    SDValue visitADDO(SDNode *N);
-    SDValue visitUADDOLike(SDValue N0, SDValue N1, SDNode *N);
-    SDValue visitSUBC(SDNode *N);
-    SDValue visitSUBO(SDNode *N);
-    SDValue visitADDE(SDNode *N);
-    SDValue visitUADDO_CARRY(SDNode *N);
-    SDValue visitSADDO_CARRY(SDNode *N);
-    SDValue visitUADDO_CARRYLike(SDValue N0, SDValue N1, SDValue CarryIn,
-                                 SDNode *N);
-    SDValue visitSUBE(SDNode *N);
-    SDValue visitUSUBO_CARRY(SDNode *N);
-    SDValue visitSSUBO_CARRY(SDNode *N);
-    SDValue visitMUL(SDNode *N);
-    SDValue visitMULFIX(SDNode *N);
-    SDValue useDivRem(SDNode *N);
-    SDValue visitSDIV(SDNode *N);
-    SDValue visitSDIVLike(SDValue N0, SDValue N1, SDNode *N);
-    SDValue visitUDIV(SDNode *N);
-    SDValue visitUDIVLike(SDValue N0, SDValue N1, SDNode *N);
-    SDValue visitREM(SDNode *N);
-    SDValue visitMULHU(SDNode *N);
-    SDValue visitMULHS(SDNode *N);
-    SDValue visitAVG(SDNode *N);
-    SDValue visitABD(SDNode *N);
-    SDValue visitSMUL_LOHI(SDNode *N);
-    SDValue visitUMUL_LOHI(SDNode *N);
-    SDValue visitMULO(SDNode *N);
-    SDValue visitIMINMAX(SDNode *N);
-    SDValue visitAND(SDNode *N);
-    SDValue visitANDLike(SDValue N0, SDValue N1, SDNode *N);
-    SDValue visitOR(SDNode *N);
-    SDValue visitORLike(SDValue N0, SDValue N1, SDNode *N);
-    SDValue visitXOR(SDNode *N);
-    SDValue SimplifyVCastOp(SDNode *N, const SDLoc &DL);
-    SDValue SimplifyVBinOp(SDNode *N, const SDLoc &DL);
-    SDValue visitSHL(SDNode *N);
-    SDValue visitSRA(SDNode *N);
-    SDValue visitSRL(SDNode *N);
-    SDValue visitFunnelShift(SDNode *N);
-    SDValue visitSHLSAT(SDNode *N);
-    SDValue visitRotate(SDNode *N);
-    SDValue visitABS(SDNode *N);
-    SDValue visitBSWAP(SDNode *N);
-    SDValue visitBITREVERSE(SDNode *N);
-    SDValue visitCTLZ(SDNode *N);
-    SDValue visitCTLZ_ZERO_UNDEF(SDNode *N);
-    SDValue visitCTTZ(SDNode *N);
-    SDValue visitCTTZ_ZERO_UNDEF(SDNode *N);
-    SDValue visitCTPOP(SDNode *N);
-    SDValue visitSELECT(SDNode *N);
-    SDValue visitVSELECT(SDNode *N);
-    SDValue visitSELECT_CC(SDNode *N);
-    SDValue visitSETCC(SDNode *N);
-    SDValue visitSETCCCARRY(SDNode *N);
-    SDValue visitSIGN_EXTEND(SDNode *N);
-    SDValue visitZERO_EXTEND(SDNode *N);
-    SDValue visitANY_EXTEND(SDNode *N);
-    SDValue visitAssertExt(SDNode *N);
-    SDValue visitAssertAlign(SDNode *N);
-    SDValue visitSIGN_EXTEND_INREG(SDNode *N);
-    SDValue visitEXTEND_VECTOR_INREG(SDNode *N);
-    SDValue visitTRUNCATE(SDNode *N);
-    SDValue visitBITCAST(SDNode *N);
-    SDValue visitFREEZE(SDNode *N);
-    SDValue visitBUILD_PAIR(SDNode *N);
-    SDValue visitFADD(SDNode *N);
-    SDValue visitVP_FADD(SDNode *N);
-    SDValue visitVP_FSUB(SDNode *N);
-    SDValue visitSTRICT_FADD(SDNode *N);
-    SDValue visitFSUB(SDNode *N);
-    SDValue visitFMUL(SDNode *N);
-    template <class MatchContextClass> SDValue visitFMA(SDNode *N);
-    SDValue visitFDIV(SDNode *N);
-    SDValue visitFREM(SDNode *N);
-    SDValue visitFSQRT(SDNode *N);
-    SDValue visitFCOPYSIGN(SDNode *N);
-    SDValue visitFPOW(SDNode *N);
-    SDValue visitSINT_TO_FP(SDNode *N);
-    SDValue visitUINT_TO_FP(SDNode *N);
-    SDValue visitFP_TO_SINT(SDNode *N);
-    SDValue visitFP_TO_UINT(SDNode *N);
-    SDValue visitFP_ROUND(SDNode *N);
-    SDValue visitFP_EXTEND(SDNode *N);
-    SDValue visitFNEG(SDNode *N);
-    SDValue visitFABS(SDNode *N);
-    SDValue visitFCEIL(SDNode *N);
-    SDValue visitFTRUNC(SDNode *N);
-    SDValue visitFFREXP(SDNode *N);
-    SDValue visitFFLOOR(SDNode *N);
-    SDValue visitFMinMax(SDNode *N);
-    SDValue visitBRCOND(SDNode *N);
-    SDValue visitBR_CC(SDNode *N);
-    SDValue visitLOAD(SDNode *N);
-
-    SDValue replaceStoreChain(StoreSDNode *ST, SDValue BetterChain);
-    SDValue replaceStoreOfFPConstant(StoreSDNode *ST);
-    SDValue replaceStoreOfInsertLoad(StoreSDNode *ST);
-
-    bool refineExtractVectorEltIntoMultipleNarrowExtractVectorElts(SDNode *N);
-
-    SDValue visitSTORE(SDNode *N);
-    SDValue visitLIFETIME_END(SDNode *N);
-    SDValue visitINSERT_VECTOR_ELT(SDNode *N);
-    SDValue visitEXTRACT_VECTOR_ELT(SDNode *N);
-    SDValue visitBUILD_VECTOR(SDNode *N);
-    SDValue visitCONCAT_VECTORS(SDNode *N);
-    SDValue visitEXTRACT_SUBVECTOR(SDNode *N);
-    SDValue visitVECTOR_SHUFFLE(SDNode *N);
-    SDValue visitSCALAR_TO_VECTOR(SDNode *N);
-    SDValue visitINSERT_SUBVECTOR(SDNode *N);
-    SDValue visitMLOAD(SDNode *N);
-    SDValue visitMSTORE(SDNode *N);
-    SDValue visitMGATHER(SDNode *N);
-    SDValue visitMSCATTER(SDNode *N);
-    SDValue visitVPGATHER(SDNode *N);
-    SDValue visitVPSCATTER(SDNode *N);
-    SDValue visitFP_TO_FP16(SDNode *N);
-    SDValue visitFP16_TO_FP(SDNode *N);
-    SDValue visitFP_TO_BF16(SDNode *N);
-    SDValue visitVECREDUCE(SDNode *N);
-    SDValue visitVPOp(SDNode *N);
-    SDValue visitGET_FPENV_MEM(SDNode *N);
-    SDValue visitSET_FPENV_MEM(SDNode *N);
-
-    template <class MatchContextClass>
-    SDValue visitFADDForFMACombine(SDNode *N);
-    template <class MatchContextClass>
-    SDValue visitFSUBForFMACombine(SDNode *N);
-    SDValue visitFMULForFMADistributiveCombine(SDNode *N);
-
-    SDValue XformToShuffleWithZero(SDNode *N);
-    bool reassociationCanBreakAddressingModePattern(unsigned Opc,
-                                                    const SDLoc &DL,
-                                                    SDNode *N,
-                                                    SDValue N0,
-                                                    SDValue N1);
-    SDValue reassociateOpsCommutative(unsigned Opc, const SDLoc &DL, SDValue N0,
-                                      SDValue N1, SDNodeFlags Flags);
-    SDValue reassociateOps(unsigned Opc, const SDLoc &DL, SDValue N0,
-                           SDValue N1, SDNodeFlags Flags);
-    SDValue reassociateReduction(unsigned RedOpc, unsigned Opc, const SDLoc &DL,
-                                 EVT VT, SDValue N0, SDValue N1,
-                                 SDNodeFlags Flags = SDNodeFlags());
-
-    SDValue visitShiftByConstant(SDNode *N);
-
-    SDValue foldSelectOfConstants(SDNode *N);
-    SDValue foldVSelectOfConstants(SDNode *N);
-    SDValue foldBinOpIntoSelect(SDNode *BO);
-    bool SimplifySelectOps(SDNode *SELECT, SDValue LHS, SDValue RHS);
-    SDValue hoistLogicOpWithSameOpcodeHands(SDNode *N);
-    SDValue SimplifySelect(const SDLoc &DL, SDValue N0, SDValue N1, SDValue N2);
-    SDValue SimplifySelectCC(const SDLoc &DL, SDValue N0, SDValue N1,
-                             SDValue N2, SDValue N3, ISD::CondCode CC,
-                             bool NotExtCompare = false);
-    SDValue convertSelectOfFPConstantsToLoadOffset(
-        const SDLoc &DL, SDValue N0, SDValue N1, SDValue N2, SDValue N3,
-        ISD::CondCode CC);
-    SDValue foldSignChangeInBitcast(SDNode *N);
-    SDValue foldSelectCCToShiftAnd(const SDLoc &DL, SDValue N0, SDValue N1,
-                                   SDValue N2, SDValue N3, ISD::CondCode CC);
-    SDValue foldSelectOfBinops(SDNode *N);
-    SDValue foldSextSetcc(SDNode *N);
-    SDValue foldLogicOfSetCCs(bool IsAnd, SDValue N0, SDValue N1,
-                              const SDLoc &DL);
-    SDValue foldSubToUSubSat(EVT DstVT, SDNode *N);
-    SDValue foldABSToABD(SDNode *N);
-    SDValue unfoldMaskedMerge(SDNode *N);
-    SDValue unfoldExtremeBitClearingToShifts(SDNode *N);
-    SDValue SimplifySetCC(EVT VT, SDValue N0, SDValue N1, ISD::CondCode Cond,
-                          const SDLoc &DL, bool foldBooleans);
-    SDValue rebuildSetCC(SDValue N);
-
-    bool isSetCCEquivalent(SDValue N, SDValue &LHS, SDValue &RHS,
-                           SDValue &CC, bool MatchStrict = false) const;
-    bool isOneUseSetCC(SDValue N) const;
-
-    SDValue SimplifyNodeWithTwoResults(SDNode *N, unsigned LoOp,
-                                         unsigned HiOp);
-    SDValue CombineConsecutiveLoads(SDNode *N, EVT VT);
-    SDValue foldBitcastedFPLogic(SDNode *N, SelectionDAG &DAG,
-                                 const TargetLowering &TLI);
-
-    SDValue CombineExtLoad(SDNode *N);
-    SDValue CombineZExtLogicopShiftLoad(SDNode *N);
-    SDValue combineRepeatedFPDivisors(SDNode *N);
-    SDValue mergeInsertEltWithShuffle(SDNode *N, unsigned InsIndex);
-    SDValue combineInsertEltToShuffle(SDNode *N, unsigned InsIndex);
-    SDValue combineInsertEltToLoad(SDNode *N, unsigned InsIndex);
-    SDValue ConstantFoldBITCASTofBUILD_VECTOR(SDNode *, EVT);
-    SDValue BuildSDIV(SDNode *N);
-    SDValue BuildSDIVPow2(SDNode *N);
-    SDValue BuildUDIV(SDNode *N);
-    SDValue BuildSREMPow2(SDNode *N);
-    SDValue buildOptimizedSREM(SDValue N0, SDValue N1, SDNode *N);
-    SDValue BuildLogBase2(SDValue V, const SDLoc &DL);
-    SDValue BuildDivEstimate(SDValue N, SDValue Op, SDNodeFlags Flags);
-    SDValue buildRsqrtEstimate(SDValue Op, SDNodeFlags Flags);
-    SDValue buildSqrtEstimate(SDValue Op, SDNodeFlags Flags);
-    SDValue buildSqrtEstimateImpl(SDValue Op, SDNodeFlags Flags, bool Recip);
-    SDValue buildSqrtNROneConst(SDValue Arg, SDValue Est, unsigned Iterations,
-                                SDNodeFlags Flags, bool Reciprocal);
-    SDValue buildSqrtNRTwoConst(SDValue Arg, SDValue Est, unsigned Iterations,
-                                SDNodeFlags Flags, bool Reciprocal);
-    SDValue MatchBSwapHWordLow(SDNode *N, SDValue N0, SDValue N1,
-                               bool DemandHighBits = true);
-    SDValue MatchBSwapHWord(SDNode *N, SDValue N0, SDValue N1);
-    SDValue MatchRotatePosNeg(SDValue Shifted, SDValue Pos, SDValue Neg,
-                              SDValue InnerPos, SDValue InnerNeg, bool HasPos,
-                              unsigned PosOpcode, unsigned NegOpcode,
-                              const SDLoc &DL);
-    SDValue MatchFunnelPosNeg(SDValue N0, SDValue N1, SDValue Pos, SDValue Neg,
-                              SDValue InnerPos, SDValue InnerNeg, bool HasPos,
-                              unsigned PosOpcode, unsigned NegOpcode,
-                              const SDLoc &DL);
-    SDValue MatchRotate(SDValue LHS, SDValue RHS, const SDLoc &DL);
-    SDValue MatchLoadCombine(SDNode *N);
-    SDValue mergeTruncStores(StoreSDNode *N);
-    SDValue reduceLoadWidth(SDNode *N);
-    SDValue ReduceLoadOpStoreWidth(SDNode *N);
-    SDValue splitMergedValStore(StoreSDNode *ST);
-    SDValue TransformFPLoadStorePair(SDNode *N);
-    SDValue convertBuildVecZextToZext(SDNode *N);
-    SDValue convertBuildVecZextToBuildVecWithZeros(SDNode *N);
-    SDValue reduceBuildVecExtToExtBuildVec(SDNode *N);
-    SDValue reduceBuildVecTruncToBitCast(SDNode *N);
-    SDValue reduceBuildVecToShuffle(SDNode *N);
-    SDValue createBuildVecShuffle(const SDLoc &DL, SDNode *N,
-                                  ArrayRef<int> VectorMask, SDValue VecIn1,
-                                  SDValue VecIn2, unsigned LeftIdx,
-                                  bool DidSplitVec);
-    SDValue matchVSelectOpSizesWithSetCC(SDNode *Cast);
-
-    /// Walk up chain skipping non-aliasing memory nodes,
-    /// looking for aliasing nodes and adding them to the Aliases vector.
-    void GatherAllAliases(SDNode *N, SDValue OriginalChain,
-                          SmallVectorImpl<SDValue> &Aliases);
-
-    /// Return true if there is any possibility that the two addresses overlap.
-    bool mayAlias(SDNode *Op0, SDNode *Op1) const;
-
-    /// Walk up chain skipping non-aliasing memory nodes, looking for a better
-    /// chain (aliasing node.)
-    SDValue FindBetterChain(SDNode *N, SDValue Chain);
-
-    /// Try to replace a store and any possibly adjacent stores on
-    /// consecutive chains with better chains. Return true only if St is
-    /// replaced.
-    ///
-    /// Notice that other chains may still be replaced even if the function
-    /// returns false.
-    bool findBetterNeighborChains(StoreSDNode *St);
-
-    // Helper for findBetterNeighborChains. Walk up store chain add additional
-    // chained stores that do not overlap and can be parallelized.
-    bool parallelizeChainedStores(StoreSDNode *St);
-
-    /// Holds a pointer to an LSBaseSDNode as well as information on where it
-    /// is located in a sequence of memory operations connected by a chain.
-    struct MemOpLink {
-      // Ptr to the mem node.
-      LSBaseSDNode *MemNode;
-
-      // Offset from the base ptr.
-      int64_t OffsetFromBase;
-
-      MemOpLink(LSBaseSDNode *N, int64_t Offset)
-          : MemNode(N), OffsetFromBase(Offset) {}
-    };
+  SelectionDAG &getDAG() const { return DAG; }
 
-    // Classify the origin of a stored value.
-    enum class StoreSource { Unknown, Constant, Extract, Load };
-    StoreSource getStoreSource(SDValue StoreVal) {
-      switch (StoreVal.getOpcode()) {
-      case ISD::Constant:
-      case ISD::ConstantFP:
-        return StoreSource::Constant;
-      case ISD::BUILD_VECTOR:
-        if (ISD::isBuildVectorOfConstantSDNodes(StoreVal.getNode()) ||
-            ISD::isBuildVectorOfConstantFPSDNodes(StoreVal.getNode()))
-          return StoreSource::Constant;
-        return StoreSource::Unknown;
-      case ISD::EXTRACT_VECTOR_ELT:
-      case ISD::EXTRACT_SUBVECTOR:
-        return StoreSource::Extract;
-      case ISD::LOAD:
-        return StoreSource::Load;
-      default:
-        return StoreSource::Unknown;
-      }
-    }
+  /// Returns a type large enough to hold any valid shift amount - before type
+  /// legalization these can be huge.
+  EVT getShiftAmountTy(EVT LHSTy) {
+    assert(LHSTy.isInteger() && "Shift amount is not an integer type!");
+    return TLI.getShiftAmountTy(LHSTy, DAG.getDataLayout(), LegalTypes);
+  }
 
-    /// This is a helper function for visitMUL to check the profitability
-    /// of folding (mul (add x, c1), c2) -> (add (mul x, c2), c1*c2).
-    /// MulNode is the original multiply, AddNode is (add x, c1),
-    /// and ConstNode is c2.
-    bool isMulAddWithConstProfitable(SDNode *MulNode, SDValue AddNode,
-                                     SDValue ConstNode);
-
-    /// This is a helper function for visitAND and visitZERO_EXTEND.  Returns
-    /// true if the (and (load x) c) pattern matches an extload.  ExtVT returns
-    /// the type of the loaded value to be extended.
-    bool isAndLoadExtLoad(ConstantSDNode *AndC, LoadSDNode *LoadN,
-                          EVT LoadResultTy, EVT &ExtVT);
-
-    /// Helper function to calculate whether the given Load/Store can have its
-    /// width reduced to ExtVT.
-    bool isLegalNarrowLdSt(LSBaseSDNode *LDSTN, ISD::LoadExtType ExtType,
-                           EVT &MemVT, unsigned ShAmt = 0);
-
-    /// Used by BackwardsPropagateMask to find suitable loads.
-    bool SearchForAndLoads(SDNode *N, SmallVectorImpl<LoadSDNode*> &Loads,
-                           SmallPtrSetImpl<SDNode*> &NodesWithConsts,
-                           ConstantSDNode *Mask, SDNode *&NodeToMask);
-    /// Attempt to propagate a given AND node back to load leaves so that they
-    /// can be combined into narrow loads.
-    bool BackwardsPropagateMask(SDNode *N);
-
-    /// Helper function for mergeConsecutiveStores which merges the component
-    /// store chains.
-    SDValue getMergeStoreChains(SmallVectorImpl<MemOpLink> &StoreNodes,
-                                unsigned NumStores);
-
-    /// Helper function for mergeConsecutiveStores which checks if all the store
-    /// nodes have the same underlying object. We can still reuse the first
-    /// store's pointer info if all the stores are from the same object.
-    bool hasSameUnderlyingObj(ArrayRef<MemOpLink> StoreNodes);
-
-    /// This is a helper function for mergeConsecutiveStores. When the source
-    /// elements of the consecutive stores are all constants or all extracted
-    /// vector elements, try to merge them into one larger store introducing
-    /// bitcasts if necessary.  \return True if a merged store was created.
-    bool mergeStoresOfConstantsOrVecElts(SmallVectorImpl<MemOpLink> &StoreNodes,
-                                         EVT MemVT, unsigned NumStores,
-                                         bool IsConstantSrc, bool UseVector,
-                                         bool UseTrunc);
-
-    /// This is a helper function for mergeConsecutiveStores. Stores that
-    /// potentially may be merged with St are placed in StoreNodes. RootNode is
-    /// a chain predecessor to all store candidates.
-    void getStoreMergeCandidates(StoreSDNode *St,
-                                 SmallVectorImpl<MemOpLink> &StoreNodes,
-                                 SDNode *&Root);
-
-    /// Helper function for mergeConsecutiveStores. Checks if candidate stores
-    /// have indirect dependency through their operands. RootNode is the
-    /// predecessor to all stores calculated by getStoreMergeCandidates and is
-    /// used to prune the dependency check. \return True if safe to merge.
-    bool checkMergeStoreCandidatesForDependencies(
-        SmallVectorImpl<MemOpLink> &StoreNodes, unsigned NumStores,
-        SDNode *RootNode);
-
-    /// This is a helper function for mergeConsecutiveStores. Given a list of
-    /// store candidates, find the first N that are consecutive in memory.
-    /// Returns 0 if there are not at least 2 consecutive stores to try merging.
-    unsigned getConsecutiveStores(SmallVectorImpl<MemOpLink> &StoreNodes,
-                                  int64_t ElementSizeBytes) const;
-
-    /// This is a helper function for mergeConsecutiveStores. It is used for
-    /// store chains that are composed entirely of constant values.
-    bool tryStoreMergeOfConstants(SmallVectorImpl<MemOpLink> &StoreNodes,
-                                  unsigned NumConsecutiveStores,
-                                  EVT MemVT, SDNode *Root, bool AllowVectors);
-
-    /// This is a helper function for mergeConsecutiveStores. It is used for
-    /// store chains that are composed entirely of extracted vector elements.
-    /// When extracting multiple vector elements, try to store them in one
-    /// vector store rather than a sequence of scalar stores.
-    bool tryStoreMergeOfExtracts(SmallVectorImpl<MemOpLink> &StoreNodes,
-                                 unsigned NumConsecutiveStores, EVT MemVT,
-                                 SDNode *Root);
-
-    /// This is a helper function for mergeConsecutiveStores. It is used for
-    /// store chains that are composed entirely of loaded values.
-    bool tryStoreMergeOfLoads(SmallVectorImpl<MemOpLink> &StoreNodes,
-                              unsigned NumConsecutiveStores, EVT MemVT,
-                              SDNode *Root, bool AllowVectors,
-                              bool IsNonTemporalStore, bool IsNonTemporalLoad);
-
-    /// Merge consecutive store operations into a wide store.
-    /// This optimization uses wide integers or vectors when possible.
-    /// \return true if stores were merged.
-    bool mergeConsecutiveStores(StoreSDNode *St);
-
-    /// Try to transform a truncation where C is a constant:
-    ///     (trunc (and X, C)) -> (and (trunc X), (trunc C))
-    ///
-    /// \p N needs to be a truncation and its first operand an AND. Other
-    /// requirements are checked by the function (e.g. that trunc is
-    /// single-use) and if missed an empty SDValue is returned.
-    SDValue distributeTruncateThroughAnd(SDNode *N);
-
-    /// Helper function to determine whether the target supports operation
-    /// given by \p Opcode for type \p VT, that is, whether the operation
-    /// is legal or custom before legalizing operations, and whether is
-    /// legal (but not custom) after legalization.
-    bool hasOperation(unsigned Opcode, EVT VT) {
-      return TLI.isOperationLegalOrCustom(Opcode, VT, LegalOperations);
-    }
-
-  public:
-    /// Runs the dag combiner on all nodes in the work list
-    void Run(CombineLevel AtLevel);
-
-    SelectionDAG &getDAG() const { return DAG; }
-
-    /// Returns a type large enough to hold any valid shift amount - before type
-    /// legalization these can be huge.
-    EVT getShiftAmountTy(EVT LHSTy) {
-      assert(LHSTy.isInteger() && "Shift amount is not an integer type!");
-      return TLI.getShiftAmountTy(LHSTy, DAG.getDataLayout(), LegalTypes);
-    }
-
-    /// This method returns true if we are running before type legalization or
-    /// if the specified VT is legal.
-    bool isTypeLegal(const EVT &VT) {
-      if (!LegalTypes) return true;
-      return TLI.isTypeLegal(VT);
-    }
-
-    /// Convenience wrapper around TargetLowering::getSetCCResultType
-    EVT getSetCCResultType(EVT VT) const {
-      return TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT);
-    }
-
-    void ExtendSetCCUses(const SmallVectorImpl<SDNode *> &SetCCs,
-                         SDValue OrigLoad, SDValue ExtLoad,
-                         ISD::NodeType ExtType);
-  };
+  /// This method returns true if we are running before type legalization or
+  /// if the specified VT is legal.
+  bool isTypeLegal(const EVT &VT) {
+    if (!LegalTypes)
+      return true;
+    return TLI.isTypeLegal(VT);
+  }
+
+  /// Convenience wrapper around TargetLowering::getSetCCResultType
+  EVT getSetCCResultType(EVT VT) const {
+    return TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT);
+  }
+
+  void ExtendSetCCUses(const SmallVectorImpl<SDNode *> &SetCCs,
+                       SDValue OrigLoad, SDValue ExtLoad,
+                       ISD::NodeType ExtType);
+};
 
 /// This class is a DAGUpdateListener that removes any deleted
 /// nodes from the worklist.
@@ -858,11 +849,9 @@ class WorklistRemover : public SelectionDAG::DAGUpdateListener {
 
 public:
   explicit WorklistRemover(DAGCombiner &dc)
-    : SelectionDAG::DAGUpdateListener(dc.getDAG()), DC(dc) {}
+      : SelectionDAG::DAGUpdateListener(dc.getDAG()), DC(dc) {}
 
-  void NodeDeleted(SDNode *N, SDNode *E) override {
-    DC.removeFromWorklist(N);
-  }
+  void NodeDeleted(SDNode *N, SDNode *E) override { DC.removeFromWorklist(N); }
 };
 
 class WorklistInserter : public SelectionDAG::DAGUpdateListener {
@@ -913,8 +902,7 @@ class VPMatchContext {
     if (auto RootMaskPos = ISD::getVPMaskIdx(Root->getOpcode()))
       RootMaskOp = Root->getOperand(*RootMaskPos);
 
-    if (auto RootVLenPos =
-            ISD::getVPExplicitVectorLengthIdx(Root->getOpcode()))
+    if (auto RootVLenPos = ISD::getVPExplicitVectorLengthIdx(Root->getOpcode()))
       RootVectorLenOp = Root->getOperand(*RootVLenPos);
   }
 
@@ -962,8 +950,7 @@ class VPMatchContext {
     unsigned VPOpcode = ISD::getVPForBaseOpcode(Opcode);
     assert(ISD::getVPMaskIdx(VPOpcode) == 2 &&
            ISD::getVPExplicitVectorLengthIdx(VPOpcode) == 3);
-    return DAG.getNode(VPOpcode, DL, VT,
-                       {N1, N2, RootMaskOp, RootVectorLenOp});
+    return DAG.getNode(VPOpcode, DL, VT, {N1, N2, RootMaskOp, RootVectorLenOp});
   }
 
   SDValue getNode(unsigned Opcode, const SDLoc &DL, EVT VT, SDValue N1,
@@ -1016,32 +1003,32 @@ class VPMatchContext {
 //===----------------------------------------------------------------------===//
 
 void TargetLowering::DAGCombinerInfo::AddToWorklist(SDNode *N) {
-  ((DAGCombiner*)DC)->AddToWorklist(N);
+  ((DAGCombiner *)DC)->AddToWorklist(N);
 }
 
-SDValue TargetLowering::DAGCombinerInfo::
-CombineTo(SDNode *N, ArrayRef<SDValue> To, bool AddTo) {
-  return ((DAGCombiner*)DC)->CombineTo(N, &To[0], To.size(), AddTo);
+SDValue TargetLowering::DAGCombinerInfo::CombineTo(SDNode *N,
+                                                   ArrayRef<SDValue> To,
+                                                   bool AddTo) {
+  return ((DAGCombiner *)DC)->CombineTo(N, &To[0], To.size(), AddTo);
 }
 
-SDValue TargetLowering::DAGCombinerInfo::
-CombineTo(SDNode *N, SDValue Res, bool AddTo) {
-  return ((DAGCombiner*)DC)->CombineTo(N, Res, AddTo);
+SDValue TargetLowering::DAGCombinerInfo::CombineTo(SDNode *N, SDValue Res,
+                                                   bool AddTo) {
+  return ((DAGCombiner *)DC)->CombineTo(N, Res, AddTo);
 }
 
-SDValue TargetLowering::DAGCombinerInfo::
-CombineTo(SDNode *N, SDValue Res0, SDValue Res1, bool AddTo) {
-  return ((DAGCombiner*)DC)->CombineTo(N, Res0, Res1, AddTo);
+SDValue TargetLowering::DAGCombinerInfo::CombineTo(SDNode *N, SDValue Res0,
+                                                   SDValue Res1, bool AddTo) {
+  return ((DAGCombiner *)DC)->CombineTo(N, Res0, Res1, AddTo);
 }
 
-bool TargetLowering::DAGCombinerInfo::
-recursivelyDeleteUnusedNodes(SDNode *N) {
-  return ((DAGCombiner*)DC)->recursivelyDeleteUnusedNodes(N);
+bool TargetLowering::DAGCombinerInfo::recursivelyDeleteUnusedNodes(SDNode *N) {
+  return ((DAGCombiner *)DC)->recursivelyDeleteUnusedNodes(N);
 }
 
-void TargetLowering::DAGCombinerInfo::
-CommitTargetLoweringOpt(const TargetLowering::TargetLoweringOpt &TLO) {
-  return ((DAGCombiner*)DC)->CommitTargetLoweringOpt(TLO);
+void TargetLowering::DAGCombinerInfo::CommitTargetLoweringOpt(
+    const TargetLowering::TargetLoweringOpt &TLO) {
+  return ((DAGCombiner *)DC)->CommitTargetLoweringOpt(TLO);
 }
 
 //===----------------------------------------------------------------------===//
@@ -1082,16 +1069,15 @@ bool DAGCombiner::isSetCCEquivalent(SDValue N, SDValue &LHS, SDValue &RHS,
   if (N.getOpcode() == ISD::SETCC) {
     LHS = N.getOperand(0);
     RHS = N.getOperand(1);
-    CC  = N.getOperand(2);
+    CC = N.getOperand(2);
     return true;
   }
 
-  if (MatchStrict &&
-      (N.getOpcode() == ISD::STRICT_FSETCC ||
-       N.getOpcode() == ISD::STRICT_FSETCCS)) {
+  if (MatchStrict && (N.getOpcode() == ISD::STRICT_FSETCC ||
+                      N.getOpcode() == ISD::STRICT_FSETCCS)) {
     LHS = N.getOperand(1);
     RHS = N.getOperand(2);
-    CC  = N.getOperand(3);
+    CC = N.getOperand(3);
     return true;
   }
 
@@ -1105,7 +1091,7 @@ bool DAGCombiner::isSetCCEquivalent(SDValue N, SDValue &LHS, SDValue &RHS,
 
   LHS = N.getOperand(0);
   RHS = N.getOperand(1);
-  CC  = N.getOperand(4);
+  CC = N.getOperand(4);
   return true;
 }
 
@@ -1182,11 +1168,8 @@ static bool canSplitIdx(LoadSDNode *LD) {
           !cast<ConstantSDNode>(LD->getOperand(2))->isOpaque());
 }
 
-bool DAGCombiner::reassociationCanBreakAddressingModePattern(unsigned Opc,
-                                                             const SDLoc &DL,
-                                                             SDNode *N,
-                                                             SDValue N0,
-                                                             SDValue N1) {
+bool DAGCombiner::reassociationCanBreakAddressingModePattern(
+    unsigned Opc, const SDLoc &DL, SDNode *N, SDValue N0, SDValue N1) {
   // Currently this only tries to ensure we don't undo the GEP splits done by
   // CodeGenPrepare when shouldConsiderGEPOffsetSplit is true. To ensure this,
   // we check if the following transformation would be problematic:
@@ -1408,8 +1391,7 @@ SDValue DAGCombiner::CombineTo(SDNode *N, const SDValue *To, unsigned NumTo,
              To[0].dump(&DAG);
              dbgs() << " and " << NumTo - 1 << " other values\n");
   for (unsigned i = 0, e = NumTo; i != e; ++i)
-    assert((!To[i].getNode() ||
-            N->getValueType(i) == To[i].getValueType()) &&
+    assert((!To[i].getNode() || N->getValueType(i) == To[i].getValueType()) &&
            "Cannot combine value to value of different type!");
 
   WorklistRemover DeadNodes(*this);
@@ -1430,8 +1412,8 @@ SDValue DAGCombiner::CombineTo(SDNode *N, const SDValue *To, unsigned NumTo,
   return SDValue(N, 0);
 }
 
-void DAGCombiner::
-CommitTargetLoweringOpt(const TargetLowering::TargetLoweringOpt &TLO) {
+void DAGCombiner::CommitTargetLoweringOpt(
+    const TargetLowering::TargetLoweringOpt &TLO) {
   // Replace the old value with the new one.
   ++NodesCombined;
   LLVM_DEBUG(dbgs() << "\nReplacing.2 "; TLO.Old.dump(&DAG);
@@ -1505,17 +1487,17 @@ SDValue DAGCombiner::PromoteOperand(SDValue Op, EVT PVT, bool &Replace) {
   if (ISD::isUNINDEXEDLoad(Op.getNode())) {
     LoadSDNode *LD = cast<LoadSDNode>(Op);
     EVT MemVT = LD->getMemoryVT();
-    ISD::LoadExtType ExtType = ISD::isNON_EXTLoad(LD) ? ISD::EXTLOAD
-                                                      : LD->getExtensionType();
+    ISD::LoadExtType ExtType =
+        ISD::isNON_EXTLoad(LD) ? ISD::EXTLOAD : LD->getExtensionType();
     Replace = true;
-    return DAG.getExtLoad(ExtType, DL, PVT,
-                          LD->getChain(), LD->getBasePtr(),
+    return DAG.getExtLoad(ExtType, DL, PVT, LD->getChain(), LD->getBasePtr(),
                           MemVT, LD->getMemOperand());
   }
 
   unsigned Opc = Op.getOpcode();
   switch (Opc) {
-  default: break;
+  default:
+    break;
   case ISD::AssertSext:
     if (SDValue Op0 = SExtPromoteOperand(Op.getOperand(0), PVT))
       return DAG.getNode(ISD::AssertSext, DL, PVT, Op0, Op.getOperand(1));
@@ -1526,7 +1508,7 @@ SDValue DAGCombiner::PromoteOperand(SDValue Op, EVT PVT, bool &Replace) {
     break;
   case ISD::Constant: {
     unsigned ExtOpc =
-      Op.getValueType().isByteSized() ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
+        Op.getValueType().isByteSized() ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
     return DAG.getNode(ExtOpc, DL, PVT, Op);
   }
   }
@@ -1742,11 +1724,11 @@ bool DAGCombiner::PromoteLoad(SDValue Op) {
     SDNode *N = Op.getNode();
     LoadSDNode *LD = cast<LoadSDNode>(N);
     EVT MemVT = LD->getMemoryVT();
-    ISD::LoadExtType ExtType = ISD::isNON_EXTLoad(LD) ? ISD::EXTLOAD
-                                                      : LD->getExtensionType();
-    SDValue NewLD = DAG.getExtLoad(ExtType, DL, PVT,
-                                   LD->getChain(), LD->getBasePtr(),
-                                   MemVT, LD->getMemOperand());
+    ISD::LoadExtType ExtType =
+        ISD::isNON_EXTLoad(LD) ? ISD::EXTLOAD : LD->getExtensionType();
+    SDValue NewLD =
+        DAG.getExtLoad(ExtType, DL, PVT, LD->getChain(), LD->getBasePtr(),
+                       MemVT, LD->getMemOperand());
     SDValue Result = DAG.getNode(ISD::TRUNCATE, DL, VT, NewLD);
 
     LLVM_DEBUG(dbgs() << "\nPromoting "; N->dump(&DAG); dbgs() << "\nTo: ";
@@ -1903,139 +1885,247 @@ void DAGCombiner::Run(CombineLevel AtLevel) {
 
 SDValue DAGCombiner::visit(SDNode *N) {
   switch (N->getOpcode()) {
-  default: break;
-  case ISD::TokenFactor:        return visitTokenFactor(N);
-  case ISD::MERGE_VALUES:       return visitMERGE_VALUES(N);
-  case ISD::ADD:                return visitADD(N);
-  case ISD::SUB:                return visitSUB(N);
+  default:
+    break;
+  case ISD::TokenFactor:
+    return visitTokenFactor(N);
+  case ISD::MERGE_VALUES:
+    return visitMERGE_VALUES(N);
+  case ISD::ADD:
+    return visitADD(N);
+  case ISD::SUB:
+    return visitSUB(N);
   case ISD::SADDSAT:
-  case ISD::UADDSAT:            return visitADDSAT(N);
+  case ISD::UADDSAT:
+    return visitADDSAT(N);
   case ISD::SSUBSAT:
-  case ISD::USUBSAT:            return visitSUBSAT(N);
-  case ISD::ADDC:               return visitADDC(N);
+  case ISD::USUBSAT:
+    return visitSUBSAT(N);
+  case ISD::ADDC:
+    return visitADDC(N);
   case ISD::SADDO:
-  case ISD::UADDO:              return visitADDO(N);
-  case ISD::SUBC:               return visitSUBC(N);
+  case ISD::UADDO:
+    return visitADDO(N);
+  case ISD::SUBC:
+    return visitSUBC(N);
   case ISD::SSUBO:
-  case ISD::USUBO:              return visitSUBO(N);
-  case ISD::ADDE:               return visitADDE(N);
-  case ISD::UADDO_CARRY:        return visitUADDO_CARRY(N);
-  case ISD::SADDO_CARRY:        return visitSADDO_CARRY(N);
-  case ISD::SUBE:               return visitSUBE(N);
-  case ISD::USUBO_CARRY:        return visitUSUBO_CARRY(N);
-  case ISD::SSUBO_CARRY:        return visitSSUBO_CARRY(N);
+  case ISD::USUBO:
+    return visitSUBO(N);
+  case ISD::ADDE:
+    return visitADDE(N);
+  case ISD::UADDO_CARRY:
+    return visitUADDO_CARRY(N);
+  case ISD::SADDO_CARRY:
+    return visitSADDO_CARRY(N);
+  case ISD::SUBE:
+    return visitSUBE(N);
+  case ISD::USUBO_CARRY:
+    return visitUSUBO_CARRY(N);
+  case ISD::SSUBO_CARRY:
+    return visitSSUBO_CARRY(N);
   case ISD::SMULFIX:
   case ISD::SMULFIXSAT:
   case ISD::UMULFIX:
-  case ISD::UMULFIXSAT:         return visitMULFIX(N);
-  case ISD::MUL:                return visitMUL(N);
-  case ISD::SDIV:               return visitSDIV(N);
-  case ISD::UDIV:               return visitUDIV(N);
+  case ISD::UMULFIXSAT:
+    return visitMULFIX(N);
+  case ISD::MUL:
+    return visitMUL(N);
+  case ISD::SDIV:
+    return visitSDIV(N);
+  case ISD::UDIV:
+    return visitUDIV(N);
   case ISD::SREM:
-  case ISD::UREM:               return visitREM(N);
-  case ISD::MULHU:              return visitMULHU(N);
-  case ISD::MULHS:              return visitMULHS(N);
+  case ISD::UREM:
+    return visitREM(N);
+  case ISD::MULHU:
+    return visitMULHU(N);
+  case ISD::MULHS:
+    return visitMULHS(N);
   case ISD::AVGFLOORS:
   case ISD::AVGFLOORU:
   case ISD::AVGCEILS:
-  case ISD::AVGCEILU:           return visitAVG(N);
+  case ISD::AVGCEILU:
+    return visitAVG(N);
   case ISD::ABDS:
-  case ISD::ABDU:               return visitABD(N);
-  case ISD::SMUL_LOHI:          return visitSMUL_LOHI(N);
-  case ISD::UMUL_LOHI:          return visitUMUL_LOHI(N);
+  case ISD::ABDU:
+    return visitABD(N);
+  case ISD::SMUL_LOHI:
+    return visitSMUL_LOHI(N);
+  case ISD::UMUL_LOHI:
+    return visitUMUL_LOHI(N);
   case ISD::SMULO:
-  case ISD::UMULO:              return visitMULO(N);
+  case ISD::UMULO:
+    return visitMULO(N);
   case ISD::SMIN:
   case ISD::SMAX:
   case ISD::UMIN:
-  case ISD::UMAX:               return visitIMINMAX(N);
-  case ISD::AND:                return visitAND(N);
-  case ISD::OR:                 return visitOR(N);
-  case ISD::XOR:                return visitXOR(N);
-  case ISD::SHL:                return visitSHL(N);
-  case ISD::SRA:                return visitSRA(N);
-  case ISD::SRL:                return visitSRL(N);
+  case ISD::UMAX:
+    return visitIMINMAX(N);
+  case ISD::AND:
+    return visitAND(N);
+  case ISD::OR:
+    return visitOR(N);
+  case ISD::XOR:
+    return visitXOR(N);
+  case ISD::SHL:
+    return visitSHL(N);
+  case ISD::SRA:
+    return visitSRA(N);
+  case ISD::SRL:
+    return visitSRL(N);
   case ISD::ROTR:
-  case ISD::ROTL:               return visitRotate(N);
+  case ISD::ROTL:
+    return visitRotate(N);
   case ISD::FSHL:
-  case ISD::FSHR:               return visitFunnelShift(N);
+  case ISD::FSHR:
+    return visitFunnelShift(N);
   case ISD::SSHLSAT:
-  case ISD::USHLSAT:            return visitSHLSAT(N);
-  case ISD::ABS:                return visitABS(N);
-  case ISD::BSWAP:              return visitBSWAP(N);
-  case ISD::BITREVERSE:         return visitBITREVERSE(N);
-  case ISD::CTLZ:               return visitCTLZ(N);
-  case ISD::CTLZ_ZERO_UNDEF:    return visitCTLZ_ZERO_UNDEF(N);
-  case ISD::CTTZ:               return visitCTTZ(N);
-  case ISD::CTTZ_ZERO_UNDEF:    return visitCTTZ_ZERO_UNDEF(N);
-  case ISD::CTPOP:              return visitCTPOP(N);
-  case ISD::SELECT:             return visitSELECT(N);
-  case ISD::VSELECT:            return visitVSELECT(N);
-  case ISD::SELECT_CC:          return visitSELECT_CC(N);
-  case ISD::SETCC:              return visitSETCC(N);
-  case ISD::SETCCCARRY:         return visitSETCCCARRY(N);
-  case ISD::SIGN_EXTEND:        return visitSIGN_EXTEND(N);
-  case ISD::ZERO_EXTEND:        return visitZERO_EXTEND(N);
-  case ISD::ANY_EXTEND:         return visitANY_EXTEND(N);
+  case ISD::USHLSAT:
+    return visitSHLSAT(N);
+  case ISD::ABS:
+    return visitABS(N);
+  case ISD::BSWAP:
+    return visitBSWAP(N);
+  case ISD::BITREVERSE:
+    return visitBITREVERSE(N);
+  case ISD::CTLZ:
+    return visitCTLZ(N);
+  case ISD::CTLZ_ZERO_UNDEF:
+    return visitCTLZ_ZERO_UNDEF(N);
+  case ISD::CTTZ:
+    return visitCTTZ(N);
+  case ISD::CTTZ_ZERO_UNDEF:
+    return visitCTTZ_ZERO_UNDEF(N);
+  case ISD::CTPOP:
+    return visitCTPOP(N);
+  case ISD::SELECT:
+    return visitSELECT(N);
+  case ISD::VSELECT:
+    return visitVSELECT(N);
+  case ISD::SELECT_CC:
+    return visitSELECT_CC(N);
+  case ISD::SETCC:
+    return visitSETCC(N);
+  case ISD::SETCCCARRY:
+    return visitSETCCCARRY(N);
+  case ISD::SIGN_EXTEND:
+    return visitSIGN_EXTEND(N);
+  case ISD::ZERO_EXTEND:
+    return visitZERO_EXTEND(N);
+  case ISD::ANY_EXTEND:
+    return visitANY_EXTEND(N);
   case ISD::AssertSext:
-  case ISD::AssertZext:         return visitAssertExt(N);
-  case ISD::AssertAlign:        return visitAssertAlign(N);
-  case ISD::SIGN_EXTEND_INREG:  return visitSIGN_EXTEND_INREG(N);
+  case ISD::AssertZext:
+    return visitAssertExt(N);
+  case ISD::AssertAlign:
+    return visitAssertAlign(N);
+  case ISD::SIGN_EXTEND_INREG:
+    return visitSIGN_EXTEND_INREG(N);
   case ISD::SIGN_EXTEND_VECTOR_INREG:
   case ISD::ZERO_EXTEND_VECTOR_INREG:
-  case ISD::ANY_EXTEND_VECTOR_INREG: return visitEXTEND_VECTOR_INREG(N);
-  case ISD::TRUNCATE:           return visitTRUNCATE(N);
-  case ISD::BITCAST:            return visitBITCAST(N);
-  case ISD::BUILD_PAIR:         return visitBUILD_PAIR(N);
-  case ISD::FADD:               return visitFADD(N);
-  case ISD::STRICT_FADD:        return visitSTRICT_FADD(N);
-  case ISD::FSUB:               return visitFSUB(N);
-  case ISD::FMUL:               return visitFMUL(N);
-  case ISD::FMA:                return visitFMA<EmptyMatchContext>(N);
-  case ISD::FDIV:               return visitFDIV(N);
-  case ISD::FREM:               return visitFREM(N);
-  case ISD::FSQRT:              return visitFSQRT(N);
-  case ISD::FCOPYSIGN:          return visitFCOPYSIGN(N);
-  case ISD::FPOW:               return visitFPOW(N);
-  case ISD::SINT_TO_FP:         return visitSINT_TO_FP(N);
-  case ISD::UINT_TO_FP:         return visitUINT_TO_FP(N);
-  case ISD::FP_TO_SINT:         return visitFP_TO_SINT(N);
-  case ISD::FP_TO_UINT:         return visitFP_TO_UINT(N);
-  case ISD::FP_ROUND:           return visitFP_ROUND(N);
-  case ISD::FP_EXTEND:          return visitFP_EXTEND(N);
-  case ISD::FNEG:               return visitFNEG(N);
-  case ISD::FABS:               return visitFABS(N);
-  case ISD::FFLOOR:             return visitFFLOOR(N);
+  case ISD::ANY_EXTEND_VECTOR_INREG:
+    return visitEXTEND_VECTOR_INREG(N);
+  case ISD::TRUNCATE:
+    return visitTRUNCATE(N);
+  case ISD::BITCAST:
+    return visitBITCAST(N);
+  case ISD::BUILD_PAIR:
+    return visitBUILD_PAIR(N);
+  case ISD::FADD:
+    return visitFADD(N);
+  case ISD::STRICT_FADD:
+    return visitSTRICT_FADD(N);
+  case ISD::FSUB:
+    return visitFSUB(N);
+  case ISD::FMUL:
+    return visitFMUL(N);
+  case ISD::FMA:
+    return visitFMA<EmptyMatchContext>(N);
+  case ISD::FDIV:
+    return visitFDIV(N);
+  case ISD::FREM:
+    return visitFREM(N);
+  case ISD::FSQRT:
+    return visitFSQRT(N);
+  case ISD::FCOPYSIGN:
+    return visitFCOPYSIGN(N);
+  case ISD::FPOW:
+    return visitFPOW(N);
+  case ISD::SINT_TO_FP:
+    return visitSINT_TO_FP(N);
+  case ISD::UINT_TO_FP:
+    return visitUINT_TO_FP(N);
+  case ISD::FP_TO_SINT:
+    return visitFP_TO_SINT(N);
+  case ISD::FP_TO_UINT:
+    return visitFP_TO_UINT(N);
+  case ISD::FP_ROUND:
+    return visitFP_ROUND(N);
+  case ISD::FP_EXTEND:
+    return visitFP_EXTEND(N);
+  case ISD::FNEG:
+    return visitFNEG(N);
+  case ISD::FABS:
+    return visitFABS(N);
+  case ISD::FFLOOR:
+    return visitFFLOOR(N);
   case ISD::FMINNUM:
   case ISD::FMAXNUM:
   case ISD::FMINIMUM:
-  case ISD::FMAXIMUM:           return visitFMinMax(N);
-  case ISD::FCEIL:              return visitFCEIL(N);
-  case ISD::FTRUNC:             return visitFTRUNC(N);
-  case ISD::FFREXP:             return visitFFREXP(N);
-  case ISD::BRCOND:             return visitBRCOND(N);
-  case ISD::BR_CC:              return visitBR_CC(N);
-  case ISD::LOAD:               return visitLOAD(N);
-  case ISD::STORE:              return visitSTORE(N);
-  case ISD::INSERT_VECTOR_ELT:  return visitINSERT_VECTOR_ELT(N);
-  case ISD::EXTRACT_VECTOR_ELT: return visitEXTRACT_VECTOR_ELT(N);
-  case ISD::BUILD_VECTOR:       return visitBUILD_VECTOR(N);
-  case ISD::CONCAT_VECTORS:     return visitCONCAT_VECTORS(N);
-  case ISD::EXTRACT_SUBVECTOR:  return visitEXTRACT_SUBVECTOR(N);
-  case ISD::VECTOR_SHUFFLE:     return visitVECTOR_SHUFFLE(N);
-  case ISD::SCALAR_TO_VECTOR:   return visitSCALAR_TO_VECTOR(N);
-  case ISD::INSERT_SUBVECTOR:   return visitINSERT_SUBVECTOR(N);
-  case ISD::MGATHER:            return visitMGATHER(N);
-  case ISD::MLOAD:              return visitMLOAD(N);
-  case ISD::MSCATTER:           return visitMSCATTER(N);
-  case ISD::MSTORE:             return visitMSTORE(N);
-  case ISD::LIFETIME_END:       return visitLIFETIME_END(N);
-  case ISD::FP_TO_FP16:         return visitFP_TO_FP16(N);
-  case ISD::FP16_TO_FP:         return visitFP16_TO_FP(N);
-  case ISD::FP_TO_BF16:         return visitFP_TO_BF16(N);
-  case ISD::FREEZE:             return visitFREEZE(N);
-  case ISD::GET_FPENV_MEM:      return visitGET_FPENV_MEM(N);
-  case ISD::SET_FPENV_MEM:      return visitSET_FPENV_MEM(N);
+  case ISD::FMAXIMUM:
+    return visitFMinMax(N);
+  case ISD::FCEIL:
+    return visitFCEIL(N);
+  case ISD::FTRUNC:
+    return visitFTRUNC(N);
+  case ISD::FFREXP:
+    return visitFFREXP(N);
+  case ISD::BRCOND:
+    return visitBRCOND(N);
+  case ISD::BR_CC:
+    return visitBR_CC(N);
+  case ISD::LOAD:
+    return visitLOAD(N);
+  case ISD::STORE:
+    return visitSTORE(N);
+  case ISD::INSERT_VECTOR_ELT:
+    return visitINSERT_VECTOR_ELT(N);
+  case ISD::EXTRACT_VECTOR_ELT:
+    return visitEXTRACT_VECTOR_ELT(N);
+  case ISD::BUILD_VECTOR:
+    return visitBUILD_VECTOR(N);
+  case ISD::CONCAT_VECTORS:
+    return visitCONCAT_VECTORS(N);
+  case ISD::EXTRACT_SUBVECTOR:
+    return visitEXTRACT_SUBVECTOR(N);
+  case ISD::VECTOR_SHUFFLE:
+    return visitVECTOR_SHUFFLE(N);
+  case ISD::SCALAR_TO_VECTOR:
+    return visitSCALAR_TO_VECTOR(N);
+  case ISD::INSERT_SUBVECTOR:
+    return visitINSERT_SUBVECTOR(N);
+  case ISD::MGATHER:
+    return visitMGATHER(N);
+  case ISD::MLOAD:
+    return visitMLOAD(N);
+  case ISD::MSCATTER:
+    return visitMSCATTER(N);
+  case ISD::MSTORE:
+    return visitMSTORE(N);
+  case ISD::LIFETIME_END:
+    return visitLIFETIME_END(N);
+  case ISD::FP_TO_FP16:
+    return visitFP_TO_FP16(N);
+  case ISD::FP16_TO_FP:
+    return visitFP16_TO_FP(N);
+  case ISD::FP_TO_BF16:
+    return visitFP_TO_BF16(N);
+  case ISD::FREEZE:
+    return visitFREEZE(N);
+  case ISD::GET_FPENV_MEM:
+    return visitGET_FPENV_MEM(N);
+  case ISD::SET_FPENV_MEM:
+    return visitSET_FPENV_MEM(N);
   case ISD::VECREDUCE_FADD:
   case ISD::VECREDUCE_FMUL:
   case ISD::VECREDUCE_ADD:
@@ -2050,7 +2140,8 @@ SDValue DAGCombiner::visit(SDNode *N) {
   case ISD::VECREDUCE_FMAX:
   case ISD::VECREDUCE_FMIN:
   case ISD::VECREDUCE_FMAXIMUM:
-  case ISD::VECREDUCE_FMINIMUM:     return visitVECREDUCE(N);
+  case ISD::VECREDUCE_FMINIMUM:
+    return visitVECREDUCE(N);
 #define BEGIN_REGISTER_VP_SDNODE(SDOPC, ...) case ISD::SDOPC:
 #include "llvm/IR/VPIntrinsics.def"
     return visitVPOp(N);
@@ -2072,8 +2163,7 @@ SDValue DAGCombiner::combine(SDNode *N) {
         TLI.hasTargetDAGCombine((ISD::NodeType)N->getOpcode())) {
 
       // Expose the DAG combiner to the target combiner impls.
-      TargetLowering::DAGCombinerInfo
-        DagCombineInfo(DAG, Level, false, this);
+      TargetLowering::DAGCombinerInfo DagCombineInfo(DAG, Level, false, this);
 
       RV = TLI.PerformDAGCombine(N, DagCombineInfo);
     }
@@ -2082,7 +2172,8 @@ SDValue DAGCombiner::combine(SDNode *N) {
   // If nothing happened still, try promoting the operation.
   if (!RV.getNode()) {
     switch (N->getOpcode()) {
-    default: break;
+    default:
+      break;
     case ISD::ADD:
     case ISD::SUB:
     case ISD::MUL:
@@ -2133,9 +2224,9 @@ static SDValue getInputChainForNode(SDNode *N) {
   if (unsigned NumOps = N->getNumOperands()) {
     if (N->getOperand(0).getValueType() == MVT::Other)
       return N->getOperand(0);
-    if (N->getOperand(NumOps-1).getValueType() == MVT::Other)
-      return N->getOperand(NumOps-1);
-    for (unsigned i = 1; i < NumOps-1; ++i)
+    if (N->getOperand(NumOps - 1).getValueType() == MVT::Other)
+      return N->getOperand(NumOps - 1);
+    for (unsigned i = 1; i < NumOps - 1; ++i)
       if (N->getOperand(i).getValueType() == MVT::Other)
         return N->getOperand(i);
   }
@@ -2166,10 +2257,10 @@ SDValue DAGCombiner::visitTokenFactor(SDNode *N) {
   if (N->hasOneUse() && N->use_begin()->getOpcode() == ISD::TokenFactor)
     AddToWorklist(*(N->use_begin()));
 
-  SmallVector<SDNode *, 8> TFs;     // List of token factors to visit.
-  SmallVector<SDValue, 8> Ops;      // Ops for replacing token factor.
-  SmallPtrSet<SDNode*, 16> SeenOps;
-  bool Changed = false;             // If we should replace this token factor.
+  SmallVector<SDNode *, 8> TFs; // List of token factors to visit.
+  SmallVector<SDValue, 8> Ops;  // Ops for replacing token factor.
+  SmallPtrSet<SDNode *, 16> SeenOps;
+  bool Changed = false; // If we should replace this token factor.
 
   // Start out with this token factor.
   TFs.push_back(N);
@@ -2347,7 +2438,7 @@ SDValue DAGCombiner::visitMERGE_VALUES(SDNode *N) {
     DAG.ReplaceAllUsesWith(N, Ops.data());
   } while (!N->use_empty());
   deleteAndRecombine(N);
-  return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+  return SDValue(N, 0); // Return N so it doesn't get rechecked!
 }
 
 /// If \p N is a ConstantSDNode with isOpaque() == false return it casted to a
@@ -2521,7 +2612,8 @@ SDValue DAGCombiner::foldBinOpIntoSelect(SDNode *BO) {
 
     // Peek through trunc to shift amount type.
     if ((BinOpcode == ISD::SHL || BinOpcode == ISD::SRA ||
-         BinOpcode == ISD::SRL) && Sel.hasOneUse()) {
+         BinOpcode == ISD::SRL) &&
+        Sel.hasOneUse()) {
       // This is valid when the truncated bits of x are already zero.
       SDValue Op;
       KnownBits Known;
@@ -2555,8 +2647,7 @@ SDValue DAGCombiner::foldBinOpIntoSelect(SDNode *BO) {
        (isNullOrNullSplat(CF) && isAllOnesOrAllOnesSplat(CT)));
 
   SDValue CBO = BO->getOperand(SelOpNo ^ 1);
-  if (!CanFoldNonConst &&
-      !isConstantOrConstantVector(CBO, true) &&
+  if (!CanFoldNonConst && !isConstantOrConstantVector(CBO, true) &&
       !DAG.isConstantFPBuildVectorOrConstantFP(CBO))
     return SDValue();
 
@@ -2631,8 +2722,8 @@ static SDValue foldAddSubBoolOfMaskedVal(SDNode *N, SelectionDAG &DAG) {
   EVT VT = C.getValueType();
   SDLoc DL(N);
   SDValue LowBit = DAG.getZExtOrTrunc(SetCC.getOperand(0), DL, VT);
-  SDValue C1 = IsAdd ? DAG.getConstant(CN->getAPIntValue() + 1, DL, VT) :
-                       DAG.getConstant(CN->getAPIntValue() - 1, DL, VT);
+  SDValue C1 = IsAdd ? DAG.getConstant(CN->getAPIntValue() + 1, DL, VT)
+                     : DAG.getConstant(CN->getAPIntValue() - 1, DL, VT);
   return DAG.getNode(IsAdd ? ISD::SUB : ISD::ADD, DL, VT, C1, LowBit);
 }
 
@@ -2678,8 +2769,7 @@ static SDValue foldAddSubOfSignBit(SDNode *N, SelectionDAG &DAG) {
   return SDValue();
 }
 
-static bool
-areBitwiseNotOfEachother(SDValue Op0, SDValue Op1) {
+static bool areBitwiseNotOfEachother(SDValue Op0, SDValue Op1) {
   return (isBitwiseNot(Op0) && Op0.getOperand(0) == Op1) ||
          (isBitwiseNot(Op1) && Op1.getOperand(0) == Op0);
 }
@@ -2825,14 +2915,12 @@ SDValue DAGCombiner::visitADDLike(SDNode *N) {
   // fold ((A-B)+(C-A)) -> (C-B)
   if (N0.getOpcode() == ISD::SUB && N1.getOpcode() == ISD::SUB &&
       N0.getOperand(0) == N1.getOperand(1))
-    return DAG.getNode(ISD::SUB, DL, VT, N1.getOperand(0),
-                       N0.getOperand(1));
+    return DAG.getNode(ISD::SUB, DL, VT, N1.getOperand(0), N0.getOperand(1));
 
   // fold ((A-B)+(B-C)) -> (A-C)
   if (N0.getOpcode() == ISD::SUB && N1.getOpcode() == ISD::SUB &&
       N0.getOperand(1) == N1.getOperand(0))
-    return DAG.getNode(ISD::SUB, DL, VT, N0.getOperand(0),
-                       N1.getOperand(1));
+    return DAG.getNode(ISD::SUB, DL, VT, N0.getOperand(0), N1.getOperand(1));
 
   // fold (A+(B-(A+C))) to (B-C)
   if (N1.getOpcode() == ISD::SUB && N1.getOperand(1).getOpcode() == ISD::ADD &&
@@ -3078,9 +3166,8 @@ static SDValue getAsCarry(const TargetLowering &TLI, SDValue V,
   // If the result is masked, then no matter what kind of bool it is we can
   // return. If it isn't, then we need to make sure the bool type is either 0 or
   // 1 and not other values.
-  if (Masked ||
-      TLI.getBooleanContents(V.getValueType()) ==
-          TargetLoweringBase::ZeroOrOneBooleanContent)
+  if (Masked || TLI.getBooleanContents(V.getValueType()) ==
+                    TargetLoweringBase::ZeroOrOneBooleanContent)
     return V;
 
   return SDValue();
@@ -3115,7 +3202,7 @@ static SDValue foldAddSubMasked1(bool IsAdd, SDValue N0, SDValue N1,
 
 /// Helper for doing combines based on N0 and N1 being added to each other.
 SDValue DAGCombiner::visitADDLikeCommutative(SDValue N0, SDValue N1,
-                                          SDNode *LocReference) {
+                                             SDNode *LocReference) {
   EVT VT = N0.getValueType();
   SDLoc DL(LocReference);
 
@@ -3192,8 +3279,8 @@ SDValue DAGCombiner::visitADDLikeCommutative(SDValue N0, SDValue N1,
   // (add X, (uaddo_carry Y, 0, Carry)) -> (uaddo_carry X, Y, Carry)
   if (N1.getOpcode() == ISD::UADDO_CARRY && isNullConstant(N1.getOperand(1)) &&
       N1.getResNo() == 0)
-    return DAG.getNode(ISD::UADDO_CARRY, DL, N1->getVTList(),
-                       N0, N1.getOperand(0), N1.getOperand(2));
+    return DAG.getNode(ISD::UADDO_CARRY, DL, N1->getVTList(), N0,
+                       N1.getOperand(0), N1.getOperand(2));
 
   // (add X, Carry) -> (uaddo_carry X, 0, Carry)
   if (TLI.isOperationLegalOrCustom(ISD::UADDO_CARRY, VT))
@@ -3224,8 +3311,7 @@ SDValue DAGCombiner::visitADDC(SDNode *N) {
 
   // fold (addc x, 0) -> x + no carry out
   if (isNullConstant(N1))
-    return CombineTo(N, N0, DAG.getNode(ISD::CARRY_FALSE,
-                                        DL, MVT::Glue));
+    return CombineTo(N, N0, DAG.getNode(ISD::CARRY_FALSE, DL, MVT::Glue));
 
   // If it cannot overflow, transform into an add.
   if (DAG.computeOverflowForUnsignedAdd(N0, N1) == SelectionDAG::OFK_Never)
@@ -3243,8 +3329,7 @@ SDValue DAGCombiner::visitADDC(SDNode *N) {
  * no matter what, use DAG.getLogicalNOT.
  */
 static SDValue extractBooleanFlip(SDValue V, SelectionDAG &DAG,
-                                  const TargetLowering &TLI,
-                                  bool Force) {
+                                  const TargetLowering &TLI, bool Force) {
   if (Force && isa<ConstantSDNode>(V))
     return DAG.getLogicalNOT(SDLoc(V), V, V.getValueType());
 
@@ -3258,16 +3343,16 @@ static SDValue extractBooleanFlip(SDValue V, SelectionDAG &DAG,
   EVT VT = V.getValueType();
 
   bool IsFlip = false;
-  switch(TLI.getBooleanContents(VT)) {
-    case TargetLowering::ZeroOrOneBooleanContent:
-      IsFlip = Const->isOne();
-      break;
-    case TargetLowering::ZeroOrNegativeOneBooleanContent:
-      IsFlip = Const->isAllOnes();
-      break;
-    case TargetLowering::UndefinedBooleanContent:
-      IsFlip = (Const->getAPIntValue() & 0x01) == 1;
-      break;
+  switch (TLI.getBooleanContents(VT)) {
+  case TargetLowering::ZeroOrOneBooleanContent:
+    IsFlip = Const->isOne();
+    break;
+  case TargetLowering::ZeroOrNegativeOneBooleanContent:
+    IsFlip = Const->isAllOnes();
+    break;
+  case TargetLowering::UndefinedBooleanContent:
+    IsFlip = (Const->getAPIntValue() & 0x01) == 1;
+    break;
   }
 
   if (IsFlip)
@@ -3357,8 +3442,7 @@ SDValue DAGCombiner::visitADDE(SDNode *N) {
   ConstantSDNode *N0C = dyn_cast<ConstantSDNode>(N0);
   ConstantSDNode *N1C = dyn_cast<ConstantSDNode>(N1);
   if (N0C && !N1C)
-    return DAG.getNode(ISD::ADDE, SDLoc(N), N->getVTList(),
-                       N1, N0, CarryIn);
+    return DAG.getNode(ISD::ADDE, SDLoc(N), N->getVTList(), N1, N0, CarryIn);
 
   // fold (adde x, y, false) -> (addc x, y)
   if (CarryIn.getOpcode() == ISD::CARRY_FALSE)
@@ -3392,9 +3476,9 @@ SDValue DAGCombiner::visitUADDO_CARRY(SDNode *N) {
     EVT CarryVT = CarryIn.getValueType();
     SDValue CarryExt = DAG.getBoolExtOrTrunc(CarryIn, DL, VT, CarryVT);
     AddToWorklist(CarryExt.getNode());
-    return CombineTo(N, DAG.getNode(ISD::AND, DL, VT, CarryExt,
-                                    DAG.getConstant(1, DL, VT)),
-                     DAG.getConstant(0, DL, CarryVT));
+    return CombineTo(
+        N, DAG.getNode(ISD::AND, DL, VT, CarryExt, DAG.getConstant(1, DL, VT)),
+        DAG.getConstant(0, DL, CarryVT));
   }
 
   if (SDValue Combined = visitUADDO_CARRYLike(N0, N1, CarryIn, N))
@@ -3466,8 +3550,7 @@ static SDValue combineUADDO_CARRYDiamond(DAGCombiner &Combiner,
     return SDValue();
   }
 
-
-  auto cancelDiamond = [&](SDValue A,SDValue B) {
+  auto cancelDiamond = [&](SDValue A, SDValue B) {
     SDLoc DL(N);
     SDValue NewY =
         DAG.getNode(ISD::UADDO_CARRY, DL, Carry0->getVTList(), A, B, Z);
@@ -3902,9 +3985,9 @@ SDValue DAGCombiner::visitSUB(SDNode *N) {
 
   // fold (A-(B-C)) -> A+(C-B)
   if (N1.getOpcode() == ISD::SUB && N1.hasOneUse())
-    return DAG.getNode(ISD::ADD, DL, VT, N0,
-                       DAG.getNode(ISD::SUB, DL, VT, N1.getOperand(1),
-                                   N1.getOperand(0)));
+    return DAG.getNode(
+        ISD::ADD, DL, VT, N0,
+        DAG.getNode(ISD::SUB, DL, VT, N1.getOperand(1), N1.getOperand(0)));
 
   // A - (A & B)  ->  A & (~B)
   if (N1.getOpcode() == ISD::AND) {
@@ -3924,15 +4007,13 @@ SDValue DAGCombiner::visitSUB(SDNode *N) {
   if (N1.getOpcode() == ISD::MUL && N1.hasOneUse()) {
     if (N1.getOperand(0).getOpcode() == ISD::SUB &&
         isNullOrNullSplat(N1.getOperand(0).getOperand(0))) {
-      SDValue Mul = DAG.getNode(ISD::MUL, DL, VT,
-                                N1.getOperand(0).getOperand(1),
-                                N1.getOperand(1));
+      SDValue Mul = DAG.getNode(
+          ISD::MUL, DL, VT, N1.getOperand(0).getOperand(1), N1.getOperand(1));
       return DAG.getNode(ISD::ADD, DL, VT, N0, Mul);
     }
     if (N1.getOperand(1).getOpcode() == ISD::SUB &&
         isNullOrNullSplat(N1.getOperand(1).getOperand(0))) {
-      SDValue Mul = DAG.getNode(ISD::MUL, DL, VT,
-                                N1.getOperand(0),
+      SDValue Mul = DAG.getNode(ISD::MUL, DL, VT, N1.getOperand(0),
                                 N1.getOperand(1).getOperand(1));
       return DAG.getNode(ISD::ADD, DL, VT, N0, Mul);
     }
@@ -4079,8 +4160,8 @@ SDValue DAGCombiner::visitSUB(SDNode *N) {
   // (sub (usubo_carry X, 0, Carry), Y) -> (usubo_carry X, Y, Carry)
   if (N0.getOpcode() == ISD::USUBO_CARRY && isNullConstant(N0.getOperand(1)) &&
       N0.getResNo() == 0 && N0.hasOneUse())
-    return DAG.getNode(ISD::USUBO_CARRY, DL, N0->getVTList(),
-                       N0.getOperand(0), N1, N0.getOperand(2));
+    return DAG.getNode(ISD::USUBO_CARRY, DL, N0->getVTList(), N0.getOperand(0),
+                       N1, N0.getOperand(2));
 
   if (TLI.isOperationLegalOrCustom(ISD::UADDO_CARRY, VT)) {
     // (sub Carry, X)  ->  (uaddo_carry (sub 0, X), 0, Carry)
@@ -4295,7 +4376,7 @@ SDValue DAGCombiner::visitMULFIX(SDNode *N) {
 
   // Canonicalize constant to RHS (vector doesn't have to splat)
   if (DAG.isConstantIntBuildVectorOrConstantInt(N0) &&
-     !DAG.isConstantIntBuildVectorOrConstantInt(N1))
+      !DAG.isConstantIntBuildVectorOrConstantInt(N1))
     return DAG.getNode(N->getOpcode(), SDLoc(N), VT, N1, N0, Scale);
 
   // fold (mulfix x, 0, scale) -> 0
@@ -4334,9 +4415,9 @@ SDValue DAGCombiner::visitMUL(SDNode *N) {
       return FoldedVOp;
 
     N1IsConst = ISD::isConstantSplatVector(N1.getNode(), ConstValue1);
-    assert((!N1IsConst ||
-            ConstValue1.getBitWidth() == VT.getScalarSizeInBits()) &&
-           "Splat APInt should be element width");
+    assert(
+        (!N1IsConst || ConstValue1.getBitWidth() == VT.getScalarSizeInBits()) &&
+        "Splat APInt should be element width");
   } else {
     N1IsConst = isa<ConstantSDNode>(N1);
     if (N1IsConst) {
@@ -4377,10 +4458,9 @@ SDValue DAGCombiner::visitMUL(SDNode *N) {
 
     // FIXME: If the input is something that is easily negated (e.g. a
     // single-use add), we should put the negate there.
-    return DAG.getNode(ISD::SUB, DL, VT,
-                       DAG.getConstant(0, DL, VT),
+    return DAG.getNode(ISD::SUB, DL, VT, DAG.getConstant(0, DL, VT),
                        DAG.getNode(ISD::SHL, DL, VT, N0,
-                            DAG.getConstant(Log2Val, DL, ShiftVT)));
+                                   DAG.getConstant(Log2Val, DL, ShiftVT)));
   }
 
   // Attempt to reuse an existing umul_lohi/smul_lohi node, but only if the
@@ -4461,11 +4541,13 @@ SDValue DAGCombiner::visitMUL(SDNode *N) {
     // Check for both (mul (shl X, C), Y)  and  (mul Y, (shl X, C)).
     if (N0.getOpcode() == ISD::SHL &&
         isConstantOrConstantVector(N0.getOperand(1)) && N0->hasOneUse()) {
-      Sh = N0; Y = N1;
+      Sh = N0;
+      Y = N1;
     } else if (N1.getOpcode() == ISD::SHL &&
                isConstantOrConstantVector(N1.getOperand(1)) &&
                N1->hasOneUse()) {
-      Sh = N1; Y = N0;
+      Sh = N1;
+      Y = N0;
     }
 
     if (Sh.getNode()) {
@@ -4527,7 +4609,8 @@ SDValue DAGCombiner::visitMUL(SDNode *N) {
       for (unsigned I = 0; I != NumElts; ++I)
         if (ClearMask[I])
           Mask[I] = Zero;
-      return DAG.getNode(ISD::AND, DL, VT, N0, DAG.getBuildVector(VT, DL, Mask));
+      return DAG.getNode(ISD::AND, DL, VT, N0,
+                         DAG.getBuildVector(VT, DL, Mask));
     }
   }
 
@@ -4555,12 +4638,23 @@ static bool isDivRemLibcallAvailable(SDNode *Node, bool isSigned,
   if (!NodeType.isSimple())
     return false;
   switch (NodeType.getSimpleVT().SimpleTy) {
-  default: return false; // No libcall for vector types.
-  case MVT::i8:   LC= isSigned ? RTLIB::SDIVREM_I8  : RTLIB::UDIVREM_I8;  break;
-  case MVT::i16:  LC= isSigned ? RTLIB::SDIVREM_I16 : RTLIB::UDIVREM_I16; break;
-  case MVT::i32:  LC= isSigned ? RTLIB::SDIVREM_I32 : RTLIB::UDIVREM_I32; break;
-  case MVT::i64:  LC= isSigned ? RTLIB::SDIVREM_I64 : RTLIB::UDIVREM_I64; break;
-  case MVT::i128: LC= isSigned ? RTLIB::SDIVREM_I128:RTLIB::UDIVREM_I128; break;
+  default:
+    return false; // No libcall for vector types.
+  case MVT::i8:
+    LC = isSigned ? RTLIB::SDIVREM_I8 : RTLIB::UDIVREM_I8;
+    break;
+  case MVT::i16:
+    LC = isSigned ? RTLIB::SDIVREM_I16 : RTLIB::UDIVREM_I16;
+    break;
+  case MVT::i32:
+    LC = isSigned ? RTLIB::SDIVREM_I32 : RTLIB::UDIVREM_I32;
+    break;
+  case MVT::i64:
+    LC = isSigned ? RTLIB::SDIVREM_I64 : RTLIB::UDIVREM_I64;
+    break;
+  case MVT::i128:
+    LC = isSigned ? RTLIB::SDIVREM_I128 : RTLIB::UDIVREM_I128;
+    break;
   }
 
   return TLI.getLibcallName(LC) != nullptr;
@@ -4613,8 +4707,7 @@ SDValue DAGCombiner::useDivRem(SDNode *Node) {
     // target-specific that we won't be able to recognize.
     unsigned UserOpc = User->getOpcode();
     if ((UserOpc == Opcode || UserOpc == OtherOpcode || UserOpc == DivRemOpc) &&
-        User->getOperand(0) == Op0 &&
-        User->getOperand(1) == Op1) {
+        User->getOperand(0) == Op0 && User->getOperand(1) == Op1) {
       if (!combined) {
         if (UserOpc == OtherOpcode) {
           SDVTList VTs = DAG.getVTList(VT, VT);
@@ -4722,8 +4815,8 @@ SDValue DAGCombiner::visitSDIV(SDNode *N) {
   if (SDValue V = visitSDIVLike(N0, N1, N)) {
     // If the corresponding remainder node exists, update its users with
     // (Dividend - (Quotient * Divisor).
-    if (SDNode *RemNode = DAG.getNodeIfExists(ISD::SREM, N->getVTList(),
-                                              { N0, N1 })) {
+    if (SDNode *RemNode =
+            DAG.getNodeIfExists(ISD::SREM, N->getVTList(), {N0, N1})) {
       SDValue Mul = DAG.getNode(ISD::MUL, DL, VT, V, N1);
       SDValue Sub = DAG.getNode(ISD::SUB, DL, VT, N0, Mul);
       AddToWorklist(Mul.getNode());
@@ -4739,7 +4832,7 @@ SDValue DAGCombiner::visitSDIV(SDNode *N) {
   AttributeList Attr = DAG.getMachineFunction().getFunction().getAttributes();
   if (!N1C || TLI.isIntDivCheap(N->getValueType(0), Attr))
     if (SDValue DivRem = useDivRem(N))
-        return DivRem;
+      return DivRem;
 
   return SDValue();
 }
@@ -4862,8 +4955,8 @@ SDValue DAGCombiner::visitUDIV(SDNode *N) {
   if (SDValue V = visitUDIVLike(N0, N1, N)) {
     // If the corresponding remainder node exists, update its users with
    // (Dividend - (Quotient * Divisor)).
-    if (SDNode *RemNode = DAG.getNodeIfExists(ISD::UREM, N->getVTList(),
-                                              { N0, N1 })) {
+    if (SDNode *RemNode =
+            DAG.getNodeIfExists(ISD::UREM, N->getVTList(), {N0, N1})) {
       SDValue Mul = DAG.getNode(ISD::MUL, DL, VT, V, N1);
       SDValue Sub = DAG.getNode(ISD::SUB, DL, VT, N0, Mul);
       AddToWorklist(Mul.getNode());
@@ -4879,7 +4972,7 @@ SDValue DAGCombiner::visitUDIV(SDNode *N) {
   AttributeList Attr = DAG.getMachineFunction().getFunction().getAttributes();
   if (!N1C || TLI.isIntDivCheap(N->getValueType(0), Attr))
     if (SDValue DivRem = useDivRem(N))
-        return DivRem;
+      return DivRem;
 
   return SDValue();
 }
@@ -5014,8 +5107,8 @@ SDValue DAGCombiner::visitREM(SDNode *N) {
     if (OptimizedDiv.getNode() && OptimizedDiv.getNode() != N) {
       // If the equivalent Div node also exists, update its users.
       unsigned DivOpcode = isSigned ? ISD::SDIV : ISD::UDIV;
-      if (SDNode *DivNode = DAG.getNodeIfExists(DivOpcode, N->getVTList(),
-                                                { N0, N1 }))
+      if (SDNode *DivNode =
+              DAG.getNodeIfExists(DivOpcode, N->getVTList(), {N0, N1}))
         CombineTo(DivNode, OptimizedDiv);
       SDValue Mul = DAG.getNode(ISD::MUL, DL, VT, OptimizedDiv, N1);
       SDValue Sub = DAG.getNode(ISD::SUB, DL, VT, N0, Mul);
@@ -5077,14 +5170,14 @@ SDValue DAGCombiner::visitMULHS(SDNode *N) {
       !VT.isVector()) {
     MVT Simple = VT.getSimpleVT();
     unsigned SimpleSize = Simple.getSizeInBits();
-    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize*2);
+    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize * 2);
     if (TLI.isOperationLegal(ISD::MUL, NewVT)) {
       N0 = DAG.getNode(ISD::SIGN_EXTEND, DL, NewVT, N0);
       N1 = DAG.getNode(ISD::SIGN_EXTEND, DL, NewVT, N1);
       N1 = DAG.getNode(ISD::MUL, DL, NewVT, N0, N1);
-      N1 = DAG.getNode(ISD::SRL, DL, NewVT, N1,
-            DAG.getConstant(SimpleSize, DL,
-                            getShiftAmountTy(N1.getValueType())));
+      N1 = DAG.getNode(
+          ISD::SRL, DL, NewVT, N1,
+          DAG.getConstant(SimpleSize, DL, getShiftAmountTy(N1.getValueType())));
       return DAG.getNode(ISD::TRUNCATE, DL, VT, N1);
     }
   }
@@ -5134,8 +5227,8 @@ SDValue DAGCombiner::visitMULHU(SDNode *N) {
       DAG.isKnownToBeAPowerOfTwo(N1) && hasOperation(ISD::SRL, VT)) {
     unsigned NumEltBits = VT.getScalarSizeInBits();
     SDValue LogBase2 = BuildLogBase2(N1, DL);
-    SDValue SRLAmt = DAG.getNode(
-        ISD::SUB, DL, VT, DAG.getConstant(NumEltBits, DL, VT), LogBase2);
+    SDValue SRLAmt = DAG.getNode(ISD::SUB, DL, VT,
+                                 DAG.getConstant(NumEltBits, DL, VT), LogBase2);
     EVT ShiftVT = getShiftAmountTy(N0.getValueType());
     SDValue Trunc = DAG.getZExtOrTrunc(SRLAmt, DL, ShiftVT);
     return DAG.getNode(ISD::SRL, DL, VT, N0, Trunc);
@@ -5147,14 +5240,14 @@ SDValue DAGCombiner::visitMULHU(SDNode *N) {
       !VT.isVector()) {
     MVT Simple = VT.getSimpleVT();
     unsigned SimpleSize = Simple.getSizeInBits();
-    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize*2);
+    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize * 2);
     if (TLI.isOperationLegal(ISD::MUL, NewVT)) {
       N0 = DAG.getNode(ISD::ZERO_EXTEND, DL, NewVT, N0);
       N1 = DAG.getNode(ISD::ZERO_EXTEND, DL, NewVT, N1);
       N1 = DAG.getNode(ISD::MUL, DL, NewVT, N0, N1);
-      N1 = DAG.getNode(ISD::SRL, DL, NewVT, N1,
-            DAG.getConstant(SimpleSize, DL,
-                            getShiftAmountTy(N1.getValueType())));
+      N1 = DAG.getNode(
+          ISD::SRL, DL, NewVT, N1,
+          DAG.getConstant(SimpleSize, DL, getShiftAmountTy(N1.getValueType())));
       return DAG.getNode(ISD::TRUNCATE, DL, VT, N1);
     }
   }
@@ -5322,15 +5415,15 @@ SDValue DAGCombiner::visitSMUL_LOHI(SDNode *N) {
   if (VT.isSimple() && !VT.isVector()) {
     MVT Simple = VT.getSimpleVT();
     unsigned SimpleSize = Simple.getSizeInBits();
-    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize*2);
+    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize * 2);
     if (TLI.isOperationLegal(ISD::MUL, NewVT)) {
       SDValue Lo = DAG.getNode(ISD::SIGN_EXTEND, DL, NewVT, N0);
       SDValue Hi = DAG.getNode(ISD::SIGN_EXTEND, DL, NewVT, N1);
       Lo = DAG.getNode(ISD::MUL, DL, NewVT, Lo, Hi);
       // Compute the high part as N1.
-      Hi = DAG.getNode(ISD::SRL, DL, NewVT, Lo,
-            DAG.getConstant(SimpleSize, DL,
-                            getShiftAmountTy(Lo.getValueType())));
+      Hi = DAG.getNode(
+          ISD::SRL, DL, NewVT, Lo,
+          DAG.getConstant(SimpleSize, DL, getShiftAmountTy(Lo.getValueType())));
       Hi = DAG.getNode(ISD::TRUNCATE, DL, VT, Hi);
       // Compute the low part as N0.
       Lo = DAG.getNode(ISD::TRUNCATE, DL, VT, Lo);
@@ -5372,15 +5465,15 @@ SDValue DAGCombiner::visitUMUL_LOHI(SDNode *N) {
   if (VT.isSimple() && !VT.isVector()) {
     MVT Simple = VT.getSimpleVT();
     unsigned SimpleSize = Simple.getSizeInBits();
-    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize*2);
+    EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), SimpleSize * 2);
     if (TLI.isOperationLegal(ISD::MUL, NewVT)) {
       SDValue Lo = DAG.getNode(ISD::ZERO_EXTEND, DL, NewVT, N0);
       SDValue Hi = DAG.getNode(ISD::ZERO_EXTEND, DL, NewVT, N1);
       Lo = DAG.getNode(ISD::MUL, DL, NewVT, Lo, Hi);
       // Compute the high part as N1.
-      Hi = DAG.getNode(ISD::SRL, DL, NewVT, Lo,
-            DAG.getConstant(SimpleSize, DL,
-                            getShiftAmountTy(Lo.getValueType())));
+      Hi = DAG.getNode(
+          ISD::SRL, DL, NewVT, Lo,
+          DAG.getConstant(SimpleSize, DL, getShiftAmountTy(Lo.getValueType())));
       Hi = DAG.getNode(ISD::TRUNCATE, DL, VT, Hi);
       // Compute the low part as N0.
       Lo = DAG.getNode(ISD::TRUNCATE, DL, VT, Lo);
@@ -5429,14 +5522,14 @@ SDValue DAGCombiner::visitMULO(SDNode *N) {
   // FIXME: This needs a freeze.
   if (N1C && N1C->getAPIntValue() == 2 &&
       (!IsSigned || VT.getScalarSizeInBits() > 2))
-    return DAG.getNode(IsSigned ? ISD::SADDO : ISD::UADDO, DL,
-                       N->getVTList(), N0, N0);
+    return DAG.getNode(IsSigned ? ISD::SADDO : ISD::UADDO, DL, N->getVTList(),
+                       N0, N0);
 
   // A 1 bit SMULO overflows if both inputs are 1.
   if (IsSigned && VT.getScalarSizeInBits() == 1) {
     SDValue And = DAG.getNode(ISD::AND, DL, VT, N0, N1);
-    SDValue Cmp = DAG.getSetCC(DL, CarryVT, And,
-                               DAG.getConstant(0, DL, VT), ISD::SETNE);
+    SDValue Cmp =
+        DAG.getSetCC(DL, CarryVT, And, DAG.getConstant(0, DL, VT), ISD::SETNE);
     return CombineTo(N, And, Cmp);
   }
 
@@ -5489,7 +5582,7 @@ static SDValue isSaturatingMinMax(SDValue N0, SDValue N1, SDValue N2,
         Type *InputTy = FPVT.getTypeForEVT(*DAG.getContext());
         const fltSemantics &Semantics = InputTy->getFltSemantics();
         uint32_t MinBitWidth =
-          APFloatBase::semanticsIntSizeInBits(Semantics, /*isSigned*/ true);
+            APFloatBase::semanticsIntSizeInBits(Semantics, /*isSigned*/ true);
         if (IntVT.getSizeInBits() >= MinBitWidth) {
           Unsigned = true;
           BW = PowerOf2Ceil(MinBitWidth);
@@ -5646,11 +5739,20 @@ SDValue DAGCombiner::visitIMINMAX(SDNode *N) {
       (N1.isUndef() || DAG.SignBitIsZero(N1))) {
     unsigned AltOpcode;
     switch (Opcode) {
-    case ISD::SMIN: AltOpcode = ISD::UMIN; break;
-    case ISD::SMAX: AltOpcode = ISD::UMAX; break;
-    case ISD::UMIN: AltOpcode = ISD::SMIN; break;
-    case ISD::UMAX: AltOpcode = ISD::SMAX; break;
-    default: llvm_unreachable("Unknown MINMAX opcode");
+    case ISD::SMIN:
+      AltOpcode = ISD::UMIN;
+      break;
+    case ISD::SMAX:
+      AltOpcode = ISD::UMAX;
+      break;
+    case ISD::UMIN:
+      AltOpcode = ISD::SMIN;
+      break;
+    case ISD::UMAX:
+      AltOpcode = ISD::SMAX;
+      break;
+    default:
+      llvm_unreachable("Unknown MINMAX opcode");
     }
     if (TLI.isOperationLegal(AltOpcode, VT))
       return DAG.getNode(AltOpcode, DL, VT, N0, N1);
@@ -5806,11 +5908,11 @@ SDValue DAGCombiner::hoistLogicOpWithSameOpcodeHands(SDNode *N) {
   // We also handle SCALAR_TO_VECTOR because xor/or/and operations are cheaper
   // on scalars.
   if ((HandOpcode == ISD::BITCAST || HandOpcode == ISD::SCALAR_TO_VECTOR) &&
-       Level <= AfterLegalizeTypes) {
+      Level <= AfterLegalizeTypes) {
     // Input types must be integer and the same.
     if (XVT.isInteger() && XVT == Y.getValueType() &&
-        !(VT.isVector() && TLI.isTypeLegal(VT) &&
-          !XVT.isVector() && !TLI.isTypeLegal(XVT))) {
+        !(VT.isVector() && TLI.isTypeLegal(VT) && !XVT.isVector() &&
+          !TLI.isTypeLegal(XVT))) {
       SDValue Logic = DAG.getNode(LogicOpcode, DL, XVT, X, Y);
       return DAG.getNode(HandOpcode, DL, VT, Logic);
     }
@@ -5850,8 +5952,8 @@ SDValue DAGCombiner::hoistLogicOpWithSameOpcodeHands(SDNode *N) {
 
     // (logic_op (shuf (A, C), shuf (B, C))) --> shuf (logic_op (A, B), C)
     if (N0.getOperand(1) == N1.getOperand(1) && ShOp.getNode()) {
-      SDValue Logic = DAG.getNode(LogicOpcode, DL, VT,
-                                  N0.getOperand(0), N1.getOperand(0));
+      SDValue Logic =
+          DAG.getNode(LogicOpcode, DL, VT, N0.getOperand(0), N1.getOperand(0));
       return DAG.getVectorShuffle(VT, DL, Logic, ShOp, SVN0->getMask());
     }
 
@@ -5863,8 +5965,8 @@ SDValue DAGCombiner::hoistLogicOpWithSameOpcodeHands(SDNode *N) {
 
     // (logic_op (shuf (C, A), shuf (C, B))) --> shuf (C, logic_op (A, B))
     if (N0.getOperand(0) == N1.getOperand(0) && ShOp.getNode()) {
-      SDValue Logic = DAG.getNode(LogicOpcode, DL, VT, N0.getOperand(1),
-                                  N1.getOperand(1));
+      SDValue Logic =
+          DAG.getNode(LogicOpcode, DL, VT, N0.getOperand(1), N1.getOperand(1));
       return DAG.getVectorShuffle(VT, DL, ShOp, Logic, SVN0->getMask());
     }
   }
@@ -6067,19 +6169,19 @@ static unsigned getMinMaxOpcodeForFP(SDValue Operand1, SDValue Operand2,
            ((CC == ISD::SETUGT || CC == ISD::SETUGE) &&
             (OrAndOpcode == ISD::AND)))
     return isFMAXNUMFMINNUM ? ISD::FMINNUM
-                            : arebothOperandsNotSNan(Operand1, Operand2, DAG) &&
-                                      isFMAXNUMFMINNUM_IEEE
-                                  ? ISD::FMINNUM_IEEE
-                                  : ISD::DELETED_NODE;
+           : arebothOperandsNotSNan(Operand1, Operand2, DAG) &&
+                   isFMAXNUMFMINNUM_IEEE
+               ? ISD::FMINNUM_IEEE
+               : ISD::DELETED_NODE;
   else if (((CC == ISD::SETOGT || CC == ISD::SETOGE) &&
             (OrAndOpcode == ISD::OR)) ||
            ((CC == ISD::SETULT || CC == ISD::SETULE) &&
             (OrAndOpcode == ISD::AND)))
     return isFMAXNUMFMINNUM ? ISD::FMAXNUM
-                            : arebothOperandsNotSNan(Operand1, Operand2, DAG) &&
-                                      isFMAXNUMFMINNUM_IEEE
-                                  ? ISD::FMAXNUM_IEEE
-                                  : ISD::DELETED_NODE;
+           : arebothOperandsNotSNan(Operand1, Operand2, DAG) &&
+                   isFMAXNUMFMINNUM_IEEE
+               ? ISD::FMAXNUM_IEEE
+               : ISD::DELETED_NODE;
   return ISD::DELETED_NODE;
 }
 
@@ -6300,15 +6402,14 @@ SDValue DAGCombiner::visitANDLike(SDValue N0, SDValue N1, SDNode *N) {
         APInt SRLC = SRLI->getAPIntValue();
         if (ADDC.getSignificantBits() <= 64 && SRLC.ult(VT.getSizeInBits()) &&
             !TLI.isLegalAddImmediate(ADDC.getSExtValue())) {
-          APInt Mask = APInt::getHighBitsSet(VT.getSizeInBits(),
-                                             SRLC.getZExtValue());
+          APInt Mask =
+              APInt::getHighBitsSet(VT.getSizeInBits(), SRLC.getZExtValue());
           if (DAG.MaskedValueIsZero(N0.getOperand(1), Mask)) {
             ADDC |= Mask;
             if (TLI.isLegalAddImmediate(ADDC.getSExtValue())) {
               SDLoc DL0(N0);
-              SDValue NewAdd =
-                DAG.getNode(ISD::ADD, DL0, VT,
-                            N0.getOperand(0), DAG.getConstant(ADDC, DL, VT));
+              SDValue NewAdd = DAG.getNode(ISD::ADD, DL0, VT, N0.getOperand(0),
+                                           DAG.getConstant(ADDC, DL, VT));
               CombineTo(N0.getNode(), NewAdd);
               // Return N so it doesn't get rechecked!
               return SDValue(N, 0);
@@ -6448,10 +6549,9 @@ bool DAGCombiner::isLegalNarrowLdSt(LSBaseSDNode *LDST,
 }
 
 bool DAGCombiner::SearchForAndLoads(SDNode *N,
-                                    SmallVectorImpl<LoadSDNode*> &Loads,
-                                    SmallPtrSetImpl<SDNode*> &NodesWithConsts,
-                                    ConstantSDNode *Mask,
-                                    SDNode *&NodeToMask) {
+                                    SmallVectorImpl<LoadSDNode *> &Loads,
+                                    SmallPtrSetImpl<SDNode *> &NodesWithConsts,
+                                    ConstantSDNode *Mask, SDNode *&NodeToMask) {
   // Recursively search for the operands, looking for loads which can be
   // narrowed.
   for (SDValue Op : N->op_values()) {
@@ -6469,7 +6569,7 @@ bool DAGCombiner::SearchForAndLoads(SDNode *N,
     if (!Op.hasOneUse())
       return false;
 
-    switch(Op.getOpcode()) {
+    switch (Op.getOpcode()) {
     case ISD::LOAD: {
       auto *Load = cast<LoadSDNode>(Op);
       EVT ExtVT;
@@ -6493,9 +6593,9 @@ bool DAGCombiner::SearchForAndLoads(SDNode *N,
     case ISD::AssertZext: {
       unsigned ActiveBits = Mask->getAPIntValue().countr_one();
       EVT ExtVT = EVT::getIntegerVT(*DAG.getContext(), ActiveBits);
-      EVT VT = Op.getOpcode() == ISD::AssertZext ?
-        cast<VTSDNode>(Op.getOperand(1))->getVT() :
-        Op.getOperand(0).getValueType();
+      EVT VT = Op.getOpcode() == ISD::AssertZext
+                   ? cast<VTSDNode>(Op.getOperand(1))->getVT()
+                   : Op.getOperand(0).getValueType();
 
       // We can accept extending nodes if the mask is wider or an equal
       // width to the original type.
@@ -6548,8 +6648,8 @@ bool DAGCombiner::BackwardsPropagateMask(SDNode *N) {
   if (isa<LoadSDNode>(N->getOperand(0)))
     return false;
 
-  SmallVector<LoadSDNode*, 8> Loads;
-  SmallPtrSet<SDNode*, 2> NodesWithConsts;
+  SmallVector<LoadSDNode *, 8> Loads;
+  SmallPtrSet<SDNode *, 2> NodesWithConsts;
   SDNode *FixupNode = nullptr;
   if (SearchForAndLoads(N, Loads, NodesWithConsts, Mask, FixupNode)) {
     if (Loads.empty())
@@ -6562,9 +6662,9 @@ bool DAGCombiner::BackwardsPropagateMask(SDNode *N) {
     // masking.
     if (FixupNode) {
       LLVM_DEBUG(dbgs() << "First, need to fix up: "; FixupNode->dump());
-      SDValue And = DAG.getNode(ISD::AND, SDLoc(FixupNode),
-                                FixupNode->getValueType(0),
-                                SDValue(FixupNode, 0), MaskOp);
+      SDValue And =
+          DAG.getNode(ISD::AND, SDLoc(FixupNode), FixupNode->getValueType(0),
+                      SDValue(FixupNode, 0), MaskOp);
       DAG.ReplaceAllUsesOfValueWith(SDValue(FixupNode, 0), And);
       if (And.getOpcode() == ISD ::AND)
         DAG.UpdateNodeOperands(And.getNode(), SDValue(FixupNode, 0), MaskOp);
@@ -6576,10 +6676,10 @@ bool DAGCombiner::BackwardsPropagateMask(SDNode *N) {
       SDValue Op1 = LogicN->getOperand(1);
 
       if (isa<ConstantSDNode>(Op0))
-          std::swap(Op0, Op1);
+        std::swap(Op0, Op1);
 
-      SDValue And = DAG.getNode(ISD::AND, SDLoc(Op1), Op1.getValueType(),
-                                Op1, MaskOp);
+      SDValue And =
+          DAG.getNode(ISD::AND, SDLoc(Op1), Op1.getValueType(), Op1, MaskOp);
 
       DAG.UpdateNodeOperands(LogicN, Op0, And);
     }
@@ -6755,8 +6855,8 @@ static SDValue foldAndToUsubsat(SDNode *N, SelectionDAG &DAG) {
   unsigned BitWidth = VT.getScalarSizeInBits();
   ConstantSDNode *XorC = isConstOrConstSplat(N0.getOperand(1), true);
   ConstantSDNode *SraC = isConstOrConstSplat(N1.getOperand(1), true);
-  if (!XorC || !XorC->getAPIntValue().isSignMask() ||
-      !SraC || SraC->getAPIntValue() != BitWidth - 1)
+  if (!XorC || !XorC->getAPIntValue().isSignMask() || !SraC ||
+      SraC->getAPIntValue() != BitWidth - 1)
     return SDValue();
 
   // (i8 X ^ 128) & (i8 X s>> 7) --> usubsat X, 128
@@ -7001,8 +7101,8 @@ SDValue DAGCombiner::visitAND(SDNode *N) {
        N0.getOperand(0).getOpcode() == ISD::LOAD &&
        N0.getOperand(0).getResNo() == 0) ||
       (N0.getOpcode() == ISD::LOAD && N0.getResNo() == 0)) {
-    LoadSDNode *Load = cast<LoadSDNode>( (N0.getOpcode() == ISD::LOAD) ?
-                                         N0 : N0.getOperand(0) );
+    LoadSDNode *Load =
+        cast<LoadSDNode>((N0.getOpcode() == ISD::LOAD) ? N0 : N0.getOperand(0));
 
     // Get the constant (if applicable) the zero'th operand is being ANDed with.
     // This can be a pure constant or a vector splat, in which case we treat the
@@ -7049,9 +7149,8 @@ SDValue DAGCombiner::visitAND(SDNode *N) {
     // If we want to change an EXTLOAD to a ZEXTLOAD, ensure a ZEXTLOAD is
     // actually legal and isn't going to get expanded, else this is a false
     // optimisation.
-    bool CanZextLoadProfitably = TLI.isLoadExtLegal(ISD::ZEXTLOAD,
-                                                    Load->getValueType(0),
-                                                    Load->getMemoryVT());
+    bool CanZextLoadProfitably = TLI.isLoadExtLegal(
+        ISD::ZEXTLOAD, Load->getValueType(0), Load->getMemoryVT());
 
     // Resize the constant to the same size as the original memory access before
     // extension. If it is still the AllOnesValue then this AND is completely
@@ -7060,10 +7159,16 @@ SDValue DAGCombiner::visitAND(SDNode *N) {
 
     bool B;
     switch (Load->getExtensionType()) {
-    default: B = false; break;
-    case ISD::EXTLOAD: B = CanZextLoadProfitably; break;
+    default:
+      B = false;
+      break;
+    case ISD::EXTLOAD:
+      B = CanZextLoadProfitably;
+      break;
     case ISD::ZEXTLOAD:
-    case ISD::NON_EXTLOAD: B = true; break;
+    case ISD::NON_EXTLOAD:
+      B = true;
+      break;
     }
 
     if (B && Constant.isAllOnes()) {
@@ -7075,16 +7180,15 @@ SDValue DAGCombiner::visitAND(SDNode *N) {
       CombineTo(N, (N0.getNode() == Load) ? NewLoad : N0);
 
       if (Load->getExtensionType() == ISD::EXTLOAD) {
-        NewLoad = DAG.getLoad(Load->getAddressingMode(), ISD::ZEXTLOAD,
-                              Load->getValueType(0), SDLoc(Load),
-                              Load->getChain(), Load->getBasePtr(),
-                              Load->getOffset(), Load->getMemoryVT(),
-                              Load->getMemOperand());
+        NewLoad = DAG.getLoad(
+            Load->getAddressingMode(), ISD::ZEXTLOAD, Load->getValueType(0),
+            SDLoc(Load), Load->getChain(), Load->getBasePtr(),
+            Load->getOffset(), Load->getMemoryVT(), Load->getMemOperand());
         // Replace uses of the EXTLOAD with the new ZEXTLOAD.
         if (Load->getNumValues() == 3) {
           // PRE/POST_INC loads have 3 values.
-          SDValue To[] = { NewLoad.getValue(0), NewLoad.getValue(1),
-                           NewLoad.getValue(2) };
+          SDValue To[] = {NewLoad.getValue(0), NewLoad.getValue(1),
+                          NewLoad.getValue(2)};
           CombineTo(Load, To, 3, true);
         } else {
           CombineTo(Load, NewLoad.getValue(0), NewLoad.getValue(1));
@@ -7187,7 +7291,8 @@ SDValue DAGCombiner::visitAND(SDNode *N) {
         return SubRHS;
       if (SubRHS.getOpcode() == ISD::SIGN_EXTEND &&
           SubRHS.getOperand(0).getScalarValueSizeInBits() == 1)
-        return DAG.getNode(ISD::ZERO_EXTEND, SDLoc(N), VT, SubRHS.getOperand(0));
+        return DAG.getNode(ISD::ZERO_EXTEND, SDLoc(N), VT,
+                           SubRHS.getOperand(0));
     }
   }
 
@@ -7296,8 +7401,8 @@ SDValue DAGCombiner::MatchBSwapHWordLow(SDNode *N, SDValue N0, SDValue N1,
     ConstantSDNode *N01C = dyn_cast<ConstantSDNode>(N0.getOperand(1));
     // Also handle 0xffff since the LHS is guaranteed to have zeros there.
     // This is needed for X86.
-    if (!N01C || (N01C->getZExtValue() != 0xFF00 &&
-                  N01C->getZExtValue() != 0xFFFF))
+    if (!N01C ||
+        (N01C->getZExtValue() != 0xFF00 && N01C->getZExtValue() != 0xFFFF))
       return SDValue();
     N0 = N0.getOperand(0);
     LookPassAnd0 = true;
@@ -7346,8 +7451,8 @@ SDValue DAGCombiner::MatchBSwapHWordLow(SDNode *N, SDValue N0, SDValue N1,
     ConstantSDNode *N101C = dyn_cast<ConstantSDNode>(N10.getOperand(1));
     // Also allow 0xFFFF since the bits will be shifted out. This is needed
     // for X86.
-    if (!N101C || (N101C->getZExtValue() != 0xFF00 &&
-                   N101C->getZExtValue() != 0xFFFF))
+    if (!N101C ||
+        (N101C->getZExtValue() != 0xFF00 && N101C->getZExtValue() != 0xFFFF))
       return SDValue();
     N10 = N10.getOperand(0);
     LookPassAnd1 = true;
@@ -7381,9 +7486,9 @@ SDValue DAGCombiner::MatchBSwapHWordLow(SDNode *N, SDValue N0, SDValue N1,
   SDValue Res = DAG.getNode(ISD::BSWAP, SDLoc(N), VT, N00);
   if (OpSizeInBits > 16) {
     SDLoc DL(N);
-    Res = DAG.getNode(ISD::SRL, DL, VT, Res,
-                      DAG.getConstant(OpSizeInBits - 16, DL,
-                                      getShiftAmountTy(VT)));
+    Res = DAG.getNode(
+        ISD::SRL, DL, VT, Res,
+        DAG.getConstant(OpSizeInBits - 16, DL, getShiftAmountTy(VT)));
   }
   return Res;
 }
@@ -7420,8 +7525,12 @@ static bool isBSwapHWordElement(SDValue N, MutableArrayRef<SDNode *> Parts) {
   switch (N1C->getZExtValue()) {
   default:
     return false;
-  case 0xFF:       MaskByteOffset = 0; break;
-  case 0xFF00:     MaskByteOffset = 1; break;
+  case 0xFF:
+    MaskByteOffset = 0;
+    break;
+  case 0xFF00:
+    MaskByteOffset = 1;
+    break;
   case 0xFFFF:
     // In case demanded bits didn't clear the bits that will be shifted out.
     // This is needed for X86.
@@ -7430,8 +7539,12 @@ static bool isBSwapHWordElement(SDValue N, MutableArrayRef<SDNode *> Parts) {
       break;
     }
     return false;
-  case 0xFF0000:   MaskByteOffset = 2; break;
-  case 0xFF000000: MaskByteOffset = 3; break;
+  case 0xFF0000:
+    MaskByteOffset = 2;
+    break;
+  case 0xFF000000:
+    MaskByteOffset = 3;
+    break;
   }
 
   // Look for (x & 0xff) << 8 as well as ((x << 8) & 0xff00).
@@ -7562,7 +7675,6 @@ SDValue DAGCombiner::MatchBSwapHWord(SDNode *N, SDValue N0, SDValue N1) {
                                               getShiftAmountTy(VT)))
     return BSwap;
 
-
   // Look for either
   // (or (bswaphpair), (bswaphpair))
   // (or (or (bswaphpair), (and)), (and))
@@ -7591,8 +7703,7 @@ SDValue DAGCombiner::MatchBSwapHWord(SDNode *N, SDValue N0, SDValue N1) {
     return SDValue();
 
   SDLoc DL(N);
-  SDValue BSwap = DAG.getNode(ISD::BSWAP, DL, VT,
-                              SDValue(Parts[0], 0));
+  SDValue BSwap = DAG.getNode(ISD::BSWAP, DL, VT, SDValue(Parts[0], 0));
 
   // Result of the bswap should be rotated by 16. If it's not legal, then
   // do  (x << 16) | (x >> 16).
@@ -7626,18 +7737,18 @@ SDValue DAGCombiner::visitORLike(SDValue N0, SDValue N1, SDNode *N) {
     // We can only do this xform if we know that bits from X that are set in C2
     // but not in C1 are already zero.  Likewise for Y.
     if (const ConstantSDNode *N0O1C =
-        getAsNonOpaqueConstant(N0.getOperand(1))) {
+            getAsNonOpaqueConstant(N0.getOperand(1))) {
       if (const ConstantSDNode *N1O1C =
-          getAsNonOpaqueConstant(N1.getOperand(1))) {
+              getAsNonOpaqueConstant(N1.getOperand(1))) {
         // We can only do this xform if we know that bits from X that are set in
         // C2 but not in C1 are already zero.  Likewise for Y.
         const APInt &LHSMask = N0O1C->getAPIntValue();
         const APInt &RHSMask = N1O1C->getAPIntValue();
 
-        if (DAG.MaskedValueIsZero(N0.getOperand(0), RHSMask&~LHSMask) &&
-            DAG.MaskedValueIsZero(N1.getOperand(0), LHSMask&~RHSMask)) {
-          SDValue X = DAG.getNode(ISD::OR, SDLoc(N0), VT,
-                                  N0.getOperand(0), N1.getOperand(0));
+        if (DAG.MaskedValueIsZero(N0.getOperand(0), RHSMask & ~LHSMask) &&
+            DAG.MaskedValueIsZero(N1.getOperand(0), LHSMask & ~RHSMask)) {
+          SDValue X = DAG.getNode(ISD::OR, SDLoc(N0), VT, N0.getOperand(0),
+                                  N1.getOperand(0));
           return DAG.getNode(ISD::AND, DL, VT, X,
                              DAG.getConstant(LHSMask | RHSMask, DL, VT));
         }
@@ -7646,13 +7757,12 @@ SDValue DAGCombiner::visitORLike(SDValue N0, SDValue N1, SDNode *N) {
   }
 
   // (or (and X, M), (and X, N)) -> (and X, (or M, N))
-  if (N0.getOpcode() == ISD::AND &&
-      N1.getOpcode() == ISD::AND &&
+  if (N0.getOpcode() == ISD::AND && N1.getOpcode() == ISD::AND &&
       N0.getOperand(0) == N1.getOperand(0) &&
       // Don't increase # computations.
       (N0->hasOneUse() || N1->hasOneUse())) {
-    SDValue X = DAG.getNode(ISD::OR, SDLoc(N0), VT,
-                            N0.getOperand(1), N1.getOperand(1));
+    SDValue X =
+        DAG.getNode(ISD::OR, SDLoc(N0), VT, N0.getOperand(1), N1.getOperand(1));
     return DAG.getNode(ISD::AND, DL, VT, N0.getOperand(0), X);
   }
 
@@ -7821,9 +7931,8 @@ SDValue DAGCombiner::visitOR(SDNode *N) {
           SDValue NewLHS = ZeroN00 ? N0.getOperand(1) : N0.getOperand(0);
           SDValue NewRHS = ZeroN10 ? N1.getOperand(1) : N1.getOperand(0);
 
-          SDValue LegalShuffle =
-              TLI.buildLegalVectorShuffle(VT, SDLoc(N), NewLHS, NewRHS,
-                                          Mask, DAG);
+          SDValue LegalShuffle = TLI.buildLegalVectorShuffle(
+              VT, SDLoc(N), NewLHS, NewRHS, Mask, DAG);
           if (LegalShuffle)
             return LegalShuffle;
         }
@@ -8026,11 +8135,11 @@ static SDValue extractShiftForRotate(SelectionDAG &DAG, SDValue OppShift,
   // Constant mul/udiv/shift amount from the RHS of the ExtractFrom op.
   ConstantSDNode *ExtractFromCst =
       isConstOrConstSplat(ExtractFrom.getOperand(1));
-  // TODO: We should be able to handle non-uniform constant vectors for these values
-  // Check that we have constant values.
-  if (!OppShiftCst || !OppShiftCst->getAPIntValue() ||
-      !OppLHSCst || !OppLHSCst->getAPIntValue() ||
-      !ExtractFromCst || !ExtractFromCst->getAPIntValue())
+  // TODO: We should be able to handle non-uniform constant vectors for these
+  // values. Check that we have constant values.
+  if (!OppShiftCst || !OppShiftCst->getAPIntValue() || !OppLHSCst ||
+      !OppLHSCst->getAPIntValue() || !ExtractFromCst ||
+      !ExtractFromCst->getAPIntValue())
     return SDValue();
 
   // Compute the shift amount we need to extract to complete the rotate.
@@ -8334,12 +8443,12 @@ SDValue DAGCombiner::MatchRotate(SDValue LHS, SDValue RHS, const SDLoc &DL) {
   }
 
   // Match "(X shl/srl V1) & V2" where V2 may not be present.
-  SDValue LHSShift;   // The shift.
-  SDValue LHSMask;    // AND value if any.
+  SDValue LHSShift; // The shift.
+  SDValue LHSMask;  // AND value if any.
   matchRotateHalf(DAG, LHS, LHSShift, LHSMask);
 
-  SDValue RHSShift;   // The shift.
-  SDValue RHSMask;    // AND value if any.
+  SDValue RHSShift; // The shift.
+  SDValue RHSMask;  // AND value if any.
   matchRotateHalf(DAG, RHS, RHSShift, RHSMask);
 
   // If neither side matched a rotate half, bail
@@ -8606,7 +8715,7 @@ calculateByteProvider(SDValue Op, unsigned Index, unsigned Depth,
     return std::nullopt;
   unsigned ByteWidth = BitWidth / 8;
   assert(Index < ByteWidth && "invalid index requested");
-  (void) ByteWidth;
+  (void)ByteWidth;
 
   switch (Op.getOpcode()) {
   case ISD::OR: {
@@ -8718,13 +8827,9 @@ calculateByteProvider(SDValue Op, unsigned Index, unsigned Depth,
   return std::nullopt;
 }
 
-static unsigned littleEndianByteAt(unsigned BW, unsigned i) {
-  return i;
-}
+static unsigned littleEndianByteAt(unsigned BW, unsigned i) { return i; }
 
-static unsigned bigEndianByteAt(unsigned BW, unsigned i) {
-  return BW - i - 1;
-}
+static unsigned bigEndianByteAt(unsigned BW, unsigned i) { return BW - i - 1; }
 
 // Check if the bytes offsets we are looking at match with either big or
 // little endian value loaded. Return true for big endian, false for little
@@ -9095,7 +9200,8 @@ SDValue DAGCombiner::MatchLoadCombine(SDNode *N) {
     Loads.insert(L);
   }
 
-  assert(!Loads.empty() && "All the bytes of the value must be loaded from "
+  assert(!Loads.empty() &&
+         "All the bytes of the value must be loaded from "
          "memory, so there must be at least one load which produces the value");
   assert(Base && "Base address of the accessed memory location must be set");
   assert(FirstOffset != INT64_MAX && "First byte offset must be set");
@@ -9367,8 +9473,8 @@ SDValue DAGCombiner::visitXOR(SDNode *N) {
           // FIXME Can we handle multiple uses? Could we token factor the chain
           // results from the new/old setcc?
           SDValue SetCC =
-              DAG.getSetCC(SDLoc(N0), VT, LHS, RHS, NotCC,
-                           N0.getOperand(0), N0Opcode == ISD::STRICT_FSETCCS);
+              DAG.getSetCC(SDLoc(N0), VT, LHS, RHS, NotCC, N0.getOperand(0),
+                           N0Opcode == ISD::STRICT_FSETCCS);
           CombineTo(N, SetCC);
           DAG.ReplaceAllUsesOfValueWith(N0.getValue(1), SetCC.getValue(1));
           recursivelyDeleteUnusedNodes(N0.getNode());
@@ -9382,7 +9488,7 @@ SDValue DAGCombiner::visitXOR(SDNode *N) {
 
   // fold (not (zext (setcc x, y))) -> (zext (not (setcc x, y)))
   if (isOneConstant(N1) && N0Opcode == ISD::ZERO_EXTEND && N0.hasOneUse() &&
-      isSetCCEquivalent(N0.getOperand(0), LHS, RHS, CC)){
+      isSetCCEquivalent(N0.getOperand(0), LHS, RHS, CC)) {
     SDValue V = N0.getOperand(0);
     SDLoc DL0(N0);
     V = DAG.getNode(ISD::XOR, DL0, V.getValueType(), V,
@@ -9399,7 +9505,8 @@ SDValue DAGCombiner::visitXOR(SDNode *N) {
       unsigned NewOpcode = N0Opcode == ISD::AND ? ISD::OR : ISD::AND;
       N00 = DAG.getNode(ISD::XOR, SDLoc(N00), VT, N00, N1); // N00 = ~N00
       N01 = DAG.getNode(ISD::XOR, SDLoc(N01), VT, N01, N1); // N01 = ~N01
-      AddToWorklist(N00.getNode()); AddToWorklist(N01.getNode());
+      AddToWorklist(N00.getNode());
+      AddToWorklist(N01.getNode());
       return DAG.getNode(NewOpcode, DL, VT, N00, N01);
     }
   }
@@ -9411,7 +9518,8 @@ SDValue DAGCombiner::visitXOR(SDNode *N) {
       unsigned NewOpcode = N0Opcode == ISD::AND ? ISD::OR : ISD::AND;
       N00 = DAG.getNode(ISD::XOR, SDLoc(N00), VT, N00, N1); // N00 = ~N00
       N01 = DAG.getNode(ISD::XOR, SDLoc(N01), VT, N01, N1); // N01 = ~N01
-      AddToWorklist(N00.getNode()); AddToWorklist(N01.getNode());
+      AddToWorklist(N00.getNode());
+      AddToWorklist(N01.getNode());
       return DAG.getNode(NewOpcode, DL, VT, N00, N01);
     }
   }
@@ -9726,8 +9834,8 @@ SDValue DAGCombiner::visitRotate(SDNode *N) {
       bool SameSide = (N->getOpcode() == NextOp);
       unsigned CombineOp = SameSide ? ISD::ADD : ISD::SUB;
       SDValue BitsizeC = DAG.getConstant(Bitsize, dl, ShiftVT);
-      SDValue Norm1 = DAG.FoldConstantArithmetic(ISD::UREM, dl, ShiftVT,
-                                                 {N1, BitsizeC});
+      SDValue Norm1 =
+          DAG.FoldConstantArithmetic(ISD::UREM, dl, ShiftVT, {N1, BitsizeC});
       SDValue Norm2 = DAG.FoldConstantArithmetic(ISD::UREM, dl, ShiftVT,
                                                  {N0.getOperand(1), BitsizeC});
       if (Norm1 && Norm2)
@@ -10281,14 +10389,11 @@ SDValue DAGCombiner::visitSRA(SDNode *N) {
           TLI.isOperationLegalOrCustom(ISD::TRUNCATE, VT) &&
           TLI.isTruncateFree(VT, TruncVT)) {
         SDLoc DL(N);
-        SDValue Amt = DAG.getConstant(ShiftAmt, DL,
-            getShiftAmountTy(N0.getOperand(0).getValueType()));
-        SDValue Shift = DAG.getNode(ISD::SRL, DL, VT,
-                                    N0.getOperand(0), Amt);
-        SDValue Trunc = DAG.getNode(ISD::TRUNCATE, DL, TruncVT,
-                                    Shift);
-        return DAG.getNode(ISD::SIGN_EXTEND, DL,
-                           N->getValueType(0), Trunc);
+        SDValue Amt = DAG.getConstant(
+            ShiftAmt, DL, getShiftAmountTy(N0.getOperand(0).getValueType()));
+        SDValue Shift = DAG.getNode(ISD::SRL, DL, VT, N0.getOperand(0), Amt);
+        SDValue Trunc = DAG.getNode(ISD::TRUNCATE, DL, TruncVT, Shift);
+        return DAG.getNode(ISD::SIGN_EXTEND, DL, N->getValueType(0), Trunc);
       }
     }
   }
@@ -10478,9 +10583,9 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
         SDValue NewShiftAmt = DAG.getConstant(c1 + c2, DL, ShiftAmtVT);
         SDValue NewShift = DAG.getNode(ISD::SRL, DL, InnerShiftVT,
                                        InnerShift.getOperand(0), NewShiftAmt);
-        SDValue Mask = DAG.getConstant(APInt::getLowBitsSet(InnerShiftSize,
-                                                            OpSizeInBits - c2),
-                                       DL, InnerShiftVT);
+        SDValue Mask = DAG.getConstant(
+            APInt::getLowBitsSet(InnerShiftSize, OpSizeInBits - c2), DL,
+            InnerShiftVT);
         SDValue And = DAG.getNode(ISD::AND, DL, InnerShiftVT, NewShift, Mask);
         return DAG.getNode(ISD::TRUNCATE, DL, VT, And);
       }
@@ -10536,10 +10641,9 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
     if (!LegalTypes || TLI.isTypeDesirableForOp(ISD::SRL, SmallVT)) {
       uint64_t ShiftAmt = N1C->getZExtValue();
       SDLoc DL0(N0);
-      SDValue SmallShift = DAG.getNode(ISD::SRL, DL0, SmallVT,
-                                       N0.getOperand(0),
-                          DAG.getConstant(ShiftAmt, DL0,
-                                          getShiftAmountTy(SmallVT)));
+      SDValue SmallShift = DAG.getNode(
+          ISD::SRL, DL0, SmallVT, N0.getOperand(0),
+          DAG.getConstant(ShiftAmt, DL0, getShiftAmountTy(SmallVT)));
       AddToWorklist(SmallShift.getNode());
       APInt Mask = APInt::getLowBitsSet(OpSizeInBits, OpSizeInBits - ShiftAmt);
       SDLoc DL(N);
@@ -10556,21 +10660,22 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
       return DAG.getNode(ISD::SRL, SDLoc(N), VT, N0.getOperand(0), N1);
   }
 
-  // fold (srl (ctlz x), "5") -> x  iff x has one bit set (the low bit), and x has a power
-  // of two bitwidth. The "5" represents (log2 (bitwidth x)).
-  if (N1C && N0.getOpcode() == ISD::CTLZ &&
-      isPowerOf2_32(OpSizeInBits) &&
+  // fold (srl (ctlz x), "5") -> x  iff x has one bit set (the low bit), and x
+  // has a power of two bitwidth. The "5" represents (log2 (bitwidth x)).
+  if (N1C && N0.getOpcode() == ISD::CTLZ && isPowerOf2_32(OpSizeInBits) &&
       N1C->getAPIntValue() == Log2_32(OpSizeInBits)) {
     KnownBits Known = DAG.computeKnownBits(N0.getOperand(0));
 
     // If any of the input bits are KnownOne, then the input couldn't be all
     // zeros, thus the result of the srl will always be zero.
-    if (Known.One.getBoolValue()) return DAG.getConstant(0, SDLoc(N0), VT);
+    if (Known.One.getBoolValue())
+      return DAG.getConstant(0, SDLoc(N0), VT);
 
     // If all of the bits input the to ctlz node are known to be zero, then
     // the result of the ctlz is "32" and the result of the shift is one.
     APInt UnknownBits = ~Known.Zero;
-    if (UnknownBits == 0) return DAG.getConstant(1, SDLoc(N0), VT);
+    if (UnknownBits == 0)
+      return DAG.getConstant(1, SDLoc(N0), VT);
 
     // Otherwise, check to see if there is exactly one bit input to the ctlz.
     if (UnknownBits.isPowerOf2()) {
@@ -10583,15 +10688,14 @@ SDValue DAGCombiner::visitSRL(SDNode *N) {
 
       if (ShAmt) {
         SDLoc DL(N0);
-        Op = DAG.getNode(ISD::SRL, DL, VT, Op,
-                  DAG.getConstant(ShAmt, DL,
-                                  getShiftAmountTy(Op.getValueType())));
+        Op = DAG.getNode(
+            ISD::SRL, DL, VT, Op,
+            DAG.getConstant(ShAmt, DL, getShiftAmountTy(Op.getValueType())));
         AddToWorklist(Op.getNode());
       }
 
       SDLoc DL(N);
-      return DAG.getNode(ISD::XOR, DL, VT,
-                         Op, DAG.getConstant(1, DL, VT));
+      return DAG.getNode(ISD::XOR, DL, VT, Op, DAG.getConstant(1, DL, VT));
     }
   }
 
@@ -10760,8 +10864,9 @@ SDValue DAGCombiner::visitFunnelShift(SDNode *N) {
 
   // fold (fshl N0, N0, N2) -> (rotl N0, N2)
   // fold (fshr N0, N0, N2) -> (rotr N0, N2)
-  // TODO: Investigate flipping this rotate if only one is legal, if funnel shift
-  // is legal as well we might be better off avoiding non-constant (BW - N2).
+  // TODO: Investigate flipping this rotate if only one is legal, if funnel
+  // shift is legal as well we might be better off avoiding non-constant (BW -
+  // N2).
   unsigned RotOpc = IsFSHL ? ISD::ROTL : ISD::ROTR;
   if (N0 == N1 && hasOperation(RotOpc, VT))
     return DAG.getNode(RotOpc, SDLoc(N), VT, N0, N2);
@@ -11214,22 +11319,22 @@ SDValue DAGCombiner::foldSelectOfConstants(SDNode *N) {
 
   if (CondVT != MVT::i1 || LegalOperations) {
     // fold (select Cond, 0, 1) -> (xor Cond, 1)
-    // We can't do this reliably if integer based booleans have different contents
-    // to floating point based booleans. This is because we can't tell whether we
-    // have an integer-based boolean or a floating-point-based boolean unless we
-    // can find the SETCC that produced it and inspect its operands. This is
-    // fairly easy if C is the SETCC node, but it can potentially be
-    // undiscoverable (or not reasonably discoverable). For example, it could be
-    // in another basic block or it could require searching a complicated
-    // expression.
+    // We can't do this reliably if integer based booleans have different
+    // contents to floating point based booleans. This is because we can't tell
+    // whether we have an integer-based boolean or a floating-point-based
+    // boolean unless we can find the SETCC that produced it and inspect its
+    // operands. This is fairly easy if C is the SETCC node, but it can
+    // potentially be undiscoverable (or not reasonably discoverable). For
+    // example, it could be in another basic block or it could require searching
+    // a complicated expression.
     if (CondVT.isInteger() &&
-        TLI.getBooleanContents(/*isVec*/false, /*isFloat*/true) ==
+        TLI.getBooleanContents(/*isVec*/ false, /*isFloat*/ true) ==
             TargetLowering::ZeroOrOneBooleanContent &&
-        TLI.getBooleanContents(/*isVec*/false, /*isFloat*/false) ==
+        TLI.getBooleanContents(/*isVec*/ false, /*isFloat*/ false) ==
             TargetLowering::ZeroOrOneBooleanContent &&
         C1->isZero() && C2->isOne()) {
-      SDValue NotCond =
-          DAG.getNode(ISD::XOR, DL, CondVT, Cond, DAG.getConstant(1, DL, CondVT));
+      SDValue NotCond = DAG.getNode(ISD::XOR, DL, CondVT, Cond,
+                                    DAG.getConstant(1, DL, CondVT));
       if (VT.bitsEq(CondVT))
         return NotCond;
       return DAG.getZExtOrTrunc(NotCond, DL, VT);
@@ -11291,8 +11396,7 @@ SDValue DAGCombiner::foldSelectOfConstants(SDNode *N) {
   // select Cond, Pow2, 0 --> (zext Cond) << log2(Pow2)
   if (C1Val.isPowerOf2() && C2Val.isZero()) {
     Cond = DAG.getZExtOrTrunc(Cond, DL, VT);
-    SDValue ShAmtC =
-        DAG.getShiftAmountConstant(C1Val.exactLogBase2(), VT, DL);
+    SDValue ShAmtC = DAG.getShiftAmountConstant(C1Val.exactLogBase2(), VT, DL);
     return DAG.getNode(ISD::SHL, DL, VT, Cond, ShAmtC);
   }
 
@@ -11465,8 +11569,8 @@ SDValue DAGCombiner::visitSELECT(SDNode *N) {
     if (N0->getOpcode() == ISD::OR && N0->hasOneUse()) {
       SDValue Cond0 = N0->getOperand(0);
       SDValue Cond1 = N0->getOperand(1);
-      SDValue InnerSelect = DAG.getNode(ISD::SELECT, DL, N1.getValueType(),
-                                        Cond1, N1, N2, Flags);
+      SDValue InnerSelect =
+          DAG.getNode(ISD::SELECT, DL, N1.getValueType(), Cond1, N1, N2, Flags);
       if (normalizeToSequence || !InnerSelect.use_empty())
         return DAG.getNode(ISD::SELECT, DL, N1.getValueType(), Cond0, N1,
                            InnerSelect, Flags);
@@ -11484,8 +11588,8 @@ SDValue DAGCombiner::visitSELECT(SDNode *N) {
         // Create the actual and node if we can generate good code for it.
         if (!normalizeToSequence) {
           SDValue And = DAG.getNode(ISD::AND, DL, N0.getValueType(), N0, N1_0);
-          return DAG.getNode(ISD::SELECT, DL, N1.getValueType(), And, N1_1,
-                             N2, Flags);
+          return DAG.getNode(ISD::SELECT, DL, N1.getValueType(), And, N1_1, N2,
+                             Flags);
         }
         // Otherwise see if we can optimize the "and" to a better pattern.
         if (SDValue Combined = visitANDLike(N0, N1_0, N)) {
@@ -11503,8 +11607,8 @@ SDValue DAGCombiner::visitSELECT(SDNode *N) {
         // Create the actual or node if we can generate good code for it.
         if (!normalizeToSequence) {
           SDValue Or = DAG.getNode(ISD::OR, DL, N0.getValueType(), N0, N2_0);
-          return DAG.getNode(ISD::SELECT, DL, N1.getValueType(), Or, N1,
-                             N2_2, Flags);
+          return DAG.getNode(ISD::SELECT, DL, N1.getValueType(), Or, N1, N2_2,
+                             Flags);
         }
         // Otherwise see if we can optimize to a better pattern.
         if (SDValue Combined = visitORLike(N0, N2_0, N))
@@ -11719,14 +11823,14 @@ SDValue DAGCombiner::visitVPSCATTER(SDNode *N) {
 
   if (refineUniformBase(BasePtr, Index, MSC->isIndexScaled(), DAG, DL)) {
     SDValue Ops[] = {Chain, StoreVal, BasePtr, Index, Scale, Mask, VL};
-    return DAG.getScatterVP(DAG.getVTList(MVT::Other), MSC->getMemoryVT(),
-                            DL, Ops, MSC->getMemOperand(), IndexType);
+    return DAG.getScatterVP(DAG.getVTList(MVT::Other), MSC->getMemoryVT(), DL,
+                            Ops, MSC->getMemOperand(), IndexType);
   }
 
   if (refineIndexType(Index, IndexType, StoreVal.getValueType(), DAG)) {
     SDValue Ops[] = {Chain, StoreVal, BasePtr, Index, Scale, Mask, VL};
-    return DAG.getScatterVP(DAG.getVTList(MVT::Other), MSC->getMemoryVT(),
-                            DL, Ops, MSC->getMemOperand(), IndexType);
+    return DAG.getScatterVP(DAG.getVTList(MVT::Other), MSC->getMemoryVT(), DL,
+                            Ops, MSC->getMemOperand(), IndexType);
   }
 
   return SDValue();
@@ -11859,16 +11963,16 @@ SDValue DAGCombiner::visitVPGATHER(SDNode *N) {
 
   if (refineUniformBase(BasePtr, Index, MGT->isIndexScaled(), DAG, DL)) {
     SDValue Ops[] = {Chain, BasePtr, Index, Scale, Mask, VL};
-    return DAG.getGatherVP(
-        DAG.getVTList(N->getValueType(0), MVT::Other), MGT->getMemoryVT(), DL,
-        Ops, MGT->getMemOperand(), IndexType);
+    return DAG.getGatherVP(DAG.getVTList(N->getValueType(0), MVT::Other),
+                           MGT->getMemoryVT(), DL, Ops, MGT->getMemOperand(),
+                           IndexType);
   }
 
   if (refineIndexType(Index, IndexType, N->getValueType(0), DAG)) {
     SDValue Ops[] = {Chain, BasePtr, Index, Scale, Mask, VL};
-    return DAG.getGatherVP(
-        DAG.getVTList(N->getValueType(0), MVT::Other), MGT->getMemoryVT(), DL,
-        Ops, MGT->getMemOperand(), IndexType);
+    return DAG.getGatherVP(DAG.getVTList(N->getValueType(0), MVT::Other),
+                           MGT->getMemoryVT(), DL, Ops, MGT->getMemOperand(),
+                           IndexType);
   }
 
   return SDValue();
@@ -12076,8 +12180,8 @@ SDValue DAGCombiner::visitVSELECT(SDNode *N) {
       unsigned WideWidth = WideVT.getScalarSizeInBits();
       bool IsSigned = isSignedIntSetCC(CC);
       auto LoadExtOpcode = IsSigned ? ISD::SEXTLOAD : ISD::ZEXTLOAD;
-      if (LHS.getOpcode() == ISD::LOAD && LHS.hasOneUse() &&
-          SetCCWidth != 1 && SetCCWidth < WideWidth &&
+      if (LHS.getOpcode() == ISD::LOAD && LHS.hasOneUse() && SetCCWidth != 1 &&
+          SetCCWidth < WideWidth &&
           TLI.isLoadExtLegalOrCustom(LoadExtOpcode, WideVT, NarrowVT) &&
           TLI.isOperationLegalOrCustom(ISD::SETCC, WideVT)) {
         // Both compare operands can be widened for free. The LHS can use an
@@ -12116,7 +12220,7 @@ SDValue DAGCombiner::visitVSELECT(SDNode *N) {
         case ISD::SETLE:
         case ISD::SETULT:
         case ISD::SETULE:
-          if (RHS == N1.getOperand(0) && LHS == N1.getOperand(1) )
+          if (RHS == N1.getOperand(0) && LHS == N1.getOperand(1))
             return DAG.getNode(ABDOpc, DL, VT, LHS, RHS);
           break;
         default:
@@ -12254,7 +12358,7 @@ SDValue DAGCombiner::visitVSELECT(SDNode *N) {
   }
 
   if (SimplifySelectOps(N, N1, N2))
-    return SDValue(N, 0);  // Don't revisit N.
+    return SDValue(N, 0); // Don't revisit N.
 
   // Fold (vselect all_ones, N1, N2) -> N1
   if (ISD::isConstantSplatVectorAllOnes(N0.getNode()))
@@ -12330,7 +12434,7 @@ SDValue DAGCombiner::visitSELECT_CC(SDNode *N) {
 
   // If we can fold this based on the true/false value, do so.
   if (SimplifySelectOps(N, N2, N3))
-    return SDValue(N, 0);  // Don't revisit N.
+    return SDValue(N, 0); // Don't revisit N.
 
   // fold select_cc into other things, such as min/max/abs
   return SimplifySelectCC(SDLoc(N), N0, N1, N2, N3, CC);
@@ -12486,7 +12590,8 @@ static SDValue tryToFoldExtendOfConstant(SDNode *N, const TargetLowering &TLI,
     SDValue Op1 = N0->getOperand(1);
     SDValue Op2 = N0->getOperand(2);
     if (isa<ConstantSDNode>(Op1) && isa<ConstantSDNode>(Op2) &&
-        (Opcode != ISD::ZERO_EXTEND || !TLI.isZExtFree(N0.getValueType(), VT))) {
+        (Opcode != ISD::ZERO_EXTEND ||
+         !TLI.isZExtFree(N0.getValueType(), VT))) {
       // For any_extend, choose sign extension of the constants to allow a
       // possible further transform to sign_extend_inreg.i.e.
       //
@@ -12510,7 +12615,7 @@ static SDValue tryToFoldExtendOfConstant(SDNode *N, const TargetLowering &TLI,
   // fold (aext (build_vector AllConstants) -> (build_vector AllConstants)
   EVT SVT = VT.getScalarType();
   if (!(VT.isVector() && (!LegalTypes || TLI.isTypeLegal(SVT)) &&
-      ISD::isBuildVectorOfConstantSDNodes(N0.getNode())))
+        ISD::isBuildVectorOfConstantSDNodes(N0.getNode())))
     return SDValue();
 
   // We can fold this node into a build_vector.
@@ -12589,8 +12694,8 @@ static bool ExtendUsesToFormExtLoad(EVT VT, SDNode *N, SDValue N0,
 
   if (HasCopyToRegUses) {
     bool BothLiveOut = false;
-    for (SDNode::use_iterator UI = N->use_begin(), UE = N->use_end();
-         UI != UE; ++UI) {
+    for (SDNode::use_iterator UI = N->use_begin(), UE = N->use_end(); UI != UE;
+         ++UI) {
       SDUse &Use = UI.getUse();
       if (Use.getResNo() == 0 && Use.getUser()->getOpcode() == ISD::CopyToReg) {
         BothLiveOut = true;
@@ -12659,9 +12764,8 @@ SDValue DAGCombiner::CombineExtLoad(SDNode *N) {
   LoadSDNode *LN0 = cast<LoadSDNode>(N0);
 
   if (!ISD::isNON_EXTLoad(LN0) || !ISD::isUNINDEXEDLoad(LN0) ||
-      !N0.hasOneUse() || !LN0->isSimple() ||
-      !DstVT.isVector() || !DstVT.isPow2VectorType() ||
-      !TLI.isVectorLoadExtDesirable(SDValue(N, 0)))
+      !N0.hasOneUse() || !LN0->isSimple() || !DstVT.isVector() ||
+      !DstVT.isPow2VectorType() || !TLI.isVectorLoadExtDesirable(SDValue(N, 0)))
     return SDValue();
 
   SmallVector<SDNode *, 4> SetCCs;
@@ -12757,7 +12861,6 @@ SDValue DAGCombiner::CombineZExtLogicopShiftLoad(SDNode *N) {
       Load->getExtensionType() == ISD::SEXTLOAD || Load->isIndexed())
     return SDValue();
 
-
   // If the shift op is SHL, the logic op must be AND, otherwise the result
   // will be wrong.
   if (N1.getOpcode() == ISD::SHL && N0.getOpcode() != ISD::AND)
@@ -12766,7 +12869,7 @@ SDValue DAGCombiner::CombineZExtLogicopShiftLoad(SDNode *N) {
   if (!N0.hasOneUse() || !N1.hasOneUse())
     return SDValue();
 
-  SmallVector<SDNode*, 4> SetCCs;
+  SmallVector<SDNode *, 4> SetCCs;
   if (!ExtendUsesToFormExtLoad(VT, N1.getNode(), N1.getOperand(0),
                                ISD::ZERO_EXTEND, SetCCs, TLI))
     return SDValue();
@@ -12777,8 +12880,8 @@ SDValue DAGCombiner::CombineZExtLogicopShiftLoad(SDNode *N) {
                                    Load->getMemoryVT(), Load->getMemOperand());
 
   SDLoc DL1(N1);
-  SDValue Shift = DAG.getNode(N1.getOpcode(), DL1, VT, ExtLoad,
-                              N1.getOperand(1));
+  SDValue Shift =
+      DAG.getNode(N1.getOpcode(), DL1, VT, ExtLoad, N1.getOperand(1));
 
   APInt Mask = N0.getConstantOperandAPInt(1).zext(VT.getSizeInBits());
   SDLoc DL0(N0);
@@ -12790,15 +12893,15 @@ SDValue DAGCombiner::CombineZExtLogicopShiftLoad(SDNode *N) {
   if (SDValue(Load, 0).hasOneUse()) {
     DAG.ReplaceAllUsesOfValueWith(SDValue(Load, 1), ExtLoad.getValue(1));
   } else {
-    SDValue Trunc = DAG.getNode(ISD::TRUNCATE, SDLoc(Load),
-                                Load->getValueType(0), ExtLoad);
+    SDValue Trunc =
+        DAG.getNode(ISD::TRUNCATE, SDLoc(Load), Load->getValueType(0), ExtLoad);
     CombineTo(Load, Trunc, ExtLoad.getValue(1));
   }
 
   // N0 is dead at this point.
   recursivelyDeleteUnusedNodes(N0.getNode());
 
-  return SDValue(N,0); // Return N so it doesn't get rechecked!
+  return SDValue(N, 0); // Return N so it doesn't get rechecked!
 }
 
 /// If we're narrowing or widening the result of a vector select and the final
@@ -12861,8 +12964,7 @@ static SDValue tryToFoldExtOfExtload(SelectionDAG &DAG, DAGCombiner &Combiner,
 
   LoadSDNode *LN0 = cast<LoadSDNode>(N0);
   EVT MemVT = LN0->getMemoryVT();
-  if ((LegalOperations || !LN0->isSimple() ||
-       VT.isVector()) &&
+  if ((LegalOperations || !LN0->isSimple() || VT.isVector()) &&
       !TLI.isLoadExtLegal(ExtLoadType, VT, MemVT))
     return SDValue();
 
@@ -12953,7 +13055,8 @@ tryToFoldExtOfMaskedLoad(SelectionDAG &DAG, const TargetLowering &TLI, EVT VT,
 static SDValue foldExtendedSignBitTest(SDNode *N, SelectionDAG &DAG,
                                        bool LegalOperations) {
   assert((N->getOpcode() == ISD::SIGN_EXTEND ||
-          N->getOpcode() == ISD::ZERO_EXTEND) && "Expected sext or zext");
+          N->getOpcode() == ISD::ZERO_EXTEND) &&
+         "Expected sext or zext");
 
   SDValue SetCC = N->getOperand(0);
   if (LegalOperations || SetCC.getOpcode() != ISD::SETCC ||
@@ -12979,7 +13082,7 @@ static SDValue foldExtendedSignBitTest(SDNode *N, SelectionDAG &DAG,
       SDValue NotX = DAG.getNOT(DL, X, VT);
       SDValue ShiftAmount = DAG.getConstant(ShCt, DL, VT);
       auto ShiftOpcode =
-        N->getOpcode() == ISD::SIGN_EXTEND ? ISD::SRA : ISD::SRL;
+          N->getOpcode() == ISD::SIGN_EXTEND ? ISD::SRA : ISD::SRL;
       return DAG.getNode(ShiftOpcode, DL, VT, NotX, ShiftAmount);
     }
   }
@@ -13160,37 +13263,37 @@ SDValue DAGCombiner::visitSIGN_EXTEND(SDNode *N) {
         // CombineTo deleted the truncate, if needed, but not what's under it.
         AddToWorklist(oye);
       }
-      return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+      return SDValue(N, 0); // Return N so it doesn't get rechecked!
     }
 
     // See if the value being truncated is already sign extended.  If so, just
     // eliminate the trunc/sext pair.
     SDValue Op = N0.getOperand(0);
-    unsigned OpBits   = Op.getScalarValueSizeInBits();
-    unsigned MidBits  = N0.getScalarValueSizeInBits();
+    unsigned OpBits = Op.getScalarValueSizeInBits();
+    unsigned MidBits = N0.getScalarValueSizeInBits();
     unsigned DestBits = VT.getScalarSizeInBits();
     unsigned NumSignBits = DAG.ComputeNumSignBits(Op);
 
     if (OpBits == DestBits) {
       // Op is i32, Mid is i8, and Dest is i32.  If Op has more than 24 sign
       // bits, it is already ready.
-      if (NumSignBits > DestBits-MidBits)
+      if (NumSignBits > DestBits - MidBits)
         return Op;
     } else if (OpBits < DestBits) {
       // Op is i32, Mid is i8, and Dest is i64.  If Op has more than 24 sign
       // bits, just sext from i32.
-      if (NumSignBits > OpBits-MidBits)
+      if (NumSignBits > OpBits - MidBits)
         return DAG.getNode(ISD::SIGN_EXTEND, DL, VT, Op);
     } else {
       // Op is i64, Mid is i8, and Dest is i32.  If Op has more than 56 sign
       // bits, just truncate to i32.
-      if (NumSignBits > OpBits-MidBits)
+      if (NumSignBits > OpBits - MidBits)
         return DAG.getNode(ISD::TRUNCATE, DL, VT, Op);
     }
 
     // fold (sext (truncate x)) -> (sextinreg x).
-    if (!LegalOperations || TLI.isOperationLegal(ISD::SIGN_EXTEND_INREG,
-                                                 N0.getValueType())) {
+    if (!LegalOperations ||
+        TLI.isOperationLegal(ISD::SIGN_EXTEND_INREG, N0.getValueType())) {
       if (OpBits < DestBits)
         Op = DAG.getNode(ISD::ANY_EXTEND, SDLoc(N0), VT, Op);
       else if (OpBits > DestBits)
@@ -13230,18 +13333,17 @@ SDValue DAGCombiner::visitSIGN_EXTEND(SDNode *N) {
     LoadSDNode *LN00 = cast<LoadSDNode>(N0.getOperand(0));
     EVT MemVT = LN00->getMemoryVT();
     if (TLI.isLoadExtLegal(ISD::SEXTLOAD, VT, MemVT) &&
-      LN00->getExtensionType() != ISD::ZEXTLOAD && LN00->isUnindexed()) {
-      SmallVector<SDNode*, 4> SetCCs;
+        LN00->getExtensionType() != ISD::ZEXTLOAD && LN00->isUnindexed()) {
+      SmallVector<SDNode *, 4> SetCCs;
       bool DoXform = ExtendUsesToFormExtLoad(VT, N0.getNode(), N0.getOperand(0),
                                              ISD::SIGN_EXTEND, SetCCs, TLI);
       if (DoXform) {
-        SDValue ExtLoad = DAG.getExtLoad(ISD::SEXTLOAD, SDLoc(LN00), VT,
-                                         LN00->getChain(), LN00->getBasePtr(),
-                                         LN00->getMemoryVT(),
-                                         LN00->getMemOperand());
+        SDValue ExtLoad = DAG.getExtLoad(
+            ISD::SEXTLOAD, SDLoc(LN00), VT, LN00->getChain(),
+            LN00->getBasePtr(), LN00->getMemoryVT(), LN00->getMemOperand());
         APInt Mask = N0.getConstantOperandAPInt(1).sext(VT.getSizeInBits());
-        SDValue And = DAG.getNode(N0.getOpcode(), DL, VT,
-                                  ExtLoad, DAG.getConstant(Mask, DL, VT));
+        SDValue And = DAG.getNode(N0.getOpcode(), DL, VT, ExtLoad,
+                                  DAG.getConstant(Mask, DL, VT));
         ExtendSetCCUses(SetCCs, N0.getOperand(0), ExtLoad, ISD::SIGN_EXTEND);
         bool NoReplaceTruncAnd = !N0.hasOneUse();
         bool NoReplaceTrunc = SDValue(LN00, 0).hasOneUse();
@@ -13259,7 +13361,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND(SDNode *N) {
                                       LN00->getValueType(0), ExtLoad);
           CombineTo(LN00, Trunc, ExtLoad.getValue(1));
         }
-        return SDValue(N,0); // Return N so it doesn't get rechecked!
+        return SDValue(N, 0); // Return N so it doesn't get rechecked!
       }
     }
   }
@@ -13331,7 +13433,8 @@ SDValue DAGCombiner::visitSIGN_EXTEND(SDNode *N) {
 /// destination type, widen the pop-count to the destination type.
 static SDValue widenCtPop(SDNode *Extend, SelectionDAG &DAG) {
   assert((Extend->getOpcode() == ISD::ZERO_EXTEND ||
-          Extend->getOpcode() == ISD::ANY_EXTEND) && "Expected extend op");
+          Extend->getOpcode() == ISD::ANY_EXTEND) &&
+         "Expected extend op");
 
   SDValue CtPop = Extend->getOperand(0);
   if (CtPop.getOpcode() != ISD::CTPOP || !CtPop.hasOneUse())
@@ -13411,12 +13514,12 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
   KnownBits Known;
   if (isTruncateOf(DAG, N0, Op, Known)) {
     APInt TruncatedBits =
-      (Op.getScalarValueSizeInBits() == N0.getScalarValueSizeInBits()) ?
-      APInt(Op.getScalarValueSizeInBits(), 0) :
-      APInt::getBitsSet(Op.getScalarValueSizeInBits(),
-                        N0.getScalarValueSizeInBits(),
-                        std::min(Op.getScalarValueSizeInBits(),
-                                 VT.getScalarSizeInBits()));
+        (Op.getScalarValueSizeInBits() == N0.getScalarValueSizeInBits())
+            ? APInt(Op.getScalarValueSizeInBits(), 0)
+            : APInt::getBitsSet(Op.getScalarValueSizeInBits(),
+                                N0.getScalarValueSizeInBits(),
+                                std::min(Op.getScalarValueSizeInBits(),
+                                         VT.getScalarSizeInBits()));
     if (TruncatedBits.isSubsetOf(Known.Zero))
       return DAG.getZExtOrTrunc(Op, DL, VT);
   }
@@ -13438,8 +13541,8 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
     EVT SrcVT = N0.getOperand(0).getValueType();
     EVT MinVT = N0.getValueType();
 
-    // Try to mask before the extension to avoid having to generate a larger mask,
-    // possibly over several sub-vectors.
+    // Try to mask before the extension to avoid having to generate a larger
+    // mask, possibly over several sub-vectors.
     if (SrcVT.bitsLT(VT) && VT.isVector()) {
       if (!LegalOperations || (TLI.isOperationLegal(ISD::AND, SrcVT) &&
                                TLI.isOperationLegal(ISD::ZERO_EXTEND, VT))) {
@@ -13475,8 +13578,7 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
     SDValue X = N0.getOperand(0).getOperand(0);
     X = DAG.getAnyExtOrTrunc(X, SDLoc(X), VT);
     APInt Mask = N0.getConstantOperandAPInt(1).zext(VT.getSizeInBits());
-    return DAG.getNode(ISD::AND, DL, VT,
-                       X, DAG.getConstant(Mask, DL, VT));
+    return DAG.getNode(ISD::AND, DL, VT, X, DAG.getConstant(Mask, DL, VT));
   }
 
   // Try to simplify (zext (load x)).
@@ -13508,7 +13610,7 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
     if (TLI.isLoadExtLegal(ISD::ZEXTLOAD, VT, MemVT) &&
         LN00->getExtensionType() != ISD::SEXTLOAD && LN00->isUnindexed()) {
       bool DoXform = true;
-      SmallVector<SDNode*, 4> SetCCs;
+      SmallVector<SDNode *, 4> SetCCs;
       if (!N0.hasOneUse()) {
         if (N0.getOpcode() == ISD::AND) {
           auto *AndC = cast<ConstantSDNode>(N0.getOperand(1));
@@ -13522,13 +13624,12 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
         DoXform = ExtendUsesToFormExtLoad(VT, N0.getNode(), N0.getOperand(0),
                                           ISD::ZERO_EXTEND, SetCCs, TLI);
       if (DoXform) {
-        SDValue ExtLoad = DAG.getExtLoad(ISD::ZEXTLOAD, SDLoc(LN00), VT,
-                                         LN00->getChain(), LN00->getBasePtr(),
-                                         LN00->getMemoryVT(),
-                                         LN00->getMemOperand());
+        SDValue ExtLoad = DAG.getExtLoad(
+            ISD::ZEXTLOAD, SDLoc(LN00), VT, LN00->getChain(),
+            LN00->getBasePtr(), LN00->getMemoryVT(), LN00->getMemOperand());
         APInt Mask = N0.getConstantOperandAPInt(1).zext(VT.getSizeInBits());
-        SDValue And = DAG.getNode(N0.getOpcode(), DL, VT,
-                                  ExtLoad, DAG.getConstant(Mask, DL, VT));
+        SDValue And = DAG.getNode(N0.getOpcode(), DL, VT, ExtLoad,
+                                  DAG.getConstant(Mask, DL, VT));
         ExtendSetCCUses(SetCCs, N0.getOperand(0), ExtLoad, ISD::ZERO_EXTEND);
         bool NoReplaceTruncAnd = !N0.hasOneUse();
         bool NoReplaceTrunc = SDValue(LN00, 0).hasOneUse();
@@ -13546,7 +13647,7 @@ SDValue DAGCombiner::visitZERO_EXTEND(SDNode *N) {
                                       LN00->getValueType(0), ExtLoad);
           CombineTo(LN00, Trunc, ExtLoad.getValue(1));
         }
-        return SDValue(N,0); // Return N so it doesn't get rechecked!
+        return SDValue(N, 0); // Return N so it doesn't get rechecked!
       }
     }
   }
@@ -13665,8 +13766,7 @@ SDValue DAGCombiner::visitANY_EXTEND(SDNode *N) {
   // fold (aext (aext x)) -> (aext x)
   // fold (aext (zext x)) -> (zext x)
   // fold (aext (sext x)) -> (sext x)
-  if (N0.getOpcode() == ISD::ANY_EXTEND  ||
-      N0.getOpcode() == ISD::ZERO_EXTEND ||
+  if (N0.getOpcode() == ISD::ANY_EXTEND || N0.getOpcode() == ISD::ZERO_EXTEND ||
       N0.getOpcode() == ISD::SIGN_EXTEND)
     return DAG.getNode(N0.getOpcode(), SDLoc(N), VT, N0.getOperand(0));
 
@@ -13688,7 +13788,7 @@ SDValue DAGCombiner::visitANY_EXTEND(SDNode *N) {
         // CombineTo deleted the truncate, if needed, but not what's under it.
         AddToWorklist(oye);
       }
-      return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+      return SDValue(N, 0); // Return N so it doesn't get rechecked!
     }
   }
 
@@ -13757,13 +13857,13 @@ SDValue DAGCombiner::visitANY_EXTEND(SDNode *N) {
     ISD::LoadExtType ExtType = LN0->getExtensionType();
     EVT MemVT = LN0->getMemoryVT();
     if (!LegalOperations || TLI.isLoadExtLegal(ExtType, VT, MemVT)) {
-      SDValue ExtLoad = DAG.getExtLoad(ExtType, SDLoc(N),
-                                       VT, LN0->getChain(), LN0->getBasePtr(),
-                                       MemVT, LN0->getMemOperand());
+      SDValue ExtLoad =
+          DAG.getExtLoad(ExtType, SDLoc(N), VT, LN0->getChain(),
+                         LN0->getBasePtr(), MemVT, LN0->getMemOperand());
       CombineTo(N, ExtLoad);
       DAG.ReplaceAllUsesOfValueWith(SDValue(LN0, 1), ExtLoad.getValue(1));
       recursivelyDeleteUnusedNodes(LN0);
-      return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+      return SDValue(N, 0); // Return N so it doesn't get rechecked!
     }
   }
 
@@ -13787,18 +13887,16 @@ SDValue DAGCombiner::visitANY_EXTEND(SDNode *N) {
       // we know that the element size of the sext'd result matches the
       // element size of the compare operands.
       if (VT.getSizeInBits() == N00VT.getSizeInBits())
-        return DAG.getSetCC(SDLoc(N), VT, N0.getOperand(0),
-                             N0.getOperand(1),
-                             cast<CondCodeSDNode>(N0.getOperand(2))->get());
+        return DAG.getSetCC(SDLoc(N), VT, N0.getOperand(0), N0.getOperand(1),
+                            cast<CondCodeSDNode>(N0.getOperand(2))->get());
 
       // If the desired elements are smaller or larger than the source
       // elements we can use a matching integer vector type and then
       // truncate/any extend
       EVT MatchingVectorType = N00VT.changeVectorElementTypeToInteger();
-      SDValue VsetCC =
-        DAG.getSetCC(SDLoc(N), MatchingVectorType, N0.getOperand(0),
-                      N0.getOperand(1),
-                      cast<CondCodeSDNode>(N0.getOperand(2))->get());
+      SDValue VsetCC = DAG.getSetCC(
+          SDLoc(N), MatchingVectorType, N0.getOperand(0), N0.getOperand(1),
+          cast<CondCodeSDNode>(N0.getOperand(2))->get());
       return DAG.getAnyExtOrTrunc(VsetCC, SDLoc(N), VT);
     }
 
@@ -13858,8 +13956,8 @@ SDValue DAGCombiner::visitAssertExt(SDNode *N) {
     EVT BigA_AssertVT = cast<VTSDNode>(BigA.getOperand(1))->getVT();
     if (AssertVT.bitsLT(BigA_AssertVT)) {
       SDLoc DL(N);
-      SDValue NewAssert = DAG.getNode(Opcode, DL, BigA.getValueType(),
-                                      BigA.getOperand(0), N1);
+      SDValue NewAssert =
+          DAG.getNode(Opcode, DL, BigA.getValueType(), BigA.getOperand(0), N1);
       return DAG.getNode(ISD::TRUNCATE, DL, N->getValueType(0), NewAssert);
     }
   }
@@ -14038,7 +14136,7 @@ SDValue DAGCombiner::reduceLoadWidth(SDNode *N) {
     SDNode *Mask = *(SRL->use_begin());
     if (SRL.hasOneUse() && Mask->getOpcode() == ISD::AND &&
         isa<ConstantSDNode>(Mask->getOperand(1))) {
-      const APInt& ShiftMask = Mask->getConstantOperandAPInt(1);
+      const APInt &ShiftMask = Mask->getConstantOperandAPInt(1);
       if (ShiftMask.isMask()) {
         EVT MaskedVT =
             EVT::getIntegerVT(*DAG.getContext(), ShiftMask.countr_one());
@@ -14073,8 +14171,7 @@ SDValue DAGCombiner::reduceLoadWidth(SDNode *N) {
   LoadSDNode *LN0 = cast<LoadSDNode>(N0);
   // Reducing the width of a volatile load is illegal.  For atomics, we may be
   // able to reduce the width provided we never widen again. (see D66309)
-  if (!LN0->isSimple() ||
-      !isLegalNarrowLdSt(LN0, ExtType, ExtVT, ShAmt))
+  if (!LN0->isSimple() || !isLegalNarrowLdSt(LN0, ExtType, ExtVT, ShAmt))
     return SDValue();
 
   auto AdjustBigEndianShift = [&](unsigned ShAmt) {
@@ -14127,15 +14224,15 @@ SDValue DAGCombiner::reduceLoadWidth(SDNode *N) {
     if (ShLeftAmt >= VT.getScalarSizeInBits())
       Result = DAG.getConstant(0, DL, VT);
     else
-      Result = DAG.getNode(ISD::SHL, DL, VT,
-                          Result, DAG.getConstant(ShLeftAmt, DL, ShImmTy));
+      Result = DAG.getNode(ISD::SHL, DL, VT, Result,
+                           DAG.getConstant(ShLeftAmt, DL, ShImmTy));
   }
 
   if (HasShiftedOffset) {
     // We're using a shifted mask, so the load now has an offset. This means
     // that data has been loaded into the lower bytes than it would have been
-    // before, so we need to shl the loaded data into the correct position in the
-    // register.
+    // before, so we need to shl the loaded data into the correct position in
+    // the register.
     SDValue ShiftC = DAG.getConstant(ShAmt, DL, VT);
     Result = DAG.getNode(ISD::SHL, DL, VT, Result, ShiftC);
     DAG.ReplaceAllUsesOfValueWith(SDValue(N, 0), Result);
@@ -14244,37 +14341,33 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
   // If sextload is not supported by target, we can only do the combine when
   // load has one use. Doing otherwise can block folding the extload with other
   // extends that the target does support.
-  if (ISD::isEXTLoad(N0.getNode()) &&
-      ISD::isUNINDEXEDLoad(N0.getNode()) &&
+  if (ISD::isEXTLoad(N0.getNode()) && ISD::isUNINDEXEDLoad(N0.getNode()) &&
       ExtVT == cast<LoadSDNode>(N0)->getMemoryVT() &&
       ((!LegalOperations && cast<LoadSDNode>(N0)->isSimple() &&
         N0.hasOneUse()) ||
        TLI.isLoadExtLegal(ISD::SEXTLOAD, VT, ExtVT))) {
     LoadSDNode *LN0 = cast<LoadSDNode>(N0);
-    SDValue ExtLoad = DAG.getExtLoad(ISD::SEXTLOAD, SDLoc(N), VT,
-                                     LN0->getChain(),
-                                     LN0->getBasePtr(), ExtVT,
-                                     LN0->getMemOperand());
+    SDValue ExtLoad =
+        DAG.getExtLoad(ISD::SEXTLOAD, SDLoc(N), VT, LN0->getChain(),
+                       LN0->getBasePtr(), ExtVT, LN0->getMemOperand());
     CombineTo(N, ExtLoad);
     CombineTo(N0.getNode(), ExtLoad, ExtLoad.getValue(1));
     AddToWorklist(ExtLoad.getNode());
-    return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+    return SDValue(N, 0); // Return N so it doesn't get rechecked!
   }
 
   // fold (sext_inreg (zextload x)) -> (sextload x) iff load has one use
   if (ISD::isZEXTLoad(N0.getNode()) && ISD::isUNINDEXEDLoad(N0.getNode()) &&
-      N0.hasOneUse() &&
-      ExtVT == cast<LoadSDNode>(N0)->getMemoryVT() &&
+      N0.hasOneUse() && ExtVT == cast<LoadSDNode>(N0)->getMemoryVT() &&
       ((!LegalOperations && cast<LoadSDNode>(N0)->isSimple()) &&
        TLI.isLoadExtLegal(ISD::SEXTLOAD, VT, ExtVT))) {
     LoadSDNode *LN0 = cast<LoadSDNode>(N0);
-    SDValue ExtLoad = DAG.getExtLoad(ISD::SEXTLOAD, SDLoc(N), VT,
-                                     LN0->getChain(),
-                                     LN0->getBasePtr(), ExtVT,
-                                     LN0->getMemOperand());
+    SDValue ExtLoad =
+        DAG.getExtLoad(ISD::SEXTLOAD, SDLoc(N), VT, LN0->getChain(),
+                       LN0->getBasePtr(), ExtVT, LN0->getMemOperand());
     CombineTo(N, ExtLoad);
     CombineTo(N0.getNode(), ExtLoad, ExtLoad.getValue(1));
-    return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+    return SDValue(N, 0); // Return N so it doesn't get rechecked!
   }
 
   // fold (sext_inreg (masked_load x)) -> (sext_masked_load x)
@@ -14295,8 +14388,7 @@ SDValue DAGCombiner::visitSIGN_EXTEND_INREG(SDNode *N) {
 
   // fold (sext_inreg (masked_gather x)) -> (sext_masked_gather x)
   if (auto *GN0 = dyn_cast<MaskedGatherSDNode>(N0)) {
-    if (SDValue(GN0, 0).hasOneUse() &&
-        ExtVT == GN0->getMemoryVT() &&
+    if (SDValue(GN0, 0).hasOneUse() && ExtVT == GN0->getMemoryVT() &&
         TLI.isVectorLoadExtDesirable(SDValue(SDValue(GN0, 0)))) {
       SDValue Ops[] = {GN0->getChain(),   GN0->getPassThru(), GN0->getMask(),
                        GN0->getBasePtr(), GN0->getIndex(),    GN0->getScale()};
@@ -14420,8 +14512,7 @@ SDValue DAGCombiner::visitTRUNCATE(SDNode *N) {
 
   // fold (truncate (ext x)) -> (ext x) or (truncate x) or x
   if (N0.getOpcode() == ISD::ZERO_EXTEND ||
-      N0.getOpcode() == ISD::SIGN_EXTEND ||
-      N0.getOpcode() == ISD::ANY_EXTEND) {
+      N0.getOpcode() == ISD::SIGN_EXTEND || N0.getOpcode() == ISD::ANY_EXTEND) {
     // if the source is smaller than the dest, we still need an extend.
     if (N0.getOperand(0).getValueType().bitsLT(VT))
       return DAG.getNode(N0.getOpcode(), SDLoc(N), VT, N0.getOperand(0));
@@ -14460,14 +14551,14 @@ SDValue DAGCombiner::visitTRUNCATE(SDNode *N) {
   // Note: We only run this optimization after type legalization (which often
   // creates this pattern) and before operation legalization after which
   // we need to be more careful about the vector instructions that we generate.
-  if (N0.getOpcode() == ISD::EXTRACT_VECTOR_ELT &&
-      LegalTypes && !LegalOperations && N0->hasOneUse() && VT != MVT::i1) {
+  if (N0.getOpcode() == ISD::EXTRACT_VECTOR_ELT && LegalTypes &&
+      !LegalOperations && N0->hasOneUse() && VT != MVT::i1) {
     EVT VecTy = N0.getOperand(0).getValueType();
     EVT ExTy = N0.getValueType();
     EVT TrTy = N->getValueType(0);
 
     auto EltCnt = VecTy.getVectorElementCount();
-    unsigned SizeRatio = ExTy.getSizeInBits()/TrTy.getSizeInBits();
+    unsigned SizeRatio = ExTy.getSizeInBits() / TrTy.getSizeInBits();
     auto NewEltCnt = EltCnt * SizeRatio;
 
     EVT NVT = EVT::getVectorVT(*DAG.getContext(), TrTy, NewEltCnt);
@@ -14476,7 +14567,8 @@ SDValue DAGCombiner::visitTRUNCATE(SDNode *N) {
     SDValue EltNo = N0->getOperand(1);
     if (isa<ConstantSDNode>(EltNo) && isTypeLegal(NVT)) {
       int Elt = cast<ConstantSDNode>(EltNo)->getZExtValue();
-      int Index = isLE ? (Elt*SizeRatio) : (Elt*SizeRatio + (SizeRatio-1));
+      int Index =
+          isLE ? (Elt * SizeRatio) : (Elt * SizeRatio + (SizeRatio - 1));
 
       SDLoc DL(N);
       return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, TrTy,
@@ -14563,7 +14655,7 @@ SDValue DAGCombiner::visitTRUNCATE(SDNode *N) {
     // Check that the element types match.
     if (BuildVectEltTy == TruncVecEltTy) {
       // Now we only need to compute the offset of the truncated elements.
-      unsigned BuildVecNumElts =  BuildVect.getNumOperands();
+      unsigned BuildVecNumElts = BuildVect.getNumOperands();
       unsigned TruncVecNumElts = VT.getVectorNumElements();
       unsigned TruncEltOffset = BuildVecNumElts / TruncVecNumElts;
 
@@ -14589,10 +14681,9 @@ SDValue DAGCombiner::visitTRUNCATE(SDNode *N) {
     if (N0.hasOneUse() && ISD::isUNINDEXEDLoad(N0.getNode())) {
       LoadSDNode *LN0 = cast<LoadSDNode>(N0);
       if (LN0->isSimple() && LN0->getMemoryVT().bitsLT(VT)) {
-        SDValue NewLoad = DAG.getExtLoad(LN0->getExtensionType(), SDLoc(LN0),
-                                         VT, LN0->getChain(), LN0->getBasePtr(),
-                                         LN0->getMemoryVT(),
-                                         LN0->getMemOperand());
+        SDValue NewLoad = DAG.getExtLoad(
+            LN0->getExtensionType(), SDLoc(LN0), VT, LN0->getChain(),
+            LN0->getBasePtr(), LN0->getMemoryVT(), LN0->getMemOperand());
         DAG.ReplaceAllUsesOfValueWith(N0.getValue(1), NewLoad.getValue(1));
         return NewLoad;
       }
@@ -14775,7 +14866,8 @@ SDValue DAGCombiner::CombineConsecutiveLoads(SDNode *N, EVT VT) {
   if ((!LegalOperations || TLI.isOperationLegal(ISD::LOAD, VT)) &&
       DAG.areNonVolatileConsecutiveLoads(LD2, LD1, LD1Bytes, 1) &&
       TLI.allowsMemoryAccess(*DAG.getContext(), DAG.getDataLayout(), VT,
-                             *LD1->getMemOperand(), &LD1Fast) && LD1Fast)
+                             *LD1->getMemOperand(), &LD1Fast) &&
+      LD1Fast)
     return DAG.getLoad(VT, SDLoc(N), LD1->getChain(), LD1->getBasePtr(),
                        LD1->getPointerInfo(), LD1->getAlign());
 
@@ -14987,11 +15079,11 @@ SDValue DAGCombiner::visitBITCAST(SDNode *N) {
     }
     APInt SignBit = APInt::getSignMask(VT.getSizeInBits());
     if (N0.getOpcode() == ISD::FNEG)
-      return DAG.getNode(ISD::XOR, DL, VT,
-                         NewConv, DAG.getConstant(SignBit, DL, VT));
+      return DAG.getNode(ISD::XOR, DL, VT, NewConv,
+                         DAG.getConstant(SignBit, DL, VT));
     assert(N0.getOpcode() == ISD::FABS);
-    return DAG.getNode(ISD::AND, DL, VT,
-                       NewConv, DAG.getConstant(~SignBit, DL, VT));
+    return DAG.getNode(ISD::AND, DL, VT, NewConv,
+                       DAG.getConstant(~SignBit, DL, VT));
   }
 
   // fold (bitconvert (fcopysign cst, x)) ->
@@ -15023,10 +15115,9 @@ SDValue DAGCombiner::visitBITCAST(SDNode *N) {
         // To get the sign bit in the right place, we have to shift it right
         // before truncating.
         SDLoc DL(X);
-        X = DAG.getNode(ISD::SRL, DL,
-                        X.getValueType(), X,
-                        DAG.getConstant(OrigXWidth-VTWidth, DL,
-                                        X.getValueType()));
+        X = DAG.getNode(
+            ISD::SRL, DL, X.getValueType(), X,
+            DAG.getConstant(OrigXWidth - VTWidth, DL, X.getValueType()));
         AddToWorklist(X.getNode());
         X = DAG.getNode(ISD::TRUNCATE, SDLoc(X), VT, X);
         AddToWorklist(X.getNode());
@@ -15055,13 +15146,13 @@ SDValue DAGCombiner::visitBITCAST(SDNode *N) {
         return DAG.getNode(ISD::XOR, SDLoc(N), VT, Cst, FlipBits);
       }
       APInt SignBit = APInt::getSignMask(VT.getSizeInBits());
-      X = DAG.getNode(ISD::AND, SDLoc(X), VT,
-                      X, DAG.getConstant(SignBit, SDLoc(X), VT));
+      X = DAG.getNode(ISD::AND, SDLoc(X), VT, X,
+                      DAG.getConstant(SignBit, SDLoc(X), VT));
       AddToWorklist(X.getNode());
 
       SDValue Cst = DAG.getBitcast(VT, N0.getOperand(0));
-      Cst = DAG.getNode(ISD::AND, SDLoc(Cst), VT,
-                        Cst, DAG.getConstant(~SignBit, SDLoc(Cst), VT));
+      Cst = DAG.getNode(ISD::AND, SDLoc(Cst), VT, Cst,
+                        DAG.getConstant(~SignBit, SDLoc(Cst), VT));
       AddToWorklist(Cst.getNode());
 
       return DAG.getNode(ISD::OR, SDLoc(N), VT, X, Cst);
@@ -15205,12 +15296,13 @@ SDValue DAGCombiner::visitFREEZE(SDNode *N) {
 
 /// We know that BV is a build_vector node with Constant, ConstantFP or Undef
 /// operands. DstEltVT indicates the destination element value type.
-SDValue DAGCombiner::
-ConstantFoldBITCASTofBUILD_VECTOR(SDNode *BV, EVT DstEltVT) {
+SDValue DAGCombiner::ConstantFoldBITCASTofBUILD_VECTOR(SDNode *BV,
+                                                       EVT DstEltVT) {
   EVT SrcEltVT = BV->getValueType(0).getVectorElementType();
 
   // If this is already the right type, we're done.
-  if (SrcEltVT == DstEltVT) return SDValue(BV, 0);
+  if (SrcEltVT == DstEltVT)
+    return SDValue(BV, 0);
 
   unsigned SrcBitSize = SrcEltVT.getSizeInBits();
   unsigned DstBitSize = DstEltVT.getSizeInBits();
@@ -15886,8 +15978,8 @@ SDValue DAGCombiner::visitFMULForFMADistributiveCombine(SDNode *N) {
 
   // Floating-point multiply-add with intermediate rounding. This can result
   // in a less precise result due to the changed rounding order.
-  bool HasFMAD = Options.UnsafeFPMath &&
-                 (LegalOperations && TLI.isFMADLegal(DAG, N));
+  bool HasFMAD =
+      Options.UnsafeFPMath && (LegalOperations && TLI.isFMADLegal(DAG, N));
 
   // No valid opcode, do not combine.
   if (!HasFMAD && !HasFMA)
@@ -15995,7 +16087,8 @@ SDValue DAGCombiner::visitFADD(SDNode *N) {
   // N0 + -0.0 --> N0 (also allowed with +0.0 and fast-math)
   ConstantFPSDNode *N1C = isConstOrConstSplatFP(N1, true);
   if (N1C && N1C->isZero())
-    if (N1C->isNegative() || Options.NoSignedZerosFPMath || Flags.hasNoSignedZeros())
+    if (N1C->isNegative() || Options.NoSignedZerosFPMath ||
+        Flags.hasNoSignedZeros())
       return N0;
 
   if (SDValue NewSel = foldBinOpIntoSelect(N))
@@ -16287,7 +16380,7 @@ SDValue DAGCombiner::visitFMUL(SDNode *N) {
 
   // canonicalize constant to RHS
   if (DAG.isConstantFPBuildVectorOrConstantFP(N0) &&
-     !DAG.isConstantFPBuildVectorOrConstantFP(N1))
+      !DAG.isConstantFPBuildVectorOrConstantFP(N1))
     return DAG.getNode(ISD::FMUL, DL, VT, N1, N0);
 
   // fold vector ops
@@ -16335,8 +16428,8 @@ SDValue DAGCombiner::visitFMUL(SDNode *N) {
   // fold (fmul X, -1.0) -> (fsub -0.0, X)
   if (N1CFP && N1CFP->isExactlyValue(-1.0)) {
     if (!LegalOperations || TLI.isOperationLegal(ISD::FSUB, VT)) {
-      return DAG.getNode(ISD::FSUB, DL, VT,
-                         DAG.getConstantFP(-0.0, DL, VT), N0, Flags);
+      return DAG.getNode(ISD::FSUB, DL, VT, DAG.getConstantFP(-0.0, DL, VT), N0,
+                         Flags);
     }
   }
 
@@ -16366,16 +16459,16 @@ SDValue DAGCombiner::visitFMUL(SDNode *N) {
       std::swap(Select, X);
 
     SDValue Cond = Select.getOperand(0);
-    auto TrueOpnd  = dyn_cast<ConstantFPSDNode>(Select.getOperand(1));
+    auto TrueOpnd = dyn_cast<ConstantFPSDNode>(Select.getOperand(1));
     auto FalseOpnd = dyn_cast<ConstantFPSDNode>(Select.getOperand(2));
 
-    if (TrueOpnd && FalseOpnd &&
-        Cond.getOpcode() == ISD::SETCC && Cond.getOperand(0) == X &&
-        isa<ConstantFPSDNode>(Cond.getOperand(1)) &&
+    if (TrueOpnd && FalseOpnd && Cond.getOpcode() == ISD::SETCC &&
+        Cond.getOperand(0) == X && isa<ConstantFPSDNode>(Cond.getOperand(1)) &&
         cast<ConstantFPSDNode>(Cond.getOperand(1))->isExactlyValue(0.0)) {
       ISD::CondCode CC = cast<CondCodeSDNode>(Cond.getOperand(2))->get();
       switch (CC) {
-      default: break;
+      default:
+        break;
       case ISD::SETOLT:
       case ISD::SETULT:
       case ISD::SETOLE:
@@ -16393,7 +16486,7 @@ SDValue DAGCombiner::visitFMUL(SDNode *N) {
         if (TrueOpnd->isExactlyValue(-1.0) && FalseOpnd->isExactlyValue(1.0) &&
             TLI.isOperationLegal(ISD::FNEG, VT))
           return DAG.getNode(ISD::FNEG, DL, VT,
-                   DAG.getNode(ISD::FABS, DL, VT, X));
+                             DAG.getNode(ISD::FABS, DL, VT, X));
         if (TrueOpnd->isExactlyValue(1.0) && FalseOpnd->isExactlyValue(-1.0))
           return DAG.getNode(ISD::FABS, DL, VT, X);
 
@@ -16428,8 +16521,7 @@ template <class MatchContextClass> SDValue DAGCombiner::visitFMA(SDNode *N) {
       Options.UnsafeFPMath || N->getFlags().hasAllowReassociation();
 
   // Constant fold FMA.
-  if (isa<ConstantFPSDNode>(N0) &&
-      isa<ConstantFPSDNode>(N1) &&
+  if (isa<ConstantFPSDNode>(N0) && isa<ConstantFPSDNode>(N1) &&
       isa<ConstantFPSDNode>(N2)) {
     return matcher.getNode(ISD::FMA, DL, VT, N0, N1, N2);
   }
@@ -16466,7 +16558,7 @@ template <class MatchContextClass> SDValue DAGCombiner::visitFMA(SDNode *N) {
 
   // Canonicalize (fma c, x, y) -> (fma x, c, y)
   if (DAG.isConstantFPBuildVectorOrConstantFP(N0) &&
-     !DAG.isConstantFPBuildVectorOrConstantFP(N1))
+      !DAG.isConstantFPBuildVectorOrConstantFP(N1))
     return matcher.getNode(ISD::FMA, SDLoc(N), VT, N1, N0, N2);
 
   if (CanReassociate) {
@@ -16606,8 +16698,8 @@ SDValue DAGCombiner::combineRepeatedFPDivisors(SDNode *N) {
   for (auto *U : Users) {
     SDValue Dividend = U->getOperand(0);
     if (Dividend != FPOne) {
-      SDValue NewNode = DAG.getNode(ISD::FMUL, SDLoc(U), VT, Dividend,
-                                    Reciprocal, Flags);
+      SDValue NewNode =
+          DAG.getNode(ISD::FMUL, SDLoc(U), VT, Dividend, Reciprocal, Flags);
       CombineTo(U, NewNode);
     } else if (U != Reciprocal.getNode()) {
       // In the absence of fast-math-flags, this user node is always the
@@ -16615,7 +16707,7 @@ SDValue DAGCombiner::combineRepeatedFPDivisors(SDNode *N) {
       CombineTo(U, Reciprocal);
     }
   }
-  return SDValue(N, 0);  // N was replaced.
+  return SDValue(N, 0); // N was replaced.
 }
 
 SDValue DAGCombiner::visitFDIV(SDNode *N) {
@@ -16824,8 +16916,7 @@ static inline bool CanCombineFCOPYSIGN_EXTEND_ROUND(EVT XTy, EVT YTy) {
 
 static inline bool CanCombineFCOPYSIGN_EXTEND_ROUND(SDNode *N) {
   SDValue N1 = N->getOperand(1);
-  if (N1.getOpcode() != ISD::FP_EXTEND &&
-      N1.getOpcode() != ISD::FP_ROUND)
+  if (N1.getOpcode() != ISD::FP_EXTEND && N1.getOpcode() != ISD::FP_ROUND)
     return false;
   EVT N1VT = N1->getValueType(0);
   EVT N1Op0VT = N1->getOperand(0).getValueType();
@@ -16890,8 +16981,9 @@ SDValue DAGCombiner::visitFPOW(SDNode *N) {
   // TODO: Since we're approximating, we don't need an exact 1/3 exponent.
   //       Some range near 1/3 should be fine.
   EVT VT = N->getValueType(0);
-  if ((VT == MVT::f32 && ExponentC->getValueAPF().isExactlyValue(1.0f/3.0f)) ||
-      (VT == MVT::f64 && ExponentC->getValueAPF().isExactlyValue(1.0/3.0))) {
+  if ((VT == MVT::f32 &&
+       ExponentC->getValueAPF().isExactlyValue(1.0f / 3.0f)) ||
+      (VT == MVT::f64 && ExponentC->getValueAPF().isExactlyValue(1.0 / 3.0))) {
     // pow(-0.0, 1/3) = +0.0; cbrt(-0.0) = -0.0.
     // pow(-inf, 1/3) = +inf; cbrt(-inf) = -inf.
     // pow(-val, 1/3) =  nan; cbrt(-val) = -num.
@@ -16995,8 +17087,7 @@ SDValue DAGCombiner::visitSINT_TO_FP(SDNode *N) {
   // fold (sint_to_fp c1) -> c1fp
   if (DAG.isConstantIntBuildVectorOrConstantInt(N0) &&
       // ...but only if the target supports immediate floating-point values
-      (!LegalOperations ||
-       TLI.isOperationLegalOrCustom(ISD::ConstantFP, VT)))
+      (!LegalOperations || TLI.isOperationLegalOrCustom(ISD::ConstantFP, VT)))
     return DAG.getNode(ISD::SINT_TO_FP, SDLoc(N), VT, N0);
 
   // If the input is a legal type, and SINT_TO_FP is not legal on this target,
@@ -17047,8 +17138,7 @@ SDValue DAGCombiner::visitUINT_TO_FP(SDNode *N) {
   // fold (uint_to_fp c1) -> c1fp
   if (DAG.isConstantIntBuildVectorOrConstantInt(N0) &&
       // ...but only if the target supports immediate floating-point values
-      (!LegalOperations ||
-       TLI.isOperationLegalOrCustom(ISD::ConstantFP, VT)))
+      (!LegalOperations || TLI.isOperationLegalOrCustom(ISD::ConstantFP, VT)))
     return DAG.getNode(ISD::UINT_TO_FP, SDLoc(N), VT, N0);
 
   // If the input is a legal type, and UINT_TO_FP is not legal on this target,
@@ -17105,8 +17195,8 @@ static SDValue FoldIntToFPToInt(SDNode *N, SelectionDAG &DAG) {
   // represented exactly in the float range.
   if (APFloat::semanticsPrecision(sem) >= ActualSize) {
     if (VT.getScalarSizeInBits() > SrcVT.getScalarSizeInBits()) {
-      unsigned ExtOp = IsInputSigned && IsOutputSigned ? ISD::SIGN_EXTEND
-                                                       : ISD::ZERO_EXTEND;
+      unsigned ExtOp =
+          IsInputSigned && IsOutputSigned ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
       return DAG.getNode(ExtOp, SDLoc(N), VT, Src);
     }
     if (VT.getScalarSizeInBits() < SrcVT.getScalarSizeInBits())
@@ -17198,13 +17288,11 @@ SDValue DAGCombiner::visitFP_ROUND(SDNode *N) {
   // eliminate the fp_round on Y.  The second step requires an additional
   // predicate to match the implementation above.
   if (N0.getOpcode() == ISD::FCOPYSIGN && N0->hasOneUse() &&
-      CanCombineFCOPYSIGN_EXTEND_ROUND(VT,
-                                       N0.getValueType())) {
-    SDValue Tmp = DAG.getNode(ISD::FP_ROUND, SDLoc(N0), VT,
-                              N0.getOperand(0), N1);
+      CanCombineFCOPYSIGN_EXTEND_ROUND(VT, N0.getValueType())) {
+    SDValue Tmp =
+        DAG.getNode(ISD::FP_ROUND, SDLoc(N0), VT, N0.getOperand(0), N1);
     AddToWorklist(Tmp.getNode());
-    return DAG.getNode(ISD::FCOPYSIGN, SDLoc(N), VT,
-                       Tmp, N0.getOperand(1));
+    return DAG.getNode(ISD::FCOPYSIGN, SDLoc(N), VT, Tmp, N0.getOperand(1));
   }
 
   if (SDValue NewVSel = matchVSelectOpSizesWithSetCC(N))
@@ -17222,8 +17310,7 @@ SDValue DAGCombiner::visitFP_EXTEND(SDNode *N) {
       return FoldedVOp;
 
   // If this is fp_round(fpextend), don't fold it, allow ourselves to be folded.
-  if (N->hasOneUse() &&
-      N->use_begin()->getOpcode() == ISD::FP_ROUND)
+  if (N->hasOneUse() && N->use_begin()->getOpcode() == ISD::FP_ROUND)
     return SDValue();
 
   // fold (fp_extend c1fp) -> c1fp
@@ -17237,13 +17324,12 @@ SDValue DAGCombiner::visitFP_EXTEND(SDNode *N) {
 
   // Turn fp_extend(fp_round(X, 1)) -> x since the fp_round doesn't affect the
   // value of X.
-  if (N0.getOpcode() == ISD::FP_ROUND
-      && N0.getConstantOperandVal(1) == 1) {
+  if (N0.getOpcode() == ISD::FP_ROUND && N0.getConstantOperandVal(1) == 1) {
     SDValue In = N0.getOperand(0);
-    if (In.getValueType() == VT) return In;
+    if (In.getValueType() == VT)
+      return In;
     if (VT.bitsLT(In.getValueType()))
-      return DAG.getNode(ISD::FP_ROUND, SDLoc(N), VT,
-                         In, N0.getOperand(1));
+      return DAG.getNode(ISD::FP_ROUND, SDLoc(N), VT, In, N0.getOperand(1));
     return DAG.getNode(ISD::FP_EXTEND, SDLoc(N), VT, In);
   }
 
@@ -17252,16 +17338,15 @@ SDValue DAGCombiner::visitFP_EXTEND(SDNode *N) {
       TLI.isLoadExtLegalOrCustom(ISD::EXTLOAD, VT, N0.getValueType())) {
     LoadSDNode *LN0 = cast<LoadSDNode>(N0);
     SDValue ExtLoad = DAG.getExtLoad(ISD::EXTLOAD, SDLoc(N), VT,
-                                     LN0->getChain(),
-                                     LN0->getBasePtr(), N0.getValueType(),
-                                     LN0->getMemOperand());
+                                     LN0->getChain(), LN0->getBasePtr(),
+                                     N0.getValueType(), LN0->getMemOperand());
     CombineTo(N, ExtLoad);
     CombineTo(
         N0.getNode(),
         DAG.getNode(ISD::FP_ROUND, SDLoc(N0), N0.getValueType(), ExtLoad,
                     DAG.getIntPtrConstant(1, SDLoc(N0), /*isTarget=*/true)),
         ExtLoad.getValue(1));
-    return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+    return SDValue(N, 0); // Return N so it doesn't get rechecked!
   }
 
   if (SDValue NewVSel = matchVSelectOpSizesWithSetCC(N))
@@ -17293,7 +17378,8 @@ SDValue DAGCombiner::visitFTRUNC(SDNode *N) {
   // ftrunc is a part of fptosi/fptoui expansion on some targets, so this is
   // likely to be generated to extract integer from a rounded floating value.
   switch (N0.getOpcode()) {
-  default: break;
+  default:
+    break;
   case ISD::FRINT:
   case ISD::FTRUNC:
   case ISD::FNEARBYINT:
@@ -17344,7 +17430,8 @@ SDValue DAGCombiner::visitFNEG(SDNode *N) {
   // not.
   if (N0.getOpcode() == ISD::FSUB &&
       (DAG.getTarget().Options.NoSignedZerosFPMath ||
-       N->getFlags().hasNoSignedZeros()) && N0.hasOneUse()) {
+       N->getFlags().hasNoSignedZeros()) &&
+      N0.hasOneUse()) {
     return DAG.getNode(ISD::FSUB, SDLoc(N), VT, N0.getOperand(1),
                        N0.getOperand(0));
   }
@@ -17508,9 +17595,9 @@ SDValue DAGCombiner::visitBRCOND(SDNode *N) {
   if (N1.getOpcode() == ISD::SETCC &&
       TLI.isOperationLegalOrCustom(ISD::BR_CC,
                                    N1.getOperand(0).getValueType())) {
-    return DAG.getNode(ISD::BR_CC, SDLoc(N), MVT::Other,
-                       Chain, N1.getOperand(2),
-                       N1.getOperand(0), N1.getOperand(1), N2);
+    return DAG.getNode(ISD::BR_CC, SDLoc(N), MVT::Other, Chain,
+                       N1.getOperand(2), N1.getOperand(0), N1.getOperand(1),
+                       N2);
   }
 
   if (N1.hasOneUse()) {
@@ -17563,8 +17650,8 @@ SDValue DAGCombiner::rebuildSetCC(SDValue N) {
         if (AndConst.isPowerOf2() &&
             cast<ConstantSDNode>(Op1)->getAPIntValue() == AndConst.logBase2()) {
           SDLoc DL(N);
-          return DAG.getSetCC(DL, getSetCCResultType(Op0.getValueType()),
-                              Op0, DAG.getConstant(0, DL, Op0.getValueType()),
+          return DAG.getSetCC(DL, getSetCCResultType(Op0.getValueType()), Op0,
+                              DAG.getConstant(0, DL, Op0.getValueType()),
                               ISD::SETNE);
         }
       }
@@ -17636,16 +17723,15 @@ SDValue DAGCombiner::visitBR_CC(SDNode *N) {
 
   // Use SimplifySetCC to simplify SETCC's.
   SDValue Simp = SimplifySetCC(getSetCCResultType(CondLHS.getValueType()),
-                               CondLHS, CondRHS, CC->get(), SDLoc(N),
-                               false);
-  if (Simp.getNode()) AddToWorklist(Simp.getNode());
+                               CondLHS, CondRHS, CC->get(), SDLoc(N), false);
+  if (Simp.getNode())
+    AddToWorklist(Simp.getNode());
 
   // fold to a simpler setcc
   if (Simp.getNode() && Simp.getOpcode() == ISD::SETCC)
-    return DAG.getNode(ISD::BR_CC, SDLoc(N), MVT::Other,
-                       N->getOperand(0), Simp.getOperand(2),
-                       Simp.getOperand(0), Simp.getOperand(1),
-                       N->getOperand(4));
+    return DAG.getNode(ISD::BR_CC, SDLoc(N), MVT::Other, N->getOperand(0),
+                       Simp.getOperand(2), Simp.getOperand(0),
+                       Simp.getOperand(1), N->getOperand(4));
 
   return SDValue();
 }
@@ -17867,7 +17953,8 @@ bool DAGCombiner::CombineToPreIndexedLoadStore(SDNode *N) {
     if (OtherUses[i]->getOperand(OffsetIdx).getNode() == BasePtr.getNode())
       OffsetIdx = 0;
     assert(OtherUses[i]->getOperand(!OffsetIdx).getNode() ==
-           BasePtr.getNode() && "Expected BasePtr operand");
+               BasePtr.getNode() &&
+           "Expected BasePtr operand");
 
     // We need to replace ptr0 in the following expression:
     //   x0 * offset0 + y0 * ptr0 = t0
@@ -17891,9 +17978,12 @@ bool DAGCombiner::CombineToPreIndexedLoadStore(SDNode *N) {
     unsigned Opcode = (Y0 * Y1 < 0) ? ISD::SUB : ISD::ADD;
 
     APInt CNV = Offset0;
-    if (X0 < 0) CNV = -CNV;
-    if (X1 * Y0 * Y1 < 0) CNV = CNV + Offset1;
-    else CNV = CNV - Offset1;
+    if (X0 < 0)
+      CNV = -CNV;
+    if (X1 * Y0 * Y1 < 0)
+      CNV = CNV + Offset1;
+    else
+      CNV = CNV - Offset1;
 
     SDLoc DL(OtherUses[i]);
 
@@ -17901,9 +17991,8 @@ bool DAGCombiner::CombineToPreIndexedLoadStore(SDNode *N) {
     SDValue NewOp1 = DAG.getConstant(CNV, DL, CN->getValueType(0));
     SDValue NewOp2 = Result.getValue(IsLoad ? 1 : 0);
 
-    SDValue NewUse = DAG.getNode(Opcode,
-                                 DL,
-                                 OtherUses[i]->getValueType(0), NewOp1, NewOp2);
+    SDValue NewUse =
+        DAG.getNode(Opcode, DL, OtherUses[i]->getValueType(0), NewOp1, NewOp2);
     DAG.ReplaceAllUsesOfValueWith(SDValue(OtherUses[i], 0), NewUse);
     deleteAndRecombine(OtherUses[i]);
   }
@@ -17918,8 +18007,7 @@ bool DAGCombiner::CombineToPreIndexedLoadStore(SDNode *N) {
 
 static bool shouldCombineToPostInc(SDNode *N, SDValue Ptr, SDNode *PtrUse,
                                    SDValue &BasePtr, SDValue &Offset,
-                                   ISD::MemIndexedMode &AM,
-                                   SelectionDAG &DAG,
+                                   ISD::MemIndexedMode &AM, SelectionDAG &DAG,
                                    const TargetLowering &TLI) {
   if (PtrUse == N ||
       (PtrUse->getOpcode() != ISD::ADD && PtrUse->getOpcode() != ISD::SUB))
@@ -18023,13 +18111,13 @@ bool DAGCombiner::CombineToPostIndexedLoadStore(SDNode *N) {
 
   SDValue Result;
   if (!IsMasked)
-    Result = IsLoad ? DAG.getIndexedLoad(SDValue(N, 0), SDLoc(N), BasePtr,
-                                         Offset, AM)
-                    : DAG.getIndexedStore(SDValue(N, 0), SDLoc(N),
-                                          BasePtr, Offset, AM);
+    Result =
+        IsLoad
+            ? DAG.getIndexedLoad(SDValue(N, 0), SDLoc(N), BasePtr, Offset, AM)
+            : DAG.getIndexedStore(SDValue(N, 0), SDLoc(N), BasePtr, Offset, AM);
   else
-    Result = IsLoad ? DAG.getIndexedMaskedLoad(SDValue(N, 0), SDLoc(N),
-                                               BasePtr, Offset, AM)
+    Result = IsLoad ? DAG.getIndexedMaskedLoad(SDValue(N, 0), SDLoc(N), BasePtr,
+                                               Offset, AM)
                     : DAG.getIndexedMaskedStore(SDValue(N, 0), SDLoc(N),
                                                 BasePtr, Offset, AM);
   ++PostIndexedNodes;
@@ -18305,9 +18393,9 @@ SDValue DAGCombiner::ForwardStoreValueToDirectLoad(LoadSDNode *LD) {
 }
 
 SDValue DAGCombiner::visitLOAD(SDNode *N) {
-  LoadSDNode *LD  = cast<LoadSDNode>(N);
+  LoadSDNode *LD = cast<LoadSDNode>(N);
   SDValue Chain = LD->getChain();
-  SDValue Ptr   = LD->getBasePtr();
+  SDValue Ptr = LD->getBasePtr();
 
   // If load is not volatile and there are no uses of the loaded value (and
   // the updated indexed value in case of indexed loads), change uses of the
@@ -18332,7 +18420,7 @@ SDValue DAGCombiner::visitLOAD(SDNode *N) {
         if (N->use_empty())
           deleteAndRecombine(N);
 
-        return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+        return SDValue(N, 0); // Return N so it doesn't get rechecked!
       }
     } else {
       // Indexed loads.
@@ -18362,7 +18450,7 @@ SDValue DAGCombiner::visitLOAD(SDNode *N) {
         DAG.ReplaceAllUsesOfValueWith(SDValue(N, 1), Index);
         DAG.ReplaceAllUsesOfValueWith(SDValue(N, 2), Chain);
         deleteAndRecombine(N);
-        return SDValue(N, 0);   // Return N so it doesn't get rechecked!
+        return SDValue(N, 0); // Return N so it doesn't get rechecked!
       }
     }
   }
@@ -18398,18 +18486,17 @@ SDValue DAGCombiner::visitLOAD(SDNode *N) {
 
       // Replace the chain to void dependency.
       if (LD->getExtensionType() == ISD::NON_EXTLOAD) {
-        ReplLoad = DAG.getLoad(N->getValueType(0), SDLoc(LD),
-                               BetterChain, Ptr, LD->getMemOperand());
+        ReplLoad = DAG.getLoad(N->getValueType(0), SDLoc(LD), BetterChain, Ptr,
+                               LD->getMemOperand());
       } else {
         ReplLoad = DAG.getExtLoad(LD->getExtensionType(), SDLoc(LD),
-                                  LD->getValueType(0),
-                                  BetterChain, Ptr, LD->getMemoryVT(),
-                                  LD->getMemOperand());
+                                  LD->getValueType(0), BetterChain, Ptr,
+                                  LD->getMemoryVT(), LD->getMemOperand());
       }
 
       // Create token factor to keep old chain connected.
-      SDValue Token = DAG.getNode(ISD::TokenFactor, SDLoc(N),
-                                  MVT::Other, Chain, ReplLoad.getValue(1));
+      SDValue Token = DAG.getNode(ISD::TokenFactor, SDLoc(N), MVT::Other, Chain,
+                                  ReplLoad.getValue(1));
 
       // Replace uses with load result and token factor
       return CombineTo(N, ReplLoad.getValue(0), Token);
@@ -18971,8 +19058,8 @@ bool DAGCombiner::SliceUpLoad(SDNode *N) {
     ArgChains.push_back(SliceInst.getValue(1));
   }
 
-  SDValue Chain = DAG.getNode(ISD::TokenFactor, SDLoc(LD), MVT::Other,
-                              ArgChains);
+  SDValue Chain =
+      DAG.getNode(ISD::TokenFactor, SDLoc(LD), MVT::Other, ArgChains);
   DAG.ReplaceAllUsesOfValueWith(SDValue(N, 1), Chain);
   AddToWorklist(Chain.getNode());
   return true;
@@ -18981,23 +19068,22 @@ bool DAGCombiner::SliceUpLoad(SDNode *N) {
 /// Check to see if V is (and load (ptr), imm), where the load is having
 /// specific bytes cleared out.  If so, return the byte size being masked out
 /// and the shift amount.
-static std::pair<unsigned, unsigned>
-CheckForMaskedLoad(SDValue V, SDValue Ptr, SDValue Chain) {
+static std::pair<unsigned, unsigned> CheckForMaskedLoad(SDValue V, SDValue Ptr,
+                                                        SDValue Chain) {
   std::pair<unsigned, unsigned> Result(0, 0);
 
   // Check for the structure we're looking for.
-  if (V->getOpcode() != ISD::AND ||
-      !isa<ConstantSDNode>(V->getOperand(1)) ||
+  if (V->getOpcode() != ISD::AND || !isa<ConstantSDNode>(V->getOperand(1)) ||
       !ISD::isNormalLoad(V->getOperand(0).getNode()))
     return Result;
 
   // Check the chain and pointer.
   LoadSDNode *LD = cast<LoadSDNode>(V->getOperand(0));
-  if (LD->getBasePtr() != Ptr) return Result;  // Not from same pointer.
+  if (LD->getBasePtr() != Ptr)
+    return Result; // Not from same pointer.
 
   // This only handles simple types.
-  if (V.getValueType() != MVT::i16 &&
-      V.getValueType() != MVT::i32 &&
+  if (V.getValueType() != MVT::i16 && V.getValueType() != MVT::i32 &&
       V.getValueType() != MVT::i64)
     return Result;
 
@@ -19006,10 +19092,13 @@ CheckForMaskedLoad(SDValue V, SDValue Ptr, SDValue Chain) {
   // follow the sign bit for uniformity.
   uint64_t NotMask = ~cast<ConstantSDNode>(V->getOperand(1))->getSExtValue();
   unsigned NotMaskLZ = llvm::countl_zero(NotMask);
-  if (NotMaskLZ & 7) return Result;  // Must be multiple of a byte.
+  if (NotMaskLZ & 7)
+    return Result; // Must be multiple of a byte.
   unsigned NotMaskTZ = llvm::countr_zero(NotMask);
-  if (NotMaskTZ & 7) return Result;  // Must be multiple of a byte.
-  if (NotMaskLZ == 64) return Result;  // All zero mask.
+  if (NotMaskTZ & 7)
+    return Result; // Must be multiple of a byte.
+  if (NotMaskLZ == 64)
+    return Result; // All zero mask.
 
   // See if we have a continuous run of bits.  If so, we have 0*1+0*
   if (llvm::countr_one(NotMask >> NotMaskTZ) + NotMaskTZ + NotMaskLZ != 64)
@@ -19017,19 +19106,22 @@ CheckForMaskedLoad(SDValue V, SDValue Ptr, SDValue Chain) {
 
   // Adjust NotMaskLZ down to be from the actual size of the int instead of i64.
   if (V.getValueType() != MVT::i64 && NotMaskLZ)
-    NotMaskLZ -= 64-V.getValueSizeInBits();
+    NotMaskLZ -= 64 - V.getValueSizeInBits();
 
-  unsigned MaskedBytes = (V.getValueSizeInBits()-NotMaskLZ-NotMaskTZ)/8;
+  unsigned MaskedBytes = (V.getValueSizeInBits() - NotMaskLZ - NotMaskTZ) / 8;
   switch (MaskedBytes) {
   case 1:
   case 2:
-  case 4: break;
-  default: return Result; // All one mask, or 5-byte mask.
+  case 4:
+    break;
+  default:
+    return Result; // All one mask, or 5-byte mask.
   }
 
   // Verify that the first bit starts at a multiple of mask so that the access
   // is aligned the same as the access width.
-  if (NotMaskTZ && NotMaskTZ/8 % MaskedBytes) return Result;
+  if (NotMaskTZ && NotMaskTZ / 8 % MaskedBytes)
+    return Result;
 
   // For narrowing to be valid, it must be the case that the load the
   // immediately preceding memory operation before the store.
@@ -19044,7 +19136,7 @@ CheckForMaskedLoad(SDValue V, SDValue Ptr, SDValue Chain) {
     return Result; // Fail.
 
   Result.first = MaskedBytes;
-  Result.second = NotMaskTZ/8;
+  Result.second = NotMaskTZ / 8;
   return Result;
 }
 
@@ -19061,9 +19153,10 @@ ShrinkLoadReplaceStoreWithStore(const std::pair<unsigned, unsigned> &MaskInfo,
 
   // Check to see if IVal is all zeros in the part being masked in by the 'or'
   // that uses this.  If not, this is not a replacement.
-  APInt Mask = ~APInt::getBitsSet(IVal.getValueSizeInBits(),
-                                  ByteShift*8, (ByteShift+NumBytes)*8);
-  if (!DAG.MaskedValueIsZero(IVal, Mask)) return SDValue();
+  APInt Mask = ~APInt::getBitsSet(IVal.getValueSizeInBits(), ByteShift * 8,
+                                  (ByteShift + NumBytes) * 8);
+  if (!DAG.MaskedValueIsZero(IVal, Mask))
+    return SDValue();
 
   // Check that it is legal on the target to do this.  It is legal if the new
   // VT we're shrinking to (i8/i16/i32) is legal or we're still before type
@@ -19094,8 +19187,9 @@ ShrinkLoadReplaceStoreWithStore(const std::pair<unsigned, unsigned> &MaskInfo,
   // shifted by ByteShift and truncated down to NumBytes.
   if (ByteShift) {
     SDLoc DL(IVal);
-    IVal = DAG.getNode(ISD::SRL, DL, IVal.getValueType(), IVal,
-                       DAG.getConstant(ByteShift*8, DL,
+    IVal =
+        DAG.getNode(ISD::SRL, DL, IVal.getValueType(), IVal,
+                    DAG.getConstant(ByteShift * 8, DL,
                                     DC->getShiftAmountTy(IVal.getValueType())));
   }
 
@@ -19115,16 +19209,15 @@ ShrinkLoadReplaceStoreWithStore(const std::pair<unsigned, unsigned> &MaskInfo,
   ++OpsNarrowed;
   if (UseTruncStore)
     return DAG.getTruncStore(St->getChain(), SDLoc(St), IVal, Ptr,
-                             St->getPointerInfo().getWithOffset(StOffset),
-                             VT, St->getOriginalAlign());
+                             St->getPointerInfo().getWithOffset(StOffset), VT,
+                             St->getOriginalAlign());
 
   // Truncate down to the new size.
   IVal = DAG.getNode(ISD::TRUNCATE, SDLoc(IVal), VT, IVal);
 
-  return DAG
-      .getStore(St->getChain(), SDLoc(St), IVal, Ptr,
-                St->getPointerInfo().getWithOffset(StOffset),
-                St->getOriginalAlign());
+  return DAG.getStore(St->getChain(), SDLoc(St), IVal, Ptr,
+                      St->getPointerInfo().getWithOffset(StOffset),
+                      St->getOriginalAlign());
 }
 
 /// Look for sequence of load / op / store where op is one of 'or', 'xor', and
@@ -19132,13 +19225,13 @@ ShrinkLoadReplaceStoreWithStore(const std::pair<unsigned, unsigned> &MaskInfo,
 /// narrowing the load and store if it would end up being a win for performance
 /// or code size.
 SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
-  StoreSDNode *ST  = cast<StoreSDNode>(N);
+  StoreSDNode *ST = cast<StoreSDNode>(N);
   if (!ST->isSimple())
     return SDValue();
 
   SDValue Chain = ST->getChain();
   SDValue Value = ST->getValue();
-  SDValue Ptr   = ST->getBasePtr();
+  SDValue Ptr = ST->getBasePtr();
   EVT VT = Value.getValueType();
 
   if (ST->isTruncatingStore() || VT.isVector())
@@ -19159,15 +19252,15 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
     std::pair<unsigned, unsigned> MaskedLoad;
     MaskedLoad = CheckForMaskedLoad(Value.getOperand(0), Ptr, Chain);
     if (MaskedLoad.first)
-      if (SDValue NewST = ShrinkLoadReplaceStoreWithStore(MaskedLoad,
-                                                  Value.getOperand(1), ST,this))
+      if (SDValue NewST = ShrinkLoadReplaceStoreWithStore(
+              MaskedLoad, Value.getOperand(1), ST, this))
         return NewST;
 
     // Or is commutative, so try swapping X and Y.
     MaskedLoad = CheckForMaskedLoad(Value.getOperand(1), Ptr, Chain);
     if (MaskedLoad.first)
-      if (SDValue NewST = ShrinkLoadReplaceStoreWithStore(MaskedLoad,
-                                                  Value.getOperand(0), ST,this))
+      if (SDValue NewST = ShrinkLoadReplaceStoreWithStore(
+              MaskedLoad, Value.getOperand(0), ST, this))
         return NewST;
   }
 
@@ -19181,9 +19274,8 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
   if (ISD::isNormalLoad(N0.getNode()) && N0.hasOneUse() &&
       Chain == SDValue(N0.getNode(), 1)) {
     LoadSDNode *LD = cast<LoadSDNode>(N0);
-    if (LD->getBasePtr() != Ptr ||
-        LD->getPointerInfo().getAddrSpace() !=
-        ST->getPointerInfo().getAddrSpace())
+    if (LD->getBasePtr() != Ptr || LD->getPointerInfo().getAddrSpace() !=
+                                       ST->getPointerInfo().getAddrSpace())
       return SDValue();
 
     // Find the type to narrow it the load / op / store to.
@@ -19200,10 +19292,9 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
     EVT NewVT = EVT::getIntegerVT(*DAG.getContext(), NewBW);
     // The narrowing should be profitable, the load/store operation should be
     // legal (or custom) and the store size should be equal to the NewVT width.
-    while (NewBW < BitWidth &&
-           (NewVT.getStoreSizeInBits() != NewBW ||
-            !TLI.isOperationLegalOrCustom(Opc, NewVT) ||
-            !TLI.isNarrowingProfitable(VT, NewVT))) {
+    while (NewBW < BitWidth && (NewVT.getStoreSizeInBits() != NewBW ||
+                                !TLI.isOperationLegalOrCustom(Opc, NewVT) ||
+                                !TLI.isNarrowingProfitable(VT, NewVT))) {
       NewBW = NextPowerOf2(NewBW);
       NewVT = EVT::getIntegerVT(*DAG.getContext(), NewBW);
     }
@@ -19214,8 +19305,8 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
     // start at the previous one.
     if (ShAmt % NewBW)
       ShAmt = (((ShAmt + NewBW - 1) / NewBW) * NewBW) - NewBW;
-    APInt Mask = APInt::getBitsSet(BitWidth, ShAmt,
-                                   std::min(BitWidth, ShAmt + NewBW));
+    APInt Mask =
+        APInt::getBitsSet(BitWidth, ShAmt, std::min(BitWidth, ShAmt + NewBW));
     if ((Imm & Mask) == Imm) {
       APInt NewImm = (Imm & Mask).lshr(ShAmt).trunc(NewBW);
       if (Opc == ISD::AND)
@@ -19240,9 +19331,9 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
           DAG.getLoad(NewVT, SDLoc(N0), LD->getChain(), NewPtr,
                       LD->getPointerInfo().getWithOffset(PtrOff), NewAlign,
                       LD->getMemOperand()->getFlags(), LD->getAAInfo());
-      SDValue NewVal = DAG.getNode(Opc, SDLoc(Value), NewVT, NewLD,
-                                   DAG.getConstant(NewImm, SDLoc(Value),
-                                                   NewVT));
+      SDValue NewVal =
+          DAG.getNode(Opc, SDLoc(Value), NewVT, NewLD,
+                      DAG.getConstant(NewImm, SDLoc(Value), NewVT));
       SDValue NewST =
           DAG.getStore(Chain, SDLoc(N), NewVal, NewPtr,
                        ST->getPointerInfo().getWithOffset(PtrOff), NewAlign);
@@ -19264,16 +19355,14 @@ SDValue DAGCombiner::ReduceLoadOpStoreWidth(SDNode *N) {
 /// by any other operations, then consider transforming the pair to integer
 /// load / store operations if the target deems the transformation profitable.
 SDValue DAGCombiner::TransformFPLoadStorePair(SDNode *N) {
-  StoreSDNode *ST  = cast<StoreSDNode>(N);
+  StoreSDNode *ST = cast<StoreSDNode>(N);
   SDValue Value = ST->getValue();
   if (ISD::isNormalStore(ST) && ISD::isNormalLoad(Value.getNode()) &&
       Value.hasOneUse()) {
     LoadSDNode *LD = cast<LoadSDNode>(Value);
     EVT VT = LD->getMemoryVT();
-    if (!VT.isFloatingPoint() ||
-        VT != ST->getMemoryVT() ||
-        LD->isNonTemporal() ||
-        ST->isNonTemporal() ||
+    if (!VT.isFloatingPoint() || VT != ST->getMemoryVT() ||
+        LD->isNonTemporal() || ST->isNonTemporal() ||
         LD->getPointerInfo().getAddrSpace() != 0 ||
         ST->getPointerInfo().getAddrSpace() != 0)
       return SDValue();
@@ -19560,7 +19649,7 @@ bool DAGCombiner::mergeStoresOfConstantsOrVecElts(
     bool IsLE = DAG.getDataLayout().isLittleEndian();
     for (unsigned i = 0; i < NumStores; ++i) {
       unsigned Idx = IsLE ? (NumStores - 1 - i) : i;
-      StoreSDNode *St  = cast<StoreSDNode>(StoreNodes[Idx].MemNode);
+      StoreSDNode *St = cast<StoreSDNode>(StoreNodes[Idx].MemNode);
 
       SDValue Val = St->getValue();
       Val = peekThroughBitcasts(Val);
@@ -19874,7 +19963,7 @@ DAGCombiner::getConsecutiveStores(SmallVectorImpl<MemOpLink> &StoreNodes,
     size_t StartIdx = 0;
     while ((StartIdx + 1 < StoreNodes.size()) &&
            StoreNodes[StartIdx].OffsetFromBase + ElementSizeBytes !=
-              StoreNodes[StartIdx + 1].OffsetFromBase)
+               StoreNodes[StartIdx + 1].OffsetFromBase)
       ++StartIdx;
 
     // Bail if we don't have enough candidates to merge.
@@ -19976,7 +20065,8 @@ bool DAGCombiner::tryStoreMergeOfConstants(
 
       // We only use vectors if the target allows it and the function is not
       // marked with the noimplicitfloat attribute.
-      if (TLI.storeOfVectorConstantIsCheap(!NonZero, MemVT, i + 1, FirstStoreAS) &&
+      if (TLI.storeOfVectorConstantIsCheap(!NonZero, MemVT, i + 1,
+                                           FirstStoreAS) &&
           AllowVectors) {
         // Find a legal type for the vector store.
         unsigned Elts = (i + 1) * NumMemElts;
@@ -20489,17 +20579,17 @@ SDValue DAGCombiner::replaceStoreChain(StoreSDNode *ST, SDValue BetterChain) {
 
   // Replace the chain to avoid dependency.
   if (ST->isTruncatingStore()) {
-    ReplStore = DAG.getTruncStore(BetterChain, SL, ST->getValue(),
-                                  ST->getBasePtr(), ST->getMemoryVT(),
-                                  ST->getMemOperand());
+    ReplStore =
+        DAG.getTruncStore(BetterChain, SL, ST->getValue(), ST->getBasePtr(),
+                          ST->getMemoryVT(), ST->getMemOperand());
   } else {
     ReplStore = DAG.getStore(BetterChain, SL, ST->getValue(), ST->getBasePtr(),
                              ST->getMemOperand());
   }
 
   // Create token to keep both nodes around.
-  SDValue Token = DAG.getNode(ISD::TokenFactor, SL,
-                              MVT::Other, ST->getChain(), ReplStore);
+  SDValue Token =
+      DAG.getNode(ISD::TokenFactor, SL, MVT::Other, ST->getChain(), ReplStore);
 
   // Make sure the new and old chains are cleaned up.
   AddToWorklist(Token.getNode());
@@ -20532,7 +20622,7 @@ SDValue DAGCombiner::replaceStoreOfFPConstant(StoreSDNode *ST) {
   switch (CFP->getSimpleValueType(0).SimpleTy) {
   default:
     llvm_unreachable("Unknown FP type");
-  case MVT::f16:    // We don't do this for these yet.
+  case MVT::f16: // We don't do this for these yet.
   case MVT::bf16:
   case MVT::f80:
   case MVT::f128:
@@ -20541,25 +20631,22 @@ SDValue DAGCombiner::replaceStoreOfFPConstant(StoreSDNode *ST) {
   case MVT::f32:
     if ((isTypeLegal(MVT::i32) && !LegalOperations && ST->isSimple()) ||
         TLI.isOperationLegalOrCustom(ISD::STORE, MVT::i32)) {
-      Tmp = DAG.getConstant((uint32_t)CFP->getValueAPF().
-                            bitcastToAPInt().getZExtValue(), SDLoc(CFP),
-                            MVT::i32);
+      Tmp = DAG.getConstant(
+          (uint32_t)CFP->getValueAPF().bitcastToAPInt().getZExtValue(),
+          SDLoc(CFP), MVT::i32);
       return DAG.getStore(Chain, DL, Tmp, Ptr, ST->getMemOperand());
     }
 
     return SDValue();
   case MVT::f64:
-    if ((TLI.isTypeLegal(MVT::i64) && !LegalOperations &&
-         ST->isSimple()) ||
+    if ((TLI.isTypeLegal(MVT::i64) && !LegalOperations && ST->isSimple()) ||
         TLI.isOperationLegalOrCustom(ISD::STORE, MVT::i64)) {
-      Tmp = DAG.getConstant(CFP->getValueAPF().bitcastToAPInt().
-                            getZExtValue(), SDLoc(CFP), MVT::i64);
-      return DAG.getStore(Chain, DL, Tmp,
-                          Ptr, ST->getMemOperand());
+      Tmp = DAG.getConstant(CFP->getValueAPF().bitcastToAPInt().getZExtValue(),
+                            SDLoc(CFP), MVT::i64);
+      return DAG.getStore(Chain, DL, Tmp, Ptr, ST->getMemOperand());
     }
 
-    if (ST->isSimple() &&
-        TLI.isOperationLegalOrCustom(ISD::STORE, MVT::i32)) {
+    if (ST->isSimple() && TLI.isOperationLegalOrCustom(ISD::STORE, MVT::i32)) {
       // Many FP stores are not made apparent until after legalize, e.g. for
       // argument passing.  Since this is so common, custom legalize the
       // 64-bit integer store into two 32-bit stores.
@@ -20578,8 +20665,7 @@ SDValue DAGCombiner::replaceStoreOfFPConstant(StoreSDNode *ST) {
       SDValue St1 = DAG.getStore(Chain, DL, Hi, Ptr,
                                  ST->getPointerInfo().getWithOffset(4),
                                  ST->getOriginalAlign(), MMOFlags, AAInfo);
-      return DAG.getNode(ISD::TokenFactor, DL, MVT::Other,
-                         St0, St1);
+      return DAG.getNode(ISD::TokenFactor, DL, MVT::Other, St0, St1);
     }
 
     return SDValue();
@@ -20645,10 +20731,10 @@ SDValue DAGCombiner::replaceStoreOfInsertLoad(StoreSDNode *ST) {
 }
 
 SDValue DAGCombiner::visitSTORE(SDNode *N) {
-  StoreSDNode *ST  = cast<StoreSDNode>(N);
+  StoreSDNode *ST = cast<StoreSDNode>(N);
   SDValue Chain = ST->getChain();
   SDValue Value = ST->getValue();
-  SDValue Ptr   = ST->getBasePtr();
+  SDValue Ptr = ST->getBasePtr();
 
   // If this is a store of a bit convert, store the input value if the
   // resultant store does not need a higher alignment than the original.
@@ -20663,8 +20749,8 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
     // TODO: May be able to relax for unordered atomics (see D66309)
     if (((!LegalOperations && ST->isSimple()) ||
          TLI.isOperationLegal(ISD::STORE, SVT)) &&
-        TLI.isStoreBitCastBeneficial(Value.getValueType(), SVT,
-                                     DAG, *ST->getMemOperand())) {
+        TLI.isStoreBitCastBeneficial(Value.getValueType(), SVT, DAG,
+                                     *ST->getMemOperand())) {
       return DAG.getStore(Chain, SDLoc(N), Value.getOperand(0), Ptr,
                           ST->getMemOperand());
     }
@@ -20789,8 +20875,8 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
 
   // TODO: Can relax for unordered atomics (see D66309)
   if (StoreSDNode *ST1 = dyn_cast<StoreSDNode>(Chain)) {
-    if (ST->isUnindexed() && ST->isSimple() &&
-        ST1->isUnindexed() && ST1->isSimple()) {
+    if (ST->isUnindexed() && ST->isSimple() && ST1->isUnindexed() &&
+        ST1->isSimple()) {
       if (OptLevel != CodeGenOpt::None && ST1->getBasePtr() == Ptr &&
           ST1->getValue() == Value && ST->getMemoryVT() == ST1->getMemoryVT() &&
           ST->getAddressSpace() == ST1->getAddressSpace()) {
@@ -20839,8 +20925,8 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
       Value->hasOneUse() && ST->isUnindexed() &&
       TLI.canCombineTruncStore(Value.getOperand(0).getValueType(),
                                ST->getMemoryVT(), LegalOperations)) {
-    return DAG.getTruncStore(Chain, SDLoc(N), Value.getOperand(0),
-                             Ptr, ST->getMemoryVT(), ST->getMemOperand());
+    return DAG.getTruncStore(Chain, SDLoc(N), Value.getOperand(0), Ptr,
+                             ST->getMemoryVT(), ST->getMemOperand());
   }
 
   // Always perform this optimization before types are legal. If the target
@@ -20852,7 +20938,8 @@ SDValue DAGCombiner::visitSTORE(SDNode *N) {
       // Keep trying to merge store sequences until we are unable to do so
       // or until we merge the last store on the chain.
       bool Changed = mergeConsecutiveStores(ST);
-      if (!Changed) break;
+      if (!Changed)
+        break;
       // Return N as merge only uses CombineTo and no worklist clean
       // up is necessary.
       if (N->getOpcode() == ISD::DELETED_NODE || !isa<StoreSDNode>(N))
@@ -21343,16 +21430,16 @@ SDValue DAGCombiner::visitINSERT_VECTOR_ELT(SDNode *N) {
   //
   // Do this only if the child insert_vector node has one use; also
   // do this only if indices are both constants and Idx1 < Idx0.
-  if (InVec.getOpcode() == ISD::INSERT_VECTOR_ELT && InVec.hasOneUse()
-      && isa<ConstantSDNode>(InVec.getOperand(2))) {
+  if (InVec.getOpcode() == ISD::INSERT_VECTOR_ELT && InVec.hasOneUse() &&
+      isa<ConstantSDNode>(InVec.getOperand(2))) {
     unsigned OtherElt = InVec.getConstantOperandVal(2);
     if (Elt < OtherElt) {
       // Swap nodes.
       SDValue NewOp = DAG.getNode(ISD::INSERT_VECTOR_ELT, DL, VT,
                                   InVec.getOperand(0), InVal, EltNo);
       AddToWorklist(NewOp.getNode());
-      return DAG.getNode(ISD::INSERT_VECTOR_ELT, SDLoc(InVec.getNode()),
-                         VT, NewOp, InVec.getOperand(1), InVec.getOperand(2));
+      return DAG.getNode(ISD::INSERT_VECTOR_ELT, SDLoc(InVec.getNode()), VT,
+                         NewOp, InVec.getOperand(1), InVec.getOperand(2));
     }
   }
 
@@ -21799,9 +21886,10 @@ SDValue DAGCombiner::visitEXTRACT_VECTOR_ELT(SDNode *N) {
     if (DAG.isKnownNeverZero(Index))
       return DAG.getUNDEF(ScalarVT);
 
-    // Check if the result type doesn't match the inserted element type. 
+    // Check if the result type doesn't match the inserted element type.
     // The inserted element and extracted element may have mismatched bitwidth.
-    // As a result, EXTRACT_VECTOR_ELT may extend or truncate the extracted vector.
+    // As a result, EXTRACT_VECTOR_ELT may extend or truncate the extracted
+    // vector.
     SDValue InOp = VecOp.getOperand(0);
     if (InOp.getValueType() != ScalarVT) {
       assert(InOp.getValueType().isInteger() && ScalarVT.isInteger());
@@ -22073,19 +22161,19 @@ SDValue DAGCombiner::visitEXTRACT_VECTOR_ELT(SDNode *N) {
     SDLoc SL(N);
     EVT ConcatVT = VecOp.getOperand(0).getValueType();
     unsigned ConcatNumElts = ConcatVT.getVectorNumElements();
-    SDValue NewIdx = DAG.getConstant(Elt % ConcatNumElts, SL,
-                                     Index.getValueType());
+    SDValue NewIdx =
+        DAG.getConstant(Elt % ConcatNumElts, SL, Index.getValueType());
 
     SDValue ConcatOp = VecOp.getOperand(Elt / ConcatNumElts);
-    SDValue Elt = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL,
-                              ConcatVT.getVectorElementType(),
-                              ConcatOp, NewIdx);
+    SDValue Elt =
+        DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL,
+                    ConcatVT.getVectorElementType(), ConcatOp, NewIdx);
     return DAG.getNode(ISD::BITCAST, SL, ScalarVT, Elt);
   }
 
   // Make sure we found a non-volatile load and the extractelement is
   // the only use.
-  if (!LN0 || !LN0->hasNUsesOfValue(1,0) || !LN0->isSimple())
+  if (!LN0 || !LN0->hasNUsesOfValue(1, 0) || !LN0->isSimple())
     return SDValue();
 
   // If Idx was -1 above, Elt is going to be -1, so just return undef.
@@ -22121,9 +22209,10 @@ SDValue DAGCombiner::reduceBuildVecExtToExtBuildVec(SDNode *N) {
   for (unsigned i = 0; i != NumInScalars; ++i) {
     SDValue In = N->getOperand(i);
     // Ignore undef inputs.
-    if (In.isUndef()) continue;
+    if (In.isUndef())
+      continue;
 
-    bool AnyExt  = In.getOpcode() == ISD::ANY_EXTEND;
+    bool AnyExt = In.getOpcode() == ISD::ANY_EXTEND;
     bool ZeroExt = In.getOpcode() == ISD::ZERO_EXTEND;
 
     // Abort if the element is not an extension.
@@ -22169,10 +22258,10 @@ SDValue DAGCombiner::reduceBuildVecExtToExtBuildVec(SDNode *N) {
     return SDValue();
 
   bool isLE = DAG.getDataLayout().isLittleEndian();
-  unsigned ElemRatio = OutScalarTy.getSizeInBits()/SourceType.getSizeInBits();
+  unsigned ElemRatio = OutScalarTy.getSizeInBits() / SourceType.getSizeInBits();
   assert(ElemRatio > 1 && "Invalid element size ratio");
-  SDValue Filler = AllAnyExt ? DAG.getUNDEF(SourceType):
-                               DAG.getConstant(0, DL, SourceType);
+  SDValue Filler =
+      AllAnyExt ? DAG.getUNDEF(SourceType) : DAG.getConstant(0, DL, SourceType);
 
   unsigned NewBVElems = ElemRatio * VT.getVectorNumElements();
   SmallVector<SDValue, 8> Ops(NewBVElems, Filler);
@@ -22181,15 +22270,14 @@ SDValue DAGCombiner::reduceBuildVecExtToExtBuildVec(SDNode *N) {
   for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
     SDValue Cast = N->getOperand(i);
     assert((Cast.getOpcode() == ISD::ANY_EXTEND ||
-            Cast.getOpcode() == ISD::ZERO_EXTEND ||
-            Cast.isUndef()) && "Invalid cast opcode");
+            Cast.getOpcode() == ISD::ZERO_EXTEND || Cast.isUndef()) &&
+           "Invalid cast opcode");
     SDValue In;
     if (Cast.isUndef())
       In = DAG.getUNDEF(SourceType);
     else
       In = Cast->getOperand(0);
-    unsigned Index = isLE ? (i * ElemRatio) :
-                            (i * ElemRatio + (ElemRatio - 1));
+    unsigned Index = isLE ? (i * ElemRatio) : (i * ElemRatio + (ElemRatio - 1));
 
     assert(Index < Ops.size() && "Invalid index");
     Ops[Index] = In;
@@ -22197,12 +22285,10 @@ SDValue DAGCombiner::reduceBuildVecExtToExtBuildVec(SDNode *N) {
 
   // The type of the new BUILD_VECTOR node.
   EVT VecVT = EVT::getVectorVT(*DAG.getContext(), SourceType, NewBVElems);
-  assert(VecVT.getSizeInBits() == VT.getSizeInBits() &&
-         "Invalid vector size");
+  assert(VecVT.getSizeInBits() == VT.getSizeInBits() && "Invalid vector size");
   // Check if the new vector type is legal.
-  if (!isTypeLegal(VecVT) ||
-      (!TLI.isOperationLegal(ISD::BUILD_VECTOR, VecVT) &&
-       TLI.isOperationLegal(ISD::BUILD_VECTOR, VT)))
+  if (!isTypeLegal(VecVT) || (!TLI.isOperationLegal(ISD::BUILD_VECTOR, VecVT) &&
+                              TLI.isOperationLegal(ISD::BUILD_VECTOR, VT)))
     return SDValue();
 
   // Make the new BUILD_VECTOR.
@@ -22254,7 +22340,8 @@ SDValue DAGCombiner::reduceBuildVecTruncToBitCast(SDNode *N) {
   for (unsigned i = 0; i != NumInScalars; ++i) {
     SDValue In = PeekThroughBitcast(N->getOperand(i));
     // Ignore undef inputs.
-    if (In.isUndef()) continue;
+    if (In.isUndef())
+      continue;
 
     if (In.getOpcode() != ISD::TRUNCATE)
       return SDValue();
@@ -22983,8 +23070,8 @@ SDValue DAGCombiner::visitBUILD_VECTOR(SDNode *N) {
                                      SrcVT.getVectorElementType(), NumElts);
         if (!LegalTypes || TLI.isTypeLegal(NewVT)) {
           SmallVector<SDValue, 8> Ops(N->getNumOperands(), Splat);
-          SDValue Concat = DAG.getNode(ISD::CONCAT_VECTORS, SDLoc(N),
-                                       NewVT, Ops);
+          SDValue Concat =
+              DAG.getNode(ISD::CONCAT_VECTORS, SDLoc(N), NewVT, Ops);
           return DAG.getBitcast(VT, Concat);
         }
       }
@@ -23425,7 +23512,7 @@ SDValue DAGCombiner::visitCONCAT_VECTORS(SDNode *N) {
     // concat_vectors(scalar_to_vector(scalar), undef) ->
     //     scalar_to_vector(scalar)
     if (!LegalOperations && Scalar.getOpcode() == ISD::SCALAR_TO_VECTOR &&
-         Scalar.hasOneUse()) {
+        Scalar.hasOneUse()) {
       EVT SVT = Scalar.getValueType().getVectorElementType();
       if (SVT == Scalar.getOperand(0).getValueType())
         Scalar = Scalar.getOperand(0);
@@ -24015,8 +24102,8 @@ SDValue DAGCombiner::visitEXTRACT_SUBVECTOR(SDNode *N) {
     if ((SrcNumElts % DestNumElts) == 0) {
       unsigned SrcDestRatio = SrcNumElts / DestNumElts;
       ElementCount NewExtEC = NVT.getVectorElementCount() * SrcDestRatio;
-      EVT NewExtVT = EVT::getVectorVT(*DAG.getContext(), SrcVT.getScalarType(),
-                                      NewExtEC);
+      EVT NewExtVT =
+          EVT::getVectorVT(*DAG.getContext(), SrcVT.getScalarType(), NewExtEC);
       if (TLI.isOperationLegalOrCustom(ISD::EXTRACT_SUBVECTOR, NewExtVT)) {
         SDLoc DL(N);
         SDValue NewIndex = DAG.getVectorIdxConstant(ExtIdx * SrcDestRatio, DL);
@@ -24201,8 +24288,8 @@ static SDValue foldShuffleOfConcatUndefs(ShuffleVectorSDNode *Shuf,
 
   // Ask the target if this is a valid transform.
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
-  EVT HalfVT = EVT::getVectorVT(*DAG.getContext(), VT.getScalarType(),
-                                HalfNumElts);
+  EVT HalfVT =
+      EVT::getVectorVT(*DAG.getContext(), VT.getScalarType(), HalfNumElts);
   if (!TLI.isShuffleMaskLegal(Mask0, HalfVT) ||
       !TLI.isShuffleMaskLegal(Mask1, HalfVT))
     return SDValue();
@@ -24689,7 +24776,8 @@ static SDValue combineShuffleOfSplatVal(ShuffleVectorSDNode *Shuf,
     }
   }
 
-  // If the inner operand is a known splat with no undefs, just return that directly.
+  // If the inner operand is a known splat with no undefs, just return that
+  // directly.
   // TODO: Create DemandedElts mask from Shuf's mask.
   // TODO: Allow undef elements and merge with the shuffle code below.
   if (DAG.isSplatValue(Shuf->getOperand(0), /*AllowUndefs*/ false))
@@ -25111,11 +25199,10 @@ SDValue DAGCombiner::visitVECTOR_SHUFFLE(SDNode *N) {
   if (SDValue V = combineTruncationShuffle(SVN, DAG))
     return V;
 
-  if (N0.getOpcode() == ISD::CONCAT_VECTORS &&
-      Level < AfterLegalizeVectorOps &&
+  if (N0.getOpcode() == ISD::CONCAT_VECTORS && Level < AfterLegalizeVectorOps &&
       (N1.isUndef() ||
-      (N1.getOpcode() == ISD::CONCAT_VECTORS &&
-       N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType()))) {
+       (N1.getOpcode() == ISD::CONCAT_VECTORS &&
+        N0.getOperand(0).getValueType() == N1.getOperand(0).getValueType()))) {
     if (SDValue V = partitionShuffleOfConcats(N, DAG))
       return V;
   }
@@ -25124,8 +25211,7 @@ SDValue DAGCombiner::visitVECTOR_SHUFFLE(SDNode *N) {
   // only low-half elements of a concat with undef:
   // shuf (concat X, X), undef, Mask --> shuf (concat X, undef), undef, Mask'
   if (N0.getOpcode() == ISD::CONCAT_VECTORS && N1.isUndef() &&
-      N0.getNumOperands() == 2 &&
-      N0.getOperand(0) == N0.getOperand(1)) {
+      N0.getNumOperands() == 2 && N0.getOperand(0) == N0.getOperand(1)) {
     int HalfNumElts = (int)NumElts / 2;
     SmallVector<int, 8> NewMask;
     for (unsigned i = 0; i != NumElts; ++i) {
@@ -25251,8 +25337,9 @@ SDValue DAGCombiner::visitVECTOR_SHUFFLE(SDNode *N) {
         // XformToShuffleWithZero which loses UNDEF mask elements.
         if (TLI.isVectorClearMaskLegal(ClearMask, IntVT))
           return DAG.getBitcast(
-              VT, DAG.getVectorShuffle(IntVT, DL, DAG.getBitcast(IntVT, N0),
-                                      DAG.getConstant(0, DL, IntVT), ClearMask));
+              VT,
+              DAG.getVectorShuffle(IntVT, DL, DAG.getBitcast(IntVT, N0),
+                                   DAG.getConstant(0, DL, IntVT), ClearMask));
 
         if (TLI.isOperationLegalOrCustom(ISD::AND, IntVT))
           return DAG.getBitcast(
@@ -25271,9 +25358,8 @@ SDValue DAGCombiner::visitVECTOR_SHUFFLE(SDNode *N) {
   // If this shuffle only has a single input that is a bitcasted shuffle,
   // attempt to merge the 2 shuffles and suitably bitcast the inputs/output
   // back to their original types.
-  if (N0.getOpcode() == ISD::BITCAST && N0.hasOneUse() &&
-      N1.isUndef() && Level < AfterLegalizeVectorOps &&
-      TLI.isTypeLegal(VT)) {
+  if (N0.getOpcode() == ISD::BITCAST && N0.hasOneUse() && N1.isUndef() &&
+      Level < AfterLegalizeVectorOps && TLI.isTypeLegal(VT)) {
 
     SDValue BC0 = peekThroughOneUseBitcasts(N0);
     if (BC0.getOpcode() == ISD::VECTOR_SHUFFLE && BC0.hasOneUse()) {
@@ -25536,8 +25622,8 @@ SDValue DAGCombiner::visitVECTOR_SHUFFLE(SDNode *N) {
           SDValue Op1 = LeftOp ? Op10 : Op11;
           if (Commute)
             std::swap(Op0, Op1);
-          // Only accept the merged shuffle if we don't introduce undef elements,
-          // or the inner shuffle already contained undef elements.
+          // Only accept the merged shuffle if we don't introduce undef
+          // elements, or the inner shuffle already contained undef elements.
           auto *SVN0 = dyn_cast<ShuffleVectorSDNode>(Op0);
           return SVN0 && InnerN->isOnlyUserOf(SVN0) &&
                  MergeInnerShuffle(Commute, SVN, SVN0, Op1, TLI, SV0, SV1,
@@ -25712,11 +25798,11 @@ SDValue DAGCombiner::visitINSERT_SUBVECTOR(SDNode *N) {
     if (isNullConstant(N2) &&
         VT.isScalableVector() == SrcVT.isScalableVector()) {
       if (VT.getVectorMinNumElements() >= SrcVT.getVectorMinNumElements())
-        return DAG.getNode(ISD::INSERT_SUBVECTOR, SDLoc(N),
-                           VT, N0, N1.getOperand(0), N2);
+        return DAG.getNode(ISD::INSERT_SUBVECTOR, SDLoc(N), VT, N0,
+                           N1.getOperand(0), N2);
       else
-        return DAG.getNode(ISD::EXTRACT_SUBVECTOR, SDLoc(N),
-                           VT, N1.getOperand(0), N2);
+        return DAG.getNode(ISD::EXTRACT_SUBVECTOR, SDLoc(N), VT,
+                           N1.getOperand(0), N2);
     }
   }
 
@@ -25824,8 +25910,8 @@ SDValue DAGCombiner::visitINSERT_SUBVECTOR(SDNode *N) {
       SDValue NewOp = DAG.getNode(ISD::INSERT_SUBVECTOR, SDLoc(N), VT,
                                   N0.getOperand(0), N1, N2);
       AddToWorklist(NewOp.getNode());
-      return DAG.getNode(ISD::INSERT_SUBVECTOR, SDLoc(N0.getNode()),
-                         VT, NewOp, N0.getOperand(1), N0.getOperand(2));
+      return DAG.getNode(ISD::INSERT_SUBVECTOR, SDLoc(N0.getNode()), VT, NewOp,
+                         N0.getOperand(1), N0.getOperand(2));
     }
   }
 
@@ -25902,8 +25988,8 @@ SDValue DAGCombiner::visitVECREDUCE(SDNode *N) {
   // On a boolean vector an and/or reduction is the same as a umin/umax
   // reduction. Convert them if the latter is legal while the former isn't.
   if (Opcode == ISD::VECREDUCE_AND || Opcode == ISD::VECREDUCE_OR) {
-    unsigned NewOpcode = Opcode == ISD::VECREDUCE_AND
-        ? ISD::VECREDUCE_UMIN : ISD::VECREDUCE_UMAX;
+    unsigned NewOpcode = Opcode == ISD::VECREDUCE_AND ? ISD::VECREDUCE_UMIN
+                                                      : ISD::VECREDUCE_UMAX;
     if (!TLI.isOperationLegalOrCustom(Opcode, VT) &&
         TLI.isOperationLegalOrCustom(NewOpcode, VT) &&
         DAG.ComputeNumSignBits(N0) == VT.getScalarSizeInBits())
@@ -26382,8 +26468,7 @@ SDValue DAGCombiner::SimplifySelect(const SDLoc &DL, SDValue N0, SDValue N1,
     // Otherwise, just return whatever node we got back, like fabs.
     if (SCC.getOpcode() == ISD::SELECT_CC) {
       const SDNodeFlags Flags = N0->getFlags();
-      SDValue SETCC = DAG.getNode(ISD::SETCC, SDLoc(N0),
-                                  N0.getValueType(),
+      SDValue SETCC = DAG.getNode(ISD::SETCC, SDLoc(N0), N0.getValueType(),
                                   SCC.getOperand(0), SCC.getOperand(1),
                                   SCC.getOperand(4), Flags);
       AddToWorklist(SETCC.getNode());
@@ -26428,9 +26513,8 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
           Zero = isConstOrConstSplatFP(Cmp.getOperand(1));
         }
       }
-      if (Zero && Zero->isZero() &&
-          Sqrt.getOperand(0) == CmpLHS && (CC == ISD::SETOLT ||
-          CC == ISD::SETULT || CC == ISD::SETLT)) {
+      if (Zero && Zero->isZero() && Sqrt.getOperand(0) == CmpLHS &&
+          (CC == ISD::SETOLT || CC == ISD::SETULT || CC == ISD::SETLT)) {
         // We have: (select (setcc x, [+-]0.0, *lt), NaN, (fsqrt x))
         CombineTo(TheSelect, Sqrt);
         return true;
@@ -26438,12 +26522,13 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
     }
   }
   // Cannot simplify select with vector condition
-  if (TheSelect->getOperand(0).getValueType().isVector()) return false;
+  if (TheSelect->getOperand(0).getValueType().isVector())
+    return false;
 
   // If this is a select from two identical things, try to pull the operation
   // through the select.
-  if (LHS.getOpcode() != RHS.getOpcode() ||
-      !LHS.hasOneUse() || !RHS.hasOneUse())
+  if (LHS.getOpcode() != RHS.getOpcode() || !LHS.hasOneUse() ||
+      !RHS.hasOneUse())
     return false;
 
   // If this is a load and the token chain is identical, replace the select
@@ -26523,11 +26608,10 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
            SDNode::hasPredecessorHelper(RLD, Visited, Worklist)))
         return false;
 
-      Addr = DAG.getSelect(SDLoc(TheSelect),
-                           LLD->getBasePtr().getValueType(),
+      Addr = DAG.getSelect(SDLoc(TheSelect), LLD->getBasePtr().getValueType(),
                            TheSelect->getOperand(0), LLD->getBasePtr(),
                            RLD->getBasePtr());
-    } else {  // Otherwise SELECT_CC
+    } else { // Otherwise SELECT_CC
       // We cannot do this optimization if any pair of {RLD, LLD} is a
       // predecessor to {RLD, LLD, CondLHS, CondRHS}. As we've already compared
       // the Loads, we only need to check if CondLHS/CondRHS is a successor to
@@ -26545,12 +26629,10 @@ bool DAGCombiner::SimplifySelectOps(SDNode *TheSelect, SDValue LHS,
            SDNode::hasPredecessorHelper(RLD, Visited, Worklist)))
         return false;
 
-      Addr = DAG.getNode(ISD::SELECT_CC, SDLoc(TheSelect),
-                         LLD->getBasePtr().getValueType(),
-                         TheSelect->getOperand(0),
-                         TheSelect->getOperand(1),
-                         LLD->getBasePtr(), RLD->getBasePtr(),
-                         TheSelect->getOperand(4));
+      Addr = DAG.getNode(
+          ISD::SELECT_CC, SDLoc(TheSelect), LLD->getBasePtr().getValueType(),
+          TheSelect->getOperand(0), TheSelect->getOperand(1), LLD->getBasePtr(),
+          RLD->getBasePtr(), TheSelect->getOperand(4));
     }
 
     SDValue Load;
@@ -26784,8 +26866,8 @@ SDValue DAGCombiner::convertSelectOfFPConstantsToLoadOffset(
   if (!TV->hasOneUse() && !FV->hasOneUse())
     return SDValue();
 
-  Constant *Elts[] = { const_cast<ConstantFP*>(FV->getConstantFPValue()),
-                       const_cast<ConstantFP*>(TV->getConstantFPValue()) };
+  Constant *Elts[] = {const_cast<ConstantFP *>(FV->getConstantFPValue()),
+                      const_cast<ConstantFP *>(TV->getConstantFPValue())};
   Type *FPTy = Elts[0]->getType();
   const DataLayout &TD = DAG.getDataLayout();
 
@@ -26807,9 +26889,9 @@ SDValue DAGCombiner::convertSelectOfFPConstantsToLoadOffset(
   AddToWorklist(CstOffset.getNode());
   CPIdx = DAG.getNode(ISD::ADD, DL, CPIdx.getValueType(), CPIdx, CstOffset);
   AddToWorklist(CPIdx.getNode());
-  return DAG.getLoad(TV->getValueType(0), DL, DAG.getEntryNode(), CPIdx,
-                     MachinePointerInfo::getConstantPool(
-                         DAG.getMachineFunction()), Alignment);
+  return DAG.getLoad(
+      TV->getValueType(0), DL, DAG.getEntryNode(), CPIdx,
+      MachinePointerInfo::getConstantPool(DAG.getMachineFunction()), Alignment);
 }
 
 /// Simplify an expression of the form (N0 cond N1) ? N2 : N3
@@ -26818,7 +26900,8 @@ SDValue DAGCombiner::SimplifySelectCC(const SDLoc &DL, SDValue N0, SDValue N1,
                                       SDValue N2, SDValue N3, ISD::CondCode CC,
                                       bool NotExtCompare) {
   // (x ? y : y) -> y.
-  if (N2 == N3) return N2;
+  if (N2 == N3)
+    return N2;
 
   EVT CmpOpVT = N0.getValueType();
   EVT CmpResVT = getSetCCResultType(CmpOpVT);
@@ -26866,9 +26949,8 @@ SDValue DAGCombiner::SimplifySelectCC(const SDLoc &DL, SDValue N0, SDValue N1,
 
         // Now arithmetic right shift it all the way over, so the result is
         // either all-ones, or zero.
-        SDValue ShrAmt =
-          DAG.getConstant(ShCt, SDLoc(Shl),
-                          getShiftAmountTy(Shl.getValueType()));
+        SDValue ShrAmt = DAG.getConstant(ShCt, SDLoc(Shl),
+                                         getShiftAmountTy(Shl.getValueType()));
         SDValue Shr = DAG.getNode(ISD::SRA, SDLoc(N0), VT, Shl, ShrAmt);
 
         return DAG.getNode(ISD::AND, DL, VT, Shr, N3);
@@ -26985,8 +27067,7 @@ SDValue DAGCombiner::SimplifySelectCC(const SDLoc &DL, SDValue N0, SDValue N1,
 SDValue DAGCombiner::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
                                    ISD::CondCode Cond, const SDLoc &DL,
                                    bool foldBooleans) {
-  TargetLowering::DAGCombinerInfo
-    DagCombineInfo(DAG, Level, false, this);
+  TargetLowering::DAGCombinerInfo DagCombineInfo(DAG, Level, false, this);
   return TLI.SimplifySetCC(VT, N0, N1, Cond, foldBooleans, DagCombineInfo, DL);
 }
 
@@ -27159,8 +27240,8 @@ SDValue DAGCombiner::BuildDivEstimate(SDValue N, SDValue Op,
 ///   X_{i+1} = X_i (1.5 - A X_i^2 / 2)
 /// As a result, we precompute A/2 prior to the iteration loop.
 SDValue DAGCombiner::buildSqrtNROneConst(SDValue Arg, SDValue Est,
-                                         unsigned Iterations,
-                                         SDNodeFlags Flags, bool Reciprocal) {
+                                         unsigned Iterations, SDNodeFlags Flags,
+                                         bool Reciprocal) {
   EVT VT = Arg.getValueType();
   SDLoc DL(Arg);
   SDValue ThreeHalves = DAG.getConstantFP(1.5, DL, VT);
@@ -27191,8 +27272,8 @@ SDValue DAGCombiner::buildSqrtNROneConst(SDValue Arg, SDValue Est,
 ///     =>
 ///   X_{i+1} = (-0.5 * X_i) * (A * X_i * X_i + (-3.0))
 SDValue DAGCombiner::buildSqrtNRTwoConst(SDValue Arg, SDValue Est,
-                                         unsigned Iterations,
-                                         SDNodeFlags Flags, bool Reciprocal) {
+                                         unsigned Iterations, SDNodeFlags Flags,
+                                         bool Reciprocal) {
   EVT VT = Arg.getValueType();
   SDLoc DL(Arg);
   SDValue MinusThree = DAG.getConstantFP(-3.0, DL, VT);
@@ -27252,15 +27333,14 @@ SDValue DAGCombiner::buildSqrtEstimateImpl(SDValue Op, SDNodeFlags Flags,
   int Iterations = TLI.getSqrtRefinementSteps(VT, MF);
 
   bool UseOneConstNR = false;
-  if (SDValue Est =
-      TLI.getSqrtEstimate(Op, DAG, Enabled, Iterations, UseOneConstNR,
-                          Reciprocal)) {
+  if (SDValue Est = TLI.getSqrtEstimate(Op, DAG, Enabled, Iterations,
+                                        UseOneConstNR, Reciprocal)) {
     AddToWorklist(Est.getNode());
 
     if (Iterations > 0)
       Est = UseOneConstNR
-            ? buildSqrtNROneConst(Op, Est, Iterations, Flags, Reciprocal)
-            : buildSqrtNRTwoConst(Op, Est, Iterations, Flags, Reciprocal);
+                ? buildSqrtNROneConst(Op, Est, Iterations, Flags, Reciprocal)
+                : buildSqrtNRTwoConst(Op, Est, Iterations, Flags, Reciprocal);
     if (!Reciprocal) {
       SDLoc DL(Op);
       // Try the target specific test first.
@@ -27303,11 +27383,10 @@ bool DAGCombiner::mayAlias(SDNode *Op0, SDNode *Op1) const {
     if (const auto *LSN = dyn_cast<LSBaseSDNode>(N)) {
       int64_t Offset = 0;
       if (auto *C = dyn_cast<ConstantSDNode>(LSN->getOffset()))
-        Offset = (LSN->getAddressingMode() == ISD::PRE_INC)
-                     ? C->getSExtValue()
-                     : (LSN->getAddressingMode() == ISD::PRE_DEC)
-                           ? -1 * C->getSExtValue()
-                           : 0;
+        Offset = (LSN->getAddressingMode() == ISD::PRE_INC) ? C->getSExtValue()
+                 : (LSN->getAddressingMode() == ISD::PRE_DEC)
+                     ? -1 * C->getSExtValue()
+                     : 0;
       uint64_t Size =
           MemoryLocation::getSizeOrUnknown(LSN->getMemoryVT().getStoreSize());
       return {LSN->isVolatile(),
@@ -27429,8 +27508,8 @@ bool DAGCombiner::mayAlias(SDNode *Op0, SDNode *Op1) const {
 /// looking for aliasing nodes and adding them to the Aliases vector.
 void DAGCombiner::GatherAllAliases(SDNode *N, SDValue OriginalChain,
                                    SmallVectorImpl<SDValue> &Aliases) {
-  SmallVector<SDValue, 8> Chains;     // List of chains to visit.
-  SmallPtrSet<SDNode *, 16> Visited;  // Visited node set.
+  SmallVector<SDValue, 8> Chains;    // List of chains to visit.
+  SmallPtrSet<SDNode *, 16> Visited; // Visited node set.
 
   // Get alias information for node.
   // TODO: relax aliasing for unordered atomics (see D66309)
diff --git a/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp b/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
index f0affce7b6b805c..08afffa0070bd25 100644
--- a/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/FastISel.cpp
@@ -307,8 +307,8 @@ Register FastISel::materializeConstant(const Value *V, MVT VT) {
         Register IntegerReg =
             getRegForValue(ConstantInt::get(V->getContext(), SIntVal));
         if (IntegerReg)
-          Reg = fastEmit_r(IntVT.getSimpleVT(), VT, ISD::SINT_TO_FP,
-                           IntegerReg);
+          Reg =
+              fastEmit_r(IntVT.getSimpleVT(), VT, ISD::SINT_TO_FP, IntegerReg);
       }
     }
   } else if (const auto *Op = dyn_cast<Operator>(V)) {
@@ -392,8 +392,7 @@ Register FastISel::getRegForGEPIndex(const Value *Idx) {
   if (IdxVT.bitsLT(PtrVT)) {
     IdxN = fastEmit_r(IdxVT.getSimpleVT(), PtrVT, ISD::SIGN_EXTEND, IdxN);
   } else if (IdxVT.bitsGT(PtrVT)) {
-    IdxN =
-        fastEmit_r(IdxVT.getSimpleVT(), PtrVT, ISD::TRUNCATE, IdxN);
+    IdxN = fastEmit_r(IdxVT.getSimpleVT(), PtrVT, ISD::TRUNCATE, IdxN);
   }
   return IdxN;
 }
@@ -468,9 +467,8 @@ bool FastISel::selectBinaryOp(const User *I, unsigned ISDOpcode) {
       if (!Op1)
         return false;
 
-      Register ResultReg =
-          fastEmit_ri_(VT.getSimpleVT(), ISDOpcode, Op1, CI->getZExtValue(),
-                       VT.getSimpleVT());
+      Register ResultReg = fastEmit_ri_(VT.getSimpleVT(), ISDOpcode, Op1,
+                                        CI->getZExtValue(), VT.getSimpleVT());
       if (!ResultReg)
         return false;
 
@@ -501,8 +499,8 @@ bool FastISel::selectBinaryOp(const User *I, unsigned ISDOpcode) {
       ISDOpcode = ISD::AND;
     }
 
-    Register ResultReg = fastEmit_ri_(VT.getSimpleVT(), ISDOpcode, Op0, Imm,
-                                      VT.getSimpleVT());
+    Register ResultReg =
+        fastEmit_ri_(VT.getSimpleVT(), ISDOpcode, Op0, Imm, VT.getSimpleVT());
     if (!ResultReg)
       return false;
 
@@ -516,8 +514,8 @@ bool FastISel::selectBinaryOp(const User *I, unsigned ISDOpcode) {
     return false;
 
   // Now we have both operands in registers. Emit the instruction.
-  Register ResultReg = fastEmit_rr(VT.getSimpleVT(), VT.getSimpleVT(),
-                                   ISDOpcode, Op0, Op1);
+  Register ResultReg =
+      fastEmit_rr(VT.getSimpleVT(), VT.getSimpleVT(), ISDOpcode, Op0, Op1);
   if (!ResultReg)
     // Target-specific code wasn't able to find a machine opcode for
     // the given ISD opcode and type. Halt "fast" selection and bail.
@@ -544,8 +542,8 @@ bool FastISel::selectGetElementPtr(const User *I) {
   // FIXME: What's a good SWAG number for MaxOffs?
   uint64_t MaxOffs = 2048;
   MVT VT = TLI.getPointerTy(DL);
-  for (gep_type_iterator GTI = gep_type_begin(I), E = gep_type_end(I);
-       GTI != E; ++GTI) {
+  for (gep_type_iterator GTI = gep_type_begin(I), E = gep_type_end(I); GTI != E;
+       ++GTI) {
     const Value *Idx = GTI.getOperand();
     if (StructType *StTy = GTI.getStructTypeOrNull()) {
       uint64_t Field = cast<ConstantInt>(Idx)->getZExtValue();
@@ -763,7 +761,8 @@ bool FastISel::selectPatchpoint(const CallInst *I) {
   CallingConv::ID CC = I->getCallingConv();
   bool IsAnyRegCC = CC == CallingConv::AnyReg;
   bool HasDef = !I->getType()->isVoidTy();
-  Value *Callee = I->getOperand(PatchPointOpers::TargetPos)->stripPointerCasts();
+  Value *Callee =
+      I->getOperand(PatchPointOpers::TargetPos)->stripPointerCasts();
 
   // Get the real number of arguments participating in the call <numArgs>
   assert(isa<ConstantInt>(I->getOperand(PatchPointOpers::NArgPos)) &&
@@ -812,12 +811,12 @@ bool FastISel::selectPatchpoint(const CallInst *I) {
   // Add the call target.
   if (const auto *C = dyn_cast<IntToPtrInst>(Callee)) {
     uint64_t CalleeConstAddr =
-      cast<ConstantInt>(C->getOperand(0))->getZExtValue();
+        cast<ConstantInt>(C->getOperand(0))->getZExtValue();
     Ops.push_back(MachineOperand::CreateImm(CalleeConstAddr));
   } else if (const auto *C = dyn_cast<ConstantExpr>(Callee)) {
     if (C->getOpcode() == Instruction::IntToPtr) {
       uint64_t CalleeConstAddr =
-        cast<ConstantInt>(C->getOperand(0))->getZExtValue();
+          cast<ConstantInt>(C->getOperand(0))->getZExtValue();
       Ops.push_back(MachineOperand::CreateImm(CalleeConstAddr));
     } else
       llvm_unreachable("Unsupported ConstantExpr.");
@@ -872,8 +871,8 @@ bool FastISel::selectPatchpoint(const CallInst *I) {
                                             /*isImp=*/true));
 
   // Insert the patchpoint instruction before the call generated by the target.
-  MachineInstrBuilder MIB = BuildMI(*FuncInfo.MBB, CLI.Call, MIMD,
-                                    TII.get(TargetOpcode::PATCHPOINT));
+  MachineInstrBuilder MIB =
+      BuildMI(*FuncInfo.MBB, CLI.Call, MIMD, TII.get(TargetOpcode::PATCHPOINT));
 
   for (auto &MO : Ops)
     MIB.add(MO);
@@ -927,7 +926,8 @@ bool FastISel::selectXRayTypedEvent(const CallInst *I) {
   for (auto &MO : Ops)
     MIB.add(MO);
 
-  // Insert the Patchable Typed Event Call instruction, that gets lowered properly.
+  // Insert the Patchable Typed Event Call instruction, which gets lowered
+  // properly.
   return true;
 }
 
@@ -1362,7 +1362,8 @@ bool FastISel::selectIntrinsicCall(const IntrinsicInst *II) {
     }
 
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD,
-            TII.get(TargetOpcode::DBG_LABEL)).addMetadata(DI->getLabel());
+            TII.get(TargetOpcode::DBG_LABEL))
+        .addMetadata(DI->getLabel());
     return true;
   }
   case Intrinsic::objectsize:
@@ -1417,8 +1418,8 @@ bool FastISel::selectCast(const User *I, unsigned Opcode) {
     // Unhandled operand.  Halt "fast" selection and bail.
     return false;
 
-  Register ResultReg = fastEmit_r(SrcVT.getSimpleVT(), DstVT.getSimpleVT(),
-                                  Opcode, InputReg);
+  Register ResultReg =
+      fastEmit_r(SrcVT.getSimpleVT(), DstVT.getSimpleVT(), Opcode, InputReg);
   if (!ResultReg)
     return false;
 
@@ -1469,8 +1470,9 @@ bool FastISel::selectFreeze(const User *I) {
   MVT Ty = ETy.getSimpleVT();
   const TargetRegisterClass *TyRegClass = TLI.getRegClassFor(Ty);
   Register ResultReg = createResultReg(TyRegClass);
-  BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD,
-          TII.get(TargetOpcode::COPY), ResultReg).addReg(Reg);
+  BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
+          ResultReg)
+      .addReg(Reg);
 
   updateValueMap(I, ResultReg);
   return true;
@@ -1478,8 +1480,7 @@ bool FastISel::selectFreeze(const User *I) {
 
 // Remove local value instructions starting from the instruction after
 // SavedLastLocalValue to the current function insert point.
-void FastISel::removeDeadLocalValueCode(MachineInstr *SavedLastLocalValue)
-{
+void FastISel::removeDeadLocalValueCode(MachineInstr *SavedLastLocalValue) {
   MachineInstr *CurLastLocalValue = getLastLocalValue();
   if (CurLastLocalValue != SavedLastLocalValue) {
     // Find the first local value instruction to be deleted.
@@ -1625,8 +1626,8 @@ bool FastISel::selectFNeg(const User *I, const Value *In) {
 
   // If the target has ISD::FNEG, use it.
   EVT VT = TLI.getValueType(DL, I->getType());
-  Register ResultReg = fastEmit_r(VT.getSimpleVT(), VT.getSimpleVT(), ISD::FNEG,
-                                  OpReg);
+  Register ResultReg =
+      fastEmit_r(VT.getSimpleVT(), VT.getSimpleVT(), ISD::FNEG, OpReg);
   if (ResultReg) {
     updateValueMap(I, ResultReg);
     return true;
@@ -1640,14 +1641,14 @@ bool FastISel::selectFNeg(const User *I, const Value *In) {
   if (!TLI.isTypeLegal(IntVT))
     return false;
 
-  Register IntReg = fastEmit_r(VT.getSimpleVT(), IntVT.getSimpleVT(),
-                               ISD::BITCAST, OpReg);
+  Register IntReg =
+      fastEmit_r(VT.getSimpleVT(), IntVT.getSimpleVT(), ISD::BITCAST, OpReg);
   if (!IntReg)
     return false;
 
-  Register IntResultReg = fastEmit_ri_(
-      IntVT.getSimpleVT(), ISD::XOR, IntReg,
-      UINT64_C(1) << (VT.getSizeInBits() - 1), IntVT.getSimpleVT());
+  Register IntResultReg = fastEmit_ri_(IntVT.getSimpleVT(), ISD::XOR, IntReg,
+                                       UINT64_C(1) << (VT.getSizeInBits() - 1),
+                                       IntVT.getSimpleVT());
   if (!IntResultReg)
     return false;
 
@@ -1928,7 +1929,8 @@ Register FastISel::constrainOperandRegClass(const MCInstrDesc &II, Register Op,
       // has gone very wrong before we got here.
       Register NewOp = createResultReg(RegClass);
       BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD,
-              TII.get(TargetOpcode::COPY), NewOp).addReg(Op);
+              TII.get(TargetOpcode::COPY), NewOp)
+          .addReg(Op);
       return NewOp;
     }
   }
@@ -1952,11 +1954,9 @@ Register FastISel::fastEmitInst_r(unsigned MachineInstOpcode,
   Op0 = constrainOperandRegClass(II, Op0, II.getNumDefs());
 
   if (II.getNumDefs() >= 1)
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II, ResultReg)
-        .addReg(Op0);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II, ResultReg).addReg(Op0);
   else {
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II)
-        .addReg(Op0);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II).addReg(Op0);
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
             ResultReg)
         .addReg(II.implicit_defs()[0]);
@@ -1979,9 +1979,7 @@ Register FastISel::fastEmitInst_rr(unsigned MachineInstOpcode,
         .addReg(Op0)
         .addReg(Op1);
   else {
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II)
-        .addReg(Op0)
-        .addReg(Op1);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II).addReg(Op0).addReg(Op1);
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
             ResultReg)
         .addReg(II.implicit_defs()[0]);
@@ -2029,9 +2027,7 @@ Register FastISel::fastEmitInst_ri(unsigned MachineInstOpcode,
         .addReg(Op0)
         .addImm(Imm);
   else {
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II)
-        .addReg(Op0)
-        .addImm(Imm);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II).addReg(Op0).addImm(Imm);
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
             ResultReg)
         .addReg(II.implicit_defs()[0]);
@@ -2075,8 +2071,7 @@ Register FastISel::fastEmitInst_f(unsigned MachineInstOpcode,
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II, ResultReg)
         .addFPImm(FPImm);
   else {
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II)
-        .addFPImm(FPImm);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II).addFPImm(FPImm);
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
             ResultReg)
         .addReg(II.implicit_defs()[0]);
@@ -2116,8 +2111,7 @@ Register FastISel::fastEmitInst_i(unsigned MachineInstOpcode,
   const MCInstrDesc &II = TII.get(MachineInstOpcode);
 
   if (II.getNumDefs() >= 1)
-    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II, ResultReg)
-        .addImm(Imm);
+    BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II, ResultReg).addImm(Imm);
   else {
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, II).addImm(Imm);
     BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
@@ -2135,7 +2129,8 @@ Register FastISel::fastEmitInst_extractsubreg(MVT RetVT, unsigned Op0,
   const TargetRegisterClass *RC = MRI.getRegClass(Op0);
   MRI.constrainRegClass(Op0, TRI.getSubClassWithSubReg(RC, Idx));
   BuildMI(*FuncInfo.MBB, FuncInfo.InsertPt, MIMD, TII.get(TargetOpcode::COPY),
-          ResultReg).addReg(Op0, 0, Idx);
+          ResultReg)
+      .addReg(Op0, 0, Idx);
   return ResultReg;
 }
 
@@ -2226,7 +2221,7 @@ bool FastISel::tryToFoldLoad(const LoadInst *LI, const Instruction *FoldInst) {
 
   const Instruction *TheUser = LI->user_back();
   while (TheUser != FoldInst && // Scan up until we find FoldInst.
-         // Stay in the right block.
+                                // Stay in the right block.
          TheUser->getParent() == FoldInst->getParent() &&
          --MaxUsers) { // Don't scan too far.
     // If there are multiple or no uses of this instruction, then bail out.
@@ -2348,34 +2343,87 @@ CmpInst::Predicate FastISel::optimizeCmpPredicate(const CmpInst *CI) const {
     return Predicate;
 
   switch (Predicate) {
-  default: llvm_unreachable("Invalid predicate!");
-  case CmpInst::FCMP_FALSE: Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::FCMP_OEQ:   Predicate = CmpInst::FCMP_ORD;   break;
-  case CmpInst::FCMP_OGT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::FCMP_OGE:   Predicate = CmpInst::FCMP_ORD;   break;
-  case CmpInst::FCMP_OLT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::FCMP_OLE:   Predicate = CmpInst::FCMP_ORD;   break;
-  case CmpInst::FCMP_ONE:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::FCMP_ORD:   Predicate = CmpInst::FCMP_ORD;   break;
-  case CmpInst::FCMP_UNO:   Predicate = CmpInst::FCMP_UNO;   break;
-  case CmpInst::FCMP_UEQ:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::FCMP_UGT:   Predicate = CmpInst::FCMP_UNO;   break;
-  case CmpInst::FCMP_UGE:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::FCMP_ULT:   Predicate = CmpInst::FCMP_UNO;   break;
-  case CmpInst::FCMP_ULE:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::FCMP_UNE:   Predicate = CmpInst::FCMP_UNO;   break;
-  case CmpInst::FCMP_TRUE:  Predicate = CmpInst::FCMP_TRUE;  break;
-
-  case CmpInst::ICMP_EQ:    Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::ICMP_NE:    Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::ICMP_UGT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::ICMP_UGE:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::ICMP_ULT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::ICMP_ULE:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::ICMP_SGT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::ICMP_SGE:   Predicate = CmpInst::FCMP_TRUE;  break;
-  case CmpInst::ICMP_SLT:   Predicate = CmpInst::FCMP_FALSE; break;
-  case CmpInst::ICMP_SLE:   Predicate = CmpInst::FCMP_TRUE;  break;
+  default:
+    llvm_unreachable("Invalid predicate!");
+  case CmpInst::FCMP_FALSE:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::FCMP_OEQ:
+    Predicate = CmpInst::FCMP_ORD;
+    break;
+  case CmpInst::FCMP_OGT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::FCMP_OGE:
+    Predicate = CmpInst::FCMP_ORD;
+    break;
+  case CmpInst::FCMP_OLT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::FCMP_OLE:
+    Predicate = CmpInst::FCMP_ORD;
+    break;
+  case CmpInst::FCMP_ONE:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::FCMP_ORD:
+    Predicate = CmpInst::FCMP_ORD;
+    break;
+  case CmpInst::FCMP_UNO:
+    Predicate = CmpInst::FCMP_UNO;
+    break;
+  case CmpInst::FCMP_UEQ:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::FCMP_UGT:
+    Predicate = CmpInst::FCMP_UNO;
+    break;
+  case CmpInst::FCMP_UGE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::FCMP_ULT:
+    Predicate = CmpInst::FCMP_UNO;
+    break;
+  case CmpInst::FCMP_ULE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::FCMP_UNE:
+    Predicate = CmpInst::FCMP_UNO;
+    break;
+  case CmpInst::FCMP_TRUE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+
+  case CmpInst::ICMP_EQ:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::ICMP_NE:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::ICMP_UGT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::ICMP_UGE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::ICMP_ULT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::ICMP_ULE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::ICMP_SGT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::ICMP_SGE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
+  case CmpInst::ICMP_SLT:
+    Predicate = CmpInst::FCMP_FALSE;
+    break;
+  case CmpInst::ICMP_SLE:
+    Predicate = CmpInst::FCMP_TRUE;
+    break;
   }
 
   return Predicate;
diff --git a/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp b/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
index 1d0a03ccfcdc6ae..e0d2c8e7085e594 100644
--- a/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/FunctionLoweringInfo.cpp
@@ -44,8 +44,10 @@ using namespace llvm;
 /// PHI nodes or outside of the basic block that defines it, or used by a
 /// switch or atomic instruction, which may expand to multiple basic blocks.
 static bool isUsedOutsideOfDefiningBlock(const Instruction *I) {
-  if (I->use_empty()) return false;
-  if (isa<PHINode>(I)) return true;
+  if (I->use_empty())
+    return false;
+  if (isa<PHINode>(I))
+    return true;
   const BasicBlock *BB = I->getParent();
   for (const User *U : I->users())
     if (cast<Instruction>(U)->getParent() != BB || isa<PHINode>(U))
@@ -139,8 +141,9 @@ void FunctionLoweringInfo::set(const Function &fn, MachineFunction &mf,
           uint64_t TySize =
               MF->getDataLayout().getTypeAllocSize(Ty).getKnownMinValue();
 
-          TySize *= CUI->getZExtValue();   // Get total allocated size.
-          if (TySize == 0) TySize = 1; // Don't create zero-sized stack objects.
+          TySize *= CUI->getZExtValue(); // Get total allocated size.
+          if (TySize == 0)
+            TySize = 1; // Don't create zero-sized stack objects.
           int FrameIndex = INT_MAX;
           auto Iter = CatchObjects.find(AI);
           if (Iter != CatchObjects.end() && TLI->needsFixedCatchObjects()) {
@@ -376,7 +379,8 @@ Register FunctionLoweringInfo::CreateRegs(Type *Ty, bool isDivergent) {
     unsigned NumRegs = TLI->getNumRegisters(Ty->getContext(), ValueVT);
     for (unsigned i = 0; i != NumRegs; ++i) {
       Register R = CreateReg(RegisterVT, isDivergent);
-      if (!FirstReg) FirstReg = R;
+      if (!FirstReg)
+        FirstReg = R;
     }
   }
   return FirstReg;
@@ -513,8 +517,7 @@ void FunctionLoweringInfo::ComputePHILiveOutRegInfo(const PHINode *PN) {
 /// setArgumentFrameIndex - Record frame index for the byval
 /// argument. This overrides previous frame index entry for this argument,
 /// if any.
-void FunctionLoweringInfo::setArgumentFrameIndex(const Argument *A,
-                                                 int FI) {
+void FunctionLoweringInfo::setArgumentFrameIndex(const Argument *A, int FI) {
   ByValArgFrameIndexMap[A] = FI;
 }
 
@@ -540,8 +543,7 @@ Register FunctionLoweringInfo::getCatchPadExceptionPointerVReg(
   return VReg;
 }
 
-const Value *
-FunctionLoweringInfo::getValueFromVirtualReg(Register Vreg) {
+const Value *FunctionLoweringInfo::getValueFromVirtualReg(Register Vreg) {
   if (VirtReg2Value.empty()) {
     SmallVector<EVT, 4> ValueVTs;
     for (auto &P : ValueMap) {
diff --git a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
index 1d4be4df8ec0841..63699cea91eacc6 100644
--- a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.cpp
@@ -45,7 +45,7 @@ unsigned InstrEmitter::CountResults(SDNode *Node) {
   while (N && Node->getValueType(N - 1) == MVT::Glue)
     --N;
   if (N && Node->getValueType(N - 1) == MVT::Other)
-    --N;    // Skip over chain result.
+    --N; // Skip over chain result.
   return N;
 }
 
@@ -172,7 +172,8 @@ void InstrEmitter::EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone,
     // Create the reg, emit the copy.
     VRBase = MRI->createVirtualRegister(DstRC);
     BuildMI(*MBB, InsertPos, Node->getDebugLoc(), TII->get(TargetOpcode::COPY),
-            VRBase).addReg(SrcReg);
+            VRBase)
+        .addReg(SrcReg);
   }
 
   SDValue Op(Node, ResNo);
@@ -183,11 +184,9 @@ void InstrEmitter::EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone,
   assert(isNew && "Node emitted out of order - early");
 }
 
-void InstrEmitter::CreateVirtualRegisters(SDNode *Node,
-                                       MachineInstrBuilder &MIB,
-                                       const MCInstrDesc &II,
-                                       bool IsClone, bool IsCloned,
-                                       DenseMap<SDValue, Register> &VRBaseMap) {
+void InstrEmitter::CreateVirtualRegisters(
+    SDNode *Node, MachineInstrBuilder &MIB, const MCInstrDesc &II, bool IsClone,
+    bool IsCloned, DenseMap<SDValue, Register> &VRBaseMap) {
   assert(Node->getMachineOpcode() != TargetOpcode::IMPLICIT_DEF &&
          "IMPLICIT_DEF should have been handled as a special case elsewhere!");
 
@@ -203,7 +202,7 @@ void InstrEmitter::CreateVirtualRegisters(SDNode *Node,
     // register instead of creating a new vreg.
     Register VRBase;
     const TargetRegisterClass *RC =
-      TRI->getAllocatableClass(TII->getRegClass(II, i, TRI, *MF));
+        TRI->getAllocatableClass(TII->getRegClass(II, i, TRI, *MF));
     // Always let the value type influence the used register class. The
     // constraints on the instruction may be too lax to represent the value
     // type correctly. For example, a 64-bit float (X86::FR64) can't live in
@@ -220,7 +219,7 @@ void InstrEmitter::CreateVirtualRegisters(SDNode *Node,
 
     if (!II.operands().empty() && II.operands()[i].isOptionalDef()) {
       // Optional def must be a physical register.
-      VRBase = cast<RegisterSDNode>(Node->getOperand(i-NumResults))->getReg();
+      VRBase = cast<RegisterSDNode>(Node->getOperand(i - NumResults))->getReg();
       assert(VRBase.isPhysical());
       MIB.addReg(VRBase, RegState::Define);
     }
@@ -285,19 +284,15 @@ Register InstrEmitter::getVR(SDValue Op,
   return I->second;
 }
 
-
 /// AddRegisterOperand - Add the specified register as an operand to the
 /// specified machine instr. Insert register copies if the register is
 /// not in the required register class.
-void
-InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB,
-                                 SDValue Op,
-                                 unsigned IIOpNum,
-                                 const MCInstrDesc *II,
-                                 DenseMap<SDValue, Register> &VRBaseMap,
-                                 bool IsDebug, bool IsClone, bool IsCloned) {
-  assert(Op.getValueType() != MVT::Other &&
-         Op.getValueType() != MVT::Glue &&
+void InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB, SDValue Op,
+                                      unsigned IIOpNum, const MCInstrDesc *II,
+                                      DenseMap<SDValue, Register> &VRBaseMap,
+                                      bool IsDebug, bool IsClone,
+                                      bool IsCloned) {
+  assert(Op.getValueType() != MVT::Other && Op.getValueType() != MVT::Glue &&
          "Chain and glue operands should occur at end of operand list!");
   // Get/emit the operand.
   Register VReg = getVR(Op, VRBaseMap);
@@ -323,18 +318,20 @@ InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB,
           Op.getMachineOpcode() == TargetOpcode::IMPLICIT_DEF)
         MinNumRegs = 0;
 
-      const TargetRegisterClass *ConstrainedRC
-        = MRI->constrainRegClass(VReg, OpRC, MinNumRegs);
+      const TargetRegisterClass *ConstrainedRC =
+          MRI->constrainRegClass(VReg, OpRC, MinNumRegs);
       if (!ConstrainedRC) {
         OpRC = TRI->getAllocatableClass(OpRC);
         assert(OpRC && "Constraints cannot be fulfilled for allocation");
         Register NewVReg = MRI->createVirtualRegister(OpRC);
         BuildMI(*MBB, InsertPos, Op.getNode()->getDebugLoc(),
-                TII->get(TargetOpcode::COPY), NewVReg).addReg(VReg);
+                TII->get(TargetOpcode::COPY), NewVReg)
+            .addReg(VReg);
         VReg = NewVReg;
       } else {
         assert(ConstrainedRC->isAllocatable() &&
-           "Constraining an allocatable VReg produced an unallocatable class?");
+               "Constraining an allocatable VReg produced an unallocatable "
+               "class?");
       }
     }
   }
@@ -347,14 +344,12 @@ InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB,
   // Tied operands are never killed, so we need to check that. And that
   // means we need to determine the index of the operand.
   bool isKill = Op.hasOneUse() &&
-                Op.getNode()->getOpcode() != ISD::CopyFromReg &&
-                !IsDebug &&
+                Op.getNode()->getOpcode() != ISD::CopyFromReg && !IsDebug &&
                 !(IsClone || IsCloned);
   if (isKill) {
     unsigned Idx = MIB->getNumOperands();
-    while (Idx > 0 &&
-           MIB->getOperand(Idx-1).isReg() &&
-           MIB->getOperand(Idx-1).isImplicit())
+    while (Idx > 0 && MIB->getOperand(Idx - 1).isReg() &&
+           MIB->getOperand(Idx - 1).isImplicit())
       --Idx;
     bool isTied = MCID.getOperandConstraint(Idx, MCOI::TIED_TO) != -1;
     if (isTied)
@@ -362,21 +357,19 @@ InstrEmitter::AddRegisterOperand(MachineInstrBuilder &MIB,
   }
 
   MIB.addReg(VReg, getDefRegState(isOptDef) | getKillRegState(isKill) |
-             getDebugRegState(IsDebug));
+                       getDebugRegState(IsDebug));
 }
 
 /// AddOperand - Add the specified operand to the specified machine instr.  II
 /// specifies the instruction information for the node, and IIOpNum is the
 /// operand number (in the II) that we are adding.
-void InstrEmitter::AddOperand(MachineInstrBuilder &MIB,
-                              SDValue Op,
-                              unsigned IIOpNum,
-                              const MCInstrDesc *II,
+void InstrEmitter::AddOperand(MachineInstrBuilder &MIB, SDValue Op,
+                              unsigned IIOpNum, const MCInstrDesc *II,
                               DenseMap<SDValue, Register> &VRBaseMap,
                               bool IsDebug, bool IsClone, bool IsCloned) {
   if (Op.isMachineOpcode()) {
-    AddRegisterOperand(MIB, Op, IIOpNum, II, VRBaseMap,
-                       IsDebug, IsClone, IsCloned);
+    AddRegisterOperand(MIB, Op, IIOpNum, II, VRBaseMap, IsDebug, IsClone,
+                       IsCloned);
   } else if (ConstantSDNode *C = dyn_cast<ConstantSDNode>(Op)) {
     MIB.addImm(C->getSExtValue());
   } else if (ConstantFPSDNode *F = dyn_cast<ConstantFPSDNode>(Op)) {
@@ -397,7 +390,8 @@ void InstrEmitter::AddOperand(MachineInstrBuilder &MIB,
     if (OpRC && IIRC && OpRC != IIRC && VReg.isVirtual()) {
       Register NewVReg = MRI->createVirtualRegister(IIRC);
       BuildMI(*MBB, InsertPos, Op.getNode()->getDebugLoc(),
-               TII->get(TargetOpcode::COPY), NewVReg).addReg(VReg);
+              TII->get(TargetOpcode::COPY), NewVReg)
+          .addReg(VReg);
       VReg = NewVReg;
     }
     // Turn additional physreg operands into implicit uses on non-variadic
@@ -432,22 +426,21 @@ void InstrEmitter::AddOperand(MachineInstrBuilder &MIB,
   } else if (auto *SymNode = dyn_cast<MCSymbolSDNode>(Op)) {
     MIB.addSym(SymNode->getMCSymbol());
   } else if (BlockAddressSDNode *BA = dyn_cast<BlockAddressSDNode>(Op)) {
-    MIB.addBlockAddress(BA->getBlockAddress(),
-                        BA->getOffset(),
+    MIB.addBlockAddress(BA->getBlockAddress(), BA->getOffset(),
                         BA->getTargetFlags());
   } else if (TargetIndexSDNode *TI = dyn_cast<TargetIndexSDNode>(Op)) {
     MIB.addTargetIndex(TI->getIndex(), TI->getOffset(), TI->getTargetFlags());
   } else {
-    assert(Op.getValueType() != MVT::Other &&
-           Op.getValueType() != MVT::Glue &&
+    assert(Op.getValueType() != MVT::Other && Op.getValueType() != MVT::Glue &&
            "Chain and glue operands should occur at end of operand list!");
-    AddRegisterOperand(MIB, Op, IIOpNum, II, VRBaseMap,
-                       IsDebug, IsClone, IsCloned);
+    AddRegisterOperand(MIB, Op, IIOpNum, II, VRBaseMap, IsDebug, IsClone,
+                       IsCloned);
   }
 }
 
 Register InstrEmitter::ConstrainForSubReg(Register VReg, unsigned SubIdx,
-                                          MVT VT, bool isDivergent, const DebugLoc &DL) {
+                                          MVT VT, bool isDivergent,
+                                          const DebugLoc &DL) {
   const TargetRegisterClass *VRC = MRI->getRegClass(VReg);
   const TargetRegisterClass *RC = TRI->getSubClassWithSubReg(VRC, SubIdx);
 
@@ -466,7 +459,7 @@ Register InstrEmitter::ConstrainForSubReg(Register VReg, unsigned SubIdx,
   assert(RC && "No legal register class for VT supports that SubIdx");
   Register NewReg = MRI->createVirtualRegister(RC);
   BuildMI(*MBB, InsertPos, DL, TII->get(TargetOpcode::COPY), NewReg)
-    .addReg(VReg);
+      .addReg(VReg);
   return NewReg;
 }
 
@@ -497,7 +490,7 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
     // classes.
     unsigned SubIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
     const TargetRegisterClass *TRC =
-      TLI->getRegClassFor(Node->getSimpleValueType(0), Node->isDivergent());
+        TLI->getRegClassFor(Node->getSimpleValueType(0), Node->isDivergent());
 
     Register Reg;
     MachineInstr *DefMI;
@@ -514,8 +507,7 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
     unsigned DefSubIdx;
     if (DefMI &&
         TII->isCoalescableExtInstr(*DefMI, SrcReg, DstReg, DefSubIdx) &&
-        SubIdx == DefSubIdx &&
-        TRC == MRI->getRegClass(SrcReg)) {
+        SubIdx == DefSubIdx && TRC == MRI->getRegClass(SrcReg)) {
       // Optimize these:
       // r1025 = s/zext r1024, 4
       // r1026 = extract_subreg r1025, 4
@@ -523,7 +515,8 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
       // r1026 = copy r1024
       VRBase = MRI->createVirtualRegister(TRC);
       BuildMI(*MBB, InsertPos, Node->getDebugLoc(),
-              TII->get(TargetOpcode::COPY), VRBase).addReg(SrcReg);
+              TII->get(TargetOpcode::COPY), VRBase)
+          .addReg(SrcReg);
       MRI->clearKillFlags(SrcReg);
     } else {
       // Reg may not support a SubIdx sub-register, and we may need to
@@ -577,7 +570,7 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
 
     // Create the insert_subreg or subreg_to_reg machine instruction.
     MachineInstrBuilder MIB =
-      BuildMI(*MF, Node->getDebugLoc(), TII->get(Opc), VRBase);
+        BuildMI(*MF, Node->getDebugLoc(), TII->get(Opc), VRBase);
 
     // If creating a subreg_to_reg, then the first input operand
     // is an implicit value immediate, otherwise it's a register
@@ -585,15 +578,16 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
       const ConstantSDNode *SD = cast<ConstantSDNode>(N0);
       MIB.addImm(SD->getZExtValue());
     } else
-      AddOperand(MIB, N0, 0, nullptr, VRBaseMap, /*IsDebug=*/false,
-                 IsClone, IsCloned);
+      AddOperand(MIB, N0, 0, nullptr, VRBaseMap, /*IsDebug=*/false, IsClone,
+                 IsCloned);
     // Add the subregister being inserted
-    AddOperand(MIB, N1, 0, nullptr, VRBaseMap, /*IsDebug=*/false,
-               IsClone, IsCloned);
+    AddOperand(MIB, N1, 0, nullptr, VRBaseMap, /*IsDebug=*/false, IsClone,
+               IsCloned);
     MIB.addImm(SubIdx);
     MBB->insert(InsertPos, MIB);
   } else
-    llvm_unreachable("Node is not insert_subreg, extract_subreg, or subreg_to_reg");
+    llvm_unreachable(
+        "Node is not insert_subreg, extract_subreg, or subreg_to_reg");
 
   SDValue Op(Node, 0);
   bool isNew = VRBaseMap.insert(std::make_pair(Op, VRBase)).second;
@@ -605,18 +599,18 @@ void InstrEmitter::EmitSubregNode(SDNode *Node,
 /// COPY_TO_REGCLASS is just a normal copy, except that the destination
 /// register is constrained to be in a particular register class.
 ///
-void
-InstrEmitter::EmitCopyToRegClassNode(SDNode *Node,
-                                     DenseMap<SDValue, Register> &VRBaseMap) {
+void InstrEmitter::EmitCopyToRegClassNode(
+    SDNode *Node, DenseMap<SDValue, Register> &VRBaseMap) {
   unsigned VReg = getVR(Node->getOperand(0), VRBaseMap);
 
   // Create the new VReg in the destination class and emit a copy.
   unsigned DstRCIdx = cast<ConstantSDNode>(Node->getOperand(1))->getZExtValue();
   const TargetRegisterClass *DstRC =
-    TRI->getAllocatableClass(TRI->getRegClass(DstRCIdx));
+      TRI->getAllocatableClass(TRI->getRegClass(DstRCIdx));
   Register NewVReg = MRI->createVirtualRegister(DstRC);
   BuildMI(*MBB, InsertPos, Node->getDebugLoc(), TII->get(TargetOpcode::COPY),
-    NewVReg).addReg(VReg);
+          NewVReg)
+      .addReg(VReg);
 
   SDValue Op(Node, 0);
   bool isNew = VRBaseMap.insert(std::make_pair(Op, NewVReg)).second;
@@ -627,8 +621,8 @@ InstrEmitter::EmitCopyToRegClassNode(SDNode *Node,
 /// EmitRegSequence - Generate machine code for REG_SEQUENCE nodes.
 ///
 void InstrEmitter::EmitRegSequence(SDNode *Node,
-                                  DenseMap<SDValue, Register> &VRBaseMap,
-                                  bool IsClone, bool IsCloned) {
+                                   DenseMap<SDValue, Register> &VRBaseMap,
+                                   bool IsClone, bool IsCloned) {
   unsigned DstRCIdx = cast<ConstantSDNode>(Node->getOperand(0))->getZExtValue();
   const TargetRegisterClass *RC = TRI->getRegClass(DstRCIdx);
   Register NewVReg = MRI->createVirtualRegister(TRI->getAllocatableClass(RC));
@@ -638,7 +632,7 @@ void InstrEmitter::EmitRegSequence(SDNode *Node,
   // If the input pattern has a chain, then the root of the corresponding
   // output pattern will get a chain as well. This can happen to be a
   // REG_SEQUENCE (which is not "guarded" by countOperands/CountResults).
-  if (NumOps && Node->getOperand(NumOps-1).getValueType() == MVT::Other)
+  if (NumOps && Node->getOperand(NumOps - 1).getValueType() == MVT::Other)
     --NumOps; // Ignore chain if it exists.
 
   assert((NumOps & 1) == 1 &&
@@ -646,23 +640,23 @@ void InstrEmitter::EmitRegSequence(SDNode *Node,
   for (unsigned i = 1; i != NumOps; ++i) {
     SDValue Op = Node->getOperand(i);
     if ((i & 1) == 0) {
-      RegisterSDNode *R = dyn_cast<RegisterSDNode>(Node->getOperand(i-1));
+      RegisterSDNode *R = dyn_cast<RegisterSDNode>(Node->getOperand(i - 1));
       // Skip physical registers as they don't have a vreg to get and we'll
       // insert copies for them in TwoAddressInstructionPass anyway.
       if (!R || !R->getReg().isPhysical()) {
         unsigned SubIdx = cast<ConstantSDNode>(Op)->getZExtValue();
-        unsigned SubReg = getVR(Node->getOperand(i-1), VRBaseMap);
+        unsigned SubReg = getVR(Node->getOperand(i - 1), VRBaseMap);
         const TargetRegisterClass *TRC = MRI->getRegClass(SubReg);
         const TargetRegisterClass *SRC =
-        TRI->getMatchingSuperRegClass(RC, TRC, SubIdx);
+            TRI->getMatchingSuperRegClass(RC, TRC, SubIdx);
         if (SRC && SRC != RC) {
           MRI->setRegClass(NewVReg, SRC);
           RC = SRC;
         }
       }
     }
-    AddOperand(MIB, Op, i+1, &II, VRBaseMap, /*IsDebug=*/false,
-               IsClone, IsCloned);
+    AddOperand(MIB, Op, i + 1, &II, VRBaseMap, /*IsDebug=*/false, IsClone,
+               IsCloned);
   }
 
   MBB->insert(InsertPos, MIB);
@@ -950,8 +944,7 @@ InstrEmitter::EmitDbgValueFromSingleOp(SDDbgValue *SD,
   return MIB.addMetadata(Var).addMetadata(Expr);
 }
 
-MachineInstr *
-InstrEmitter::EmitDbgLabel(SDDbgLabel *SD) {
+MachineInstr *InstrEmitter::EmitDbgLabel(SDDbgLabel *SD) {
   MDNode *Label = SD->getLabel();
   DebugLoc DL = SD->getDebugLoc();
   assert(cast<DILabel>(Label)->isValidLocationForIntrinsic(DL) &&
@@ -967,9 +960,8 @@ InstrEmitter::EmitDbgLabel(SDDbgLabel *SD) {
 /// EmitMachineNode - Generate machine code for a target-specific node and
 /// needed dependencies.
 ///
-void InstrEmitter::
-EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
-                DenseMap<SDValue, Register> &VRBaseMap) {
+void InstrEmitter::EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
+                                   DenseMap<SDValue, Register> &VRBaseMap) {
   unsigned Opc = Node->getMachineOpcode();
 
   // Handle subreg insert/extract specially
@@ -1011,14 +1003,14 @@ EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
       CC = Node->getConstantOperandVal(PatchPointOpers::CCPos);
       NumDefs = NumResults;
     }
-    ScratchRegs = TLI->getScratchRegisters((CallingConv::ID) CC);
+    ScratchRegs = TLI->getScratchRegisters((CallingConv::ID)CC);
   } else if (Opc == TargetOpcode::STATEPOINT) {
     NumDefs = NumResults;
   }
 
   unsigned NumImpUses = 0;
   unsigned NodeOperands =
-    countOperands(Node, II.getNumOperands() - NumDefs, NumImpUses);
+      countOperands(Node, II.getNumOperands() - NumDefs, NumImpUses);
   bool HasVRegVariadicDefs = !MF->getTarget().usesPhysRegsForValues() &&
                              II.isVariadic() && II.variadicOpsAreDefs();
   bool HasPhysRegOuts = NumResults > NumDefs && !II.implicit_defs().empty() &&
@@ -1090,14 +1082,14 @@ EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
          "Unable to cope with optional defs and phys regs defs!");
   unsigned NumSkip = HasOptPRefs ? NumDefs - NumResults : 0;
   for (unsigned i = NumSkip; i != NodeOperands; ++i)
-    AddOperand(MIB, Node->getOperand(i), i-NumSkip+NumDefs, &II,
-               VRBaseMap, /*IsDebug=*/false, IsClone, IsCloned);
+    AddOperand(MIB, Node->getOperand(i), i - NumSkip + NumDefs, &II, VRBaseMap,
+               /*IsDebug=*/false, IsClone, IsCloned);
 
   // Add scratch registers as implicit def and early clobber
   if (ScratchRegs)
     for (unsigned i = 0; ScratchRegs[i]; ++i)
-      MIB.addReg(ScratchRegs[i], RegState::ImplicitDefine |
-                                 RegState::EarlyClobber);
+      MIB.addReg(ScratchRegs[i],
+                 RegState::ImplicitDefine | RegState::EarlyClobber);
 
   // Set the memory reference descriptions of this instruction now that it is
   // part of the function.
@@ -1141,7 +1133,7 @@ EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
   }
 
   // Scan the glue chain for any used physregs.
-  if (Node->getValueType(Node->getNumValues()-1) == MVT::Glue) {
+  if (Node->getValueType(Node->getNumValues() - 1) == MVT::Glue) {
     for (SDNode *F = Node->getGluedUser(); F; F = F->getGluedUser()) {
       if (F->getOpcode() == ISD::CopyFromReg) {
         UsedRegs.push_back(cast<RegisterSDNode>(F->getOperand(1))->getReg());
@@ -1198,9 +1190,8 @@ EmitMachineNode(SDNode *Node, bool IsClone, bool IsCloned,
 
 /// EmitSpecialNode - Generate machine code for a target-independent node and
 /// needed dependencies.
-void InstrEmitter::
-EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
-                DenseMap<SDValue, Register> &VRBaseMap) {
+void InstrEmitter::EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
+                                   DenseMap<SDValue, Register> &VRBaseMap) {
   switch (Node->getOpcode()) {
   default:
 #ifndef NDEBUG
@@ -1232,7 +1223,8 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
       break;
 
     BuildMI(*MBB, InsertPos, Node->getDebugLoc(), TII->get(TargetOpcode::COPY),
-            DestReg).addReg(SrcReg);
+            DestReg)
+        .addReg(SrcReg);
     break;
   }
   case ISD::CopyFromReg: {
@@ -1246,8 +1238,7 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
                        ? TargetOpcode::EH_LABEL
                        : TargetOpcode::ANNOTATION_LABEL;
     MCSymbol *S = cast<LabelSDNode>(Node)->getLabel();
-    BuildMI(*MBB, InsertPos, Node->getDebugLoc(),
-            TII->get(Opc)).addSym(S);
+    BuildMI(*MBB, InsertPos, Node->getDebugLoc(), TII->get(Opc)).addSym(S);
     break;
   }
 
@@ -1258,7 +1249,7 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
                          : TargetOpcode::LIFETIME_END;
     auto *FI = cast<FrameIndexSDNode>(Node->getOperand(1));
     BuildMI(*MBB, InsertPos, Node->getDebugLoc(), TII->get(TarOp))
-    .addFrameIndex(FI->getIndex());
+        .addFrameIndex(FI->getIndex());
     break;
   }
 
@@ -1279,8 +1270,8 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
   case ISD::INLINEASM:
   case ISD::INLINEASM_BR: {
     unsigned NumOps = Node->getNumOperands();
-    if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
-      --NumOps;  // Ignore the glue operand.
+    if (Node->getOperand(NumOps - 1).getValueType() == MVT::Glue)
+      --NumOps; // Ignore the glue operand.
 
     // Create the inline asm machine instruction.
     unsigned TgtOpc = Node->getOpcode() == ISD::INLINEASM_BR
@@ -1297,8 +1288,8 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
     // Add the HasSideEffect, isAlignStack, AsmDialect, MayLoad and MayStore
     // bits.
     int64_t ExtraInfo =
-      cast<ConstantSDNode>(Node->getOperand(InlineAsm::Op_ExtraInfo))->
-                          getZExtValue();
+        cast<ConstantSDNode>(Node->getOperand(InlineAsm::Op_ExtraInfo))
+            ->getZExtValue();
     MIB.addImm(ExtraInfo);
 
     // Remember to operand index of the group flags.
@@ -1310,12 +1301,12 @@ EmitSpecialNode(SDNode *Node, bool IsClone, bool IsCloned,
     // Add all of the operand registers to the instruction.
     for (unsigned i = InlineAsm::Op_FirstOperand; i != NumOps;) {
       unsigned Flags =
-        cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
+          cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
       const unsigned NumVals = InlineAsm::getNumOperandRegisters(Flags);
 
       GroupIdx.push_back(MIB->getNumOperands());
       MIB.addImm(Flags);
-      ++i;  // Skip the ID value.
+      ++i; // Skip the ID value.
 
       switch (InlineAsm::getKind(Flags)) {
       case InlineAsm::Kind::RegDef:
diff --git a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h
index 959bce31c8b2789..613c76475b5f033 100644
--- a/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h
+++ b/llvm/lib/CodeGen/SelectionDAG/InstrEmitter.h
@@ -47,36 +47,29 @@ class LLVM_LIBRARY_VISIBILITY InstrEmitter {
   void EmitCopyFromReg(SDNode *Node, unsigned ResNo, bool IsClone,
                        Register SrcReg, DenseMap<SDValue, Register> &VRBaseMap);
 
-  void CreateVirtualRegisters(SDNode *Node,
-                              MachineInstrBuilder &MIB,
-                              const MCInstrDesc &II,
-                              bool IsClone, bool IsCloned,
+  void CreateVirtualRegisters(SDNode *Node, MachineInstrBuilder &MIB,
+                              const MCInstrDesc &II, bool IsClone,
+                              bool IsCloned,
                               DenseMap<SDValue, Register> &VRBaseMap);
 
   /// getVR - Return the virtual register corresponding to the specified result
   /// of the specified node.
-  Register getVR(SDValue Op,
-                 DenseMap<SDValue, Register> &VRBaseMap);
+  Register getVR(SDValue Op, DenseMap<SDValue, Register> &VRBaseMap);
 
   /// AddRegisterOperand - Add the specified register as an operand to the
   /// specified machine instr. Insert register copies if the register is
   /// not in the required register class.
-  void AddRegisterOperand(MachineInstrBuilder &MIB,
-                          SDValue Op,
-                          unsigned IIOpNum,
-                          const MCInstrDesc *II,
-                          DenseMap<SDValue, Register> &VRBaseMap,
-                          bool IsDebug, bool IsClone, bool IsCloned);
+  void AddRegisterOperand(MachineInstrBuilder &MIB, SDValue Op,
+                          unsigned IIOpNum, const MCInstrDesc *II,
+                          DenseMap<SDValue, Register> &VRBaseMap, bool IsDebug,
+                          bool IsClone, bool IsCloned);
 
   /// AddOperand - Add the specified operand to the specified machine instr.  II
   /// specifies the instruction information for the node, and IIOpNum is the
   /// operand number (in the II) that we are adding. IIOpNum and II are used for
   /// assertions only.
-  void AddOperand(MachineInstrBuilder &MIB,
-                  SDValue Op,
-                  unsigned IIOpNum,
-                  const MCInstrDesc *II,
-                  DenseMap<SDValue, Register> &VRBaseMap,
+  void AddOperand(MachineInstrBuilder &MIB, SDValue Op, unsigned IIOpNum,
+                  const MCInstrDesc *II, DenseMap<SDValue, Register> &VRBaseMap,
                   bool IsDebug, bool IsClone, bool IsCloned);
 
   /// ConstrainForSubReg - Try to constrain VReg to a register class that
@@ -101,6 +94,7 @@ class LLVM_LIBRARY_VISIBILITY InstrEmitter {
   ///
   void EmitRegSequence(SDNode *Node, DenseMap<SDValue, Register> &VRBaseMap,
                        bool IsClone, bool IsCloned);
+
 public:
   /// CountResults - The results of target nodes have register or immediate
   /// operands first, then an optional chain, and optional flag operands
@@ -131,8 +125,9 @@ class LLVM_LIBRARY_VISIBILITY InstrEmitter {
                                  DenseMap<SDValue, Register> &VRBaseMap);
 
   /// Emit a DBG_VALUE from the operands to SDDbgValue.
-  MachineInstr *EmitDbgValueFromSingleOp(SDDbgValue *SD,
-                                    DenseMap<SDValue, Register> &VRBaseMap);
+  MachineInstr *
+  EmitDbgValueFromSingleOp(SDDbgValue *SD,
+                           DenseMap<SDValue, Register> &VRBaseMap);
 
   /// Generate machine instruction for a dbg_label node.
   MachineInstr *EmitDbgLabel(SDDbgLabel *SD);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
index af5fad7ba311554..409f656d62faf74 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp
@@ -134,27 +134,24 @@ class SelectionDAGLegalize {
                                      ArrayRef<int> Mask) const;
 
   std::pair<SDValue, SDValue> ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
-                        TargetLowering::ArgListTy &&Args, bool isSigned);
-  std::pair<SDValue, SDValue> ExpandLibCall(RTLIB::Libcall LC, SDNode *Node, bool isSigned);
+                                            TargetLowering::ArgListTy &&Args,
+                                            bool isSigned);
+  std::pair<SDValue, SDValue> ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
+                                            bool isSigned);
 
   void ExpandFrexpLibCall(SDNode *Node, SmallVectorImpl<SDValue> &Results);
   void ExpandFPLibCall(SDNode *Node, RTLIB::Libcall LC,
                        SmallVectorImpl<SDValue> &Results);
   void ExpandFPLibCall(SDNode *Node, RTLIB::Libcall Call_F32,
                        RTLIB::Libcall Call_F64, RTLIB::Libcall Call_F80,
-                       RTLIB::Libcall Call_F128,
-                       RTLIB::Libcall Call_PPCF128,
+                       RTLIB::Libcall Call_F128, RTLIB::Libcall Call_PPCF128,
                        SmallVectorImpl<SDValue> &Results);
-  SDValue ExpandIntLibCall(SDNode *Node, bool isSigned,
-                           RTLIB::Libcall Call_I8,
-                           RTLIB::Libcall Call_I16,
-                           RTLIB::Libcall Call_I32,
-                           RTLIB::Libcall Call_I64,
-                           RTLIB::Libcall Call_I128);
-  void ExpandArgFPLibCall(SDNode *Node,
-                          RTLIB::Libcall Call_F32, RTLIB::Libcall Call_F64,
-                          RTLIB::Libcall Call_F80, RTLIB::Libcall Call_F128,
-                          RTLIB::Libcall Call_PPCF128,
+  SDValue ExpandIntLibCall(SDNode *Node, bool isSigned, RTLIB::Libcall Call_I8,
+                           RTLIB::Libcall Call_I16, RTLIB::Libcall Call_I32,
+                           RTLIB::Libcall Call_I64, RTLIB::Libcall Call_I128);
+  void ExpandArgFPLibCall(SDNode *Node, RTLIB::Libcall Call_F32,
+                          RTLIB::Libcall Call_F64, RTLIB::Libcall Call_F80,
+                          RTLIB::Libcall Call_F128, RTLIB::Libcall Call_PPCF128,
                           SmallVectorImpl<SDValue> &Results);
   void ExpandDivRemLibCall(SDNode *Node, SmallVectorImpl<SDValue> &Results);
   void ExpandSinCosLibCall(SDNode *Node, SmallVectorImpl<SDValue> &Results);
@@ -189,7 +186,7 @@ class SelectionDAGLegalize {
 
   SDValue ExpandExtractFromVectorThroughStack(SDValue Op);
   SDValue ExpandInsertToVectorThroughStack(SDValue Op);
-  SDValue ExpandVectorBuildThroughStack(SDNode* Node);
+  SDValue ExpandVectorBuildThroughStack(SDNode *Node);
 
   SDValue ExpandConstantFP(ConstantFPSDNode *CFP, bool UseCP);
   SDValue ExpandConstant(ConstantSDNode *CP);
@@ -290,8 +287,8 @@ SDValue SelectionDAGLegalize::ShuffleWithNarrowerEltType(
 
 /// Expands the ConstantFP node to an integer constant or
 /// a load from the constant pool.
-SDValue
-SelectionDAGLegalize::ExpandConstantFP(ConstantFPSDNode *CFP, bool UseCP) {
+SDValue SelectionDAGLegalize::ExpandConstantFP(ConstantFPSDNode *CFP,
+                                               bool UseCP) {
   bool Extend = false;
   SDLoc dl(CFP);
 
@@ -302,7 +299,7 @@ SelectionDAGLegalize::ExpandConstantFP(ConstantFPSDNode *CFP, bool UseCP) {
   // an FP extending load is the same cost as a normal load (such as on the x87
   // fp stack or PPC FP unit).
   EVT VT = CFP->getValueType(0);
-  ConstantFP *LLVMC = const_cast<ConstantFP*>(CFP->getConstantFPValue());
+  ConstantFP *LLVMC = const_cast<ConstantFP *>(CFP->getConstantFPValue());
   if (!UseCP) {
     assert((VT == MVT::f64 || VT == MVT::f32) && "Invalid type expansion");
     return DAG.getConstant(LLVMC->getValueAPF().bitcastToAPInt(), dl,
@@ -378,7 +375,7 @@ SDValue SelectionDAGLegalize::PerformInsertVectorEltInMemory(SDValue Vec,
   // with a "move to register" or "extload into register" instruction, then
   // permute it into place, if the idx is a constant and if the idx is
   // supported by the target.
-  EVT VT    = Tmp1.getValueType();
+  EVT VT = Tmp1.getValueType();
   EVT EltVT = VT.getVectorElementType();
   SDValue StackPtr = DAG.CreateStackTemporary(VT);
 
@@ -396,8 +393,9 @@ SDValue SelectionDAGLegalize::PerformInsertVectorEltInMemory(SDValue Vec,
       Ch, dl, Tmp2, StackPtr2,
       MachinePointerInfo::getUnknownStack(DAG.getMachineFunction()), EltVT);
   // Load the updated vector.
-  return DAG.getLoad(VT, dl, Ch, StackPtr, MachinePointerInfo::getFixedStack(
-                                               DAG.getMachineFunction(), SPFI));
+  return DAG.getLoad(
+      VT, dl, Ch, StackPtr,
+      MachinePointerInfo::getFixedStack(DAG.getMachineFunction(), SPFI));
 }
 
 SDValue SelectionDAGLegalize::ExpandINSERT_VECTOR_ELT(SDValue Vec, SDValue Val,
@@ -410,8 +408,8 @@ SDValue SelectionDAGLegalize::ExpandINSERT_VECTOR_ELT(SDValue Vec, SDValue Val,
     EVT EltVT = Vec.getValueType().getVectorElementType();
     if (Val.getValueType() == EltVT ||
         (EltVT.isInteger() && Val.getValueType().bitsGE(EltVT))) {
-      SDValue ScVec = DAG.getNode(ISD::SCALAR_TO_VECTOR, dl,
-                                  Vec.getValueType(), Val);
+      SDValue ScVec =
+          DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, Vec.getValueType(), Val);
 
       unsigned NumElts = Vec.getValueType().getVectorNumElements();
       // We generate a shuffle of InVec and ScVec, so the shuffle mask
@@ -427,7 +425,7 @@ SDValue SelectionDAGLegalize::ExpandINSERT_VECTOR_ELT(SDValue Vec, SDValue Val,
   return PerformInsertVectorEltInMemory(Vec, Val, Idx, dl);
 }
 
-SDValue SelectionDAGLegalize::OptimizeFloatStore(StoreSDNode* ST) {
+SDValue SelectionDAGLegalize::OptimizeFloatStore(StoreSDNode *ST) {
   if (!ISD::isNormalStore(ST))
     return SDValue();
 
@@ -450,11 +448,10 @@ SDValue SelectionDAGLegalize::OptimizeFloatStore(StoreSDNode* ST) {
     return SDValue();
 
   if (ConstantFPSDNode *CFP = dyn_cast<ConstantFPSDNode>(Value)) {
-    if (CFP->getValueType(0) == MVT::f32 &&
-        TLI.isTypeLegal(MVT::i32)) {
-      SDValue Con = DAG.getConstant(CFP->getValueAPF().
-                                      bitcastToAPInt().zextOrTrunc(32),
-                                    SDLoc(CFP), MVT::i32);
+    if (CFP->getValueType(0) == MVT::f32 && TLI.isTypeLegal(MVT::i32)) {
+      SDValue Con =
+          DAG.getConstant(CFP->getValueAPF().bitcastToAPInt().zextOrTrunc(32),
+                          SDLoc(CFP), MVT::i32);
       return DAG.getStore(Chain, dl, Con, Ptr, ST->getPointerInfo(),
                           ST->getOriginalAlign(), MMOFlags, AAInfo);
     }
@@ -462,8 +459,9 @@ SDValue SelectionDAGLegalize::OptimizeFloatStore(StoreSDNode* ST) {
     if (CFP->getValueType(0) == MVT::f64) {
       // If this target supports 64-bit registers, do a single 64-bit store.
       if (TLI.isTypeLegal(MVT::i64)) {
-        SDValue Con = DAG.getConstant(CFP->getValueAPF().bitcastToAPInt().
-                                      zextOrTrunc(64), SDLoc(CFP), MVT::i64);
+        SDValue Con =
+            DAG.getConstant(CFP->getValueAPF().bitcastToAPInt().zextOrTrunc(64),
+                            SDLoc(CFP), MVT::i64);
         return DAG.getStore(Chain, dl, Con, Ptr, ST->getPointerInfo(),
                             ST->getOriginalAlign(), MMOFlags, AAInfo);
       }
@@ -511,7 +509,8 @@ void SelectionDAGLegalize::LegalizeStoreOps(SDNode *Node) {
     SDValue Value = ST->getValue();
     MVT VT = Value.getSimpleValueType();
     switch (TLI.getOperationAction(ISD::STORE, VT)) {
-    default: llvm_unreachable("This action is not supported yet!");
+    default:
+      llvm_unreachable("This action is not supported yet!");
     case TargetLowering::Legal: {
       // If this is an unaligned store and the target doesn't support it,
       // expand it.
@@ -611,8 +610,7 @@ void SelectionDAGLegalize::LegalizeStoreOps(SDNode *Node) {
       // Store the remaining ExtraWidth bits.
       IncrementSize = RoundWidth / 8;
       Ptr = DAG.getNode(ISD::ADD, dl, Ptr.getValueType(), Ptr,
-                        DAG.getConstant(IncrementSize, dl,
-                                        Ptr.getValueType()));
+                        DAG.getConstant(IncrementSize, dl, Ptr.getValueType()));
       Lo = DAG.getTruncStore(Chain, dl, Value, Ptr,
                              ST->getPointerInfo().getWithOffset(IncrementSize),
                              ExtraVT, ST->getOriginalAlign(), MMOFlags, AAInfo);
@@ -623,7 +621,8 @@ void SelectionDAGLegalize::LegalizeStoreOps(SDNode *Node) {
     ReplaceNode(SDValue(Node, 0), Result);
   } else {
     switch (TLI.getTruncStoreAction(ST->getValue().getValueType(), StVT)) {
-    default: llvm_unreachable("This action is not supported yet!");
+    default:
+      llvm_unreachable("This action is not supported yet!");
     case TargetLowering::Legal: {
       EVT MemVT = ST->getMemoryVT();
       // If this is an unaligned store and the target doesn't support it,
@@ -671,9 +670,9 @@ void SelectionDAGLegalize::LegalizeStoreOps(SDNode *Node) {
 
 void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
   LoadSDNode *LD = cast<LoadSDNode>(Node);
-  SDValue Chain = LD->getChain();  // The chain.
-  SDValue Ptr = LD->getBasePtr();  // The base pointer.
-  SDValue Value;                   // The value returned by the load op.
+  SDValue Chain = LD->getChain(); // The chain.
+  SDValue Ptr = LD->getBasePtr(); // The base pointer.
+  SDValue Value;                  // The value returned by the load op.
   SDLoc dl(Node);
 
   ISD::LoadExtType ExtType = LD->getExtensionType();
@@ -684,7 +683,8 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
     SDValue RChain = SDValue(Node, 1);
 
     switch (TLI.getOperationAction(Node->getOpcode(), VT)) {
-    default: llvm_unreachable("This action is not supported yet!");
+    default:
+      llvm_unreachable("This action is not supported yet!");
     case TargetLowering::Legal: {
       EVT MemVT = LD->getMemoryVT();
       const DataLayout &DL = DAG.getDataLayout();
@@ -743,7 +743,7 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
       // Until such a way is found, don't insist on promoting i1 here.
       (SrcVT != MVT::i1 ||
        TLI.getLoadExtAction(ExtType, Node->getValueType(0), MVT::i1) ==
-         TargetLowering::Promote)) {
+           TargetLowering::Promote)) {
     // Promote to a byte-sized load if not loading an integral number of
     // bytes.  For example, promote EXTLOAD:i20 -> EXTLOAD:i24.
     unsigned NewWidth = SrcVT.getStoreSizeInBits();
@@ -754,7 +754,7 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
     // way.  A zext load from NVT thus automatically gives zext from SrcVT.
 
     ISD::LoadExtType NewExtType =
-      ExtType == ISD::ZEXTLOAD ? ISD::ZEXTLOAD : ISD::EXTLOAD;
+        ExtType == ISD::ZEXTLOAD ? ISD::ZEXTLOAD : ISD::EXTLOAD;
 
     SDValue Result = DAG.getExtLoad(NewExtType, dl, Node->getValueType(0),
                                     Chain, Ptr, LD->getPointerInfo(), NVT,
@@ -764,13 +764,11 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
 
     if (ExtType == ISD::SEXTLOAD)
       // Having the top bits zero doesn't help when sign extending.
-      Result = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl,
-                           Result.getValueType(),
+      Result = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Result.getValueType(),
                            Result, DAG.getValueType(SrcVT));
     else if (ExtType == ISD::ZEXTLOAD || NVT == Result.getValueType())
       // All the top bits are guaranteed to be zero - inform the optimizers.
-      Result = DAG.getNode(ISD::AssertZext, dl,
-                           Result.getValueType(), Result,
+      Result = DAG.getNode(ISD::AssertZext, dl, Result.getValueType(), Result,
                            DAG.getValueType(SrcVT));
 
     Value = Result;
@@ -855,7 +853,8 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
     bool isCustom = false;
     switch (TLI.getLoadExtAction(ExtType, Node->getValueType(0),
                                  SrcVT.getSimpleVT())) {
-    default: llvm_unreachable("This action is not supported yet!");
+    default:
+      llvm_unreachable("This action is not supported yet!");
     case TargetLowering::Custom:
       isCustom = true;
       [[fallthrough]];
@@ -927,18 +926,14 @@ void SelectionDAGLegalize::LegalizeLoadOps(SDNode *Node) {
       // and zero-extend operations are currently folded into extending
       // loads, whether they are legal or not, and then we end up here
       // without any support for legalizing them.
-      assert(ExtType != ISD::EXTLOAD &&
-             "EXTLOAD should always be supported!");
+      assert(ExtType != ISD::EXTLOAD && "EXTLOAD should always be supported!");
       // Turn the unsupported load into an EXTLOAD followed by an
       // explicit zero/sign extend inreg.
-      SDValue Result = DAG.getExtLoad(ISD::EXTLOAD, dl,
-                                      Node->getValueType(0),
-                                      Chain, Ptr, SrcVT,
-                                      LD->getMemOperand());
+      SDValue Result = DAG.getExtLoad(ISD::EXTLOAD, dl, Node->getValueType(0),
+                                      Chain, Ptr, SrcVT, LD->getMemOperand());
       SDValue ValRes;
       if (ExtType == ISD::SEXTLOAD)
-        ValRes = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl,
-                             Result.getValueType(),
+        ValRes = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Result.getValueType(),
                              Result, DAG.getValueType(SrcVT));
       else
         ValRes = DAG.getZeroExtendInReg(Result, dl, SrcVT);
@@ -975,15 +970,15 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
 #ifndef NDEBUG
   for (unsigned i = 0, e = Node->getNumValues(); i != e; ++i)
     assert(TLI.getTypeAction(*DAG.getContext(), Node->getValueType(i)) ==
-             TargetLowering::TypeLegal &&
+               TargetLowering::TypeLegal &&
            "Unexpected illegal type!");
 
   for (const SDValue &Op : Node->op_values())
     assert((TLI.getTypeAction(*DAG.getContext(), Op.getValueType()) ==
-              TargetLowering::TypeLegal ||
+                TargetLowering::TypeLegal ||
             Op.getOpcode() == ISD::TargetConstant ||
             Op.getOpcode() == ISD::Register) &&
-            "Unexpected illegal type!");
+           "Unexpected illegal type!");
 #endif
 
   // Figure out the correct action; the way to query this varies by opcode
@@ -997,12 +992,10 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
     Action = TLI.getOperationAction(Node->getOpcode(), MVT::Other);
     break;
   case ISD::GET_DYNAMIC_AREA_OFFSET:
-    Action = TLI.getOperationAction(Node->getOpcode(),
-                                    Node->getValueType(0));
+    Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
     break;
   case ISD::VAARG:
-    Action = TLI.getOperationAction(Node->getOpcode(),
-                                    Node->getValueType(0));
+    Action = TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
     if (Action != TargetLowering::Promote)
       Action = TLI.getOperationAction(Node->getOpcode(), MVT::Other);
     break;
@@ -1069,8 +1062,8 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
     Action = TLI.getCondCodeAction(CCCode, OpVT);
     if (Action == TargetLowering::Legal) {
       if (Node->getOpcode() == ISD::SELECT_CC)
-        Action = TLI.getOperationAction(Node->getOpcode(),
-                                        Node->getValueType(0));
+        Action =
+            TLI.getOperationAction(Node->getOpcode(), Node->getValueType(0));
       else
         Action = TLI.getOperationAction(Node->getOpcode(), OpVT);
     }
@@ -1176,12 +1169,14 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
     break;
   }
   case ISD::MSCATTER:
-    Action = TLI.getOperationAction(Node->getOpcode(),
-                    cast<MaskedScatterSDNode>(Node)->getValue().getValueType());
+    Action = TLI.getOperationAction(
+        Node->getOpcode(),
+        cast<MaskedScatterSDNode>(Node)->getValue().getValueType());
     break;
   case ISD::MSTORE:
-    Action = TLI.getOperationAction(Node->getOpcode(),
-                    cast<MaskedStoreSDNode>(Node)->getValue().getValueType());
+    Action = TLI.getOperationAction(
+        Node->getOpcode(),
+        cast<MaskedStoreSDNode>(Node)->getValue().getValueType());
     break;
   case ISD::VP_SCATTER:
     Action = TLI.getOperationAction(
@@ -1214,8 +1209,8 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
   case ISD::VECREDUCE_FMAXIMUM:
   case ISD::VECREDUCE_FMINIMUM:
   case ISD::IS_FPCLASS:
-    Action = TLI.getOperationAction(
-        Node->getOpcode(), Node->getOperand(0).getValueType());
+    Action = TLI.getOperationAction(Node->getOpcode(),
+                                    Node->getOperand(0).getValueType());
     break;
   case ISD::VECREDUCE_SEQ_FADD:
   case ISD::VECREDUCE_SEQ_FMUL:
@@ -1234,8 +1229,8 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
   case ISD::VP_REDUCE_FMIN:
   case ISD::VP_REDUCE_SEQ_FADD:
   case ISD::VP_REDUCE_SEQ_FMUL:
-    Action = TLI.getOperationAction(
-        Node->getOpcode(), Node->getOperand(1).getValueType());
+    Action = TLI.getOperationAction(Node->getOpcode(),
+                                    Node->getOperand(1).getValueType());
     break;
   default:
     if (Node->getOpcode() >= ISD::BUILTIN_OP_END) {
@@ -1249,7 +1244,8 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
   if (SimpleFinishLegalizing) {
     SDNode *NewNode = Node;
     switch (Node->getOpcode()) {
-    default: break;
+    default:
+      break;
     case ISD::SHL:
     case ISD::SRL:
     case ISD::SRA:
@@ -1269,8 +1265,7 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
         if (SAO != Op1)
           NewNode = DAG.UpdateNodeOperands(Node, Op0, SAO);
       }
-    }
-    break;
+    } break;
     case ISD::FSHL:
     case ISD::FSHR:
     case ISD::SRL_PARTS:
@@ -1353,7 +1348,7 @@ void SelectionDAGLegalize::LegalizeOp(SDNode *Node) {
   default:
 #ifndef NDEBUG
     dbgs() << "NODE: ";
-    Node->dump( &DAG);
+    Node->dump(&DAG);
     dbgs() << "\n";
 #endif
     llvm_unreachable("Do not know how to legalize this operator!");
@@ -1388,8 +1383,7 @@ SDValue SelectionDAGLegalize::ExpandExtractFromVectorThroughStack(SDValue Op) {
   SDValue StackPtr, Ch;
   for (SDNode *User : Vec.getNode()->uses()) {
     if (StoreSDNode *ST = dyn_cast<StoreSDNode>(User)) {
-      if (ST->isIndexed() || ST->isTruncatingStore() ||
-          ST->getValue() != Vec)
+      if (ST->isIndexed() || ST->isTruncatingStore() || ST->getValue() != Vec)
         continue;
 
       // Make sure that nothing else could have stored into the destination of
@@ -1455,9 +1449,9 @@ SDValue SelectionDAGLegalize::ExpandExtractFromVectorThroughStack(SDValue Op) {
 SDValue SelectionDAGLegalize::ExpandInsertToVectorThroughStack(SDValue Op) {
   assert(Op.getValueType().isVector() && "Non-vector insert subvector!");
 
-  SDValue Vec  = Op.getOperand(0);
+  SDValue Vec = Op.getOperand(0);
   SDValue Part = Op.getOperand(1);
-  SDValue Idx  = Op.getOperand(2);
+  SDValue Idx = Op.getOperand(2);
   SDLoc dl(Op);
 
   // Store the value to a temporary stack slot, then LOAD the returned part.
@@ -1484,7 +1478,7 @@ SDValue SelectionDAGLegalize::ExpandInsertToVectorThroughStack(SDValue Op) {
   return DAG.getLoad(Op.getValueType(), dl, Ch, StackPtr, PtrInfo);
 }
 
-SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
+SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode *Node) {
   assert((Node->getOpcode() == ISD::BUILD_VECTOR ||
           Node->getOpcode() == ISD::CONCAT_VECTORS) &&
          "Unexpected opcode!");
@@ -1515,9 +1509,10 @@ SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
   // Store (in the right endianness) the elements to memory.
   for (unsigned i = 0, e = Node->getNumOperands(); i != e; ++i) {
     // Ignore undef elements.
-    if (Node->getOperand(i).isUndef()) continue;
+    if (Node->getOperand(i).isUndef())
+      continue;
 
-    unsigned Offset = TypeByteSize*i;
+    unsigned Offset = TypeByteSize * i;
 
     SDValue Idx = DAG.getMemBasePlusOffset(FIPtr, TypeSize::Fixed(Offset), dl);
 
@@ -1531,7 +1526,7 @@ SDValue SelectionDAGLegalize::ExpandVectorBuildThroughStack(SDNode* Node) {
   }
 
   SDValue StoreChain;
-  if (!Stores.empty())    // Not all undef elements?
+  if (!Stores.empty()) // Not all undef elements?
     StoreChain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Stores);
   else
     StoreChain = DAG.getEntryNode();
@@ -1582,8 +1577,8 @@ void SelectionDAGLegalize::getSignAsIntValue(FloatSignAsInt &State,
     unsigned ByteOffset = (NumBits / 8) - 1;
     IntPtr =
         DAG.getMemBasePlusOffset(StackPtr, TypeSize::Fixed(ByteOffset), DL);
-    State.IntPointerInfo = MachinePointerInfo::getFixedStack(MF, FI,
-                                                             ByteOffset);
+    State.IntPointerInfo =
+        MachinePointerInfo::getFixedStack(MF, FI, ByteOffset);
   }
 
   State.IntPtr = IntPtr;
@@ -1619,8 +1614,8 @@ SDValue SelectionDAGLegalize::ExpandFCOPYSIGN(SDNode *Node) const {
 
   EVT IntVT = SignAsInt.IntValue.getValueType();
   SDValue SignMask = DAG.getConstant(SignAsInt.SignMask, DL, IntVT);
-  SDValue SignBit = DAG.getNode(ISD::AND, DL, IntVT, SignAsInt.IntValue,
-                                SignMask);
+  SDValue SignBit =
+      DAG.getNode(ISD::AND, DL, IntVT, SignAsInt.IntValue, SignMask);
 
   // If FABS is legal transform FCOPYSIGN(x, y) => sign(x) ? -FABS(x) : FABS(X)
   EVT FloatVT = Mag.getValueType();
@@ -1638,8 +1633,8 @@ SDValue SelectionDAGLegalize::ExpandFCOPYSIGN(SDNode *Node) const {
   getSignAsIntValue(MagAsInt, DL, Mag);
   EVT MagVT = MagAsInt.IntValue.getValueType();
   SDValue ClearSignMask = DAG.getConstant(~MagAsInt.SignMask, DL, MagVT);
-  SDValue ClearedSign = DAG.getNode(ISD::AND, DL, MagVT, MagAsInt.IntValue,
-                                    ClearSignMask);
+  SDValue ClearedSign =
+      DAG.getNode(ISD::AND, DL, MagVT, MagAsInt.IntValue, ClearSignMask);
 
   // Get the signbit at the right position for MagAsInt.
   int ShiftAmount = SignAsInt.SignBit - MagAsInt.SignBit;
@@ -1698,16 +1693,16 @@ SDValue SelectionDAGLegalize::ExpandFABS(SDNode *Node) const {
   getSignAsIntValue(ValueAsInt, DL, Value);
   EVT IntVT = ValueAsInt.IntValue.getValueType();
   SDValue ClearSignMask = DAG.getConstant(~ValueAsInt.SignMask, DL, IntVT);
-  SDValue ClearedSign = DAG.getNode(ISD::AND, DL, IntVT, ValueAsInt.IntValue,
-                                    ClearSignMask);
+  SDValue ClearedSign =
+      DAG.getNode(ISD::AND, DL, IntVT, ValueAsInt.IntValue, ClearSignMask);
   return modifySignAsInt(ValueAsInt, DL, ClearedSign);
 }
 
-void SelectionDAGLegalize::ExpandDYNAMIC_STACKALLOC(SDNode* Node,
-                                           SmallVectorImpl<SDValue> &Results) {
+void SelectionDAGLegalize::ExpandDYNAMIC_STACKALLOC(
+    SDNode *Node, SmallVectorImpl<SDValue> &Results) {
   Register SPReg = TLI.getStackPointerRegisterToSaveRestore();
   assert(SPReg && "Target cannot require DYNAMIC_STACKALLOC expansion and"
-          " not tell us which reg is the stack pointer!");
+                  " not tell us which reg is the stack pointer!");
   SDLoc dl(Node);
   EVT VT = Node->getValueType(0);
   SDValue Tmp1 = SDValue(Node, 0);
@@ -1719,21 +1714,22 @@ void SelectionDAGLegalize::ExpandDYNAMIC_STACKALLOC(SDNode* Node,
   // pointer when other instructions are using the stack.
   Chain = DAG.getCALLSEQ_START(Chain, 0, 0, dl);
 
-  SDValue Size  = Tmp2.getOperand(1);
+  SDValue Size = Tmp2.getOperand(1);
   SDValue SP = DAG.getCopyFromReg(Chain, dl, SPReg, VT);
   Chain = SP.getValue(1);
   Align Alignment = cast<ConstantSDNode>(Tmp3)->getAlignValue();
   const TargetFrameLowering *TFL = DAG.getSubtarget().getFrameLowering();
   unsigned Opc =
-    TFL->getStackGrowthDirection() == TargetFrameLowering::StackGrowsUp ?
-    ISD::ADD : ISD::SUB;
+      TFL->getStackGrowthDirection() == TargetFrameLowering::StackGrowsUp
+          ? ISD::ADD
+          : ISD::SUB;
 
   Align StackAlign = TFL->getStackAlign();
-  Tmp1 = DAG.getNode(Opc, dl, VT, SP, Size);       // Value
+  Tmp1 = DAG.getNode(Opc, dl, VT, SP, Size); // Value
   if (Alignment > StackAlign)
     Tmp1 = DAG.getNode(ISD::AND, dl, VT, Tmp1,
                        DAG.getConstant(-Alignment.value(), dl, VT));
-  Chain = DAG.getCopyToReg(Chain, dl, SPReg, Tmp1);     // Output chain
+  Chain = DAG.getCopyToReg(Chain, dl, SPReg, Tmp1); // Output chain
 
   Tmp2 = DAG.getCALLSEQ_END(Chain, 0, 0, SDValue(), dl);
 
@@ -1779,8 +1775,8 @@ SDValue SelectionDAGLegalize::EmitStackConvert(SDValue SrcOp, EVT SlotVT,
   SDValue Store;
 
   if (SrcVT.bitsGT(SlotVT))
-    Store = DAG.getTruncStore(Chain, dl, SrcOp, FIPtr, PtrInfo,
-                              SlotVT, SrcAlign);
+    Store =
+        DAG.getTruncStore(Chain, dl, SrcOp, FIPtr, PtrInfo, SlotVT, SrcAlign);
   else {
     assert(SrcVT.bitsEq(SlotVT) && "Invalid store");
     Store = DAG.getStore(Chain, dl, SrcOp, FIPtr, PtrInfo, SrcAlign);
@@ -1813,9 +1809,8 @@ SDValue SelectionDAGLegalize::ExpandSCALAR_TO_VECTOR(SDNode *Node) {
       MachinePointerInfo::getFixedStack(DAG.getMachineFunction(), SPFI));
 }
 
-static bool
-ExpandBVWithShuffles(SDNode *Node, SelectionDAG &DAG,
-                     const TargetLowering &TLI, SDValue &Res) {
+static bool ExpandBVWithShuffles(SDNode *Node, SelectionDAG &DAG,
+                                 const TargetLowering &TLI, SDValue &Res) {
   unsigned NumElems = Node->getNumOperands();
   SDLoc dl(Node);
   EVT VT = Node->getValueType(0);
@@ -1829,7 +1824,7 @@ ExpandBVWithShuffles(SDNode *Node, SelectionDAG &DAG,
   // and next, assuming that all shuffles are legal, to create the new nodes.
   for (int Phase = 0; Phase < 2; ++Phase) {
     SmallVector<std::pair<SDValue, SmallVector<int, 16>>, 16> IntermedVals,
-                                                              NewIntermedVals;
+        NewIntermedVals;
     for (unsigned i = 0; i < NumElems; ++i) {
       SDValue V = Node->getOperand(i);
       if (V.isUndef())
@@ -1850,7 +1845,7 @@ ExpandBVWithShuffles(SDNode *Node, SelectionDAG &DAG,
 
         SmallVector<int, 16> FinalIndices;
         FinalIndices.reserve(IntermedVals[i].second.size() +
-                             IntermedVals[i+1].second.size());
+                             IntermedVals[i + 1].second.size());
 
         int k = 0;
         for (unsigned j = 0, f = IntermedVals[i].second.size(); j != f;
@@ -1858,17 +1853,16 @@ ExpandBVWithShuffles(SDNode *Node, SelectionDAG &DAG,
           ShuffleVec[k] = j;
           FinalIndices.push_back(IntermedVals[i].second[j]);
         }
-        for (unsigned j = 0, f = IntermedVals[i+1].second.size(); j != f;
+        for (unsigned j = 0, f = IntermedVals[i + 1].second.size(); j != f;
              ++j, ++k) {
           ShuffleVec[k] = NumElems + j;
-          FinalIndices.push_back(IntermedVals[i+1].second[j]);
+          FinalIndices.push_back(IntermedVals[i + 1].second[j]);
         }
 
         SDValue Shuffle;
         if (Phase)
           Shuffle = DAG.getVectorShuffle(VT, dl, IntermedVals[i].first,
-                                         IntermedVals[i+1].first,
-                                         ShuffleVec);
+                                         IntermedVals[i + 1].first, ShuffleVec);
         else if (!TLI.isShuffleMaskLegal(ShuffleVec, VT))
           return false;
         NewIntermedVals.push_back(
@@ -1949,14 +1943,14 @@ SDValue SelectionDAGLegalize::ExpandBUILD_VECTOR(SDNode *Node) {
 
   // If all elements are constants, create a load from the constant pool.
   if (isConstant) {
-    SmallVector<Constant*, 16> CV;
+    SmallVector<Constant *, 16> CV;
     for (unsigned i = 0, e = NumElems; i != e; ++i) {
       if (ConstantFPSDNode *V =
-          dyn_cast<ConstantFPSDNode>(Node->getOperand(i))) {
+              dyn_cast<ConstantFPSDNode>(Node->getOperand(i))) {
         CV.push_back(const_cast<ConstantFP *>(V->getConstantFPValue()));
       } else if (ConstantSDNode *V =
-                 dyn_cast<ConstantSDNode>(Node->getOperand(i))) {
-        if (OpVT==EltVT)
+                     dyn_cast<ConstantSDNode>(Node->getOperand(i))) {
+        if (OpVT == EltVT)
           CV.push_back(const_cast<ConstantInt *>(V->getConstantIntValue()));
         else {
           // If OpVT and EltVT don't match, EltVT is not legal and the
@@ -2034,9 +2028,10 @@ SDValue SelectionDAGLegalize::ExpandSPLAT_VECTOR(SDNode *Node) {
 // register, return the lo part and set the hi part to the by-reg argument in
 // the first.  If it does fit into a single register, return the result and
 // leave the Hi part unset.
-std::pair<SDValue, SDValue> SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
-                                            TargetLowering::ArgListTy &&Args,
-                                            bool isSigned) {
+std::pair<SDValue, SDValue>
+SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
+                                    TargetLowering::ArgListTy &&Args,
+                                    bool isSigned) {
   SDValue Callee = DAG.getExternalSymbol(TLI.getLibcallName(LC),
                                          TLI.getPointerTy(DAG.getDataLayout()));
 
@@ -2082,8 +2077,9 @@ std::pair<SDValue, SDValue> SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall L
   return CallInfo;
 }
 
-std::pair<SDValue, SDValue> SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
-                                            bool isSigned) {
+std::pair<SDValue, SDValue>
+SelectionDAGLegalize::ExpandLibCall(RTLIB::Libcall LC, SDNode *Node,
+                                    bool isSigned) {
   TargetLowering::ArgListTy Args;
   TargetLowering::ArgListEntry Entry;
   for (const SDValue &Op : Node->op_values()) {
@@ -2140,8 +2136,7 @@ void SelectionDAGLegalize::ExpandFrexpLibCall(
   Results.push_back(LoadExp);
 }
 
-void SelectionDAGLegalize::ExpandFPLibCall(SDNode* Node,
-                                           RTLIB::Libcall LC,
+void SelectionDAGLegalize::ExpandFPLibCall(SDNode *Node, RTLIB::Libcall LC,
                                            SmallVectorImpl<SDValue> &Results) {
   if (LC == RTLIB::UNKNOWN_LIBCALL)
     llvm_unreachable("Can't create an unknown libcall!");
@@ -2151,10 +2146,8 @@ void SelectionDAGLegalize::ExpandFPLibCall(SDNode* Node,
     SmallVector<SDValue, 4> Ops(drop_begin(Node->ops()));
     TargetLowering::MakeLibCallOptions CallOptions;
     // FIXME: This doesn't support tail calls.
-    std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, RetVT,
-                                                      Ops, CallOptions,
-                                                      SDLoc(Node),
-                                                      Node->getOperand(0));
+    std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+        DAG, LC, RetVT, Ops, CallOptions, SDLoc(Node), Node->getOperand(0));
     Results.push_back(Tmp.first);
     Results.push_back(Tmp.second);
   } else {
@@ -2164,20 +2157,17 @@ void SelectionDAGLegalize::ExpandFPLibCall(SDNode* Node,
 }
 
 /// Expand the node to a libcall based on the result type.
-void SelectionDAGLegalize::ExpandFPLibCall(SDNode* Node,
-                                           RTLIB::Libcall Call_F32,
-                                           RTLIB::Libcall Call_F64,
-                                           RTLIB::Libcall Call_F80,
-                                           RTLIB::Libcall Call_F128,
-                                           RTLIB::Libcall Call_PPCF128,
-                                           SmallVectorImpl<SDValue> &Results) {
-  RTLIB::Libcall LC = RTLIB::getFPLibCall(Node->getSimpleValueType(0),
-                                          Call_F32, Call_F64, Call_F80,
-                                          Call_F128, Call_PPCF128);
+void SelectionDAGLegalize::ExpandFPLibCall(
+    SDNode *Node, RTLIB::Libcall Call_F32, RTLIB::Libcall Call_F64,
+    RTLIB::Libcall Call_F80, RTLIB::Libcall Call_F128,
+    RTLIB::Libcall Call_PPCF128, SmallVectorImpl<SDValue> &Results) {
+  RTLIB::Libcall LC =
+      RTLIB::getFPLibCall(Node->getSimpleValueType(0), Call_F32, Call_F64,
+                          Call_F80, Call_F128, Call_PPCF128);
   ExpandFPLibCall(Node, LC, Results);
 }
 
-SDValue SelectionDAGLegalize::ExpandIntLibCall(SDNode* Node, bool isSigned,
+SDValue SelectionDAGLegalize::ExpandIntLibCall(SDNode *Node, bool isSigned,
                                                RTLIB::Libcall Call_I8,
                                                RTLIB::Libcall Call_I16,
                                                RTLIB::Libcall Call_I32,
@@ -2185,47 +2175,65 @@ SDValue SelectionDAGLegalize::ExpandIntLibCall(SDNode* Node, bool isSigned,
                                                RTLIB::Libcall Call_I128) {
   RTLIB::Libcall LC;
   switch (Node->getSimpleValueType(0).SimpleTy) {
-  default: llvm_unreachable("Unexpected request for libcall!");
-  case MVT::i8:   LC = Call_I8; break;
-  case MVT::i16:  LC = Call_I16; break;
-  case MVT::i32:  LC = Call_I32; break;
-  case MVT::i64:  LC = Call_I64; break;
-  case MVT::i128: LC = Call_I128; break;
+  default:
+    llvm_unreachable("Unexpected request for libcall!");
+  case MVT::i8:
+    LC = Call_I8;
+    break;
+  case MVT::i16:
+    LC = Call_I16;
+    break;
+  case MVT::i32:
+    LC = Call_I32;
+    break;
+  case MVT::i64:
+    LC = Call_I64;
+    break;
+  case MVT::i128:
+    LC = Call_I128;
+    break;
   }
   return ExpandLibCall(LC, Node, isSigned).first;
 }
 
 /// Expand the node to a libcall based on first argument type (for instance
 /// lround and its variant).
-void SelectionDAGLegalize::ExpandArgFPLibCall(SDNode* Node,
-                                            RTLIB::Libcall Call_F32,
-                                            RTLIB::Libcall Call_F64,
-                                            RTLIB::Libcall Call_F80,
-                                            RTLIB::Libcall Call_F128,
-                                            RTLIB::Libcall Call_PPCF128,
-                                            SmallVectorImpl<SDValue> &Results) {
+void SelectionDAGLegalize::ExpandArgFPLibCall(
+    SDNode *Node, RTLIB::Libcall Call_F32, RTLIB::Libcall Call_F64,
+    RTLIB::Libcall Call_F80, RTLIB::Libcall Call_F128,
+    RTLIB::Libcall Call_PPCF128, SmallVectorImpl<SDValue> &Results) {
   EVT InVT = Node->getOperand(Node->isStrictFPOpcode() ? 1 : 0).getValueType();
-  RTLIB::Libcall LC = RTLIB::getFPLibCall(InVT.getSimpleVT(),
-                                          Call_F32, Call_F64, Call_F80,
-                                          Call_F128, Call_PPCF128);
+  RTLIB::Libcall LC =
+      RTLIB::getFPLibCall(InVT.getSimpleVT(), Call_F32, Call_F64, Call_F80,
+                          Call_F128, Call_PPCF128);
   ExpandFPLibCall(Node, LC, Results);
 }
 
 /// Issue libcalls to __{u}divmod to compute div / rem pairs.
-void
-SelectionDAGLegalize::ExpandDivRemLibCall(SDNode *Node,
-                                          SmallVectorImpl<SDValue> &Results) {
+void SelectionDAGLegalize::ExpandDivRemLibCall(
+    SDNode *Node, SmallVectorImpl<SDValue> &Results) {
   unsigned Opcode = Node->getOpcode();
   bool isSigned = Opcode == ISD::SDIVREM;
 
   RTLIB::Libcall LC;
   switch (Node->getSimpleValueType(0).SimpleTy) {
-  default: llvm_unreachable("Unexpected request for libcall!");
-  case MVT::i8:   LC= isSigned ? RTLIB::SDIVREM_I8  : RTLIB::UDIVREM_I8;  break;
-  case MVT::i16:  LC= isSigned ? RTLIB::SDIVREM_I16 : RTLIB::UDIVREM_I16; break;
-  case MVT::i32:  LC= isSigned ? RTLIB::SDIVREM_I32 : RTLIB::UDIVREM_I32; break;
-  case MVT::i64:  LC= isSigned ? RTLIB::SDIVREM_I64 : RTLIB::UDIVREM_I64; break;
-  case MVT::i128: LC= isSigned ? RTLIB::SDIVREM_I128:RTLIB::UDIVREM_I128; break;
+  default:
+    llvm_unreachable("Unexpected request for libcall!");
+  case MVT::i8:
+    LC = isSigned ? RTLIB::SDIVREM_I8 : RTLIB::UDIVREM_I8;
+    break;
+  case MVT::i16:
+    LC = isSigned ? RTLIB::SDIVREM_I16 : RTLIB::UDIVREM_I16;
+    break;
+  case MVT::i32:
+    LC = isSigned ? RTLIB::SDIVREM_I32 : RTLIB::UDIVREM_I32;
+    break;
+  case MVT::i64:
+    LC = isSigned ? RTLIB::SDIVREM_I64 : RTLIB::UDIVREM_I64;
+    break;
+  case MVT::i128:
+    LC = isSigned ? RTLIB::SDIVREM_I128 : RTLIB::UDIVREM_I128;
+    break;
   }
 
   // The input chain to this libcall is the entry node of the function.
@@ -2281,20 +2289,30 @@ SelectionDAGLegalize::ExpandDivRemLibCall(SDNode *Node,
 static bool isSinCosLibcallAvailable(SDNode *Node, const TargetLowering &TLI) {
   RTLIB::Libcall LC;
   switch (Node->getSimpleValueType(0).SimpleTy) {
-  default: llvm_unreachable("Unexpected request for libcall!");
-  case MVT::f32:     LC = RTLIB::SINCOS_F32; break;
-  case MVT::f64:     LC = RTLIB::SINCOS_F64; break;
-  case MVT::f80:     LC = RTLIB::SINCOS_F80; break;
-  case MVT::f128:    LC = RTLIB::SINCOS_F128; break;
-  case MVT::ppcf128: LC = RTLIB::SINCOS_PPCF128; break;
+  default:
+    llvm_unreachable("Unexpected request for libcall!");
+  case MVT::f32:
+    LC = RTLIB::SINCOS_F32;
+    break;
+  case MVT::f64:
+    LC = RTLIB::SINCOS_F64;
+    break;
+  case MVT::f80:
+    LC = RTLIB::SINCOS_F80;
+    break;
+  case MVT::f128:
+    LC = RTLIB::SINCOS_F128;
+    break;
+  case MVT::ppcf128:
+    LC = RTLIB::SINCOS_PPCF128;
+    break;
   }
   return TLI.getLibcallName(LC) != nullptr;
 }
 
 /// Only issue sincos libcall if both sin and cos are needed.
 static bool useSinCos(SDNode *Node) {
-  unsigned OtherOpcode = Node->getOpcode() == ISD::FSIN
-    ? ISD::FCOS : ISD::FSIN;
+  unsigned OtherOpcode = Node->getOpcode() == ISD::FSIN ? ISD::FCOS : ISD::FSIN;
 
   SDValue Op0 = Node->getOperand(0);
   for (const SDNode *User : Op0.getNode()->uses()) {
@@ -2308,17 +2326,27 @@ static bool useSinCos(SDNode *Node) {
 }
 
 /// Issue libcalls to sincos to compute sin / cos pairs.
-void
-SelectionDAGLegalize::ExpandSinCosLibCall(SDNode *Node,
-                                          SmallVectorImpl<SDValue> &Results) {
+void SelectionDAGLegalize::ExpandSinCosLibCall(
+    SDNode *Node, SmallVectorImpl<SDValue> &Results) {
   RTLIB::Libcall LC;
   switch (Node->getSimpleValueType(0).SimpleTy) {
-  default: llvm_unreachable("Unexpected request for libcall!");
-  case MVT::f32:     LC = RTLIB::SINCOS_F32; break;
-  case MVT::f64:     LC = RTLIB::SINCOS_F64; break;
-  case MVT::f80:     LC = RTLIB::SINCOS_F80; break;
-  case MVT::f128:    LC = RTLIB::SINCOS_F128; break;
-  case MVT::ppcf128: LC = RTLIB::SINCOS_PPCF128; break;
+  default:
+    llvm_unreachable("Unexpected request for libcall!");
+  case MVT::f32:
+    LC = RTLIB::SINCOS_F32;
+    break;
+  case MVT::f64:
+    LC = RTLIB::SINCOS_F64;
+    break;
+  case MVT::f80:
+    LC = RTLIB::SINCOS_F80;
+    break;
+  case MVT::f128:
+    LC = RTLIB::SINCOS_F128;
+    break;
+  case MVT::ppcf128:
+    LC = RTLIB::SINCOS_PPCF128;
+    break;
   }
 
   // The input chain to this libcall is the entry node of the function.
@@ -2647,8 +2675,8 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
     SDValue MemChain = DAG.getEntryNode();
 
     // Store the lo of the constructed double.
-    SDValue Store1 = DAG.getStore(MemChain, dl, Lo, StackSlot,
-                                  MachinePointerInfo());
+    SDValue Store1 =
+        DAG.getStore(MemChain, dl, Lo, StackSlot, MachinePointerInfo());
     // Store the hi of the constructed double.
     SDValue HiPtr = DAG.getMemBasePlusOffset(StackSlot, TypeSize::Fixed(4), dl);
     SDValue Store2 =
@@ -2672,12 +2700,10 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
       Chain = Sub.getValue(1);
       if (DestVT != Sub.getValueType()) {
         std::pair<SDValue, SDValue> ResultPair;
-        ResultPair =
-            DAG.getStrictFPExtendOrRound(Sub, Chain, dl, DestVT);
+        ResultPair = DAG.getStrictFPExtendOrRound(Sub, Chain, dl, DestVT);
         Result = ResultPair.first;
         Chain = ResultPair.second;
-      }
-      else
+      } else
         Result = Sub;
     } else {
       Sub = DAG.getNode(ISD::FSUB, dl, MVT::f64, Load, Bias);
@@ -2724,10 +2750,10 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
       // In strict mode, we must avoid spurious exceptions, and therefore
       // must make sure to only emit a single STRICT_SINT_TO_FP.
       SDValue InCvt = DAG.getSelect(dl, SrcVT, SignBitTest, Or, Op0);
-      Fast = DAG.getNode(ISD::STRICT_SINT_TO_FP, dl, { DestVT, MVT::Other },
-                         { Node->getOperand(0), InCvt });
-      Slow = DAG.getNode(ISD::STRICT_FADD, dl, { DestVT, MVT::Other },
-                         { Fast.getValue(1), Fast, Fast });
+      Fast = DAG.getNode(ISD::STRICT_SINT_TO_FP, dl, {DestVT, MVT::Other},
+                         {Node->getOperand(0), InCvt});
+      Slow = DAG.getNode(ISD::STRICT_FADD, dl, {DestVT, MVT::Other},
+                         {Fast.getValue(1), Fast, Fast});
       Chain = Slow.getValue(1);
       // The STRICT_SINT_TO_FP inherits the exception mode from the
       // incoming STRICT_UINT_TO_FP node; the STRICT_FADD node can
@@ -2760,8 +2786,8 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
 
   SDValue Tmp1;
   if (Node->isStrictFPOpcode()) {
-    Tmp1 = DAG.getNode(ISD::STRICT_SINT_TO_FP, dl, { DestVT, MVT::Other },
-                       { Node->getOperand(0), Op0 });
+    Tmp1 = DAG.getNode(ISD::STRICT_SINT_TO_FP, dl, {DestVT, MVT::Other},
+                       {Node->getOperand(0), Op0});
   } else
     Tmp1 = DAG.getNode(ISD::SINT_TO_FP, dl, DestVT, Op0);
 
@@ -2769,8 +2795,8 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
                                  DAG.getConstant(0, dl, SrcVT), ISD::SETLT);
   SDValue Zero = DAG.getIntPtrConstant(0, dl),
           Four = DAG.getIntPtrConstant(4, dl);
-  SDValue CstOffset = DAG.getSelect(dl, Zero.getValueType(),
-                                    SignSet, Four, Zero);
+  SDValue CstOffset =
+      DAG.getSelect(dl, Zero.getValueType(), SignSet, Four, Zero);
 
   // If the sign bit of the integer is set, the large number will be treated
   // as a negative number.  To counteract this, the dynamic code adds an
@@ -2779,15 +2805,23 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
   switch (SrcVT.getSimpleVT().SimpleTy) {
   default:
     return SDValue();
-  case MVT::i8 : FF = 0x43800000ULL; break;  // 2^8  (as a float)
-  case MVT::i16: FF = 0x47800000ULL; break;  // 2^16 (as a float)
-  case MVT::i32: FF = 0x4F800000ULL; break;  // 2^32 (as a float)
-  case MVT::i64: FF = 0x5F800000ULL; break;  // 2^64 (as a float)
+  case MVT::i8:
+    FF = 0x43800000ULL;
+    break; // 2^8  (as a float)
+  case MVT::i16:
+    FF = 0x47800000ULL;
+    break; // 2^16 (as a float)
+  case MVT::i32:
+    FF = 0x4F800000ULL;
+    break; // 2^32 (as a float)
+  case MVT::i64:
+    FF = 0x5F800000ULL;
+    break; // 2^64 (as a float)
   }
   if (DAG.getDataLayout().isLittleEndian())
     FF <<= 32;
-  Constant *FudgeFactor = ConstantInt::get(
-                                       Type::getInt64Ty(*DAG.getContext()), FF);
+  Constant *FudgeFactor =
+      ConstantInt::get(Type::getInt64Ty(*DAG.getContext()), FF);
 
   SDValue CPIdx =
       DAG.getConstantPool(FudgeFactor, TLI.getPointerTy(DAG.getDataLayout()));
@@ -2811,8 +2845,8 @@ SDValue SelectionDAGLegalize::ExpandLegalINT_TO_FP(SDNode *Node,
   }
 
   if (Node->isStrictFPOpcode()) {
-    SDValue Result = DAG.getNode(ISD::STRICT_FADD, dl, { DestVT, MVT::Other },
-                                 { Tmp1.getValue(1), Tmp1, FudgeInReg });
+    SDValue Result = DAG.getNode(ISD::STRICT_FADD, dl, {DestVT, MVT::Other},
+                                 {Tmp1.getValue(1), Tmp1, FudgeInReg});
     Chain = Result.getValue(1);
     return Result;
   }
@@ -2842,7 +2876,7 @@ void SelectionDAGLegalize::PromoteLegalINT_TO_FP(
 
   // Scan for the appropriate larger type to use.
   while (true) {
-    NewInTy = (MVT::SimpleValueType)(NewInTy.getSimpleVT().SimpleTy+1);
+    NewInTy = (MVT::SimpleValueType)(NewInTy.getSimpleVT().SimpleTy + 1);
     assert(NewInTy.isInteger() && "Ran out of possibilities!");
 
     // If the target supports SINT_TO_FP of this type, use it.
@@ -2886,8 +2920,8 @@ void SelectionDAGLegalize::PromoteLegalINT_TO_FP(
 /// we promote it.  At this point, we know that the result and operand types are
 /// legal for the target, and that there is a legal FP_TO_UINT or FP_TO_SINT
 /// operation that returns a larger result.
-void SelectionDAGLegalize::PromoteLegalFP_TO_INT(SDNode *N, const SDLoc &dl,
-                                                 SmallVectorImpl<SDValue> &Results) {
+void SelectionDAGLegalize::PromoteLegalFP_TO_INT(
+    SDNode *N, const SDLoc &dl, SmallVectorImpl<SDValue> &Results) {
   bool IsStrict = N->isStrictFPOpcode();
   bool IsSigned = N->getOpcode() == ISD::FP_TO_SINT ||
                   N->getOpcode() == ISD::STRICT_FP_TO_SINT;
@@ -2900,7 +2934,7 @@ void SelectionDAGLegalize::PromoteLegalFP_TO_INT(SDNode *N, const SDLoc &dl,
 
   // Scan for the appropriate larger type to use.
   while (true) {
-    NewOutTy = (MVT::SimpleValueType)(NewOutTy.getSimpleVT().SimpleTy+1);
+    NewOutTy = (MVT::SimpleValueType)(NewOutTy.getSimpleVT().SimpleTy + 1);
     assert(NewOutTy.isInteger() && "Ran out of possibilities!");
 
     // A larger signed type can hold all unsigned values of the requested type,
@@ -3028,16 +3062,14 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
   case ISD::EH_DWARF_CFA: {
     SDValue CfaArg = DAG.getSExtOrTrunc(Node->getOperand(0), dl,
                                         TLI.getPointerTy(DAG.getDataLayout()));
-    SDValue Offset = DAG.getNode(ISD::ADD, dl,
-                                 CfaArg.getValueType(),
-                                 DAG.getNode(ISD::FRAME_TO_ARGS_OFFSET, dl,
-                                             CfaArg.getValueType()),
-                                 CfaArg);
+    SDValue Offset = DAG.getNode(
+        ISD::ADD, dl, CfaArg.getValueType(),
+        DAG.getNode(ISD::FRAME_TO_ARGS_OFFSET, dl, CfaArg.getValueType()),
+        CfaArg);
     SDValue FA = DAG.getNode(
         ISD::FRAMEADDR, dl, TLI.getPointerTy(DAG.getDataLayout()),
         DAG.getConstant(0, dl, TLI.getPointerTy(DAG.getDataLayout())));
-    Results.push_back(DAG.getNode(ISD::ADD, dl, FA.getValueType(),
-                                  FA, Offset));
+    Results.push_back(DAG.getNode(ISD::ADD, dl, FA.getValueType(), FA, Offset));
     break;
   }
   case ISD::GET_ROUNDING:
@@ -3157,9 +3189,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       break;
     // We might as well mutate to FP_ROUND when FP_ROUND operation is legal
     // since this operation is more efficient than stack operation.
-    if (TLI.getStrictFPOperationAction(Node->getOpcode(),
-                                       Node->getValueType(0))
-        == TargetLowering::Legal)
+    if (TLI.getStrictFPOperationAction(
+            Node->getOpcode(), Node->getValueType(0)) == TargetLowering::Legal)
       break;
     // We fall back to use stack operation when the FP_ROUND operation
     // isn't available.
@@ -3184,9 +3215,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       break;
     // We might as well mutate to FP_EXTEND when FP_EXTEND operation is legal
     // since this operation is more efficient than stack operation.
-    if (TLI.getStrictFPOperationAction(Node->getOpcode(),
-                                       Node->getValueType(0))
-        == TargetLowering::Legal)
+    if (TLI.getStrictFPOperationAction(
+            Node->getOpcode(), Node->getValueType(0)) == TargetLowering::Legal)
       break;
     // We fall back to use stack operation when the FP_EXTEND operation
     // isn't available.
@@ -3270,11 +3300,11 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     // NOTE: we could fall back on load/store here too for targets without
     // SRA.  However, it is doubtful that any exist.
     EVT ShiftAmountTy = TLI.getShiftAmountTy(VT, DAG.getDataLayout());
-    unsigned BitsDiff = VT.getScalarSizeInBits() -
-                        ExtraVT.getScalarSizeInBits();
+    unsigned BitsDiff =
+        VT.getScalarSizeInBits() - ExtraVT.getScalarSizeInBits();
     SDValue ShiftCst = DAG.getConstant(BitsDiff, dl, ShiftAmountTy);
-    Tmp1 = DAG.getNode(ISD::SHL, dl, Node->getValueType(0),
-                       Node->getOperand(0), ShiftCst);
+    Tmp1 = DAG.getNode(ISD::SHL, dl, Node->getValueType(0), Node->getOperand(0),
+                       ShiftCst);
     Tmp1 = DAG.getNode(ISD::SRA, dl, Node->getValueType(0), Tmp1, ShiftCst);
     Results.push_back(Tmp1);
     break;
@@ -3314,7 +3344,7 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
   case ISD::STRICT_FP_TO_UINT:
     if (TLI.expandFP_TO_UINT(Node, Tmp1, Tmp2, DAG)) {
       // Relink the chain.
-      DAG.ReplaceAllUsesOfValueWith(SDValue(Node,1), Tmp2);
+      DAG.ReplaceAllUsesOfValueWith(SDValue(Node, 1), Tmp2);
       // Replace the new UINT result.
       ReplaceNodeWithValue(SDValue(Node, 0), Tmp1);
       LLVM_DEBUG(dbgs() << "Successfully expanded STRICT_FP_TO_UINT node\n");
@@ -3354,9 +3384,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     Results.push_back(ExpandSCALAR_TO_VECTOR(Node));
     break;
   case ISD::INSERT_VECTOR_ELT:
-    Results.push_back(ExpandINSERT_VECTOR_ELT(Node->getOperand(0),
-                                              Node->getOperand(1),
-                                              Node->getOperand(2), dl));
+    Results.push_back(ExpandINSERT_VECTOR_ELT(
+        Node->getOperand(0), Node->getOperand(1), Node->getOperand(2), dl));
     break;
   case ISD::VECTOR_SHUFFLE: {
     SmallVector<int, 32> NewMask;
@@ -3389,7 +3418,7 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
 
         // Convert the shuffle mask
         unsigned int factor =
-                         NewVT.getVectorNumElements()/VT.getVectorNumElements();
+            NewVT.getVectorNumElements() / VT.getVectorNumElements();
 
         // EltVT gets smaller
         assert(factor > 0);
@@ -3398,10 +3427,9 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
           if (Mask[i] < 0) {
             for (unsigned fi = 0; fi < factor; ++fi)
               NewMask.push_back(Mask[i]);
-          }
-          else {
+          } else {
             for (unsigned fi = 0; fi < factor; ++fi)
-              NewMask.push_back(Mask[i]*factor+fi);
+              NewMask.push_back(Mask[i] * factor + fi);
           }
         }
         Mask = NewMask;
@@ -3470,8 +3498,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     // Expand to CopyToReg if the target set
     // StackPointerRegisterToSaveRestore.
     if (Register SP = TLI.getStackPointerRegisterToSaveRestore()) {
-      Results.push_back(DAG.getCopyToReg(Node->getOperand(0), dl, SP,
-                                         Node->getOperand(1)));
+      Results.push_back(
+          DAG.getCopyToReg(Node->getOperand(0), dl, SP, Node->getOperand(1)));
     } else {
       Results.push_back(Node->getOperand(0));
     }
@@ -3505,11 +3533,20 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     // Expand Y = MAX(A, B) -> Y = (A > B) ? A : B
     ISD::CondCode Pred;
     switch (Node->getOpcode()) {
-    default: llvm_unreachable("How did we get here?");
-    case ISD::SMAX: Pred = ISD::SETGT; break;
-    case ISD::SMIN: Pred = ISD::SETLT; break;
-    case ISD::UMAX: Pred = ISD::SETUGT; break;
-    case ISD::UMIN: Pred = ISD::SETULT; break;
+    default:
+      llvm_unreachable("How did we get here?");
+    case ISD::SMAX:
+      Pred = ISD::SETGT;
+      break;
+    case ISD::SMIN:
+      Pred = ISD::SETLT;
+      break;
+    case ISD::UMAX:
+      Pred = ISD::SETUGT;
+      break;
+    case ISD::UMIN:
+      Pred = ISD::SETULT;
+      break;
     }
     Tmp1 = Node->getOperand(0);
     Tmp2 = Node->getOperand(1);
@@ -3529,8 +3566,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     // Turn fsin / fcos into ISD::FSINCOS node if there are a pair of fsin /
     // fcos which share the same operand and both are used.
     if ((TLI.isOperationLegalOrCustom(ISD::FSINCOS, VT) ||
-         isSinCosLibcallAvailable(Node, TLI))
-        && useSinCos(Node)) {
+         isSinCosLibcallAvailable(Node, TLI)) &&
+        useSinCos(Node)) {
       SDVTList VTs = DAG.getVTList(VT, VT);
       Tmp1 = DAG.getNode(ISD::FSINCOS, dl, VTs, Node->getOperand(0));
       if (Node->getOpcode() == ISD::FCOS)
@@ -3789,11 +3826,9 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
   case ISD::SDIVFIXSAT:
   case ISD::UDIVFIX:
   case ISD::UDIVFIXSAT:
-    if (SDValue V = TLI.expandFixedPointDiv(Node->getOpcode(), SDLoc(Node),
-                                            Node->getOperand(0),
-                                            Node->getOperand(1),
-                                            Node->getConstantOperandVal(2),
-                                            DAG)) {
+    if (SDValue V = TLI.expandFixedPointDiv(
+            Node->getOpcode(), SDLoc(Node), Node->getOperand(0),
+            Node->getOperand(1), Node->getConstantOperandVal(2), DAG)) {
       Results.push_back(V);
       break;
     }
@@ -3889,13 +3924,13 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     Tmp2 = Node->getOperand(1);
     Tmp3 = Node->getOperand(2);
     if (Tmp1.getOpcode() == ISD::SETCC) {
-      Tmp1 = DAG.getSelectCC(dl, Tmp1.getOperand(0), Tmp1.getOperand(1),
-                             Tmp2, Tmp3,
+      Tmp1 = DAG.getSelectCC(dl, Tmp1.getOperand(0), Tmp1.getOperand(1), Tmp2,
+                             Tmp3,
                              cast<CondCodeSDNode>(Tmp1.getOperand(2))->get());
     } else {
-      Tmp1 = DAG.getSelectCC(dl, Tmp1,
-                             DAG.getConstant(0, dl, Tmp1.getValueType()),
-                             Tmp2, Tmp3, ISD::SETNE);
+      Tmp1 =
+          DAG.getSelectCC(dl, Tmp1, DAG.getConstant(0, dl, Tmp1.getValueType()),
+                          Tmp2, Tmp3, ISD::SETNE);
     }
     Tmp1->setFlags(Node->getFlags());
     Results.push_back(Tmp1);
@@ -3910,7 +3945,7 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     EVT PTy = TLI.getPointerTy(TD);
 
     unsigned EntrySize =
-      DAG.getMachineFunction().getJumpTableInfo()->getEntrySize(TD);
+        DAG.getMachineFunction().getJumpTableInfo()->getEntrySize(TD);
 
     // For power-of-two jumptable entry sizes convert multiplication to a shift.
     // This transformation needs to be done here since otherwise the MIPS
@@ -3923,8 +3958,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     else
       Index = DAG.getNode(ISD::MUL, dl, Index.getValueType(), Index,
                           DAG.getConstant(EntrySize, dl, Index.getValueType()));
-    SDValue Addr = DAG.getNode(ISD::ADD, dl, Index.getValueType(),
-                               Index, Table);
+    SDValue Addr =
+        DAG.getNode(ISD::ADD, dl, Index.getValueType(), Index, Table);
 
     EVT MemVT = EVT::getIntegerVT(*DAG.getContext(), EntrySize * 8);
     SDValue LD = DAG.getExtLoad(
@@ -3936,7 +3971,7 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       // BRIND(load(Jumptable + index) + RelocBase)
       // RelocBase can be JumpTable, GOT or some sort of global base.
       Addr = DAG.getNode(ISD::ADD, dl, PTy, Addr,
-                          TLI.getPICJumpTableRelocBase(Table, DAG));
+                         TLI.getPICJumpTableRelocBase(Table, DAG));
     }
 
     Tmp1 = TLI.expandIndirectJTBranch(dl, LD.getValue(1), Addr, JTI, DAG);
@@ -3962,10 +3997,9 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       else
         Tmp3 = DAG.getNode(ISD::AND, dl, Tmp2.getValueType(), Tmp2,
                            DAG.getConstant(1, dl, Tmp2.getValueType()));
-      Tmp1 = DAG.getNode(ISD::BR_CC, dl, MVT::Other, Tmp1,
-                         DAG.getCondCode(ISD::SETNE), Tmp3,
-                         DAG.getConstant(0, dl, Tmp3.getValueType()),
-                         Node->getOperand(2));
+      Tmp1 = DAG.getNode(
+          ISD::BR_CC, dl, MVT::Other, Tmp1, DAG.getCondCode(ISD::SETNE), Tmp3,
+          DAG.getConstant(0, dl, Tmp3.getValueType()), Node->getOperand(2));
     }
     Results.push_back(Tmp1);
     break;
@@ -4043,10 +4077,10 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
   }
   case ISD::SELECT_CC: {
     // TODO: need to add STRICT_SELECT_CC and STRICT_SELECT_CCS
-    Tmp1 = Node->getOperand(0);   // LHS
-    Tmp2 = Node->getOperand(1);   // RHS
-    Tmp3 = Node->getOperand(2);   // True
-    Tmp4 = Node->getOperand(3);   // False
+    Tmp1 = Node->getOperand(0); // LHS
+    Tmp2 = Node->getOperand(1); // RHS
+    Tmp3 = Node->getOperand(2); // True
+    Tmp4 = Node->getOperand(3); // False
     EVT VT = Node->getValueType(0);
     SDValue Chain;
     SDValue CC = Node->getOperand(4);
@@ -4060,7 +4094,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
              "Cannot expand ISD::SELECT_CC when ISD::SELECT also needs to be "
              "expanded.");
       EVT CCVT = getSetCCResultType(CmpVT);
-      SDValue Cond = DAG.getNode(ISD::SETCC, dl, CCVT, Tmp1, Tmp2, CC, Node->getFlags());
+      SDValue Cond =
+          DAG.getNode(ISD::SETCC, dl, CCVT, Tmp1, Tmp2, CC, Node->getFlags());
       Results.push_back(DAG.getSelect(dl, VT, Cond, Tmp3, Tmp4));
       break;
     }
@@ -4104,8 +4139,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       // If we expanded the SETCC by swapping LHS and RHS, or by inverting the
       // condition code, create a new SELECT_CC node.
       if (CC.getNode()) {
-        Tmp1 = DAG.getNode(ISD::SELECT_CC, dl, Node->getValueType(0),
-                           Tmp1, Tmp2, Tmp3, Tmp4, CC);
+        Tmp1 = DAG.getNode(ISD::SELECT_CC, dl, Node->getValueType(0), Tmp1,
+                           Tmp2, Tmp3, Tmp4, CC);
       } else {
         Tmp2 = DAG.getConstant(0, dl, Tmp1.getValueType());
         CC = DAG.getCondCode(ISD::SETNE);
@@ -4120,10 +4155,10 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
   case ISD::BR_CC: {
     // TODO: need to add STRICT_BR_CC and STRICT_BR_CCS
     SDValue Chain;
-    Tmp1 = Node->getOperand(0);              // Chain
-    Tmp2 = Node->getOperand(2);              // LHS
-    Tmp3 = Node->getOperand(3);              // RHS
-    Tmp4 = Node->getOperand(1);              // CC
+    Tmp1 = Node->getOperand(0); // Chain
+    Tmp2 = Node->getOperand(2); // LHS
+    Tmp3 = Node->getOperand(3); // RHS
+    Tmp4 = Node->getOperand(1); // CC
 
     bool Legalized = TLI.LegalizeSetCCCondCode(
         DAG, getSetCCResultType(Tmp2.getValueType()), Tmp2, Tmp3, Tmp4,
@@ -4136,8 +4171,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     if (Tmp4.getNode()) {
       assert(!NeedInvert && "Don't know how to invert BR_CC!");
 
-      Tmp1 = DAG.getNode(ISD::BR_CC, dl, Node->getValueType(0), Tmp1,
-                         Tmp4, Tmp2, Tmp3, Node->getOperand(4));
+      Tmp1 = DAG.getNode(ISD::BR_CC, dl, Node->getValueType(0), Tmp1, Tmp4,
+                         Tmp2, Tmp3, Node->getOperand(4));
     } else {
       Tmp3 = DAG.getConstant(0, dl, Tmp2.getValueType());
       Tmp4 = DAG.getCondCode(NeedInvert ? ISD::SETEQ : ISD::SETNE);
@@ -4159,7 +4194,7 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     // Scalarize vector SRA/SRL/SHL.
     EVT VT = Node->getValueType(0);
     assert(VT.isVector() && "Unable to legalize non-vector shift");
-    assert(TLI.isTypeLegal(VT.getScalarType())&& "Element type must be legal");
+    assert(TLI.isTypeLegal(VT.getScalarType()) && "Element type must be legal");
     unsigned NumElem = VT.getVectorNumElements();
 
     SmallVector<SDValue, 8> Scalars;
@@ -4170,8 +4205,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       SDValue Sh =
           DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, VT.getScalarType(),
                       Node->getOperand(1), DAG.getVectorIdxConstant(Idx, dl));
-      Scalars.push_back(DAG.getNode(Node->getOpcode(), dl,
-                                    VT.getScalarType(), Ex, Sh));
+      Scalars.push_back(
+          DAG.getNode(Node->getOpcode(), dl, VT.getScalarType(), Ex, Sh));
     }
 
     SDValue Result = DAG.getBuildVector(Node->getValueType(0), dl, Scalars);
@@ -4219,8 +4254,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
     switch (Node->getOpcode()) {
     default:
       if (TLI.getStrictFPOperationAction(Node->getOpcode(),
-                                         Node->getValueType(0))
-          == TargetLowering::Legal)
+                                         Node->getValueType(0)) ==
+          TargetLowering::Legal)
         return true;
       break;
     case ISD::STRICT_FSUB: {
@@ -4234,9 +4269,9 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       EVT VT = Node->getValueType(0);
       const SDNodeFlags Flags = Node->getFlags();
       SDValue Neg = DAG.getNode(ISD::FNEG, dl, VT, Node->getOperand(2), Flags);
-      SDValue Fadd = DAG.getNode(ISD::STRICT_FADD, dl, Node->getVTList(),
-                                 {Node->getOperand(0), Node->getOperand(1), Neg},
-                         Flags);
+      SDValue Fadd =
+          DAG.getNode(ISD::STRICT_FADD, dl, Node->getVTList(),
+                      {Node->getOperand(0), Node->getOperand(1), Neg}, Flags);
 
       Results.push_back(Fadd);
       Results.push_back(Fadd.getValue(1));
@@ -4251,8 +4286,8 @@ bool SelectionDAGLegalize::ExpandNode(SDNode *Node) {
       // These are registered by the operand type instead of the value
       // type. Reflect that here.
       if (TLI.getStrictFPOperationAction(Node->getOpcode(),
-                                         Node->getOperand(1).getValueType())
-          == TargetLowering::Legal)
+                                         Node->getOperand(1).getValueType()) ==
+          TargetLowering::Legal)
         return true;
       break;
     }
@@ -4329,10 +4364,8 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
       // Arguments for expansion to sync libcall
       Ops.append(Node->op_begin() + 1, Node->op_end());
     }
-    std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, RetVT,
-                                                      Ops, CallOptions,
-                                                      SDLoc(Node),
-                                                      Node->getOperand(0));
+    std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+        DAG, LC, RetVT, Ops, CallOptions, SDLoc(Node), Node->getOperand(0));
     Results.push_back(Tmp.first);
     Results.push_back(Tmp.second);
     break;
@@ -4354,41 +4387,35 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
   }
   case ISD::FMINNUM:
   case ISD::STRICT_FMINNUM:
-    ExpandFPLibCall(Node, RTLIB::FMIN_F32, RTLIB::FMIN_F64,
-                    RTLIB::FMIN_F80, RTLIB::FMIN_F128,
-                    RTLIB::FMIN_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::FMIN_F32, RTLIB::FMIN_F64, RTLIB::FMIN_F80,
+                    RTLIB::FMIN_F128, RTLIB::FMIN_PPCF128, Results);
     break;
   // FIXME: We do not have libcalls for FMAXIMUM and FMINIMUM. So, we cannot use
  // libcall legalization for these nodes, but there is no default expansion for
   // these nodes either (see PR63267 for example).
   case ISD::FMAXNUM:
   case ISD::STRICT_FMAXNUM:
-    ExpandFPLibCall(Node, RTLIB::FMAX_F32, RTLIB::FMAX_F64,
-                    RTLIB::FMAX_F80, RTLIB::FMAX_F128,
-                    RTLIB::FMAX_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::FMAX_F32, RTLIB::FMAX_F64, RTLIB::FMAX_F80,
+                    RTLIB::FMAX_F128, RTLIB::FMAX_PPCF128, Results);
     break;
   case ISD::FSQRT:
   case ISD::STRICT_FSQRT:
-    ExpandFPLibCall(Node, RTLIB::SQRT_F32, RTLIB::SQRT_F64,
-                    RTLIB::SQRT_F80, RTLIB::SQRT_F128,
-                    RTLIB::SQRT_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::SQRT_F32, RTLIB::SQRT_F64, RTLIB::SQRT_F80,
+                    RTLIB::SQRT_F128, RTLIB::SQRT_PPCF128, Results);
     break;
   case ISD::FCBRT:
-    ExpandFPLibCall(Node, RTLIB::CBRT_F32, RTLIB::CBRT_F64,
-                    RTLIB::CBRT_F80, RTLIB::CBRT_F128,
-                    RTLIB::CBRT_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::CBRT_F32, RTLIB::CBRT_F64, RTLIB::CBRT_F80,
+                    RTLIB::CBRT_F128, RTLIB::CBRT_PPCF128, Results);
     break;
   case ISD::FSIN:
   case ISD::STRICT_FSIN:
-    ExpandFPLibCall(Node, RTLIB::SIN_F32, RTLIB::SIN_F64,
-                    RTLIB::SIN_F80, RTLIB::SIN_F128,
-                    RTLIB::SIN_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::SIN_F32, RTLIB::SIN_F64, RTLIB::SIN_F80,
+                    RTLIB::SIN_F128, RTLIB::SIN_PPCF128, Results);
     break;
   case ISD::FCOS:
   case ISD::STRICT_FCOS:
-    ExpandFPLibCall(Node, RTLIB::COS_F32, RTLIB::COS_F64,
-                    RTLIB::COS_F80, RTLIB::COS_F128,
-                    RTLIB::COS_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::COS_F32, RTLIB::COS_F64, RTLIB::COS_F80,
+                    RTLIB::COS_F128, RTLIB::COS_PPCF128, Results);
     break;
   case ISD::FSINCOS:
     // Expand into sincos libcall.
@@ -4425,50 +4452,39 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
     break;
   case ISD::FTRUNC:
   case ISD::STRICT_FTRUNC:
-    ExpandFPLibCall(Node, RTLIB::TRUNC_F32, RTLIB::TRUNC_F64,
-                    RTLIB::TRUNC_F80, RTLIB::TRUNC_F128,
-                    RTLIB::TRUNC_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::TRUNC_F32, RTLIB::TRUNC_F64, RTLIB::TRUNC_F80,
+                    RTLIB::TRUNC_F128, RTLIB::TRUNC_PPCF128, Results);
     break;
   case ISD::FFLOOR:
   case ISD::STRICT_FFLOOR:
-    ExpandFPLibCall(Node, RTLIB::FLOOR_F32, RTLIB::FLOOR_F64,
-                    RTLIB::FLOOR_F80, RTLIB::FLOOR_F128,
-                    RTLIB::FLOOR_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::FLOOR_F32, RTLIB::FLOOR_F64, RTLIB::FLOOR_F80,
+                    RTLIB::FLOOR_F128, RTLIB::FLOOR_PPCF128, Results);
     break;
   case ISD::FCEIL:
   case ISD::STRICT_FCEIL:
-    ExpandFPLibCall(Node, RTLIB::CEIL_F32, RTLIB::CEIL_F64,
-                    RTLIB::CEIL_F80, RTLIB::CEIL_F128,
-                    RTLIB::CEIL_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::CEIL_F32, RTLIB::CEIL_F64, RTLIB::CEIL_F80,
+                    RTLIB::CEIL_F128, RTLIB::CEIL_PPCF128, Results);
     break;
   case ISD::FRINT:
   case ISD::STRICT_FRINT:
-    ExpandFPLibCall(Node, RTLIB::RINT_F32, RTLIB::RINT_F64,
-                    RTLIB::RINT_F80, RTLIB::RINT_F128,
-                    RTLIB::RINT_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::RINT_F32, RTLIB::RINT_F64, RTLIB::RINT_F80,
+                    RTLIB::RINT_F128, RTLIB::RINT_PPCF128, Results);
     break;
   case ISD::FNEARBYINT:
   case ISD::STRICT_FNEARBYINT:
-    ExpandFPLibCall(Node, RTLIB::NEARBYINT_F32,
-                    RTLIB::NEARBYINT_F64,
-                    RTLIB::NEARBYINT_F80,
-                    RTLIB::NEARBYINT_F128,
+    ExpandFPLibCall(Node, RTLIB::NEARBYINT_F32, RTLIB::NEARBYINT_F64,
+                    RTLIB::NEARBYINT_F80, RTLIB::NEARBYINT_F128,
                     RTLIB::NEARBYINT_PPCF128, Results);
     break;
   case ISD::FROUND:
   case ISD::STRICT_FROUND:
-    ExpandFPLibCall(Node, RTLIB::ROUND_F32,
-                    RTLIB::ROUND_F64,
-                    RTLIB::ROUND_F80,
-                    RTLIB::ROUND_F128,
-                    RTLIB::ROUND_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::ROUND_F32, RTLIB::ROUND_F64, RTLIB::ROUND_F80,
+                    RTLIB::ROUND_F128, RTLIB::ROUND_PPCF128, Results);
     break;
   case ISD::FROUNDEVEN:
   case ISD::STRICT_FROUNDEVEN:
-    ExpandFPLibCall(Node, RTLIB::ROUNDEVEN_F32,
-                    RTLIB::ROUNDEVEN_F64,
-                    RTLIB::ROUNDEVEN_F80,
-                    RTLIB::ROUNDEVEN_F128,
+    ExpandFPLibCall(Node, RTLIB::ROUNDEVEN_F32, RTLIB::ROUNDEVEN_F64,
+                    RTLIB::ROUNDEVEN_F80, RTLIB::ROUNDEVEN_F128,
                     RTLIB::ROUNDEVEN_PPCF128, Results);
     break;
   case ISD::FLDEXP:
@@ -4528,61 +4544,52 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
     break;
   case ISD::LROUND:
   case ISD::STRICT_LROUND:
-    ExpandArgFPLibCall(Node, RTLIB::LROUND_F32,
-                       RTLIB::LROUND_F64, RTLIB::LROUND_F80,
-                       RTLIB::LROUND_F128,
+    ExpandArgFPLibCall(Node, RTLIB::LROUND_F32, RTLIB::LROUND_F64,
+                       RTLIB::LROUND_F80, RTLIB::LROUND_F128,
                        RTLIB::LROUND_PPCF128, Results);
     break;
   case ISD::LLROUND:
   case ISD::STRICT_LLROUND:
-    ExpandArgFPLibCall(Node, RTLIB::LLROUND_F32,
-                       RTLIB::LLROUND_F64, RTLIB::LLROUND_F80,
-                       RTLIB::LLROUND_F128,
+    ExpandArgFPLibCall(Node, RTLIB::LLROUND_F32, RTLIB::LLROUND_F64,
+                       RTLIB::LLROUND_F80, RTLIB::LLROUND_F128,
                        RTLIB::LLROUND_PPCF128, Results);
     break;
   case ISD::LRINT:
   case ISD::STRICT_LRINT:
-    ExpandArgFPLibCall(Node, RTLIB::LRINT_F32,
-                       RTLIB::LRINT_F64, RTLIB::LRINT_F80,
-                       RTLIB::LRINT_F128,
+    ExpandArgFPLibCall(Node, RTLIB::LRINT_F32, RTLIB::LRINT_F64,
+                       RTLIB::LRINT_F80, RTLIB::LRINT_F128,
                        RTLIB::LRINT_PPCF128, Results);
     break;
   case ISD::LLRINT:
   case ISD::STRICT_LLRINT:
-    ExpandArgFPLibCall(Node, RTLIB::LLRINT_F32,
-                       RTLIB::LLRINT_F64, RTLIB::LLRINT_F80,
-                       RTLIB::LLRINT_F128,
+    ExpandArgFPLibCall(Node, RTLIB::LLRINT_F32, RTLIB::LLRINT_F64,
+                       RTLIB::LLRINT_F80, RTLIB::LLRINT_F128,
                        RTLIB::LLRINT_PPCF128, Results);
     break;
   case ISD::FDIV:
   case ISD::STRICT_FDIV:
-    ExpandFPLibCall(Node, RTLIB::DIV_F32, RTLIB::DIV_F64,
-                    RTLIB::DIV_F80, RTLIB::DIV_F128,
-                    RTLIB::DIV_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::DIV_F32, RTLIB::DIV_F64, RTLIB::DIV_F80,
+                    RTLIB::DIV_F128, RTLIB::DIV_PPCF128, Results);
     break;
   case ISD::FREM:
   case ISD::STRICT_FREM:
-    ExpandFPLibCall(Node, RTLIB::REM_F32, RTLIB::REM_F64,
-                    RTLIB::REM_F80, RTLIB::REM_F128,
-                    RTLIB::REM_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::REM_F32, RTLIB::REM_F64, RTLIB::REM_F80,
+                    RTLIB::REM_F128, RTLIB::REM_PPCF128, Results);
     break;
   case ISD::FMA:
   case ISD::STRICT_FMA:
-    ExpandFPLibCall(Node, RTLIB::FMA_F32, RTLIB::FMA_F64,
-                    RTLIB::FMA_F80, RTLIB::FMA_F128,
-                    RTLIB::FMA_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::FMA_F32, RTLIB::FMA_F64, RTLIB::FMA_F80,
+                    RTLIB::FMA_F128, RTLIB::FMA_PPCF128, Results);
     break;
   case ISD::FADD:
   case ISD::STRICT_FADD:
-    ExpandFPLibCall(Node, RTLIB::ADD_F32, RTLIB::ADD_F64,
-                    RTLIB::ADD_F80, RTLIB::ADD_F128,
-                    RTLIB::ADD_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::ADD_F32, RTLIB::ADD_F64, RTLIB::ADD_F80,
+                    RTLIB::ADD_F128, RTLIB::ADD_PPCF128, Results);
     break;
   case ISD::FMUL:
   case ISD::STRICT_FMUL:
-    ExpandFPLibCall(Node, RTLIB::MUL_F32, RTLIB::MUL_F64,
-                    RTLIB::MUL_F80, RTLIB::MUL_F128,
-                    RTLIB::MUL_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::MUL_F32, RTLIB::MUL_F64, RTLIB::MUL_F80,
+                    RTLIB::MUL_F128, RTLIB::MUL_PPCF128, Results);
     break;
   case ISD::FP16_TO_FP:
     if (Node->getValueType(0) == MVT::f32) {
@@ -4726,7 +4733,8 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
     Results.push_back(
         ExpandLibCall(RTLIB::getFPEXT(Node->getOperand(0).getValueType(),
                                       Node->getValueType(0)),
-                      Node, false).first);
+                      Node, false)
+            .first);
     break;
   }
   case ISD::STRICT_FP_EXTEND:
@@ -4748,31 +4756,26 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
   }
   case ISD::FSUB:
   case ISD::STRICT_FSUB:
-    ExpandFPLibCall(Node, RTLIB::SUB_F32, RTLIB::SUB_F64,
-                    RTLIB::SUB_F80, RTLIB::SUB_F128,
-                    RTLIB::SUB_PPCF128, Results);
+    ExpandFPLibCall(Node, RTLIB::SUB_F32, RTLIB::SUB_F64, RTLIB::SUB_F80,
+                    RTLIB::SUB_F128, RTLIB::SUB_PPCF128, Results);
     break;
   case ISD::SREM:
-    Results.push_back(ExpandIntLibCall(Node, true,
-                                       RTLIB::SREM_I8,
+    Results.push_back(ExpandIntLibCall(Node, true, RTLIB::SREM_I8,
                                        RTLIB::SREM_I16, RTLIB::SREM_I32,
                                        RTLIB::SREM_I64, RTLIB::SREM_I128));
     break;
   case ISD::UREM:
-    Results.push_back(ExpandIntLibCall(Node, false,
-                                       RTLIB::UREM_I8,
+    Results.push_back(ExpandIntLibCall(Node, false, RTLIB::UREM_I8,
                                        RTLIB::UREM_I16, RTLIB::UREM_I32,
                                        RTLIB::UREM_I64, RTLIB::UREM_I128));
     break;
   case ISD::SDIV:
-    Results.push_back(ExpandIntLibCall(Node, true,
-                                       RTLIB::SDIV_I8,
+    Results.push_back(ExpandIntLibCall(Node, true, RTLIB::SDIV_I8,
                                        RTLIB::SDIV_I16, RTLIB::SDIV_I32,
                                        RTLIB::SDIV_I64, RTLIB::SDIV_I128));
     break;
   case ISD::UDIV:
-    Results.push_back(ExpandIntLibCall(Node, false,
-                                       RTLIB::UDIV_I8,
+    Results.push_back(ExpandIntLibCall(Node, false, RTLIB::UDIV_I8,
                                        RTLIB::UDIV_I16, RTLIB::UDIV_I32,
                                        RTLIB::UDIV_I64, RTLIB::UDIV_I128));
     break;
@@ -4782,8 +4785,7 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
     ExpandDivRemLibCall(Node, Results);
     break;
   case ISD::MUL:
-    Results.push_back(ExpandIntLibCall(Node, false,
-                                       RTLIB::MUL_I8,
+    Results.push_back(ExpandIntLibCall(Node, false, RTLIB::MUL_I8,
                                        RTLIB::MUL_I16, RTLIB::MUL_I32,
                                        RTLIB::MUL_I64, RTLIB::MUL_I128));
     break;
@@ -4877,8 +4879,8 @@ void SelectionDAGLegalize::ConvertNodeToLibcall(SDNode *Node) {
 
 // Determine the vector type to use in place of an original scalar element when
 // promoting equally sized vectors.
-static MVT getPromotedVectorElementType(const TargetLowering &TLI,
-                                        MVT EltVT, MVT NewEltVT) {
+static MVT getPromotedVectorElementType(const TargetLowering &TLI, MVT EltVT,
+                                        MVT NewEltVT) {
   unsigned OldEltsPerNewElt = EltVT.getSizeInBits() / NewEltVT.getSizeInBits();
   MVT MidVT = MVT::getVectorVT(NewEltVT, OldEltsPerNewElt);
   assert(TLI.isTypeLegal(MidVT) && "unexpected");
@@ -4890,8 +4892,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
   SmallVector<SDValue, 8> Results;
   MVT OVT = Node->getSimpleValueType(0);
   if (Node->getOpcode() == ISD::UINT_TO_FP ||
-      Node->getOpcode() == ISD::SINT_TO_FP ||
-      Node->getOpcode() == ISD::SETCC ||
+      Node->getOpcode() == ISD::SINT_TO_FP || Node->getOpcode() == ISD::SETCC ||
       Node->getOpcode() == ISD::EXTRACT_VECTOR_ELT ||
       Node->getOpcode() == ISD::INSERT_VECTOR_ELT) {
     OVT = Node->getOperand(0).getSimpleValueType();
@@ -4901,8 +4902,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
       Node->getOpcode() == ISD::STRICT_FSETCC ||
       Node->getOpcode() == ISD::STRICT_FSETCCS)
     OVT = Node->getOperand(1).getSimpleValueType();
-  if (Node->getOpcode() == ISD::BR_CC ||
-      Node->getOpcode() == ISD::SELECT_CC)
+  if (Node->getOpcode() == ISD::BR_CC || Node->getOpcode() == ISD::SELECT_CC)
     OVT = Node->getOperand(2).getSimpleValueType();
   MVT NVT = TLI.getTypeToPromoteTo(Node->getOpcode(), OVT);
   SDLoc dl(Node);
@@ -4924,10 +4924,10 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
       // The count is the same in the promoted type except if the original
       // value was zero.  This can be handled by setting the bit just off
       // the top of the original type.
-      auto TopBit = APInt::getOneBitSet(NVT.getSizeInBits(),
-                                        OVT.getSizeInBits());
-      Tmp1 = DAG.getNode(ISD::OR, dl, NVT, Tmp1,
-                         DAG.getConstant(TopBit, dl, NVT));
+      auto TopBit =
+          APInt::getOneBitSet(NVT.getSizeInBits(), OVT.getSizeInBits());
+      Tmp1 =
+          DAG.getNode(ISD::OR, dl, NVT, Tmp1, DAG.getConstant(TopBit, dl, NVT));
     }
     // Perform the larger operation. For CTPOP and CTTZ_ZERO_UNDEF, this is
     // already the correct result.
@@ -4935,9 +4935,9 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
     if (Node->getOpcode() == ISD::CTLZ ||
         Node->getOpcode() == ISD::CTLZ_ZERO_UNDEF) {
       // Tmp1 = Tmp1 - (sizeinbits(NVT) - sizeinbits(Old VT))
-      Tmp1 = DAG.getNode(ISD::SUB, dl, NVT, Tmp1,
-                          DAG.getConstant(NVT.getSizeInBits() -
-                                          OVT.getSizeInBits(), dl, NVT));
+      Tmp1 = DAG.getNode(
+          ISD::SUB, dl, NVT, Tmp1,
+          DAG.getConstant(NVT.getSizeInBits() - OVT.getSizeInBits(), dl, NVT));
     }
     Results.push_back(DAG.getNode(ISD::TRUNCATE, dl, OVT, Tmp1));
     break;
@@ -4972,20 +4972,20 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
     break;
   case ISD::VAARG: {
     SDValue Chain = Node->getOperand(0); // Get the chain.
-    SDValue Ptr = Node->getOperand(1); // Get the pointer.
+    SDValue Ptr = Node->getOperand(1);   // Get the pointer.
 
     unsigned TruncOp;
     if (OVT.isVector()) {
       TruncOp = ISD::BITCAST;
     } else {
-      assert(OVT.isInteger()
-        && "VAARG promotion is supported only for vectors or integer types");
+      assert(OVT.isInteger() &&
+             "VAARG promotion is supported only for vectors or integer types");
       TruncOp = ISD::TRUNCATE;
     }
 
     // Perform the larger operation, then convert back
     Tmp1 = DAG.getVAArg(NVT, dl, Chain, Ptr, Node->getOperand(2),
-             Node->getConstantOperandVal(3));
+                        Node->getConstantOperandVal(3));
     Chain = Tmp1.getValue(1);
 
     Tmp2 = DAG.getNode(TruncOp, dl, OVT, Tmp1);
@@ -5011,7 +5011,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
   case ISD::XOR: {
     unsigned ExtOp, TruncOp;
     if (OVT.isVector()) {
-      ExtOp   = ISD::BITCAST;
+      ExtOp = ISD::BITCAST;
       TruncOp = ISD::BITCAST;
     } else {
       assert(OVT.isInteger() && "Cannot promote logic operation");
@@ -5061,13 +5061,13 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
     unsigned ExtOp, TruncOp;
     if (Node->getValueType(0).isVector() ||
         Node->getValueType(0).getSizeInBits() == NVT.getSizeInBits()) {
-      ExtOp   = ISD::BITCAST;
+      ExtOp = ISD::BITCAST;
       TruncOp = ISD::BITCAST;
     } else if (Node->getValueType(0).isInteger()) {
-      ExtOp   = ISD::ANY_EXTEND;
+      ExtOp = ISD::ANY_EXTEND;
       TruncOp = ISD::TRUNCATE;
     } else {
-      ExtOp   = ISD::FP_EXTEND;
+      ExtOp = ISD::FP_EXTEND;
       TruncOp = ISD::FP_ROUND;
     }
     Tmp1 = Node->getOperand(0);
@@ -5175,8 +5175,7 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
   case ISD::BR_CC: {
     unsigned ExtOp = ISD::FP_EXTEND;
     if (NVT.isInteger()) {
-      ISD::CondCode CCCode =
-        cast<CondCodeSDNode>(Node->getOperand(1))->get();
+      ISD::CondCode CCCode = cast<CondCodeSDNode>(Node->getOperand(1))->get();
       ExtOp = isSignedIntSetCC(CCCode) ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
     }
     Tmp1 = DAG.getNode(ExtOp, dl, NVT, Node->getOperand(2));
@@ -5198,8 +5197,8 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
   case ISD::FPOW:
     Tmp1 = DAG.getNode(ISD::FP_EXTEND, dl, NVT, Node->getOperand(0));
     Tmp2 = DAG.getNode(ISD::FP_EXTEND, dl, NVT, Node->getOperand(1));
-    Tmp3 = DAG.getNode(Node->getOpcode(), dl, NVT, Tmp1, Tmp2,
-                       Node->getFlags());
+    Tmp3 =
+        DAG.getNode(Node->getOpcode(), dl, NVT, Tmp1, Tmp2, Node->getFlags());
     Results.push_back(
         DAG.getNode(ISD::FP_ROUND, dl, OVT, Tmp3,
                     DAG.getIntPtrConstant(0, dl, /*isTarget=*/true)));
@@ -5400,8 +5399,8 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
       SDValue IdxOffset = DAG.getConstant(I, SL, IdxVT);
       SDValue TmpIdx = DAG.getNode(ISD::ADD, SL, IdxVT, NewBaseIdx, IdxOffset);
 
-      SDValue Elt = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL, NewEltVT,
-                                CastVec, TmpIdx);
+      SDValue Elt =
+          DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL, NewEltVT, CastVec, TmpIdx);
       NewOps.push_back(Elt);
     }
 
@@ -5447,13 +5446,14 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
     SDValue NewVec = CastVec;
     for (unsigned I = 0; I < NewEltsPerOldElt; ++I) {
       SDValue IdxOffset = DAG.getConstant(I, SL, IdxVT);
-      SDValue InEltIdx = DAG.getNode(ISD::ADD, SL, IdxVT, NewBaseIdx, IdxOffset);
+      SDValue InEltIdx =
+          DAG.getNode(ISD::ADD, SL, IdxVT, NewBaseIdx, IdxOffset);
 
-      SDValue Elt = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL, NewEltVT,
-                                CastVal, IdxOffset);
+      SDValue Elt = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SL, NewEltVT, CastVal,
+                                IdxOffset);
 
-      NewVec = DAG.getNode(ISD::INSERT_VECTOR_ELT, SL, NVT,
-                           NewVec, Elt, InEltIdx);
+      NewVec =
+          DAG.getNode(ISD::INSERT_VECTOR_ELT, SL, NVT, NewVec, Elt, InEltIdx);
     }
 
     Results.push_back(DAG.getNode(ISD::BITCAST, SL, OVT, NewVec));
@@ -5496,11 +5496,9 @@ void SelectionDAGLegalize::PromoteNode(SDNode *Node) {
     assert(AM->getMemoryVT().getSizeInBits() == NVT.getSizeInBits() &&
            "unexpected atomic_swap with illegal type");
 
-    SDValue NewAtomic
-      = DAG.getAtomic(ISD::ATOMIC_SWAP, SL, NVT,
-                      DAG.getVTList(NVT, MVT::Other),
-                      { AM->getChain(), AM->getBasePtr(), CastVal },
-                      AM->getMemOperand());
+    SDValue NewAtomic = DAG.getAtomic(
+        ISD::ATOMIC_SWAP, SL, NVT, DAG.getVTList(NVT, MVT::Other),
+        {AM->getChain(), AM->getBasePtr(), CastVal}, AM->getMemOperand());
     Results.push_back(DAG.getNode(ISD::BITCAST, SL, OVT, NewAtomic));
     Results.push_back(NewAtomic.getValue(1));
     break;
@@ -5575,7 +5573,6 @@ void SelectionDAG::Legalize() {
     }
     if (!AnyLegalized)
       break;
-
   }
 
   // Remove dead nodes now.
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
index 95f181217803515..164cdc12ceeb249 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeFloatTypes.cpp
@@ -29,19 +29,17 @@ using namespace llvm;
 /// GetFPLibCall - Return the right libcall for the given floating point type.
 /// FIXME: This is a local version of RTLIB::getFPLibCall that should be
 ///        refactored away (see RTLIB::getPOWI for an example).
-static RTLIB::Libcall GetFPLibCall(EVT VT,
-                                   RTLIB::Libcall Call_F32,
+static RTLIB::Libcall GetFPLibCall(EVT VT, RTLIB::Libcall Call_F32,
                                    RTLIB::Libcall Call_F64,
                                    RTLIB::Libcall Call_F80,
                                    RTLIB::Libcall Call_F128,
                                    RTLIB::Libcall Call_PPCF128) {
-  return
-    VT == MVT::f32 ? Call_F32 :
-    VT == MVT::f64 ? Call_F64 :
-    VT == MVT::f80 ? Call_F80 :
-    VT == MVT::f128 ? Call_F128 :
-    VT == MVT::ppcf128 ? Call_PPCF128 :
-    RTLIB::UNKNOWN_LIBCALL;
+  return VT == MVT::f32       ? Call_F32
+         : VT == MVT::f64     ? Call_F64
+         : VT == MVT::f80     ? Call_F80
+         : VT == MVT::f128    ? Call_F128
+         : VT == MVT::ppcf128 ? Call_PPCF128
+                              : RTLIB::UNKNOWN_LIBCALL;
 }
 
 //===----------------------------------------------------------------------===//
@@ -57,107 +55,203 @@ void DAGTypeLegalizer::SoftenFloatResult(SDNode *N, unsigned ResNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "SoftenFloatResult #" << ResNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to soften the result of this "
                        "operator!");
 
-    case ISD::ARITH_FENCE: R = SoftenFloatRes_ARITH_FENCE(N); break;
-    case ISD::MERGE_VALUES:R = SoftenFloatRes_MERGE_VALUES(N, ResNo); break;
-    case ISD::BITCAST:     R = SoftenFloatRes_BITCAST(N); break;
-    case ISD::BUILD_PAIR:  R = SoftenFloatRes_BUILD_PAIR(N); break;
-    case ISD::ConstantFP:  R = SoftenFloatRes_ConstantFP(N); break;
-    case ISD::EXTRACT_VECTOR_ELT:
-      R = SoftenFloatRes_EXTRACT_VECTOR_ELT(N, ResNo); break;
-    case ISD::FABS:        R = SoftenFloatRes_FABS(N); break;
-    case ISD::STRICT_FMINNUM:
-    case ISD::FMINNUM:     R = SoftenFloatRes_FMINNUM(N); break;
-    case ISD::STRICT_FMAXNUM:
-    case ISD::FMAXNUM:     R = SoftenFloatRes_FMAXNUM(N); break;
-    case ISD::STRICT_FADD:
-    case ISD::FADD:        R = SoftenFloatRes_FADD(N); break;
-    case ISD::FCBRT:       R = SoftenFloatRes_FCBRT(N); break;
-    case ISD::STRICT_FCEIL:
-    case ISD::FCEIL:       R = SoftenFloatRes_FCEIL(N); break;
-    case ISD::FCOPYSIGN:   R = SoftenFloatRes_FCOPYSIGN(N); break;
-    case ISD::STRICT_FCOS:
-    case ISD::FCOS:        R = SoftenFloatRes_FCOS(N); break;
-    case ISD::STRICT_FDIV:
-    case ISD::FDIV:        R = SoftenFloatRes_FDIV(N); break;
-    case ISD::STRICT_FEXP:
-    case ISD::FEXP:        R = SoftenFloatRes_FEXP(N); break;
-    case ISD::STRICT_FEXP2:
-    case ISD::FEXP2:       R = SoftenFloatRes_FEXP2(N); break;
-    case ISD::FEXP10:      R = SoftenFloatRes_FEXP10(N); break;
-    case ISD::STRICT_FFLOOR:
-    case ISD::FFLOOR:      R = SoftenFloatRes_FFLOOR(N); break;
-    case ISD::STRICT_FLOG:
-    case ISD::FLOG:        R = SoftenFloatRes_FLOG(N); break;
-    case ISD::STRICT_FLOG2:
-    case ISD::FLOG2:       R = SoftenFloatRes_FLOG2(N); break;
-    case ISD::STRICT_FLOG10:
-    case ISD::FLOG10:      R = SoftenFloatRes_FLOG10(N); break;
-    case ISD::STRICT_FMA:
-    case ISD::FMA:         R = SoftenFloatRes_FMA(N); break;
-    case ISD::STRICT_FMUL:
-    case ISD::FMUL:        R = SoftenFloatRes_FMUL(N); break;
-    case ISD::STRICT_FNEARBYINT:
-    case ISD::FNEARBYINT:  R = SoftenFloatRes_FNEARBYINT(N); break;
-    case ISD::FNEG:        R = SoftenFloatRes_FNEG(N); break;
-    case ISD::STRICT_FP_EXTEND:
-    case ISD::FP_EXTEND:   R = SoftenFloatRes_FP_EXTEND(N); break;
-    case ISD::STRICT_FP_ROUND:
-    case ISD::FP_ROUND:    R = SoftenFloatRes_FP_ROUND(N); break;
-    case ISD::FP16_TO_FP:  R = SoftenFloatRes_FP16_TO_FP(N); break;
-    case ISD::BF16_TO_FP:  R = SoftenFloatRes_BF16_TO_FP(N); break;
-    case ISD::STRICT_FPOW:
-    case ISD::FPOW:        R = SoftenFloatRes_FPOW(N); break;
-    case ISD::STRICT_FPOWI:
-    case ISD::FPOWI:
-    case ISD::FLDEXP:
-    case ISD::STRICT_FLDEXP: R = SoftenFloatRes_ExpOp(N); break;
-    case ISD::FFREXP:
-      R = SoftenFloatRes_FFREXP(N);
-      break;
-    case ISD::STRICT_FREM:
-    case ISD::FREM:        R = SoftenFloatRes_FREM(N); break;
-    case ISD::STRICT_FRINT:
-    case ISD::FRINT:       R = SoftenFloatRes_FRINT(N); break;
-    case ISD::STRICT_FROUND:
-    case ISD::FROUND:      R = SoftenFloatRes_FROUND(N); break;
-    case ISD::STRICT_FROUNDEVEN:
-    case ISD::FROUNDEVEN:  R = SoftenFloatRes_FROUNDEVEN(N); break;
-    case ISD::STRICT_FSIN:
-    case ISD::FSIN:        R = SoftenFloatRes_FSIN(N); break;
-    case ISD::STRICT_FSQRT:
-    case ISD::FSQRT:       R = SoftenFloatRes_FSQRT(N); break;
-    case ISD::STRICT_FSUB:
-    case ISD::FSUB:        R = SoftenFloatRes_FSUB(N); break;
-    case ISD::STRICT_FTRUNC:
-    case ISD::FTRUNC:      R = SoftenFloatRes_FTRUNC(N); break;
-    case ISD::LOAD:        R = SoftenFloatRes_LOAD(N); break;
-    case ISD::ATOMIC_SWAP: R = BitcastToInt_ATOMIC_SWAP(N); break;
-    case ISD::SELECT:      R = SoftenFloatRes_SELECT(N); break;
-    case ISD::SELECT_CC:   R = SoftenFloatRes_SELECT_CC(N); break;
-    case ISD::FREEZE:      R = SoftenFloatRes_FREEZE(N); break;
-    case ISD::STRICT_SINT_TO_FP:
-    case ISD::STRICT_UINT_TO_FP:
-    case ISD::SINT_TO_FP:
-    case ISD::UINT_TO_FP:  R = SoftenFloatRes_XINT_TO_FP(N); break;
-    case ISD::UNDEF:       R = SoftenFloatRes_UNDEF(N); break;
-    case ISD::VAARG:       R = SoftenFloatRes_VAARG(N); break;
-    case ISD::VECREDUCE_FADD:
-    case ISD::VECREDUCE_FMUL:
-    case ISD::VECREDUCE_FMIN:
-    case ISD::VECREDUCE_FMAX:
-    case ISD::VECREDUCE_FMAXIMUM:
-    case ISD::VECREDUCE_FMINIMUM:
-      R = SoftenFloatRes_VECREDUCE(N);
-      break;
-    case ISD::VECREDUCE_SEQ_FADD:
-    case ISD::VECREDUCE_SEQ_FMUL:
-      R = SoftenFloatRes_VECREDUCE_SEQ(N);
-      break;
+  case ISD::ARITH_FENCE:
+    R = SoftenFloatRes_ARITH_FENCE(N);
+    break;
+  case ISD::MERGE_VALUES:
+    R = SoftenFloatRes_MERGE_VALUES(N, ResNo);
+    break;
+  case ISD::BITCAST:
+    R = SoftenFloatRes_BITCAST(N);
+    break;
+  case ISD::BUILD_PAIR:
+    R = SoftenFloatRes_BUILD_PAIR(N);
+    break;
+  case ISD::ConstantFP:
+    R = SoftenFloatRes_ConstantFP(N);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    R = SoftenFloatRes_EXTRACT_VECTOR_ELT(N, ResNo);
+    break;
+  case ISD::FABS:
+    R = SoftenFloatRes_FABS(N);
+    break;
+  case ISD::STRICT_FMINNUM:
+  case ISD::FMINNUM:
+    R = SoftenFloatRes_FMINNUM(N);
+    break;
+  case ISD::STRICT_FMAXNUM:
+  case ISD::FMAXNUM:
+    R = SoftenFloatRes_FMAXNUM(N);
+    break;
+  case ISD::STRICT_FADD:
+  case ISD::FADD:
+    R = SoftenFloatRes_FADD(N);
+    break;
+  case ISD::FCBRT:
+    R = SoftenFloatRes_FCBRT(N);
+    break;
+  case ISD::STRICT_FCEIL:
+  case ISD::FCEIL:
+    R = SoftenFloatRes_FCEIL(N);
+    break;
+  case ISD::FCOPYSIGN:
+    R = SoftenFloatRes_FCOPYSIGN(N);
+    break;
+  case ISD::STRICT_FCOS:
+  case ISD::FCOS:
+    R = SoftenFloatRes_FCOS(N);
+    break;
+  case ISD::STRICT_FDIV:
+  case ISD::FDIV:
+    R = SoftenFloatRes_FDIV(N);
+    break;
+  case ISD::STRICT_FEXP:
+  case ISD::FEXP:
+    R = SoftenFloatRes_FEXP(N);
+    break;
+  case ISD::STRICT_FEXP2:
+  case ISD::FEXP2:
+    R = SoftenFloatRes_FEXP2(N);
+    break;
+  case ISD::FEXP10:
+    R = SoftenFloatRes_FEXP10(N);
+    break;
+  case ISD::STRICT_FFLOOR:
+  case ISD::FFLOOR:
+    R = SoftenFloatRes_FFLOOR(N);
+    break;
+  case ISD::STRICT_FLOG:
+  case ISD::FLOG:
+    R = SoftenFloatRes_FLOG(N);
+    break;
+  case ISD::STRICT_FLOG2:
+  case ISD::FLOG2:
+    R = SoftenFloatRes_FLOG2(N);
+    break;
+  case ISD::STRICT_FLOG10:
+  case ISD::FLOG10:
+    R = SoftenFloatRes_FLOG10(N);
+    break;
+  case ISD::STRICT_FMA:
+  case ISD::FMA:
+    R = SoftenFloatRes_FMA(N);
+    break;
+  case ISD::STRICT_FMUL:
+  case ISD::FMUL:
+    R = SoftenFloatRes_FMUL(N);
+    break;
+  case ISD::STRICT_FNEARBYINT:
+  case ISD::FNEARBYINT:
+    R = SoftenFloatRes_FNEARBYINT(N);
+    break;
+  case ISD::FNEG:
+    R = SoftenFloatRes_FNEG(N);
+    break;
+  case ISD::STRICT_FP_EXTEND:
+  case ISD::FP_EXTEND:
+    R = SoftenFloatRes_FP_EXTEND(N);
+    break;
+  case ISD::STRICT_FP_ROUND:
+  case ISD::FP_ROUND:
+    R = SoftenFloatRes_FP_ROUND(N);
+    break;
+  case ISD::FP16_TO_FP:
+    R = SoftenFloatRes_FP16_TO_FP(N);
+    break;
+  case ISD::BF16_TO_FP:
+    R = SoftenFloatRes_BF16_TO_FP(N);
+    break;
+  case ISD::STRICT_FPOW:
+  case ISD::FPOW:
+    R = SoftenFloatRes_FPOW(N);
+    break;
+  case ISD::STRICT_FPOWI:
+  case ISD::FPOWI:
+  case ISD::FLDEXP:
+  case ISD::STRICT_FLDEXP:
+    R = SoftenFloatRes_ExpOp(N);
+    break;
+  case ISD::FFREXP:
+    R = SoftenFloatRes_FFREXP(N);
+    break;
+  case ISD::STRICT_FREM:
+  case ISD::FREM:
+    R = SoftenFloatRes_FREM(N);
+    break;
+  case ISD::STRICT_FRINT:
+  case ISD::FRINT:
+    R = SoftenFloatRes_FRINT(N);
+    break;
+  case ISD::STRICT_FROUND:
+  case ISD::FROUND:
+    R = SoftenFloatRes_FROUND(N);
+    break;
+  case ISD::STRICT_FROUNDEVEN:
+  case ISD::FROUNDEVEN:
+    R = SoftenFloatRes_FROUNDEVEN(N);
+    break;
+  case ISD::STRICT_FSIN:
+  case ISD::FSIN:
+    R = SoftenFloatRes_FSIN(N);
+    break;
+  case ISD::STRICT_FSQRT:
+  case ISD::FSQRT:
+    R = SoftenFloatRes_FSQRT(N);
+    break;
+  case ISD::STRICT_FSUB:
+  case ISD::FSUB:
+    R = SoftenFloatRes_FSUB(N);
+    break;
+  case ISD::STRICT_FTRUNC:
+  case ISD::FTRUNC:
+    R = SoftenFloatRes_FTRUNC(N);
+    break;
+  case ISD::LOAD:
+    R = SoftenFloatRes_LOAD(N);
+    break;
+  case ISD::ATOMIC_SWAP:
+    R = BitcastToInt_ATOMIC_SWAP(N);
+    break;
+  case ISD::SELECT:
+    R = SoftenFloatRes_SELECT(N);
+    break;
+  case ISD::SELECT_CC:
+    R = SoftenFloatRes_SELECT_CC(N);
+    break;
+  case ISD::FREEZE:
+    R = SoftenFloatRes_FREEZE(N);
+    break;
+  case ISD::STRICT_SINT_TO_FP:
+  case ISD::STRICT_UINT_TO_FP:
+  case ISD::SINT_TO_FP:
+  case ISD::UINT_TO_FP:
+    R = SoftenFloatRes_XINT_TO_FP(N);
+    break;
+  case ISD::UNDEF:
+    R = SoftenFloatRes_UNDEF(N);
+    break;
+  case ISD::VAARG:
+    R = SoftenFloatRes_VAARG(N);
+    break;
+  case ISD::VECREDUCE_FADD:
+  case ISD::VECREDUCE_FMUL:
+  case ISD::VECREDUCE_FMIN:
+  case ISD::VECREDUCE_FMAX:
+  case ISD::VECREDUCE_FMAXIMUM:
+  case ISD::VECREDUCE_FMINIMUM:
+    R = SoftenFloatRes_VECREDUCE(N);
+    break;
+  case ISD::VECREDUCE_SEQ_FADD:
+  case ISD::VECREDUCE_SEQ_FMUL:
+    R = SoftenFloatRes_VECREDUCE_SEQ(N);
+    break;
   }
 
   // If R is null, the sub-method took care of registering the result.
@@ -178,9 +272,8 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_Unary(SDNode *N, RTLIB::Libcall LC) {
   TargetLowering::MakeLibCallOptions CallOptions;
   EVT OpVT = N->getOperand(0 + Offset).getValueType();
   CallOptions.setTypeListBeforeSoften(OpVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Op,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
@@ -192,16 +285,15 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_Binary(SDNode *N, RTLIB::Libcall LC) {
   unsigned Offset = IsStrict ? 1 : 0;
   assert(N->getNumOperands() == (2 + Offset) &&
          "Unexpected number of operands!");
-  SDValue Ops[2] = { GetSoftenedFloat(N->getOperand(0 + Offset)),
-                     GetSoftenedFloat(N->getOperand(1 + Offset)) };
+  SDValue Ops[2] = {GetSoftenedFloat(N->getOperand(0 + Offset)),
+                    GetSoftenedFloat(N->getOperand(1 + Offset))};
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  EVT OpsVT[2] = { N->getOperand(0 + Offset).getValueType(),
-                   N->getOperand(1 + Offset).getValueType() };
+  EVT OpsVT[2] = {N->getOperand(0 + Offset).getValueType(),
+                  N->getOperand(1 + Offset).getValueType()};
   CallOptions.setTypeListBeforeSoften(OpsVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Ops,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Ops, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
@@ -232,11 +324,11 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_MERGE_VALUES(SDNode *N,
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_BUILD_PAIR(SDNode *N) {
   // Convert the inputs to integers, and build a new pair out of them.
-  return DAG.getNode(ISD::BUILD_PAIR, SDLoc(N),
-                     TLI.getTypeToTransformTo(*DAG.getContext(),
-                                              N->getValueType(0)),
-                     BitConvertToInteger(N->getOperand(0)),
-                     BitConvertToInteger(N->getOperand(1)));
+  return DAG.getNode(
+      ISD::BUILD_PAIR, SDLoc(N),
+      TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)),
+      BitConvertToInteger(N->getOperand(0)),
+      BitConvertToInteger(N->getOperand(1)));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_ConstantFP(SDNode *N) {
@@ -250,24 +342,25 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_ConstantFP(SDNode *N) {
   // and the low 64 bits here.
   if (DAG.getDataLayout().isBigEndian() &&
       CN->getValueType(0).getSimpleVT() == llvm::MVT::ppcf128) {
-    uint64_t words[2] = { CN->getValueAPF().bitcastToAPInt().getRawData()[1],
-                          CN->getValueAPF().bitcastToAPInt().getRawData()[0] };
+    uint64_t words[2] = {CN->getValueAPF().bitcastToAPInt().getRawData()[1],
+                         CN->getValueAPF().bitcastToAPInt().getRawData()[0]};
     APInt Val(128, words);
-    return DAG.getConstant(Val, SDLoc(CN),
-                           TLI.getTypeToTransformTo(*DAG.getContext(),
-                                                    CN->getValueType(0)));
+    return DAG.getConstant(
+        Val, SDLoc(CN),
+        TLI.getTypeToTransformTo(*DAG.getContext(), CN->getValueType(0)));
   } else {
-    return DAG.getConstant(CN->getValueAPF().bitcastToAPInt(), SDLoc(CN),
-                           TLI.getTypeToTransformTo(*DAG.getContext(),
-                                                    CN->getValueType(0)));
+    return DAG.getConstant(
+        CN->getValueAPF().bitcastToAPInt(), SDLoc(CN),
+        TLI.getTypeToTransformTo(*DAG.getContext(), CN->getValueType(0)));
   }
 }
 
-SDValue DAGTypeLegalizer::SoftenFloatRes_EXTRACT_VECTOR_ELT(SDNode *N, unsigned ResNo) {
+SDValue DAGTypeLegalizer::SoftenFloatRes_EXTRACT_VECTOR_ELT(SDNode *N,
+                                                            unsigned ResNo) {
   SDValue NewOp = BitConvertVectorToIntegerVector(N->getOperand(0));
   return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SDLoc(N),
-                     NewOp.getValueType().getVectorElementType(),
-                     NewOp, N->getOperand(1));
+                     NewOp.getValueType().getVectorElementType(), NewOp,
+                     N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FABS(SDNode *N) {
@@ -285,50 +378,35 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FABS(SDNode *N) {
 SDValue DAGTypeLegalizer::SoftenFloatRes_FMINNUM(SDNode *N) {
   if (SDValue SelCC = TLI.createSelectForFMINNUM_FMAXNUM(N, DAG))
     return SoftenFloatRes_SELECT_CC(SelCC.getNode());
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::FMIN_F32,
-                                               RTLIB::FMIN_F64,
-                                               RTLIB::FMIN_F80,
-                                               RTLIB::FMIN_F128,
-                                               RTLIB::FMIN_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::FMIN_F32, RTLIB::FMIN_F64,
+                      RTLIB::FMIN_F80, RTLIB::FMIN_F128, RTLIB::FMIN_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FMAXNUM(SDNode *N) {
   if (SDValue SelCC = TLI.createSelectForFMINNUM_FMAXNUM(N, DAG))
     return SoftenFloatRes_SELECT_CC(SelCC.getNode());
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::FMAX_F32,
-                                               RTLIB::FMAX_F64,
-                                               RTLIB::FMAX_F80,
-                                               RTLIB::FMAX_F128,
-                                               RTLIB::FMAX_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::FMAX_F32, RTLIB::FMAX_F64,
+                      RTLIB::FMAX_F80, RTLIB::FMAX_F128, RTLIB::FMAX_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FADD(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::ADD_F32,
-                                               RTLIB::ADD_F64,
-                                               RTLIB::ADD_F80,
-                                               RTLIB::ADD_F128,
-                                               RTLIB::ADD_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::ADD_F32, RTLIB::ADD_F64,
+                      RTLIB::ADD_F80, RTLIB::ADD_F128, RTLIB::ADD_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FCBRT(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                           RTLIB::CBRT_F32,
-                                           RTLIB::CBRT_F64,
-                                           RTLIB::CBRT_F80,
-                                           RTLIB::CBRT_F128,
-                                           RTLIB::CBRT_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::CBRT_F32, RTLIB::CBRT_F64,
+                      RTLIB::CBRT_F80, RTLIB::CBRT_F128, RTLIB::CBRT_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FCEIL(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::CEIL_F32,
-                                              RTLIB::CEIL_F64,
-                                              RTLIB::CEIL_F80,
-                                              RTLIB::CEIL_F128,
-                                              RTLIB::CEIL_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::CEIL_F32, RTLIB::CEIL_F64,
+                      RTLIB::CEIL_F80, RTLIB::CEIL_F128, RTLIB::CEIL_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FCOPYSIGN(SDNode *N) {
@@ -380,39 +458,27 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FCOPYSIGN(SDNode *N) {
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FCOS(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::COS_F32,
-                                              RTLIB::COS_F64,
-                                              RTLIB::COS_F80,
-                                              RTLIB::COS_F128,
-                                              RTLIB::COS_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::COS_F32, RTLIB::COS_F64,
+                      RTLIB::COS_F80, RTLIB::COS_F128, RTLIB::COS_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FDIV(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::DIV_F32,
-                                               RTLIB::DIV_F64,
-                                               RTLIB::DIV_F80,
-                                               RTLIB::DIV_F128,
-                                               RTLIB::DIV_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::DIV_F32, RTLIB::DIV_F64,
+                      RTLIB::DIV_F80, RTLIB::DIV_F128, RTLIB::DIV_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FEXP(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::EXP_F32,
-                                              RTLIB::EXP_F64,
-                                              RTLIB::EXP_F80,
-                                              RTLIB::EXP_F128,
-                                              RTLIB::EXP_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::EXP_F32, RTLIB::EXP_F64,
+                      RTLIB::EXP_F80, RTLIB::EXP_F128, RTLIB::EXP_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FEXP2(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::EXP2_F32,
-                                              RTLIB::EXP2_F64,
-                                              RTLIB::EXP2_F80,
-                                              RTLIB::EXP2_F128,
-                                              RTLIB::EXP2_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::EXP2_F32, RTLIB::EXP2_F64,
+                      RTLIB::EXP2_F80, RTLIB::EXP2_F128, RTLIB::EXP2_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FEXP10(SDNode *N) {
@@ -423,83 +489,65 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FEXP10(SDNode *N) {
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FFLOOR(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::FLOOR_F32,
-                                              RTLIB::FLOOR_F64,
-                                              RTLIB::FLOOR_F80,
-                                              RTLIB::FLOOR_F128,
-                                              RTLIB::FLOOR_PPCF128));
+  return SoftenFloatRes_Unary(
+      N,
+      GetFPLibCall(N->getValueType(0), RTLIB::FLOOR_F32, RTLIB::FLOOR_F64,
+                   RTLIB::FLOOR_F80, RTLIB::FLOOR_F128, RTLIB::FLOOR_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FLOG(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::LOG_F32,
-                                              RTLIB::LOG_F64,
-                                              RTLIB::LOG_F80,
-                                              RTLIB::LOG_F128,
-                                              RTLIB::LOG_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::LOG_F32, RTLIB::LOG_F64,
+                      RTLIB::LOG_F80, RTLIB::LOG_F128, RTLIB::LOG_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FLOG2(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::LOG2_F32,
-                                              RTLIB::LOG2_F64,
-                                              RTLIB::LOG2_F80,
-                                              RTLIB::LOG2_F128,
-                                              RTLIB::LOG2_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::LOG2_F32, RTLIB::LOG2_F64,
+                      RTLIB::LOG2_F80, RTLIB::LOG2_F128, RTLIB::LOG2_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FLOG10(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::LOG10_F32,
-                                              RTLIB::LOG10_F64,
-                                              RTLIB::LOG10_F80,
-                                              RTLIB::LOG10_F128,
-                                              RTLIB::LOG10_PPCF128));
+  return SoftenFloatRes_Unary(
+      N,
+      GetFPLibCall(N->getValueType(0), RTLIB::LOG10_F32, RTLIB::LOG10_F64,
+                   RTLIB::LOG10_F80, RTLIB::LOG10_F128, RTLIB::LOG10_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FMA(SDNode *N) {
   bool IsStrict = N->isStrictFPOpcode();
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   unsigned Offset = IsStrict ? 1 : 0;
-  SDValue Ops[3] = { GetSoftenedFloat(N->getOperand(0 + Offset)),
-                     GetSoftenedFloat(N->getOperand(1 + Offset)),
-                     GetSoftenedFloat(N->getOperand(2 + Offset)) };
+  SDValue Ops[3] = {GetSoftenedFloat(N->getOperand(0 + Offset)),
+                    GetSoftenedFloat(N->getOperand(1 + Offset)),
+                    GetSoftenedFloat(N->getOperand(2 + Offset))};
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  EVT OpsVT[3] = { N->getOperand(0 + Offset).getValueType(),
-                   N->getOperand(1 + Offset).getValueType(),
-                   N->getOperand(2 + Offset).getValueType() };
+  EVT OpsVT[3] = {N->getOperand(0 + Offset).getValueType(),
+                  N->getOperand(1 + Offset).getValueType(),
+                  N->getOperand(2 + Offset).getValueType()};
   CallOptions.setTypeListBeforeSoften(OpsVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG,
-                                                    GetFPLibCall(N->getValueType(0),
-                                                                 RTLIB::FMA_F32,
-                                                                 RTLIB::FMA_F64,
-                                                                 RTLIB::FMA_F80,
-                                                                 RTLIB::FMA_F128,
-                                                                 RTLIB::FMA_PPCF128),
-                         NVT, Ops, CallOptions, SDLoc(N), Chain);
+  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+      DAG,
+      GetFPLibCall(N->getValueType(0), RTLIB::FMA_F32, RTLIB::FMA_F64,
+                   RTLIB::FMA_F80, RTLIB::FMA_F128, RTLIB::FMA_PPCF128),
+      NVT, Ops, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FMUL(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::MUL_F32,
-                                               RTLIB::MUL_F64,
-                                               RTLIB::MUL_F80,
-                                               RTLIB::MUL_F128,
-                                               RTLIB::MUL_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::MUL_F32, RTLIB::MUL_F64,
+                      RTLIB::MUL_F80, RTLIB::MUL_F128, RTLIB::MUL_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FNEARBYINT(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::NEARBYINT_F32,
-                                              RTLIB::NEARBYINT_F64,
-                                              RTLIB::NEARBYINT_F80,
-                                              RTLIB::NEARBYINT_F128,
-                                              RTLIB::NEARBYINT_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::NEARBYINT_F32,
+                      RTLIB::NEARBYINT_F64, RTLIB::NEARBYINT_F80,
+                      RTLIB::NEARBYINT_F128, RTLIB::NEARBYINT_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FNEG(SDNode *N) {
@@ -534,8 +582,8 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FP_EXTEND(SDNode *N) {
   if ((Op.getValueType() == MVT::f16 || Op.getValueType() == MVT::bf16) &&
       N->getValueType(0) != MVT::f32) {
     if (IsStrict) {
-      Op = DAG.getNode(ISD::STRICT_FP_EXTEND, SDLoc(N),
-                       { MVT::f32, MVT::Other }, { Chain, Op });
+      Op = DAG.getNode(ISD::STRICT_FP_EXTEND, SDLoc(N), {MVT::f32, MVT::Other},
+                       {Chain, Op});
       Chain = Op.getValue(1);
     } else {
       Op = DAG.getNode(ISD::FP_EXTEND, SDLoc(N), MVT::f32, Op);
@@ -550,9 +598,8 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FP_EXTEND(SDNode *N) {
   TargetLowering::MakeLibCallOptions CallOptions;
   EVT OpVT = N->getOperand(IsStrict ? 1 : 0).getValueType();
   CallOptions.setTypeListBeforeSoften(OpVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Op,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
@@ -564,10 +611,11 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FP16_TO_FP(SDNode *N) {
   EVT MidVT = TLI.getTypeToTransformTo(*DAG.getContext(), MVT::f32);
   SDValue Op = N->getOperand(0);
   TargetLowering::MakeLibCallOptions CallOptions;
-  EVT OpsVT[1] = { N->getOperand(0).getValueType() };
+  EVT OpsVT[1] = {N->getOperand(0).getValueType()};
   CallOptions.setTypeListBeforeSoften(OpsVT, N->getValueType(0), true);
   SDValue Res32 = TLI.makeLibCall(DAG, RTLIB::FPEXT_F16_F32, MidVT, Op,
-                                  CallOptions, SDLoc(N)).first;
+                                  CallOptions, SDLoc(N))
+                      .first;
   if (N->getValueType(0) == MVT::f32)
     return Res32;
 
@@ -602,21 +650,17 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FP_ROUND(SDNode *N) {
   TargetLowering::MakeLibCallOptions CallOptions;
   EVT OpVT = N->getOperand(IsStrict ? 1 : 0).getValueType();
   CallOptions.setTypeListBeforeSoften(OpVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Op,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FPOW(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::POW_F32,
-                                               RTLIB::POW_F64,
-                                               RTLIB::POW_F80,
-                                               RTLIB::POW_F128,
-                                               RTLIB::POW_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::POW_F32, RTLIB::POW_F64,
+                      RTLIB::POW_F80, RTLIB::POW_F128, RTLIB::POW_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_ExpOp(SDNode *N) {
@@ -647,16 +691,15 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_ExpOp(SDNode *N) {
   }
 
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
-  SDValue Ops[2] = { GetSoftenedFloat(N->getOperand(0 + Offset)),
-                     N->getOperand(1 + Offset) };
+  SDValue Ops[2] = {GetSoftenedFloat(N->getOperand(0 + Offset)),
+                    N->getOperand(1 + Offset)};
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  EVT OpsVT[2] = { N->getOperand(0 + Offset).getValueType(),
-                   N->getOperand(1 + Offset).getValueType() };
+  EVT OpsVT[2] = {N->getOperand(0 + Offset).getValueType(),
+                  N->getOperand(1 + Offset).getValueType()};
   CallOptions.setTypeListBeforeSoften(OpsVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Ops,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Ops, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   return Tmp.first;
@@ -702,75 +745,54 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_FFREXP(SDNode *N) {
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FREM(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::REM_F32,
-                                               RTLIB::REM_F64,
-                                               RTLIB::REM_F80,
-                                               RTLIB::REM_F128,
-                                               RTLIB::REM_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::REM_F32, RTLIB::REM_F64,
+                      RTLIB::REM_F80, RTLIB::REM_F128, RTLIB::REM_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FRINT(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::RINT_F32,
-                                              RTLIB::RINT_F64,
-                                              RTLIB::RINT_F80,
-                                              RTLIB::RINT_F128,
-                                              RTLIB::RINT_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::RINT_F32, RTLIB::RINT_F64,
+                      RTLIB::RINT_F80, RTLIB::RINT_F128, RTLIB::RINT_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FROUND(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::ROUND_F32,
-                                              RTLIB::ROUND_F64,
-                                              RTLIB::ROUND_F80,
-                                              RTLIB::ROUND_F128,
-                                              RTLIB::ROUND_PPCF128));
+  return SoftenFloatRes_Unary(
+      N,
+      GetFPLibCall(N->getValueType(0), RTLIB::ROUND_F32, RTLIB::ROUND_F64,
+                   RTLIB::ROUND_F80, RTLIB::ROUND_F128, RTLIB::ROUND_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FROUNDEVEN(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::ROUNDEVEN_F32,
-                                              RTLIB::ROUNDEVEN_F64,
-                                              RTLIB::ROUNDEVEN_F80,
-                                              RTLIB::ROUNDEVEN_F128,
-                                              RTLIB::ROUNDEVEN_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::ROUNDEVEN_F32,
+                      RTLIB::ROUNDEVEN_F64, RTLIB::ROUNDEVEN_F80,
+                      RTLIB::ROUNDEVEN_F128, RTLIB::ROUNDEVEN_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FSIN(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::SIN_F32,
-                                              RTLIB::SIN_F64,
-                                              RTLIB::SIN_F80,
-                                              RTLIB::SIN_F128,
-                                              RTLIB::SIN_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::SIN_F32, RTLIB::SIN_F64,
+                      RTLIB::SIN_F80, RTLIB::SIN_F128, RTLIB::SIN_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FSQRT(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::SQRT_F32,
-                                              RTLIB::SQRT_F64,
-                                              RTLIB::SQRT_F80,
-                                              RTLIB::SQRT_F128,
-                                              RTLIB::SQRT_PPCF128));
+  return SoftenFloatRes_Unary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::SQRT_F32, RTLIB::SQRT_F64,
+                      RTLIB::SQRT_F80, RTLIB::SQRT_F128, RTLIB::SQRT_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FSUB(SDNode *N) {
-  return SoftenFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                               RTLIB::SUB_F32,
-                                               RTLIB::SUB_F64,
-                                               RTLIB::SUB_F80,
-                                               RTLIB::SUB_F128,
-                                               RTLIB::SUB_PPCF128));
+  return SoftenFloatRes_Binary(
+      N, GetFPLibCall(N->getValueType(0), RTLIB::SUB_F32, RTLIB::SUB_F64,
+                      RTLIB::SUB_F80, RTLIB::SUB_F128, RTLIB::SUB_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_FTRUNC(SDNode *N) {
-  return SoftenFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                              RTLIB::TRUNC_F32,
-                                              RTLIB::TRUNC_F64,
-                                              RTLIB::TRUNC_F80,
-                                              RTLIB::TRUNC_F128,
-                                              RTLIB::TRUNC_PPCF128));
+  return SoftenFloatRes_Unary(
+      N,
+      GetFPLibCall(N->getValueType(0), RTLIB::TRUNC_F32, RTLIB::TRUNC_F64,
+                   RTLIB::TRUNC_F80, RTLIB::TRUNC_F128, RTLIB::TRUNC_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_LOAD(SDNode *N) {
@@ -809,26 +831,26 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_LOAD(SDNode *N) {
 SDValue DAGTypeLegalizer::SoftenFloatRes_SELECT(SDNode *N) {
   SDValue LHS = GetSoftenedFloat(N->getOperand(1));
   SDValue RHS = GetSoftenedFloat(N->getOperand(2));
-  return DAG.getSelect(SDLoc(N),
-                       LHS.getValueType(), N->getOperand(0), LHS, RHS);
+  return DAG.getSelect(SDLoc(N), LHS.getValueType(), N->getOperand(0), LHS,
+                       RHS);
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_SELECT_CC(SDNode *N) {
   SDValue LHS = GetSoftenedFloat(N->getOperand(2));
   SDValue RHS = GetSoftenedFloat(N->getOperand(3));
-  return DAG.getNode(ISD::SELECT_CC, SDLoc(N),
-                     LHS.getValueType(), N->getOperand(0),
-                     N->getOperand(1), LHS, RHS, N->getOperand(4));
+  return DAG.getNode(ISD::SELECT_CC, SDLoc(N), LHS.getValueType(),
+                     N->getOperand(0), N->getOperand(1), LHS, RHS,
+                     N->getOperand(4));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_UNDEF(SDNode *N) {
-  return DAG.getUNDEF(TLI.getTypeToTransformTo(*DAG.getContext(),
-                                               N->getValueType(0)));
+  return DAG.getUNDEF(
+      TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatRes_VAARG(SDNode *N) {
   SDValue Chain = N->getOperand(0); // Get the chain.
-  SDValue Ptr = N->getOperand(1); // Get the pointer.
+  SDValue Ptr = N->getOperand(1);   // Get the pointer.
   EVT VT = N->getValueType(0);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
   SDLoc dl(N);
@@ -862,7 +884,7 @@ SDValue DAGTypeLegalizer::SoftenFloatRes_XINT_TO_FP(SDNode *N) {
     NVT = (MVT::SimpleValueType)t;
    // The source needs to be big enough to hold the operand.
     if (NVT.bitsGE(SVT))
-      LC = Signed ? RTLIB::getSINTTOFP(NVT, RVT):RTLIB::getUINTTOFP (NVT, RVT);
+      LC = Signed ? RTLIB::getSINTTOFP(NVT, RVT) : RTLIB::getUINTTOFP(NVT, RVT);
   }
   assert(LC != RTLIB::UNKNOWN_LIBCALL && "Unsupported XINT_TO_FP!");
 
@@ -906,42 +928,69 @@ bool DAGTypeLegalizer::SoftenFloatOperand(SDNode *N, unsigned OpNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "SoftenFloatOperand Op #" << OpNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to soften this operator's operand!");
 
-  case ISD::BITCAST:     Res = SoftenFloatOp_BITCAST(N); break;
-  case ISD::BR_CC:       Res = SoftenFloatOp_BR_CC(N); break;
+  case ISD::BITCAST:
+    Res = SoftenFloatOp_BITCAST(N);
+    break;
+  case ISD::BR_CC:
+    Res = SoftenFloatOp_BR_CC(N);
+    break;
   case ISD::STRICT_FP_TO_FP16:
-  case ISD::FP_TO_FP16:  // Same as FP_ROUND for softening purposes
+  case ISD::FP_TO_FP16: // Same as FP_ROUND for softening purposes
   case ISD::FP_TO_BF16:
   case ISD::STRICT_FP_ROUND:
-  case ISD::FP_ROUND:    Res = SoftenFloatOp_FP_ROUND(N); break;
+  case ISD::FP_ROUND:
+    Res = SoftenFloatOp_FP_ROUND(N);
+    break;
   case ISD::STRICT_FP_TO_SINT:
   case ISD::STRICT_FP_TO_UINT:
   case ISD::FP_TO_SINT:
-  case ISD::FP_TO_UINT:  Res = SoftenFloatOp_FP_TO_XINT(N); break;
+  case ISD::FP_TO_UINT:
+    Res = SoftenFloatOp_FP_TO_XINT(N);
+    break;
   case ISD::FP_TO_SINT_SAT:
   case ISD::FP_TO_UINT_SAT:
-                         Res = SoftenFloatOp_FP_TO_XINT_SAT(N); break;
+    Res = SoftenFloatOp_FP_TO_XINT_SAT(N);
+    break;
   case ISD::STRICT_LROUND:
-  case ISD::LROUND:      Res = SoftenFloatOp_LROUND(N); break;
+  case ISD::LROUND:
+    Res = SoftenFloatOp_LROUND(N);
+    break;
   case ISD::STRICT_LLROUND:
-  case ISD::LLROUND:     Res = SoftenFloatOp_LLROUND(N); break;
+  case ISD::LLROUND:
+    Res = SoftenFloatOp_LLROUND(N);
+    break;
   case ISD::STRICT_LRINT:
-  case ISD::LRINT:       Res = SoftenFloatOp_LRINT(N); break;
+  case ISD::LRINT:
+    Res = SoftenFloatOp_LRINT(N);
+    break;
   case ISD::STRICT_LLRINT:
-  case ISD::LLRINT:      Res = SoftenFloatOp_LLRINT(N); break;
-  case ISD::SELECT_CC:   Res = SoftenFloatOp_SELECT_CC(N); break;
+  case ISD::LLRINT:
+    Res = SoftenFloatOp_LLRINT(N);
+    break;
+  case ISD::SELECT_CC:
+    Res = SoftenFloatOp_SELECT_CC(N);
+    break;
   case ISD::STRICT_FSETCC:
   case ISD::STRICT_FSETCCS:
-  case ISD::SETCC:       Res = SoftenFloatOp_SETCC(N); break;
-  case ISD::STORE:       Res = SoftenFloatOp_STORE(N, OpNo); break;
-  case ISD::FCOPYSIGN:   Res = SoftenFloatOp_FCOPYSIGN(N); break;
+  case ISD::SETCC:
+    Res = SoftenFloatOp_SETCC(N);
+    break;
+  case ISD::STORE:
+    Res = SoftenFloatOp_STORE(N, OpNo);
+    break;
+  case ISD::FCOPYSIGN:
+    Res = SoftenFloatOp_FCOPYSIGN(N);
+    break;
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this to re-analyze.
@@ -987,9 +1036,8 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_FP_ROUND(SDNode *N) {
   Op = GetSoftenedFloat(Op);
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setTypeListBeforeSoften(SVT, RVT, true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, RVT, Op,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, RVT, Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict) {
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
     ReplaceValueWith(SDValue(N, 0), Tmp.first);
@@ -1017,8 +1065,8 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_BR_CC(SDNode *N) {
 
   // Update N to have the operands specified.
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                DAG.getCondCode(CCCode), NewLHS, NewRHS,
-                                N->getOperand(4)),
+                                        DAG.getCondCode(CCCode), NewLHS, NewRHS,
+                                        N->getOperand(4)),
                  0);
 }
 
@@ -1063,8 +1111,8 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_FP_TO_XINT(SDNode *N) {
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setTypeListBeforeSoften(SVT, RVT, true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Op,
-                                                    CallOptions, dl, Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Op, CallOptions, dl, Chain);
 
   // Truncate the result if the libcall returns a larger type.
   SDValue Res = DAG.getNode(ISD::TRUNCATE, dl, RVT, Tmp.first);
@@ -1100,9 +1148,9 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_SELECT_CC(SDNode *N) {
   }
 
   // Update N to have the operands specified.
-  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS,
-                                N->getOperand(2), N->getOperand(3),
-                                DAG.getCondCode(CCCode)),
+  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS, N->getOperand(2),
+                                        N->getOperand(3),
+                                        DAG.getCondCode(CCCode)),
                  0);
 }
 
@@ -1126,8 +1174,9 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_SETCC(SDNode *N) {
       NewLHS = DAG.getNode(ISD::SETCC, SDLoc(N), N->getValueType(0), NewLHS,
                            NewRHS, DAG.getCondCode(CCCode));
     else
-      return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS,
-                                            DAG.getCondCode(CCCode)), 0);
+      return SDValue(
+          DAG.UpdateNodeOperands(N, NewLHS, NewRHS, DAG.getCondCode(CCCode)),
+          0);
   }
 
   // Otherwise, softenSetCCOperands returned a scalar, use it.
@@ -1204,9 +1253,8 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_Unary(SDNode *N, RTLIB::Libcall LC) {
   TargetLowering::MakeLibCallOptions CallOptions;
   EVT OpVT = N->getOperand(0 + Offset).getValueType();
   CallOptions.setTypeListBeforeSoften(OpVT, N->getValueType(0), true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, NVT, Op,
-                                                    CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, NVT, Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict) {
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
     ReplaceValueWith(SDValue(N, 0), Tmp.first);
@@ -1218,42 +1266,34 @@ SDValue DAGTypeLegalizer::SoftenFloatOp_Unary(SDNode *N, RTLIB::Libcall LC) {
 
 SDValue DAGTypeLegalizer::SoftenFloatOp_LROUND(SDNode *N) {
   EVT OpVT = N->getOperand(N->isStrictFPOpcode() ? 1 : 0).getValueType();
-  return SoftenFloatOp_Unary(N, GetFPLibCall(OpVT,
-                                             RTLIB::LROUND_F32,
-                                             RTLIB::LROUND_F64,
-                                             RTLIB::LROUND_F80,
-                                             RTLIB::LROUND_F128,
-                                             RTLIB::LROUND_PPCF128));
+  return SoftenFloatOp_Unary(
+      N, GetFPLibCall(OpVT, RTLIB::LROUND_F32, RTLIB::LROUND_F64,
+                      RTLIB::LROUND_F80, RTLIB::LROUND_F128,
+                      RTLIB::LROUND_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatOp_LLROUND(SDNode *N) {
   EVT OpVT = N->getOperand(N->isStrictFPOpcode() ? 1 : 0).getValueType();
-  return SoftenFloatOp_Unary(N, GetFPLibCall(OpVT,
-                                             RTLIB::LLROUND_F32,
-                                             RTLIB::LLROUND_F64,
-                                             RTLIB::LLROUND_F80,
-                                             RTLIB::LLROUND_F128,
-                                             RTLIB::LLROUND_PPCF128));
+  return SoftenFloatOp_Unary(
+      N, GetFPLibCall(OpVT, RTLIB::LLROUND_F32, RTLIB::LLROUND_F64,
+                      RTLIB::LLROUND_F80, RTLIB::LLROUND_F128,
+                      RTLIB::LLROUND_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatOp_LRINT(SDNode *N) {
   EVT OpVT = N->getOperand(N->isStrictFPOpcode() ? 1 : 0).getValueType();
-  return SoftenFloatOp_Unary(N, GetFPLibCall(OpVT,
-                                             RTLIB::LRINT_F32,
-                                             RTLIB::LRINT_F64,
-                                             RTLIB::LRINT_F80,
+  return SoftenFloatOp_Unary(N, GetFPLibCall(OpVT, RTLIB::LRINT_F32,
+                                             RTLIB::LRINT_F64, RTLIB::LRINT_F80,
                                              RTLIB::LRINT_F128,
                                              RTLIB::LRINT_PPCF128));
 }
 
 SDValue DAGTypeLegalizer::SoftenFloatOp_LLRINT(SDNode *N) {
   EVT OpVT = N->getOperand(N->isStrictFPOpcode() ? 1 : 0).getValueType();
-  return SoftenFloatOp_Unary(N, GetFPLibCall(OpVT,
-                                             RTLIB::LLRINT_F32,
-                                             RTLIB::LLRINT_F64,
-                                             RTLIB::LLRINT_F80,
-                                             RTLIB::LLRINT_F128,
-                                             RTLIB::LLRINT_PPCF128));
+  return SoftenFloatOp_Unary(
+      N, GetFPLibCall(OpVT, RTLIB::LLRINT_F32, RTLIB::LLRINT_F64,
+                      RTLIB::LLRINT_F80, RTLIB::LLRINT_F128,
+                      RTLIB::LLRINT_PPCF128));
 }
 
 //===----------------------------------------------------------------------===//
@@ -1277,88 +1317,179 @@ void DAGTypeLegalizer::ExpandFloatResult(SDNode *N, unsigned ResNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "ExpandFloatResult #" << ResNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to expand the result of this "
                        "operator!");
 
-  case ISD::UNDEF:        SplitRes_UNDEF(N, Lo, Hi); break;
-  case ISD::SELECT:       SplitRes_Select(N, Lo, Hi); break;
-  case ISD::SELECT_CC:    SplitRes_SELECT_CC(N, Lo, Hi); break;
+  case ISD::UNDEF:
+    SplitRes_UNDEF(N, Lo, Hi);
+    break;
+  case ISD::SELECT:
+    SplitRes_Select(N, Lo, Hi);
+    break;
+  case ISD::SELECT_CC:
+    SplitRes_SELECT_CC(N, Lo, Hi);
+    break;
 
-  case ISD::MERGE_VALUES:       ExpandRes_MERGE_VALUES(N, ResNo, Lo, Hi); break;
-  case ISD::BITCAST:            ExpandRes_BITCAST(N, Lo, Hi); break;
-  case ISD::BUILD_PAIR:         ExpandRes_BUILD_PAIR(N, Lo, Hi); break;
-  case ISD::EXTRACT_ELEMENT:    ExpandRes_EXTRACT_ELEMENT(N, Lo, Hi); break;
-  case ISD::EXTRACT_VECTOR_ELT: ExpandRes_EXTRACT_VECTOR_ELT(N, Lo, Hi); break;
-  case ISD::VAARG:              ExpandRes_VAARG(N, Lo, Hi); break;
+  case ISD::MERGE_VALUES:
+    ExpandRes_MERGE_VALUES(N, ResNo, Lo, Hi);
+    break;
+  case ISD::BITCAST:
+    ExpandRes_BITCAST(N, Lo, Hi);
+    break;
+  case ISD::BUILD_PAIR:
+    ExpandRes_BUILD_PAIR(N, Lo, Hi);
+    break;
+  case ISD::EXTRACT_ELEMENT:
+    ExpandRes_EXTRACT_ELEMENT(N, Lo, Hi);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    ExpandRes_EXTRACT_VECTOR_ELT(N, Lo, Hi);
+    break;
+  case ISD::VAARG:
+    ExpandRes_VAARG(N, Lo, Hi);
+    break;
 
-  case ISD::ConstantFP: ExpandFloatRes_ConstantFP(N, Lo, Hi); break;
-  case ISD::FABS:       ExpandFloatRes_FABS(N, Lo, Hi); break;
+  case ISD::ConstantFP:
+    ExpandFloatRes_ConstantFP(N, Lo, Hi);
+    break;
+  case ISD::FABS:
+    ExpandFloatRes_FABS(N, Lo, Hi);
+    break;
   case ISD::STRICT_FMINNUM:
-  case ISD::FMINNUM:    ExpandFloatRes_FMINNUM(N, Lo, Hi); break;
+  case ISD::FMINNUM:
+    ExpandFloatRes_FMINNUM(N, Lo, Hi);
+    break;
   case ISD::STRICT_FMAXNUM:
-  case ISD::FMAXNUM:    ExpandFloatRes_FMAXNUM(N, Lo, Hi); break;
+  case ISD::FMAXNUM:
+    ExpandFloatRes_FMAXNUM(N, Lo, Hi);
+    break;
   case ISD::STRICT_FADD:
-  case ISD::FADD:       ExpandFloatRes_FADD(N, Lo, Hi); break;
-  case ISD::FCBRT:      ExpandFloatRes_FCBRT(N, Lo, Hi); break;
+  case ISD::FADD:
+    ExpandFloatRes_FADD(N, Lo, Hi);
+    break;
+  case ISD::FCBRT:
+    ExpandFloatRes_FCBRT(N, Lo, Hi);
+    break;
   case ISD::STRICT_FCEIL:
-  case ISD::FCEIL:      ExpandFloatRes_FCEIL(N, Lo, Hi); break;
-  case ISD::FCOPYSIGN:  ExpandFloatRes_FCOPYSIGN(N, Lo, Hi); break;
+  case ISD::FCEIL:
+    ExpandFloatRes_FCEIL(N, Lo, Hi);
+    break;
+  case ISD::FCOPYSIGN:
+    ExpandFloatRes_FCOPYSIGN(N, Lo, Hi);
+    break;
   case ISD::STRICT_FCOS:
-  case ISD::FCOS:       ExpandFloatRes_FCOS(N, Lo, Hi); break;
+  case ISD::FCOS:
+    ExpandFloatRes_FCOS(N, Lo, Hi);
+    break;
   case ISD::STRICT_FDIV:
-  case ISD::FDIV:       ExpandFloatRes_FDIV(N, Lo, Hi); break;
+  case ISD::FDIV:
+    ExpandFloatRes_FDIV(N, Lo, Hi);
+    break;
   case ISD::STRICT_FEXP:
-  case ISD::FEXP:       ExpandFloatRes_FEXP(N, Lo, Hi); break;
+  case ISD::FEXP:
+    ExpandFloatRes_FEXP(N, Lo, Hi);
+    break;
   case ISD::STRICT_FEXP2:
-  case ISD::FEXP2:      ExpandFloatRes_FEXP2(N, Lo, Hi); break;
-  case ISD::FEXP10:     ExpandFloatRes_FEXP10(N, Lo, Hi); break;
+  case ISD::FEXP2:
+    ExpandFloatRes_FEXP2(N, Lo, Hi);
+    break;
+  case ISD::FEXP10:
+    ExpandFloatRes_FEXP10(N, Lo, Hi);
+    break;
   case ISD::STRICT_FFLOOR:
-  case ISD::FFLOOR:     ExpandFloatRes_FFLOOR(N, Lo, Hi); break;
+  case ISD::FFLOOR:
+    ExpandFloatRes_FFLOOR(N, Lo, Hi);
+    break;
   case ISD::STRICT_FLOG:
-  case ISD::FLOG:       ExpandFloatRes_FLOG(N, Lo, Hi); break;
+  case ISD::FLOG:
+    ExpandFloatRes_FLOG(N, Lo, Hi);
+    break;
   case ISD::STRICT_FLOG2:
-  case ISD::FLOG2:      ExpandFloatRes_FLOG2(N, Lo, Hi); break;
+  case ISD::FLOG2:
+    ExpandFloatRes_FLOG2(N, Lo, Hi);
+    break;
   case ISD::STRICT_FLOG10:
-  case ISD::FLOG10:     ExpandFloatRes_FLOG10(N, Lo, Hi); break;
+  case ISD::FLOG10:
+    ExpandFloatRes_FLOG10(N, Lo, Hi);
+    break;
   case ISD::STRICT_FMA:
-  case ISD::FMA:        ExpandFloatRes_FMA(N, Lo, Hi); break;
+  case ISD::FMA:
+    ExpandFloatRes_FMA(N, Lo, Hi);
+    break;
   case ISD::STRICT_FMUL:
-  case ISD::FMUL:       ExpandFloatRes_FMUL(N, Lo, Hi); break;
+  case ISD::FMUL:
+    ExpandFloatRes_FMUL(N, Lo, Hi);
+    break;
   case ISD::STRICT_FNEARBYINT:
-  case ISD::FNEARBYINT: ExpandFloatRes_FNEARBYINT(N, Lo, Hi); break;
-  case ISD::FNEG:       ExpandFloatRes_FNEG(N, Lo, Hi); break;
+  case ISD::FNEARBYINT:
+    ExpandFloatRes_FNEARBYINT(N, Lo, Hi);
+    break;
+  case ISD::FNEG:
+    ExpandFloatRes_FNEG(N, Lo, Hi);
+    break;
   case ISD::STRICT_FP_EXTEND:
-  case ISD::FP_EXTEND:  ExpandFloatRes_FP_EXTEND(N, Lo, Hi); break;
+  case ISD::FP_EXTEND:
+    ExpandFloatRes_FP_EXTEND(N, Lo, Hi);
+    break;
   case ISD::STRICT_FPOW:
-  case ISD::FPOW:       ExpandFloatRes_FPOW(N, Lo, Hi); break;
+  case ISD::FPOW:
+    ExpandFloatRes_FPOW(N, Lo, Hi);
+    break;
   case ISD::STRICT_FPOWI:
-  case ISD::FPOWI:      ExpandFloatRes_FPOWI(N, Lo, Hi); break;
+  case ISD::FPOWI:
+    ExpandFloatRes_FPOWI(N, Lo, Hi);
+    break;
   case ISD::FLDEXP:
-  case ISD::STRICT_FLDEXP: ExpandFloatRes_FLDEXP(N, Lo, Hi); break;
-  case ISD::FREEZE:     ExpandFloatRes_FREEZE(N, Lo, Hi); break;
+  case ISD::STRICT_FLDEXP:
+    ExpandFloatRes_FLDEXP(N, Lo, Hi);
+    break;
+  case ISD::FREEZE:
+    ExpandFloatRes_FREEZE(N, Lo, Hi);
+    break;
   case ISD::STRICT_FRINT:
-  case ISD::FRINT:      ExpandFloatRes_FRINT(N, Lo, Hi); break;
+  case ISD::FRINT:
+    ExpandFloatRes_FRINT(N, Lo, Hi);
+    break;
   case ISD::STRICT_FROUND:
-  case ISD::FROUND:     ExpandFloatRes_FROUND(N, Lo, Hi); break;
+  case ISD::FROUND:
+    ExpandFloatRes_FROUND(N, Lo, Hi);
+    break;
   case ISD::STRICT_FROUNDEVEN:
-  case ISD::FROUNDEVEN: ExpandFloatRes_FROUNDEVEN(N, Lo, Hi); break;
+  case ISD::FROUNDEVEN:
+    ExpandFloatRes_FROUNDEVEN(N, Lo, Hi);
+    break;
   case ISD::STRICT_FSIN:
-  case ISD::FSIN:       ExpandFloatRes_FSIN(N, Lo, Hi); break;
+  case ISD::FSIN:
+    ExpandFloatRes_FSIN(N, Lo, Hi);
+    break;
   case ISD::STRICT_FSQRT:
-  case ISD::FSQRT:      ExpandFloatRes_FSQRT(N, Lo, Hi); break;
+  case ISD::FSQRT:
+    ExpandFloatRes_FSQRT(N, Lo, Hi);
+    break;
   case ISD::STRICT_FSUB:
-  case ISD::FSUB:       ExpandFloatRes_FSUB(N, Lo, Hi); break;
+  case ISD::FSUB:
+    ExpandFloatRes_FSUB(N, Lo, Hi);
+    break;
   case ISD::STRICT_FTRUNC:
-  case ISD::FTRUNC:     ExpandFloatRes_FTRUNC(N, Lo, Hi); break;
-  case ISD::LOAD:       ExpandFloatRes_LOAD(N, Lo, Hi); break;
+  case ISD::FTRUNC:
+    ExpandFloatRes_FTRUNC(N, Lo, Hi);
+    break;
+  case ISD::LOAD:
+    ExpandFloatRes_LOAD(N, Lo, Hi);
+    break;
   case ISD::STRICT_SINT_TO_FP:
   case ISD::STRICT_UINT_TO_FP:
   case ISD::SINT_TO_FP:
-  case ISD::UINT_TO_FP: ExpandFloatRes_XINT_TO_FP(N, Lo, Hi); break;
+  case ISD::UINT_TO_FP:
+    ExpandFloatRes_XINT_TO_FP(N, Lo, Hi);
+    break;
   case ISD::STRICT_FREM:
-  case ISD::FREM:       ExpandFloatRes_FREM(N, Lo, Hi); break;
+  case ISD::FREM:
+    ExpandFloatRes_FREM(N, Lo, Hi);
+    break;
   }
 
   // If Lo/Hi is null, the sub-method took care of registering results etc.
@@ -1373,12 +1504,12 @@ void DAGTypeLegalizer::ExpandFloatRes_ConstantFP(SDNode *N, SDValue &Lo,
          "Do not know how to expand this float constant!");
   APInt C = cast<ConstantFPSDNode>(N)->getValueAPF().bitcastToAPInt();
   SDLoc dl(N);
-  Lo = DAG.getConstantFP(APFloat(DAG.EVTToAPFloatSemantics(NVT),
-                                 APInt(64, C.getRawData()[1])),
-                         dl, NVT);
-  Hi = DAG.getConstantFP(APFloat(DAG.EVTToAPFloatSemantics(NVT),
-                                 APInt(64, C.getRawData()[0])),
-                         dl, NVT);
+  Lo = DAG.getConstantFP(
+      APFloat(DAG.EVTToAPFloatSemantics(NVT), APInt(64, C.getRawData()[1])), dl,
+      NVT);
+  Hi = DAG.getConstantFP(
+      APFloat(DAG.EVTToAPFloatSemantics(NVT), APInt(64, C.getRawData()[0])), dl,
+      NVT);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_Unary(SDNode *N, RTLIB::Libcall LC,
@@ -1388,9 +1519,8 @@ void DAGTypeLegalizer::ExpandFloatRes_Unary(SDNode *N, RTLIB::Libcall LC,
   SDValue Op = N->getOperand(0 + Offset);
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, N->getValueType(0),
-                                                    Op, CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+      DAG, LC, N->getValueType(0), Op, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   GetPairElements(Tmp.first, Lo, Hi);
@@ -1400,12 +1530,11 @@ void DAGTypeLegalizer::ExpandFloatRes_Binary(SDNode *N, RTLIB::Libcall LC,
                                              SDValue &Lo, SDValue &Hi) {
   bool IsStrict = N->isStrictFPOpcode();
   unsigned Offset = IsStrict ? 1 : 0;
-  SDValue Ops[] = { N->getOperand(0 + Offset), N->getOperand(1 + Offset) };
+  SDValue Ops[] = {N->getOperand(0 + Offset), N->getOperand(1 + Offset)};
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, N->getValueType(0),
-                                                    Ops, CallOptions, SDLoc(N),
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+      DAG, LC, N->getValueType(0), Ops, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   GetPairElements(Tmp.first, Lo, Hi);
@@ -1421,92 +1550,99 @@ void DAGTypeLegalizer::ExpandFloatRes_FABS(SDNode *N, SDValue &Lo,
   Hi = DAG.getNode(ISD::FABS, dl, Tmp.getValueType(), Tmp);
   // Lo = Hi==fabs(Hi) ? Lo : -Lo;
   Lo = DAG.getSelectCC(dl, Tmp, Hi, Lo,
-                   DAG.getNode(ISD::FNEG, dl, Lo.getValueType(), Lo),
-                   ISD::SETEQ);
+                       DAG.getNode(ISD::FNEG, dl, Lo.getValueType(), Lo),
+                       ISD::SETEQ);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FMINNUM(SDNode *N, SDValue &Lo,
                                               SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::FMIN_F32, RTLIB::FMIN_F64,
-                                       RTLIB::FMIN_F80, RTLIB::FMIN_F128,
-                                       RTLIB::FMIN_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::FMIN_F32,
+                                     RTLIB::FMIN_F64, RTLIB::FMIN_F80,
+                                     RTLIB::FMIN_F128, RTLIB::FMIN_PPCF128),
+                        Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FMAXNUM(SDNode *N, SDValue &Lo,
                                               SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::FMAX_F32, RTLIB::FMAX_F64,
-                                        RTLIB::FMAX_F80, RTLIB::FMAX_F128,
-                                        RTLIB::FMAX_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::FMAX_F32,
+                                     RTLIB::FMAX_F64, RTLIB::FMAX_F80,
+                                     RTLIB::FMAX_F128, RTLIB::FMAX_PPCF128),
+                        Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FADD(SDNode *N, SDValue &Lo,
                                            SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::ADD_F32, RTLIB::ADD_F64,
-                                        RTLIB::ADD_F80, RTLIB::ADD_F128,
-                                        RTLIB::ADD_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::ADD_F32,
+                                     RTLIB::ADD_F64, RTLIB::ADD_F80,
+                                     RTLIB::ADD_F128, RTLIB::ADD_PPCF128),
+                        Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FCBRT(SDNode *N, SDValue &Lo,
                                             SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0), RTLIB::CBRT_F32,
-                                       RTLIB::CBRT_F64, RTLIB::CBRT_F80,
-                                       RTLIB::CBRT_F128,
-                                       RTLIB::CBRT_PPCF128), Lo, Hi);
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::CBRT_F32,
+                                    RTLIB::CBRT_F64, RTLIB::CBRT_F80,
+                                    RTLIB::CBRT_F128, RTLIB::CBRT_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FCEIL(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::CEIL_F32, RTLIB::CEIL_F64,
-                                       RTLIB::CEIL_F80, RTLIB::CEIL_F128,
-                                       RTLIB::CEIL_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FCEIL(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::CEIL_F32,
+                                    RTLIB::CEIL_F64, RTLIB::CEIL_F80,
+                                    RTLIB::CEIL_F128, RTLIB::CEIL_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FCOPYSIGN(SDNode *N,
-                                                SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::COPYSIGN_F32,
-                                        RTLIB::COPYSIGN_F64,
-                                        RTLIB::COPYSIGN_F80,
-                                        RTLIB::COPYSIGN_F128,
-                                        RTLIB::COPYSIGN_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FCOPYSIGN(SDNode *N, SDValue &Lo,
+                                                SDValue &Hi) {
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::COPYSIGN_F32,
+                                     RTLIB::COPYSIGN_F64, RTLIB::COPYSIGN_F80,
+                                     RTLIB::COPYSIGN_F128,
+                                     RTLIB::COPYSIGN_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FCOS(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::COS_F32, RTLIB::COS_F64,
-                                       RTLIB::COS_F80, RTLIB::COS_F128,
-                                       RTLIB::COS_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FCOS(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::COS_F32,
+                                    RTLIB::COS_F64, RTLIB::COS_F80,
+                                    RTLIB::COS_F128, RTLIB::COS_PPCF128),
+                       Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FDIV(SDNode *N, SDValue &Lo,
                                            SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::DIV_F32,
-                                        RTLIB::DIV_F64,
-                                        RTLIB::DIV_F80,
-                                        RTLIB::DIV_F128,
-                                        RTLIB::DIV_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::DIV_F32,
+                                     RTLIB::DIV_F64, RTLIB::DIV_F80,
+                                     RTLIB::DIV_F128, RTLIB::DIV_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FEXP(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::EXP_F32, RTLIB::EXP_F64,
-                                       RTLIB::EXP_F80, RTLIB::EXP_F128,
-                                       RTLIB::EXP_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FEXP(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::EXP_F32,
+                                    RTLIB::EXP_F64, RTLIB::EXP_F80,
+                                    RTLIB::EXP_F128, RTLIB::EXP_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FEXP2(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::EXP2_F32, RTLIB::EXP2_F64,
-                                       RTLIB::EXP2_F80, RTLIB::EXP2_F128,
-                                       RTLIB::EXP2_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FEXP2(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::EXP2_F32,
+                                    RTLIB::EXP2_F64, RTLIB::EXP2_F80,
+                                    RTLIB::EXP2_F128, RTLIB::EXP2_PPCF128),
+                       Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FEXP10(SDNode *N, SDValue &Lo,
@@ -1518,54 +1654,54 @@ void DAGTypeLegalizer::ExpandFloatRes_FEXP10(SDNode *N, SDValue &Lo,
                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FFLOOR(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::FLOOR_F32, RTLIB::FLOOR_F64,
-                                       RTLIB::FLOOR_F80, RTLIB::FLOOR_F128,
-                                       RTLIB::FLOOR_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FFLOOR(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::FLOOR_F32,
+                                    RTLIB::FLOOR_F64, RTLIB::FLOOR_F80,
+                                    RTLIB::FLOOR_F128, RTLIB::FLOOR_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FLOG(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::LOG_F32, RTLIB::LOG_F64,
-                                       RTLIB::LOG_F80, RTLIB::LOG_F128,
-                                       RTLIB::LOG_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FLOG(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::LOG_F32,
+                                    RTLIB::LOG_F64, RTLIB::LOG_F80,
+                                    RTLIB::LOG_F128, RTLIB::LOG_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FLOG2(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::LOG2_F32, RTLIB::LOG2_F64,
-                                       RTLIB::LOG2_F80, RTLIB::LOG2_F128,
-                                       RTLIB::LOG2_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FLOG2(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::LOG2_F32,
+                                    RTLIB::LOG2_F64, RTLIB::LOG2_F80,
+                                    RTLIB::LOG2_F128, RTLIB::LOG2_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FLOG10(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::LOG10_F32, RTLIB::LOG10_F64,
-                                       RTLIB::LOG10_F80, RTLIB::LOG10_F128,
-                                       RTLIB::LOG10_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FLOG10(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::LOG10_F32,
+                                    RTLIB::LOG10_F64, RTLIB::LOG10_F80,
+                                    RTLIB::LOG10_F128, RTLIB::LOG10_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FMA(SDNode *N, SDValue &Lo,
-                                          SDValue &Hi) {
+void DAGTypeLegalizer::ExpandFloatRes_FMA(SDNode *N, SDValue &Lo, SDValue &Hi) {
   bool IsStrict = N->isStrictFPOpcode();
   unsigned Offset = IsStrict ? 1 : 0;
-  SDValue Ops[3] = { N->getOperand(0 + Offset), N->getOperand(1 + Offset),
-                     N->getOperand(2 + Offset) };
+  SDValue Ops[3] = {N->getOperand(0 + Offset), N->getOperand(1 + Offset),
+                    N->getOperand(2 + Offset)};
   SDValue Chain = IsStrict ? N->getOperand(0) : SDValue();
   TargetLowering::MakeLibCallOptions CallOptions;
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, GetFPLibCall(N->getValueType(0),
-                                                   RTLIB::FMA_F32,
-                                                   RTLIB::FMA_F64,
-                                                   RTLIB::FMA_F80,
-                                                   RTLIB::FMA_F128,
-                                                   RTLIB::FMA_PPCF128),
-                                 N->getValueType(0), Ops, CallOptions,
-                                 SDLoc(N), Chain);
+  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(
+      DAG,
+      GetFPLibCall(N->getValueType(0), RTLIB::FMA_F32, RTLIB::FMA_F64,
+                   RTLIB::FMA_F80, RTLIB::FMA_F128, RTLIB::FMA_PPCF128),
+      N->getValueType(0), Ops, CallOptions, SDLoc(N), Chain);
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
   GetPairElements(Tmp.first, Lo, Hi);
@@ -1573,22 +1709,21 @@ void DAGTypeLegalizer::ExpandFloatRes_FMA(SDNode *N, SDValue &Lo,
 
 void DAGTypeLegalizer::ExpandFloatRes_FMUL(SDNode *N, SDValue &Lo,
                                            SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                                   RTLIB::MUL_F32,
-                                                   RTLIB::MUL_F64,
-                                                   RTLIB::MUL_F80,
-                                                   RTLIB::MUL_F128,
-                                                   RTLIB::MUL_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::MUL_F32,
+                                     RTLIB::MUL_F64, RTLIB::MUL_F80,
+                                     RTLIB::MUL_F128, RTLIB::MUL_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FNEARBYINT(SDNode *N,
-                                                 SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::NEARBYINT_F32,
-                                       RTLIB::NEARBYINT_F64,
-                                       RTLIB::NEARBYINT_F80,
-                                       RTLIB::NEARBYINT_F128,
-                                       RTLIB::NEARBYINT_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FNEARBYINT(SDNode *N, SDValue &Lo,
+                                                 SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::NEARBYINT_F32,
+                                    RTLIB::NEARBYINT_F64, RTLIB::NEARBYINT_F80,
+                                    RTLIB::NEARBYINT_F128,
+                                    RTLIB::NEARBYINT_PPCF128),
+                       Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FNEG(SDNode *N, SDValue &Lo,
@@ -1613,31 +1748,33 @@ void DAGTypeLegalizer::ExpandFloatRes_FP_EXTEND(SDNode *N, SDValue &Lo,
       Chain = N->getOperand(0);
     } else {
       // Other we need to extend.
-      Hi = DAG.getNode(ISD::STRICT_FP_EXTEND, dl, { NVT, MVT::Other },
-                       { N->getOperand(0), N->getOperand(1) });
+      Hi = DAG.getNode(ISD::STRICT_FP_EXTEND, dl, {NVT, MVT::Other},
+                       {N->getOperand(0), N->getOperand(1)});
       Chain = Hi.getValue(1);
     }
   } else {
     Hi = DAG.getNode(ISD::FP_EXTEND, dl, NVT, N->getOperand(0));
   }
 
-  Lo = DAG.getConstantFP(APFloat(DAG.EVTToAPFloatSemantics(NVT),
-                                 APInt(NVT.getSizeInBits(), 0)), dl, NVT);
+  Lo = DAG.getConstantFP(
+      APFloat(DAG.EVTToAPFloatSemantics(NVT), APInt(NVT.getSizeInBits(), 0)),
+      dl, NVT);
 
   if (IsStrict)
     ReplaceValueWith(SDValue(N, 1), Chain);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FPOW(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::POW_F32, RTLIB::POW_F64,
-                                        RTLIB::POW_F80, RTLIB::POW_F128,
-                                        RTLIB::POW_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FPOW(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::POW_F32,
+                                     RTLIB::POW_F64, RTLIB::POW_F80,
+                                     RTLIB::POW_F128, RTLIB::POW_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FPOWI(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandFloatRes_FPOWI(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
   ExpandFloatRes_Binary(N, RTLIB::getPOWI(N->getValueType(0)), Lo, Hi);
 }
 
@@ -1646,8 +1783,8 @@ void DAGTypeLegalizer::ExpandFloatRes_FLDEXP(SDNode *N, SDValue &Lo,
   ExpandFloatRes_Binary(N, RTLIB::getLDEXP(N->getValueType(0)), Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FREEZE(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandFloatRes_FREEZE(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
   assert(N->getValueType(0) == MVT::ppcf128 &&
          "Logic only correct for ppcf128!");
 
@@ -1657,74 +1794,77 @@ void DAGTypeLegalizer::ExpandFloatRes_FREEZE(SDNode *N,
   Hi = DAG.getNode(ISD::FREEZE, dl, Hi.getValueType(), Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FREM(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::REM_F32, RTLIB::REM_F64,
-                                        RTLIB::REM_F80, RTLIB::REM_F128,
-                                        RTLIB::REM_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FREM(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::REM_F32,
+                                     RTLIB::REM_F64, RTLIB::REM_F80,
+                                     RTLIB::REM_F128, RTLIB::REM_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FRINT(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::RINT_F32, RTLIB::RINT_F64,
-                                       RTLIB::RINT_F80, RTLIB::RINT_F128,
-                                       RTLIB::RINT_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FRINT(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::RINT_F32,
+                                    RTLIB::RINT_F64, RTLIB::RINT_F80,
+                                    RTLIB::RINT_F128, RTLIB::RINT_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FROUND(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::ROUND_F32,
-                                       RTLIB::ROUND_F64,
-                                       RTLIB::ROUND_F80,
-                                       RTLIB::ROUND_F128,
-                                       RTLIB::ROUND_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FROUND(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::ROUND_F32,
+                                    RTLIB::ROUND_F64, RTLIB::ROUND_F80,
+                                    RTLIB::ROUND_F128, RTLIB::ROUND_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FROUNDEVEN(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::ROUNDEVEN_F32,
-                                       RTLIB::ROUNDEVEN_F64,
-                                       RTLIB::ROUNDEVEN_F80,
-                                       RTLIB::ROUNDEVEN_F128,
-                                       RTLIB::ROUNDEVEN_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FROUNDEVEN(SDNode *N, SDValue &Lo,
+                                                 SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::ROUNDEVEN_F32,
+                                    RTLIB::ROUNDEVEN_F64, RTLIB::ROUNDEVEN_F80,
+                                    RTLIB::ROUNDEVEN_F128,
+                                    RTLIB::ROUNDEVEN_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FSIN(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::SIN_F32, RTLIB::SIN_F64,
-                                       RTLIB::SIN_F80, RTLIB::SIN_F128,
-                                       RTLIB::SIN_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FSIN(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::SIN_F32,
+                                    RTLIB::SIN_F64, RTLIB::SIN_F80,
+                                    RTLIB::SIN_F128, RTLIB::SIN_PPCF128),
+                       Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FSQRT(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::SQRT_F32, RTLIB::SQRT_F64,
-                                       RTLIB::SQRT_F80, RTLIB::SQRT_F128,
-                                       RTLIB::SQRT_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FSQRT(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::SQRT_F32,
+                                    RTLIB::SQRT_F64, RTLIB::SQRT_F80,
+                                    RTLIB::SQRT_F128, RTLIB::SQRT_PPCF128),
+                       Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_FSUB(SDNode *N, SDValue &Lo,
                                            SDValue &Hi) {
-  ExpandFloatRes_Binary(N, GetFPLibCall(N->getValueType(0),
-                                        RTLIB::SUB_F32,
-                                        RTLIB::SUB_F64,
-                                        RTLIB::SUB_F80,
-                                        RTLIB::SUB_F128,
-                                        RTLIB::SUB_PPCF128), Lo, Hi);
+  ExpandFloatRes_Binary(N,
+                        GetFPLibCall(N->getValueType(0), RTLIB::SUB_F32,
+                                     RTLIB::SUB_F64, RTLIB::SUB_F80,
+                                     RTLIB::SUB_F128, RTLIB::SUB_PPCF128),
+                        Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandFloatRes_FTRUNC(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
-  ExpandFloatRes_Unary(N, GetFPLibCall(N->getValueType(0),
-                                       RTLIB::TRUNC_F32, RTLIB::TRUNC_F64,
-                                       RTLIB::TRUNC_F80, RTLIB::TRUNC_F128,
-                                       RTLIB::TRUNC_PPCF128), Lo, Hi);
+void DAGTypeLegalizer::ExpandFloatRes_FTRUNC(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
+  ExpandFloatRes_Unary(N,
+                       GetFPLibCall(N->getValueType(0), RTLIB::TRUNC_F32,
+                                    RTLIB::TRUNC_F64, RTLIB::TRUNC_F80,
+                                    RTLIB::TRUNC_F128, RTLIB::TRUNC_PPCF128),
+                       Lo, Hi);
 }
 
 void DAGTypeLegalizer::ExpandFloatRes_LOAD(SDNode *N, SDValue &Lo,
@@ -1751,8 +1891,9 @@ void DAGTypeLegalizer::ExpandFloatRes_LOAD(SDNode *N, SDValue &Lo,
   Chain = Hi.getValue(1);
 
   // The low part is zero.
-  Lo = DAG.getConstantFP(APFloat(DAG.EVTToAPFloatSemantics(NVT),
-                                 APInt(NVT.getSizeInBits(), 0)), dl, NVT);
+  Lo = DAG.getConstantFP(
+      APFloat(DAG.EVTToAPFloatSemantics(NVT), APInt(NVT.getSizeInBits(), 0)),
+      dl, NVT);
 
   // Modified the chain - switch anything that used the old chain to use the
   // new one.
@@ -1781,8 +1922,9 @@ void DAGTypeLegalizer::ExpandFloatRes_XINT_TO_FP(SDNode *N, SDValue &Lo,
   // though.
   if (SrcVT.bitsLE(MVT::i32)) {
     // The integer can be represented exactly in an f64.
-    Lo = DAG.getConstantFP(APFloat(DAG.EVTToAPFloatSemantics(NVT),
-                                   APInt(NVT.getSizeInBits(), 0)), dl, NVT);
+    Lo = DAG.getConstantFP(
+        APFloat(DAG.EVTToAPFloatSemantics(NVT), APInt(NVT.getSizeInBits(), 0)),
+        dl, NVT);
     if (Strict) {
       Hi = DAG.getNode(N->getOpcode(), dl, DAG.getVTList(NVT, MVT::Other),
                        {Chain, Src}, Flags);
@@ -1826,9 +1968,9 @@ void DAGTypeLegalizer::ExpandFloatRes_XINT_TO_FP(SDNode *N, SDValue &Lo,
   SrcVT = Src.getValueType();
 
   // x>=0 ? (ppcf128)(iN)x : (ppcf128)(iN)x + 2^N; N=32,64,128.
-  static const uint64_t TwoE32[]  = { 0x41f0000000000000LL, 0 };
-  static const uint64_t TwoE64[]  = { 0x43f0000000000000LL, 0 };
-  static const uint64_t TwoE128[] = { 0x47f0000000000000LL, 0 };
+  static const uint64_t TwoE32[] = {0x41f0000000000000LL, 0};
+  static const uint64_t TwoE64[] = {0x43f0000000000000LL, 0};
+  static const uint64_t TwoE128[] = {0x47f0000000000000LL, 0};
   ArrayRef<uint64_t> Parts;
 
   switch (SrcVT.getSimpleVT().SimpleTy) {
@@ -1855,12 +1997,11 @@ void DAGTypeLegalizer::ExpandFloatRes_XINT_TO_FP(SDNode *N, SDValue &Lo,
     ReplaceValueWith(SDValue(N, 1), Chain);
   } else
     Lo = DAG.getNode(ISD::FADD, dl, VT, Hi, NewLo);
-  Lo = DAG.getSelectCC(dl, Src, DAG.getConstant(0, dl, SrcVT),
-                       Lo, Hi, ISD::SETLT);
+  Lo = DAG.getSelectCC(dl, Src, DAG.getConstant(0, dl, SrcVT), Lo, Hi,
+                       ISD::SETLT);
   GetPairElements(Lo, Lo, Hi);
 }
 
-
 //===----------------------------------------------------------------------===//
 //  Float Operand Expansion
 //===----------------------------------------------------------------------===//
@@ -1881,36 +2022,65 @@ bool DAGTypeLegalizer::ExpandFloatOperand(SDNode *N, unsigned OpNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "ExpandFloatOperand Op #" << OpNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to expand this operator's operand!");
 
-  case ISD::BITCAST:         Res = ExpandOp_BITCAST(N); break;
-  case ISD::BUILD_VECTOR:    Res = ExpandOp_BUILD_VECTOR(N); break;
-  case ISD::EXTRACT_ELEMENT: Res = ExpandOp_EXTRACT_ELEMENT(N); break;
+  case ISD::BITCAST:
+    Res = ExpandOp_BITCAST(N);
+    break;
+  case ISD::BUILD_VECTOR:
+    Res = ExpandOp_BUILD_VECTOR(N);
+    break;
+  case ISD::EXTRACT_ELEMENT:
+    Res = ExpandOp_EXTRACT_ELEMENT(N);
+    break;
 
-  case ISD::BR_CC:      Res = ExpandFloatOp_BR_CC(N); break;
-  case ISD::FCOPYSIGN:  Res = ExpandFloatOp_FCOPYSIGN(N); break;
+  case ISD::BR_CC:
+    Res = ExpandFloatOp_BR_CC(N);
+    break;
+  case ISD::FCOPYSIGN:
+    Res = ExpandFloatOp_FCOPYSIGN(N);
+    break;
   case ISD::STRICT_FP_ROUND:
-  case ISD::FP_ROUND:   Res = ExpandFloatOp_FP_ROUND(N); break;
+  case ISD::FP_ROUND:
+    Res = ExpandFloatOp_FP_ROUND(N);
+    break;
   case ISD::STRICT_FP_TO_SINT:
   case ISD::STRICT_FP_TO_UINT:
   case ISD::FP_TO_SINT:
-  case ISD::FP_TO_UINT: Res = ExpandFloatOp_FP_TO_XINT(N); break;
-  case ISD::LROUND:     Res = ExpandFloatOp_LROUND(N); break;
-  case ISD::LLROUND:    Res = ExpandFloatOp_LLROUND(N); break;
-  case ISD::LRINT:      Res = ExpandFloatOp_LRINT(N); break;
-  case ISD::LLRINT:     Res = ExpandFloatOp_LLRINT(N); break;
-  case ISD::SELECT_CC:  Res = ExpandFloatOp_SELECT_CC(N); break;
+  case ISD::FP_TO_UINT:
+    Res = ExpandFloatOp_FP_TO_XINT(N);
+    break;
+  case ISD::LROUND:
+    Res = ExpandFloatOp_LROUND(N);
+    break;
+  case ISD::LLROUND:
+    Res = ExpandFloatOp_LLROUND(N);
+    break;
+  case ISD::LRINT:
+    Res = ExpandFloatOp_LRINT(N);
+    break;
+  case ISD::LLRINT:
+    Res = ExpandFloatOp_LLRINT(N);
+    break;
+  case ISD::SELECT_CC:
+    Res = ExpandFloatOp_SELECT_CC(N);
+    break;
   case ISD::STRICT_FSETCC:
   case ISD::STRICT_FSETCCS:
-  case ISD::SETCC:      Res = ExpandFloatOp_SETCC(N); break;
-  case ISD::STORE:      Res = ExpandFloatOp_STORE(cast<StoreSDNode>(N),
-                                                  OpNo); break;
+  case ISD::SETCC:
+    Res = ExpandFloatOp_SETCC(N);
+    break;
+  case ISD::STORE:
+    Res = ExpandFloatOp_STORE(cast<StoreSDNode>(N), OpNo);
+    break;
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
@@ -1950,16 +2120,15 @@ void DAGTypeLegalizer::FloatExpandSetCCOperands(SDValue &NewLHS,
                       RHSLo, CCCode, OutputChain, IsSignaling);
   OutputChain = Tmp2->getNumValues() > 1 ? Tmp2.getValue(1) : SDValue();
   Tmp3 = DAG.getNode(ISD::AND, dl, Tmp1.getValueType(), Tmp1, Tmp2);
-  Tmp1 =
-      DAG.getSetCC(dl, getSetCCResultType(LHSHi.getValueType()), LHSHi, RHSHi,
-                   ISD::SETUNE, OutputChain, IsSignaling);
+  Tmp1 = DAG.getSetCC(dl, getSetCCResultType(LHSHi.getValueType()), LHSHi,
+                      RHSHi, ISD::SETUNE, OutputChain, IsSignaling);
   OutputChain = Tmp1->getNumValues() > 1 ? Tmp1.getValue(1) : SDValue();
   Tmp2 = DAG.getSetCC(dl, getSetCCResultType(LHSHi.getValueType()), LHSHi,
                       RHSHi, CCCode, OutputChain, IsSignaling);
   OutputChain = Tmp2->getNumValues() > 1 ? Tmp2.getValue(1) : SDValue();
   Tmp1 = DAG.getNode(ISD::AND, dl, Tmp1.getValueType(), Tmp1, Tmp2);
   NewLHS = DAG.getNode(ISD::OR, dl, Tmp1.getValueType(), Tmp1, Tmp3);
-  NewRHS = SDValue();   // LHS is the result, not a compare.
+  NewRHS = SDValue(); // LHS is the result, not a compare.
   Chain = OutputChain;
 }
 
@@ -1978,8 +2147,9 @@ SDValue DAGTypeLegalizer::ExpandFloatOp_BR_CC(SDNode *N) {
 
   // Update N to have the operands specified.
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                DAG.getCondCode(CCCode), NewLHS, NewRHS,
-                                N->getOperand(4)), 0);
+                                        DAG.getCondCode(CCCode), NewLHS, NewRHS,
+                                        N->getOperand(4)),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_FCOPYSIGN(SDNode *N) {
@@ -1989,8 +2159,8 @@ SDValue DAGTypeLegalizer::ExpandFloatOp_FCOPYSIGN(SDNode *N) {
   GetExpandedFloat(N->getOperand(1), Lo, Hi);
   // The ppcf128 value is providing only the sign; take it from the
   // higher-order double (which must have the larger magnitude).
-  return DAG.getNode(ISD::FCOPYSIGN, SDLoc(N),
-                     N->getValueType(0), N->getOperand(0), Hi);
+  return DAG.getNode(ISD::FCOPYSIGN, SDLoc(N), N->getValueType(0),
+                     N->getOperand(0), Hi);
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_FP_ROUND(SDNode *N) {
@@ -2002,8 +2172,8 @@ SDValue DAGTypeLegalizer::ExpandFloatOp_FP_ROUND(SDNode *N) {
 
   if (!IsStrict)
     // Round it the rest of the way (e.g. to f32) if needed.
-    return DAG.getNode(ISD::FP_ROUND, SDLoc(N),
-                       N->getValueType(0), Hi, N->getOperand(1));
+    return DAG.getNode(ISD::FP_ROUND, SDLoc(N), N->getValueType(0), Hi,
+                       N->getOperand(1));
 
   // Eliminate the node if the input float type is the same as the output float
   // type.
@@ -2061,9 +2231,10 @@ SDValue DAGTypeLegalizer::ExpandFloatOp_SELECT_CC(SDNode *N) {
   }
 
   // Update N to have the operands specified.
-  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS,
-                                N->getOperand(2), N->getOperand(3),
-                                DAG.getCondCode(CCCode)), 0);
+  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS, N->getOperand(2),
+                                        N->getOperand(3),
+                                        DAG.getCondCode(CCCode)),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_SETCC(SDNode *N) {
@@ -2108,60 +2279,60 @@ SDValue DAGTypeLegalizer::ExpandFloatOp_STORE(SDNode *N, unsigned OpNo) {
   SDValue Lo, Hi;
   GetExpandedOp(ST->getValue(), Lo, Hi);
 
-  return DAG.getTruncStore(Chain, SDLoc(N), Hi, Ptr,
-                           ST->getMemoryVT(), ST->getMemOperand());
+  return DAG.getTruncStore(Chain, SDLoc(N), Hi, Ptr, ST->getMemoryVT(),
+                           ST->getMemOperand());
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_LROUND(SDNode *N) {
   EVT RVT = N->getValueType(0);
   EVT RetVT = N->getOperand(0).getValueType();
   TargetLowering::MakeLibCallOptions CallOptions;
-  return TLI.makeLibCall(DAG, GetFPLibCall(RetVT,
-                                           RTLIB::LROUND_F32,
-                                           RTLIB::LROUND_F64,
-                                           RTLIB::LROUND_F80,
-                                           RTLIB::LROUND_F128,
-                                           RTLIB::LROUND_PPCF128),
-                         RVT, N->getOperand(0), CallOptions, SDLoc(N)).first;
+  return TLI
+      .makeLibCall(DAG,
+                   GetFPLibCall(RetVT, RTLIB::LROUND_F32, RTLIB::LROUND_F64,
+                                RTLIB::LROUND_F80, RTLIB::LROUND_F128,
+                                RTLIB::LROUND_PPCF128),
+                   RVT, N->getOperand(0), CallOptions, SDLoc(N))
+      .first;
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_LLROUND(SDNode *N) {
   EVT RVT = N->getValueType(0);
   EVT RetVT = N->getOperand(0).getValueType();
   TargetLowering::MakeLibCallOptions CallOptions;
-  return TLI.makeLibCall(DAG, GetFPLibCall(RetVT,
-                                           RTLIB::LLROUND_F32,
-                                           RTLIB::LLROUND_F64,
-                                           RTLIB::LLROUND_F80,
-                                           RTLIB::LLROUND_F128,
-                                           RTLIB::LLROUND_PPCF128),
-                         RVT, N->getOperand(0), CallOptions, SDLoc(N)).first;
+  return TLI
+      .makeLibCall(DAG,
+                   GetFPLibCall(RetVT, RTLIB::LLROUND_F32, RTLIB::LLROUND_F64,
+                                RTLIB::LLROUND_F80, RTLIB::LLROUND_F128,
+                                RTLIB::LLROUND_PPCF128),
+                   RVT, N->getOperand(0), CallOptions, SDLoc(N))
+      .first;
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_LRINT(SDNode *N) {
   EVT RVT = N->getValueType(0);
   EVT RetVT = N->getOperand(0).getValueType();
   TargetLowering::MakeLibCallOptions CallOptions;
-  return TLI.makeLibCall(DAG, GetFPLibCall(RetVT,
-                                           RTLIB::LRINT_F32,
-                                           RTLIB::LRINT_F64,
-                                           RTLIB::LRINT_F80,
-                                           RTLIB::LRINT_F128,
-                                           RTLIB::LRINT_PPCF128),
-                         RVT, N->getOperand(0), CallOptions, SDLoc(N)).first;
+  return TLI
+      .makeLibCall(DAG,
+                   GetFPLibCall(RetVT, RTLIB::LRINT_F32, RTLIB::LRINT_F64,
+                                RTLIB::LRINT_F80, RTLIB::LRINT_F128,
+                                RTLIB::LRINT_PPCF128),
+                   RVT, N->getOperand(0), CallOptions, SDLoc(N))
+      .first;
 }
 
 SDValue DAGTypeLegalizer::ExpandFloatOp_LLRINT(SDNode *N) {
   EVT RVT = N->getValueType(0);
   EVT RetVT = N->getOperand(0).getValueType();
   TargetLowering::MakeLibCallOptions CallOptions;
-  return TLI.makeLibCall(DAG, GetFPLibCall(RetVT,
-                                           RTLIB::LLRINT_F32,
-                                           RTLIB::LLRINT_F64,
-                                           RTLIB::LLRINT_F80,
-                                           RTLIB::LLRINT_F128,
-                                           RTLIB::LLRINT_PPCF128),
-                         RVT, N->getOperand(0), CallOptions, SDLoc(N)).first;
+  return TLI
+      .makeLibCall(DAG,
+                   GetFPLibCall(RetVT, RTLIB::LLRINT_F32, RTLIB::LLRINT_F64,
+                                RTLIB::LLRINT_F80, RTLIB::LLRINT_F128,
+                                RTLIB::LLRINT_PPCF128),
+                   RVT, N->getOperand(0), CallOptions, SDLoc(N))
+      .first;
 }
 
 //===----------------------------------------------------------------------===//
@@ -2199,24 +2370,40 @@ bool DAGTypeLegalizer::PromoteFloatOperand(SDNode *N, unsigned OpNo) {
   // promotion-requiring floating point result have their operands legalized as
   // a part of PromoteFloatResult.
   switch (N->getOpcode()) {
-    default:
-  #ifndef NDEBUG
-      dbgs() << "PromoteFloatOperand Op #" << OpNo << ": ";
-      N->dump(&DAG); dbgs() << "\n";
-  #endif
-      report_fatal_error("Do not know how to promote this operator's operand!");
-
-    case ISD::BITCAST:    R = PromoteFloatOp_BITCAST(N, OpNo); break;
-    case ISD::FCOPYSIGN:  R = PromoteFloatOp_FCOPYSIGN(N, OpNo); break;
-    case ISD::FP_TO_SINT:
-    case ISD::FP_TO_UINT: R = PromoteFloatOp_FP_TO_XINT(N, OpNo); break;
-    case ISD::FP_TO_SINT_SAT:
-    case ISD::FP_TO_UINT_SAT:
-                          R = PromoteFloatOp_FP_TO_XINT_SAT(N, OpNo); break;
-    case ISD::FP_EXTEND:  R = PromoteFloatOp_FP_EXTEND(N, OpNo); break;
-    case ISD::SELECT_CC:  R = PromoteFloatOp_SELECT_CC(N, OpNo); break;
-    case ISD::SETCC:      R = PromoteFloatOp_SETCC(N, OpNo); break;
-    case ISD::STORE:      R = PromoteFloatOp_STORE(N, OpNo); break;
+  default:
+#ifndef NDEBUG
+    dbgs() << "PromoteFloatOperand Op #" << OpNo << ": ";
+    N->dump(&DAG);
+    dbgs() << "\n";
+#endif
+    report_fatal_error("Do not know how to promote this operator's operand!");
+
+  case ISD::BITCAST:
+    R = PromoteFloatOp_BITCAST(N, OpNo);
+    break;
+  case ISD::FCOPYSIGN:
+    R = PromoteFloatOp_FCOPYSIGN(N, OpNo);
+    break;
+  case ISD::FP_TO_SINT:
+  case ISD::FP_TO_UINT:
+    R = PromoteFloatOp_FP_TO_XINT(N, OpNo);
+    break;
+  case ISD::FP_TO_SINT_SAT:
+  case ISD::FP_TO_UINT_SAT:
+    R = PromoteFloatOp_FP_TO_XINT_SAT(N, OpNo);
+    break;
+  case ISD::FP_EXTEND:
+    R = PromoteFloatOp_FP_EXTEND(N, OpNo);
+    break;
+  case ISD::SELECT_CC:
+    R = PromoteFloatOp_SELECT_CC(N, OpNo);
+    break;
+  case ISD::SETCC:
+    R = PromoteFloatOp_SETCC(N, OpNo);
+    break;
+  case ISD::STORE:
+    R = PromoteFloatOp_STORE(N, OpNo);
+    break;
   }
 
   if (R.getNode())
@@ -2243,7 +2430,7 @@ SDValue DAGTypeLegalizer::PromoteFloatOp_BITCAST(SDNode *N, unsigned OpNo) {
 // Promote Operand 1 of FCOPYSIGN.  Operand 0 ought to be handled by
 // PromoteFloatRes_FCOPYSIGN.
 SDValue DAGTypeLegalizer::PromoteFloatOp_FCOPYSIGN(SDNode *N, unsigned OpNo) {
-  assert (OpNo == 1 && "Only Operand 1 must need promotion here");
+  assert(OpNo == 1 && "Only Operand 1 must need promotion here");
   SDValue Op1 = GetPromotedFloat(N->getOperand(1));
 
   return DAG.getNode(N->getOpcode(), SDLoc(N), N->getValueType(0),
@@ -2282,9 +2469,8 @@ SDValue DAGTypeLegalizer::PromoteFloatOp_SELECT_CC(SDNode *N, unsigned OpNo) {
   SDValue LHS = GetPromotedFloat(N->getOperand(0));
   SDValue RHS = GetPromotedFloat(N->getOperand(1));
 
-  return DAG.getNode(ISD::SELECT_CC, SDLoc(N), N->getValueType(0),
-                     LHS, RHS, N->getOperand(2), N->getOperand(3),
-                     N->getOperand(4));
+  return DAG.getNode(ISD::SELECT_CC, SDLoc(N), N->getValueType(0), LHS, RHS,
+                     N->getOperand(2), N->getOperand(3), N->getOperand(4));
 }
 
 // Construct a SETCC that compares the promoted values and sets the conditional
@@ -2296,7 +2482,6 @@ SDValue DAGTypeLegalizer::PromoteFloatOp_SETCC(SDNode *N, unsigned OpNo) {
   ISD::CondCode CCCode = cast<CondCodeSDNode>(N->getOperand(2))->get();
 
   return DAG.getSetCC(SDLoc(N), VT, Op0, Op1, CCCode);
-
 }
 
 // Lower the promoted Float down to the integer value of same size and construct
@@ -2311,8 +2496,8 @@ SDValue DAGTypeLegalizer::PromoteFloatOp_STORE(SDNode *N, unsigned OpNo) {
   EVT IVT = EVT::getIntegerVT(*DAG.getContext(), VT.getSizeInBits());
 
   SDValue NewVal;
-  NewVal = DAG.getNode(GetPromotionOpcode(Promoted.getValueType(), VT), DL,
-                       IVT, Promoted);
+  NewVal = DAG.getNode(GetPromotionOpcode(Promoted.getValueType(), VT), DL, IVT,
+                       Promoted);
 
   return DAG.getStore(ST->getChain(), DL, NewVal, ST->getBasePtr(),
                       ST->getMemOperand());
@@ -2334,85 +2519,117 @@ void DAGTypeLegalizer::PromoteFloatResult(SDNode *N, unsigned ResNo) {
   }
 
   switch (N->getOpcode()) {
-    // These opcodes cannot appear if promotion of FP16 is done in the backend
-    // instead of Clang
-    case ISD::FP16_TO_FP:
-    case ISD::FP_TO_FP16:
-    default:
+  // These opcodes cannot appear if promotion of FP16 is done in the backend
+  // instead of Clang
+  case ISD::FP16_TO_FP:
+  case ISD::FP_TO_FP16:
+  default:
 #ifndef NDEBUG
-      dbgs() << "PromoteFloatResult #" << ResNo << ": ";
-      N->dump(&DAG); dbgs() << "\n";
+    dbgs() << "PromoteFloatResult #" << ResNo << ": ";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
-      report_fatal_error("Do not know how to promote this operator's result!");
-
-    case ISD::BITCAST:    R = PromoteFloatRes_BITCAST(N); break;
-    case ISD::ConstantFP: R = PromoteFloatRes_ConstantFP(N); break;
-    case ISD::EXTRACT_VECTOR_ELT:
-                          R = PromoteFloatRes_EXTRACT_VECTOR_ELT(N); break;
-    case ISD::FCOPYSIGN:  R = PromoteFloatRes_FCOPYSIGN(N); break;
-
-    // Unary FP Operations
-    case ISD::FABS:
-    case ISD::FCBRT:
-    case ISD::FCEIL:
-    case ISD::FCOS:
-    case ISD::FEXP:
-    case ISD::FEXP2:
-    case ISD::FEXP10:
-    case ISD::FFLOOR:
-    case ISD::FLOG:
-    case ISD::FLOG2:
-    case ISD::FLOG10:
-    case ISD::FNEARBYINT:
-    case ISD::FNEG:
-    case ISD::FRINT:
-    case ISD::FROUND:
-    case ISD::FROUNDEVEN:
-    case ISD::FSIN:
-    case ISD::FSQRT:
-    case ISD::FTRUNC:
-    case ISD::FCANONICALIZE: R = PromoteFloatRes_UnaryOp(N); break;
-
-    // Binary FP Operations
-    case ISD::FADD:
-    case ISD::FDIV:
-    case ISD::FMAXIMUM:
-    case ISD::FMINIMUM:
-    case ISD::FMAXNUM:
-    case ISD::FMINNUM:
-    case ISD::FMUL:
-    case ISD::FPOW:
-    case ISD::FREM:
-    case ISD::FSUB:       R = PromoteFloatRes_BinOp(N); break;
-
-    case ISD::FMA:        // FMA is same as FMAD
-    case ISD::FMAD:       R = PromoteFloatRes_FMAD(N); break;
-
-    case ISD::FPOWI:
-    case ISD::FLDEXP:     R = PromoteFloatRes_ExpOp(N); break;
-    case ISD::FFREXP:     R = PromoteFloatRes_FFREXP(N); break;
-
-    case ISD::FP_ROUND:   R = PromoteFloatRes_FP_ROUND(N); break;
-    case ISD::LOAD:       R = PromoteFloatRes_LOAD(N); break;
-    case ISD::SELECT:     R = PromoteFloatRes_SELECT(N); break;
-    case ISD::SELECT_CC:  R = PromoteFloatRes_SELECT_CC(N); break;
-
-    case ISD::SINT_TO_FP:
-    case ISD::UINT_TO_FP: R = PromoteFloatRes_XINT_TO_FP(N); break;
-    case ISD::UNDEF:      R = PromoteFloatRes_UNDEF(N); break;
-    case ISD::ATOMIC_SWAP: R = BitcastToInt_ATOMIC_SWAP(N); break;
-    case ISD::VECREDUCE_FADD:
-    case ISD::VECREDUCE_FMUL:
-    case ISD::VECREDUCE_FMIN:
-    case ISD::VECREDUCE_FMAX:
-    case ISD::VECREDUCE_FMAXIMUM:
-    case ISD::VECREDUCE_FMINIMUM:
-      R = PromoteFloatRes_VECREDUCE(N);
-      break;
-    case ISD::VECREDUCE_SEQ_FADD:
-    case ISD::VECREDUCE_SEQ_FMUL:
-      R = PromoteFloatRes_VECREDUCE_SEQ(N);
-      break;
+    report_fatal_error("Do not know how to promote this operator's result!");
+
+  case ISD::BITCAST:
+    R = PromoteFloatRes_BITCAST(N);
+    break;
+  case ISD::ConstantFP:
+    R = PromoteFloatRes_ConstantFP(N);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    R = PromoteFloatRes_EXTRACT_VECTOR_ELT(N);
+    break;
+  case ISD::FCOPYSIGN:
+    R = PromoteFloatRes_FCOPYSIGN(N);
+    break;
+
+  // Unary FP Operations
+  case ISD::FABS:
+  case ISD::FCBRT:
+  case ISD::FCEIL:
+  case ISD::FCOS:
+  case ISD::FEXP:
+  case ISD::FEXP2:
+  case ISD::FEXP10:
+  case ISD::FFLOOR:
+  case ISD::FLOG:
+  case ISD::FLOG2:
+  case ISD::FLOG10:
+  case ISD::FNEARBYINT:
+  case ISD::FNEG:
+  case ISD::FRINT:
+  case ISD::FROUND:
+  case ISD::FROUNDEVEN:
+  case ISD::FSIN:
+  case ISD::FSQRT:
+  case ISD::FTRUNC:
+  case ISD::FCANONICALIZE:
+    R = PromoteFloatRes_UnaryOp(N);
+    break;
+
+  // Binary FP Operations
+  case ISD::FADD:
+  case ISD::FDIV:
+  case ISD::FMAXIMUM:
+  case ISD::FMINIMUM:
+  case ISD::FMAXNUM:
+  case ISD::FMINNUM:
+  case ISD::FMUL:
+  case ISD::FPOW:
+  case ISD::FREM:
+  case ISD::FSUB:
+    R = PromoteFloatRes_BinOp(N);
+    break;
+
+  case ISD::FMA: // FMA is same as FMAD
+  case ISD::FMAD:
+    R = PromoteFloatRes_FMAD(N);
+    break;
+
+  case ISD::FPOWI:
+  case ISD::FLDEXP:
+    R = PromoteFloatRes_ExpOp(N);
+    break;
+  case ISD::FFREXP:
+    R = PromoteFloatRes_FFREXP(N);
+    break;
+
+  case ISD::FP_ROUND:
+    R = PromoteFloatRes_FP_ROUND(N);
+    break;
+  case ISD::LOAD:
+    R = PromoteFloatRes_LOAD(N);
+    break;
+  case ISD::SELECT:
+    R = PromoteFloatRes_SELECT(N);
+    break;
+  case ISD::SELECT_CC:
+    R = PromoteFloatRes_SELECT_CC(N);
+    break;
+
+  case ISD::SINT_TO_FP:
+  case ISD::UINT_TO_FP:
+    R = PromoteFloatRes_XINT_TO_FP(N);
+    break;
+  case ISD::UNDEF:
+    R = PromoteFloatRes_UNDEF(N);
+    break;
+  case ISD::ATOMIC_SWAP:
+    R = BitcastToInt_ATOMIC_SWAP(N);
+    break;
+  case ISD::VECREDUCE_FADD:
+  case ISD::VECREDUCE_FMUL:
+  case ISD::VECREDUCE_FMIN:
+  case ISD::VECREDUCE_FMAX:
+  case ISD::VECREDUCE_FMAXIMUM:
+  case ISD::VECREDUCE_FMINIMUM:
+    R = PromoteFloatRes_VECREDUCE(N);
+    break;
+  case ISD::VECREDUCE_SEQ_FADD:
+  case ISD::VECREDUCE_SEQ_FMUL:
+    R = PromoteFloatRes_VECREDUCE_SEQ(N);
+    break;
   }
 
   if (R.getNode())
@@ -2442,8 +2659,7 @@ SDValue DAGTypeLegalizer::PromoteFloatRes_ConstantFP(SDNode *N) {
 
   // Get the (bit-cast) APInt of the APFloat and build an integer constant
   EVT IVT = EVT::getIntegerVT(*DAG.getContext(), VT.getSizeInBits());
-  SDValue C = DAG.getConstant(CFPNode->getValueAPF().bitcastToAPInt(), DL,
-                              IVT);
+  SDValue C = DAG.getConstant(CFPNode->getValueAPF().bitcastToAPInt(), DL, IVT);
 
   // Convert the Constant to the desired FP type
   // FIXME We might be able to do the conversion during compilation and get rid
@@ -2470,7 +2686,8 @@ SDValue DAGTypeLegalizer::PromoteFloatRes_EXTRACT_VECTOR_ELT(SDNode *N) {
     uint64_t IdxVal = cast<ConstantSDNode>(Idx)->getZExtValue();
 
     switch (getTypeAction(VecVT)) {
-    default: break;
+    default:
+      break;
     case TargetLowering::TypeScalarizeVector: {
       SDValue Res = GetScalarizedVector(N->getOperand(0));
       ReplaceValueWith(SDValue(N, 0), Res);
@@ -2491,13 +2708,12 @@ SDValue DAGTypeLegalizer::PromoteFloatRes_EXTRACT_VECTOR_ELT(SDNode *N) {
       if (IdxVal < LoElts)
         Res = DAG.getNode(N->getOpcode(), DL, EltVT, Lo, Idx);
       else
-        Res = DAG.getNode(N->getOpcode(), DL, EltVT, Hi,
-                          DAG.getConstant(IdxVal - LoElts, DL,
-                                          Idx.getValueType()));
+        Res = DAG.getNode(
+            N->getOpcode(), DL, EltVT, Hi,
+            DAG.getConstant(IdxVal - LoElts, DL, Idx.getValueType()));
       ReplaceValueWith(SDValue(N, 0), Res);
       return SDValue();
     }
-
     }
   }
 
@@ -2506,8 +2722,8 @@ SDValue DAGTypeLegalizer::PromoteFloatRes_EXTRACT_VECTOR_ELT(SDNode *N) {
   EVT IVT = NewOp.getValueType().getVectorElementType();
 
   // Extract the element as an (bit-cast) integer value
-  SDValue NewVal = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, IVT,
-                               NewOp, N->getOperand(1));
+  SDValue NewVal =
+      DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, IVT, NewOp, N->getOperand(1));
 
   // Convert the element to the desired FP type
   EVT VT = N->getValueType(0);
@@ -2652,8 +2868,8 @@ SDValue DAGTypeLegalizer::PromoteFloatRes_XINT_TO_FP(SDNode *N) {
 }
 
 SDValue DAGTypeLegalizer::PromoteFloatRes_UNDEF(SDNode *N) {
-  return DAG.getUNDEF(TLI.getTypeToTransformTo(*DAG.getContext(),
-                                               N->getValueType(0)));
+  return DAG.getUNDEF(
+      TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)));
 }
 
 SDValue DAGTypeLegalizer::PromoteFloatRes_VECREDUCE(SDNode *N) {
@@ -2679,18 +2895,15 @@ SDValue DAGTypeLegalizer::BitcastToInt_ATOMIC_SWAP(SDNode *N) {
   SDValue CastVal = BitConvertToInteger(AM->getVal());
   EVT CastVT = CastVal.getValueType();
 
-  SDValue NewAtomic
-    = DAG.getAtomic(ISD::ATOMIC_SWAP, SL, CastVT,
-                    DAG.getVTList(CastVT, MVT::Other),
-                    { AM->getChain(), AM->getBasePtr(), CastVal },
-                    AM->getMemOperand());
+  SDValue NewAtomic = DAG.getAtomic(
+      ISD::ATOMIC_SWAP, SL, CastVT, DAG.getVTList(CastVT, MVT::Other),
+      {AM->getChain(), AM->getBasePtr(), CastVal}, AM->getMemOperand());
 
   SDValue Result = NewAtomic;
 
   if (getTypeAction(VT) == TargetLowering::TypePromoteFloat) {
     EVT NFPVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
-    Result = DAG.getNode(GetPromotionOpcode(VT, NFPVT), SL, NFPVT,
-                                     NewAtomic);
+    Result = DAG.getNode(GetPromotionOpcode(VT, NFPVT), SL, NFPVT, NewAtomic);
   }
 
   // Legalize the chain result by replacing uses of the old value chain with the
@@ -2698,7 +2911,6 @@ SDValue DAGTypeLegalizer::BitcastToInt_ATOMIC_SWAP(SDNode *N) {
   ReplaceValueWith(SDValue(N, 1), NewAtomic.getValue(1));
 
   return Result;
-
 }
 
 //===----------------------------------------------------------------------===//
@@ -2720,18 +2932,28 @@ void DAGTypeLegalizer::SoftPromoteHalfResult(SDNode *N, unsigned ResNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "SoftPromoteHalfResult #" << ResNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to soft promote this operator's "
                        "result!");
 
-  case ISD::BITCAST:    R = SoftPromoteHalfRes_BITCAST(N); break;
-  case ISD::ConstantFP: R = SoftPromoteHalfRes_ConstantFP(N); break;
+  case ISD::BITCAST:
+    R = SoftPromoteHalfRes_BITCAST(N);
+    break;
+  case ISD::ConstantFP:
+    R = SoftPromoteHalfRes_ConstantFP(N);
+    break;
   case ISD::EXTRACT_VECTOR_ELT:
-    R = SoftPromoteHalfRes_EXTRACT_VECTOR_ELT(N); break;
-  case ISD::FCOPYSIGN:  R = SoftPromoteHalfRes_FCOPYSIGN(N); break;
+    R = SoftPromoteHalfRes_EXTRACT_VECTOR_ELT(N);
+    break;
+  case ISD::FCOPYSIGN:
+    R = SoftPromoteHalfRes_FCOPYSIGN(N);
+    break;
   case ISD::STRICT_FP_ROUND:
-  case ISD::FP_ROUND:   R = SoftPromoteHalfRes_FP_ROUND(N); break;
+  case ISD::FP_ROUND:
+    R = SoftPromoteHalfRes_FP_ROUND(N);
+    break;
 
   // Unary FP Operations
   case ISD::FABS:
@@ -2754,7 +2976,9 @@ void DAGTypeLegalizer::SoftPromoteHalfResult(SDNode *N, unsigned ResNo) {
   case ISD::FSIN:
   case ISD::FSQRT:
   case ISD::FTRUNC:
-  case ISD::FCANONICALIZE: R = SoftPromoteHalfRes_UnaryOp(N); break;
+  case ISD::FCANONICALIZE:
+    R = SoftPromoteHalfRes_UnaryOp(N);
+    break;
 
   // Binary FP Operations
   case ISD::FADD:
@@ -2766,21 +2990,39 @@ void DAGTypeLegalizer::SoftPromoteHalfResult(SDNode *N, unsigned ResNo) {
   case ISD::FMUL:
   case ISD::FPOW:
   case ISD::FREM:
-  case ISD::FSUB:        R = SoftPromoteHalfRes_BinOp(N); break;
+  case ISD::FSUB:
+    R = SoftPromoteHalfRes_BinOp(N);
+    break;
 
-  case ISD::FMA:         // FMA is same as FMAD
-  case ISD::FMAD:        R = SoftPromoteHalfRes_FMAD(N); break;
+  case ISD::FMA: // FMA is same as FMAD
+  case ISD::FMAD:
+    R = SoftPromoteHalfRes_FMAD(N);
+    break;
 
   case ISD::FPOWI:
-  case ISD::FLDEXP:      R = SoftPromoteHalfRes_ExpOp(N); break;
+  case ISD::FLDEXP:
+    R = SoftPromoteHalfRes_ExpOp(N);
+    break;
 
-  case ISD::LOAD:        R = SoftPromoteHalfRes_LOAD(N); break;
-  case ISD::SELECT:      R = SoftPromoteHalfRes_SELECT(N); break;
-  case ISD::SELECT_CC:   R = SoftPromoteHalfRes_SELECT_CC(N); break;
+  case ISD::LOAD:
+    R = SoftPromoteHalfRes_LOAD(N);
+    break;
+  case ISD::SELECT:
+    R = SoftPromoteHalfRes_SELECT(N);
+    break;
+  case ISD::SELECT_CC:
+    R = SoftPromoteHalfRes_SELECT_CC(N);
+    break;
   case ISD::SINT_TO_FP:
-  case ISD::UINT_TO_FP:  R = SoftPromoteHalfRes_XINT_TO_FP(N); break;
-  case ISD::UNDEF:       R = SoftPromoteHalfRes_UNDEF(N); break;
-  case ISD::ATOMIC_SWAP: R = BitcastToInt_ATOMIC_SWAP(N); break;
+  case ISD::UINT_TO_FP:
+    R = SoftPromoteHalfRes_XINT_TO_FP(N);
+    break;
+  case ISD::UNDEF:
+    R = SoftPromoteHalfRes_UNDEF(N);
+    break;
+  case ISD::ATOMIC_SWAP:
+    R = BitcastToInt_ATOMIC_SWAP(N);
+    break;
   case ISD::VECREDUCE_FADD:
   case ISD::VECREDUCE_FMUL:
   case ISD::VECREDUCE_FMIN:
@@ -3031,25 +3273,41 @@ bool DAGTypeLegalizer::SoftPromoteHalfOperand(SDNode *N, unsigned OpNo) {
   // operands legalized as a part of PromoteFloatResult.
   switch (N->getOpcode()) {
   default:
-  #ifndef NDEBUG
+#ifndef NDEBUG
     dbgs() << "SoftPromoteHalfOperand Op #" << OpNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
-  #endif
+    N->dump(&DAG);
+    dbgs() << "\n";
+#endif
     report_fatal_error("Do not know how to soft promote this operator's "
                        "operand!");
 
-  case ISD::BITCAST:    Res = SoftPromoteHalfOp_BITCAST(N); break;
-  case ISD::FCOPYSIGN:  Res = SoftPromoteHalfOp_FCOPYSIGN(N, OpNo); break;
+  case ISD::BITCAST:
+    Res = SoftPromoteHalfOp_BITCAST(N);
+    break;
+  case ISD::FCOPYSIGN:
+    Res = SoftPromoteHalfOp_FCOPYSIGN(N, OpNo);
+    break;
   case ISD::FP_TO_SINT:
-  case ISD::FP_TO_UINT: Res = SoftPromoteHalfOp_FP_TO_XINT(N); break;
+  case ISD::FP_TO_UINT:
+    Res = SoftPromoteHalfOp_FP_TO_XINT(N);
+    break;
   case ISD::FP_TO_SINT_SAT:
   case ISD::FP_TO_UINT_SAT:
-                        Res = SoftPromoteHalfOp_FP_TO_XINT_SAT(N); break;
+    Res = SoftPromoteHalfOp_FP_TO_XINT_SAT(N);
+    break;
   case ISD::STRICT_FP_EXTEND:
-  case ISD::FP_EXTEND:  Res = SoftPromoteHalfOp_FP_EXTEND(N); break;
-  case ISD::SELECT_CC:  Res = SoftPromoteHalfOp_SELECT_CC(N, OpNo); break;
-  case ISD::SETCC:      Res = SoftPromoteHalfOp_SETCC(N); break;
-  case ISD::STORE:      Res = SoftPromoteHalfOp_STORE(N, OpNo); break;
+  case ISD::FP_EXTEND:
+    Res = SoftPromoteHalfOp_FP_EXTEND(N);
+    break;
+  case ISD::SELECT_CC:
+    Res = SoftPromoteHalfOp_SELECT_CC(N, OpNo);
+    break;
+  case ISD::SETCC:
+    Res = SoftPromoteHalfOp_SETCC(N);
+    break;
+  case ISD::STORE:
+    Res = SoftPromoteHalfOp_STORE(N, OpNo);
+    break;
   case ISD::STACKMAP:
     Res = SoftPromoteHalfOp_STACKMAP(N, OpNo);
     break;
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
index 888cb187a5b3fb5..4ad11a2fedef80d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeIntegerTypes.cpp
@@ -53,36 +53,64 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "PromoteIntegerResult #" << ResNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to promote this operator!");
-  case ISD::MERGE_VALUES:Res = PromoteIntRes_MERGE_VALUES(N, ResNo); break;
-  case ISD::AssertSext:  Res = PromoteIntRes_AssertSext(N); break;
-  case ISD::AssertZext:  Res = PromoteIntRes_AssertZext(N); break;
-  case ISD::BITCAST:     Res = PromoteIntRes_BITCAST(N); break;
+  case ISD::MERGE_VALUES:
+    Res = PromoteIntRes_MERGE_VALUES(N, ResNo);
+    break;
+  case ISD::AssertSext:
+    Res = PromoteIntRes_AssertSext(N);
+    break;
+  case ISD::AssertZext:
+    Res = PromoteIntRes_AssertZext(N);
+    break;
+  case ISD::BITCAST:
+    Res = PromoteIntRes_BITCAST(N);
+    break;
   case ISD::VP_BITREVERSE:
-  case ISD::BITREVERSE:  Res = PromoteIntRes_BITREVERSE(N); break;
+  case ISD::BITREVERSE:
+    Res = PromoteIntRes_BITREVERSE(N);
+    break;
   case ISD::VP_BSWAP:
-  case ISD::BSWAP:       Res = PromoteIntRes_BSWAP(N); break;
-  case ISD::BUILD_PAIR:  Res = PromoteIntRes_BUILD_PAIR(N); break;
-  case ISD::Constant:    Res = PromoteIntRes_Constant(N); break;
+  case ISD::BSWAP:
+    Res = PromoteIntRes_BSWAP(N);
+    break;
+  case ISD::BUILD_PAIR:
+    Res = PromoteIntRes_BUILD_PAIR(N);
+    break;
+  case ISD::Constant:
+    Res = PromoteIntRes_Constant(N);
+    break;
   case ISD::VP_CTLZ_ZERO_UNDEF:
   case ISD::VP_CTLZ:
   case ISD::CTLZ_ZERO_UNDEF:
-  case ISD::CTLZ:        Res = PromoteIntRes_CTLZ(N); break;
+  case ISD::CTLZ:
+    Res = PromoteIntRes_CTLZ(N);
+    break;
   case ISD::PARITY:
   case ISD::VP_CTPOP:
-  case ISD::CTPOP:       Res = PromoteIntRes_CTPOP_PARITY(N); break;
+  case ISD::CTPOP:
+    Res = PromoteIntRes_CTPOP_PARITY(N);
+    break;
   case ISD::VP_CTTZ_ZERO_UNDEF:
   case ISD::VP_CTTZ:
   case ISD::CTTZ_ZERO_UNDEF:
-  case ISD::CTTZ:        Res = PromoteIntRes_CTTZ(N); break;
+  case ISD::CTTZ:
+    Res = PromoteIntRes_CTTZ(N);
+    break;
   case ISD::EXTRACT_VECTOR_ELT:
-                         Res = PromoteIntRes_EXTRACT_VECTOR_ELT(N); break;
-  case ISD::LOAD:        Res = PromoteIntRes_LOAD(cast<LoadSDNode>(N)); break;
-  case ISD::MLOAD:       Res = PromoteIntRes_MLOAD(cast<MaskedLoadSDNode>(N));
+    Res = PromoteIntRes_EXTRACT_VECTOR_ELT(N);
     break;
-  case ISD::MGATHER:     Res = PromoteIntRes_MGATHER(cast<MaskedGatherSDNode>(N));
+  case ISD::LOAD:
+    Res = PromoteIntRes_LOAD(cast<LoadSDNode>(N));
+    break;
+  case ISD::MLOAD:
+    Res = PromoteIntRes_MLOAD(cast<MaskedLoadSDNode>(N));
+    break;
+  case ISD::MGATHER:
+    Res = PromoteIntRes_MGATHER(cast<MaskedGatherSDNode>(N));
     break;
   case ISD::SELECT:
   case ISD::VSELECT:
@@ -90,45 +118,74 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::VP_MERGE:
     Res = PromoteIntRes_Select(N);
     break;
-  case ISD::SELECT_CC:   Res = PromoteIntRes_SELECT_CC(N); break;
+  case ISD::SELECT_CC:
+    Res = PromoteIntRes_SELECT_CC(N);
+    break;
   case ISD::STRICT_FSETCC:
   case ISD::STRICT_FSETCCS:
-  case ISD::SETCC:       Res = PromoteIntRes_SETCC(N); break;
+  case ISD::SETCC:
+    Res = PromoteIntRes_SETCC(N);
+    break;
   case ISD::SMIN:
-  case ISD::SMAX:        Res = PromoteIntRes_SExtIntBinOp(N); break;
+  case ISD::SMAX:
+    Res = PromoteIntRes_SExtIntBinOp(N);
+    break;
   case ISD::UMIN:
-  case ISD::UMAX:        Res = PromoteIntRes_UMINUMAX(N); break;
+  case ISD::UMAX:
+    Res = PromoteIntRes_UMINUMAX(N);
+    break;
 
   case ISD::SHL:
-  case ISD::VP_SHL:      Res = PromoteIntRes_SHL(N); break;
+  case ISD::VP_SHL:
+    Res = PromoteIntRes_SHL(N);
+    break;
   case ISD::SIGN_EXTEND_INREG:
-                         Res = PromoteIntRes_SIGN_EXTEND_INREG(N); break;
+    Res = PromoteIntRes_SIGN_EXTEND_INREG(N);
+    break;
   case ISD::SRA:
-  case ISD::VP_ASHR:     Res = PromoteIntRes_SRA(N); break;
+  case ISD::VP_ASHR:
+    Res = PromoteIntRes_SRA(N);
+    break;
   case ISD::SRL:
-  case ISD::VP_LSHR:     Res = PromoteIntRes_SRL(N); break;
+  case ISD::VP_LSHR:
+    Res = PromoteIntRes_SRL(N);
+    break;
   case ISD::VP_TRUNCATE:
-  case ISD::TRUNCATE:    Res = PromoteIntRes_TRUNCATE(N); break;
-  case ISD::UNDEF:       Res = PromoteIntRes_UNDEF(N); break;
-  case ISD::VAARG:       Res = PromoteIntRes_VAARG(N); break;
-  case ISD::VSCALE:      Res = PromoteIntRes_VSCALE(N); break;
+  case ISD::TRUNCATE:
+    Res = PromoteIntRes_TRUNCATE(N);
+    break;
+  case ISD::UNDEF:
+    Res = PromoteIntRes_UNDEF(N);
+    break;
+  case ISD::VAARG:
+    Res = PromoteIntRes_VAARG(N);
+    break;
+  case ISD::VSCALE:
+    Res = PromoteIntRes_VSCALE(N);
+    break;
 
   case ISD::EXTRACT_SUBVECTOR:
-                         Res = PromoteIntRes_EXTRACT_SUBVECTOR(N); break;
+    Res = PromoteIntRes_EXTRACT_SUBVECTOR(N);
+    break;
   case ISD::INSERT_SUBVECTOR:
-                         Res = PromoteIntRes_INSERT_SUBVECTOR(N); break;
+    Res = PromoteIntRes_INSERT_SUBVECTOR(N);
+    break;
   case ISD::VECTOR_REVERSE:
-                         Res = PromoteIntRes_VECTOR_REVERSE(N); break;
+    Res = PromoteIntRes_VECTOR_REVERSE(N);
+    break;
   case ISD::VECTOR_SHUFFLE:
-                         Res = PromoteIntRes_VECTOR_SHUFFLE(N); break;
+    Res = PromoteIntRes_VECTOR_SHUFFLE(N);
+    break;
   case ISD::VECTOR_SPLICE:
-                         Res = PromoteIntRes_VECTOR_SPLICE(N); break;
+    Res = PromoteIntRes_VECTOR_SPLICE(N);
+    break;
   case ISD::VECTOR_INTERLEAVE:
   case ISD::VECTOR_DEINTERLEAVE:
     Res = PromoteIntRes_VECTOR_INTERLEAVE_DEINTERLEAVE(N);
     return;
   case ISD::INSERT_VECTOR_ELT:
-                         Res = PromoteIntRes_INSERT_VECTOR_ELT(N); break;
+    Res = PromoteIntRes_INSERT_VECTOR_ELT(N);
+    break;
   case ISD::BUILD_VECTOR:
     Res = PromoteIntRes_BUILD_VECTOR(N);
     break;
@@ -136,38 +193,49 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::SCALAR_TO_VECTOR:
     Res = PromoteIntRes_ScalarOp(N);
     break;
-  case ISD::STEP_VECTOR: Res = PromoteIntRes_STEP_VECTOR(N); break;
+  case ISD::STEP_VECTOR:
+    Res = PromoteIntRes_STEP_VECTOR(N);
+    break;
   case ISD::CONCAT_VECTORS:
-                         Res = PromoteIntRes_CONCAT_VECTORS(N); break;
+    Res = PromoteIntRes_CONCAT_VECTORS(N);
+    break;
 
   case ISD::ANY_EXTEND_VECTOR_INREG:
   case ISD::SIGN_EXTEND_VECTOR_INREG:
   case ISD::ZERO_EXTEND_VECTOR_INREG:
-                         Res = PromoteIntRes_EXTEND_VECTOR_INREG(N); break;
+    Res = PromoteIntRes_EXTEND_VECTOR_INREG(N);
+    break;
 
   case ISD::SIGN_EXTEND:
   case ISD::VP_SIGN_EXTEND:
   case ISD::ZERO_EXTEND:
   case ISD::VP_ZERO_EXTEND:
-  case ISD::ANY_EXTEND:  Res = PromoteIntRes_INT_EXTEND(N); break;
+  case ISD::ANY_EXTEND:
+    Res = PromoteIntRes_INT_EXTEND(N);
+    break;
 
   case ISD::VP_FP_TO_SINT:
   case ISD::VP_FP_TO_UINT:
   case ISD::STRICT_FP_TO_SINT:
   case ISD::STRICT_FP_TO_UINT:
   case ISD::FP_TO_SINT:
-  case ISD::FP_TO_UINT:  Res = PromoteIntRes_FP_TO_XINT(N); break;
+  case ISD::FP_TO_UINT:
+    Res = PromoteIntRes_FP_TO_XINT(N);
+    break;
 
   case ISD::FP_TO_SINT_SAT:
   case ISD::FP_TO_UINT_SAT:
-                         Res = PromoteIntRes_FP_TO_XINT_SAT(N); break;
+    Res = PromoteIntRes_FP_TO_XINT_SAT(N);
+    break;
 
   case ISD::FP_TO_BF16:
   case ISD::FP_TO_FP16:
     Res = PromoteIntRes_FP_TO_FP16_BF16(N);
     break;
 
-  case ISD::GET_ROUNDING: Res = PromoteIntRes_GET_ROUNDING(N); break;
+  case ISD::GET_ROUNDING:
+    Res = PromoteIntRes_GET_ROUNDING(N);
+    break;
 
   case ISD::AND:
   case ISD::OR:
@@ -180,58 +248,83 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::VP_XOR:
   case ISD::VP_ADD:
   case ISD::VP_SUB:
-  case ISD::VP_MUL:      Res = PromoteIntRes_SimpleIntBinOp(N); break;
+  case ISD::VP_MUL:
+    Res = PromoteIntRes_SimpleIntBinOp(N);
+    break;
 
   case ISD::VP_SMIN:
   case ISD::VP_SMAX:
   case ISD::SDIV:
   case ISD::SREM:
   case ISD::VP_SDIV:
-  case ISD::VP_SREM:     Res = PromoteIntRes_SExtIntBinOp(N); break;
+  case ISD::VP_SREM:
+    Res = PromoteIntRes_SExtIntBinOp(N);
+    break;
 
   case ISD::VP_UMIN:
   case ISD::VP_UMAX:
   case ISD::UDIV:
   case ISD::UREM:
   case ISD::VP_UDIV:
-  case ISD::VP_UREM:     Res = PromoteIntRes_ZExtIntBinOp(N); break;
+  case ISD::VP_UREM:
+    Res = PromoteIntRes_ZExtIntBinOp(N);
+    break;
 
   case ISD::SADDO:
-  case ISD::SSUBO:       Res = PromoteIntRes_SADDSUBO(N, ResNo); break;
+  case ISD::SSUBO:
+    Res = PromoteIntRes_SADDSUBO(N, ResNo);
+    break;
   case ISD::UADDO:
-  case ISD::USUBO:       Res = PromoteIntRes_UADDSUBO(N, ResNo); break;
+  case ISD::USUBO:
+    Res = PromoteIntRes_UADDSUBO(N, ResNo);
+    break;
   case ISD::SMULO:
-  case ISD::UMULO:       Res = PromoteIntRes_XMULO(N, ResNo); break;
+  case ISD::UMULO:
+    Res = PromoteIntRes_XMULO(N, ResNo);
+    break;
 
   case ISD::ADDE:
   case ISD::SUBE:
   case ISD::UADDO_CARRY:
-  case ISD::USUBO_CARRY: Res = PromoteIntRes_UADDSUBO_CARRY(N, ResNo); break;
+  case ISD::USUBO_CARRY:
+    Res = PromoteIntRes_UADDSUBO_CARRY(N, ResNo);
+    break;
 
   case ISD::SADDO_CARRY:
-  case ISD::SSUBO_CARRY: Res = PromoteIntRes_SADDSUBO_CARRY(N, ResNo); break;
+  case ISD::SSUBO_CARRY:
+    Res = PromoteIntRes_SADDSUBO_CARRY(N, ResNo);
+    break;
 
   case ISD::SADDSAT:
   case ISD::UADDSAT:
   case ISD::SSUBSAT:
   case ISD::USUBSAT:
   case ISD::SSHLSAT:
-  case ISD::USHLSAT:     Res = PromoteIntRes_ADDSUBSHLSAT(N); break;
+  case ISD::USHLSAT:
+    Res = PromoteIntRes_ADDSUBSHLSAT(N);
+    break;
 
   case ISD::SMULFIX:
   case ISD::SMULFIXSAT:
   case ISD::UMULFIX:
-  case ISD::UMULFIXSAT:  Res = PromoteIntRes_MULFIX(N); break;
+  case ISD::UMULFIXSAT:
+    Res = PromoteIntRes_MULFIX(N);
+    break;
 
   case ISD::SDIVFIX:
   case ISD::SDIVFIXSAT:
   case ISD::UDIVFIX:
-  case ISD::UDIVFIXSAT:  Res = PromoteIntRes_DIVFIX(N); break;
+  case ISD::UDIVFIXSAT:
+    Res = PromoteIntRes_DIVFIX(N);
+    break;
 
-  case ISD::ABS:         Res = PromoteIntRes_ABS(N); break;
+  case ISD::ABS:
+    Res = PromoteIntRes_ABS(N);
+    break;
 
   case ISD::ATOMIC_LOAD:
-    Res = PromoteIntRes_Atomic0(cast<AtomicSDNode>(N)); break;
+    Res = PromoteIntRes_Atomic0(cast<AtomicSDNode>(N));
+    break;
 
   case ISD::ATOMIC_LOAD_ADD:
   case ISD::ATOMIC_LOAD_SUB:
@@ -245,7 +338,8 @@ void DAGTypeLegalizer::PromoteIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::ATOMIC_LOAD_UMIN:
   case ISD::ATOMIC_LOAD_UMAX:
   case ISD::ATOMIC_SWAP:
-    Res = PromoteIntRes_Atomic1(cast<AtomicSDNode>(N)); break;
+    Res = PromoteIntRes_Atomic1(cast<AtomicSDNode>(N));
+    break;
 
   case ISD::ATOMIC_CMP_SWAP:
   case ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS:
@@ -317,23 +411,22 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MERGE_VALUES(SDNode *N,
 SDValue DAGTypeLegalizer::PromoteIntRes_AssertSext(SDNode *N) {
   // Sign-extend the new bits, and continue the assertion.
   SDValue Op = SExtPromotedInteger(N->getOperand(0));
-  return DAG.getNode(ISD::AssertSext, SDLoc(N),
-                     Op.getValueType(), Op, N->getOperand(1));
+  return DAG.getNode(ISD::AssertSext, SDLoc(N), Op.getValueType(), Op,
+                     N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_AssertZext(SDNode *N) {
   // Zero the new bits, and continue the assertion.
   SDValue Op = ZExtPromotedInteger(N->getOperand(0));
-  return DAG.getNode(ISD::AssertZext, SDLoc(N),
-                     Op.getValueType(), Op, N->getOperand(1));
+  return DAG.getNode(ISD::AssertZext, SDLoc(N), Op.getValueType(), Op,
+                     N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_Atomic0(AtomicSDNode *N) {
   EVT ResVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
-  SDValue Res = DAG.getAtomic(N->getOpcode(), SDLoc(N),
-                              N->getMemoryVT(), ResVT,
-                              N->getChain(), N->getBasePtr(),
-                              N->getMemOperand());
+  SDValue Res =
+      DAG.getAtomic(N->getOpcode(), SDLoc(N), N->getMemoryVT(), ResVT,
+                    N->getChain(), N->getBasePtr(), N->getMemOperand());
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -342,10 +435,9 @@ SDValue DAGTypeLegalizer::PromoteIntRes_Atomic0(AtomicSDNode *N) {
 
 SDValue DAGTypeLegalizer::PromoteIntRes_Atomic1(AtomicSDNode *N) {
   SDValue Op2 = GetPromotedInteger(N->getOperand(2));
-  SDValue Res = DAG.getAtomic(N->getOpcode(), SDLoc(N),
-                              N->getMemoryVT(),
-                              N->getChain(), N->getBasePtr(),
-                              Op2, N->getMemOperand());
+  SDValue Res =
+      DAG.getAtomic(N->getOpcode(), SDLoc(N), N->getMemoryVT(), N->getChain(),
+                    N->getBasePtr(), Op2, N->getMemOperand());
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -394,9 +486,9 @@ SDValue DAGTypeLegalizer::PromoteIntRes_AtomicCmpSwap(AtomicSDNode *N,
 
   SDVTList VTs =
       DAG.getVTList(Op2.getValueType(), N->getValueType(1), MVT::Other);
-  SDValue Res = DAG.getAtomicCmpSwap(
-      N->getOpcode(), SDLoc(N), N->getMemoryVT(), VTs, N->getChain(),
-      N->getBasePtr(), Op2, Op3, N->getMemOperand());
+  SDValue Res = DAG.getAtomicCmpSwap(N->getOpcode(), SDLoc(N), N->getMemoryVT(),
+                                     VTs, N->getChain(), N->getBasePtr(), Op2,
+                                     Op3, N->getMemOperand());
   // Update the use to N with the newly created Res.
   for (unsigned i = 1, NumResults = N->getNumValues(); i < NumResults; ++i)
     ReplaceValueWith(SDValue(N, i), Res.getValue(i));
@@ -454,10 +546,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BITCAST(SDNode *N) {
       if (DAG.getDataLayout().isBigEndian())
         std::swap(Lo, Hi);
 
-      InOp = DAG.getNode(ISD::ANY_EXTEND, dl,
-                         EVT::getIntegerVT(*DAG.getContext(),
-                                           NOutVT.getSizeInBits()),
-                         JoinIntegers(Lo, Hi));
+      InOp = DAG.getNode(
+          ISD::ANY_EXTEND, dl,
+          EVT::getIntegerVT(*DAG.getContext(), NOutVT.getSizeInBits()),
+          JoinIntegers(Lo, Hi));
       return DAG.getNode(ISD::BITCAST, dl, NOutVT, InOp);
     }
     break;
@@ -465,10 +557,11 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BITCAST(SDNode *N) {
   case TargetLowering::TypeWidenVector:
     // The input is widened to the same size. Convert to the widened value.
     // Make sure that the outgoing value is not a vector, because this would
-    // make us bitcast between two vectors which are legalized in different ways.
+    // make us bitcast between two vectors which are legalized in different
+    // ways.
     if (NOutVT.bitsEq(NInVT) && !NOutVT.isVector()) {
       SDValue Res =
-        DAG.getNode(ISD::BITCAST, dl, NOutVT, GetWidenedVector(InOp));
+          DAG.getNode(ISD::BITCAST, dl, NOutVT, GetWidenedVector(InOp));
 
       // For big endian targets we need to shift the casted value or the
       // interesting bits will end up at the wrong place.
@@ -507,8 +600,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BITCAST(SDNode *N) {
 
 SDValue DAGTypeLegalizer::PromoteIntRes_FREEZE(SDNode *N) {
   SDValue V = GetPromotedInteger(N->getOperand(0));
-  return DAG.getNode(ISD::FREEZE, SDLoc(N),
-                     V.getValueType(), V);
+  return DAG.getNode(ISD::FREEZE, SDLoc(N), V.getValueType(), V);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_BSWAP(SDNode *N) {
@@ -570,10 +662,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BITREVERSE(SDNode *N) {
 SDValue DAGTypeLegalizer::PromoteIntRes_BUILD_PAIR(SDNode *N) {
   // The pair element type may be legal, or may not promote to the same type as
   // the result, for example i14 = BUILD_PAIR (i7, i7).  Handle all cases.
-  return DAG.getNode(ISD::ANY_EXTEND, SDLoc(N),
-                     TLI.getTypeToTransformTo(*DAG.getContext(),
-                     N->getValueType(0)), JoinIntegers(N->getOperand(0),
-                     N->getOperand(1)));
+  return DAG.getNode(
+      ISD::ANY_EXTEND, SDLoc(N),
+      TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)),
+      JoinIntegers(N->getOperand(0), N->getOperand(1)));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_Constant(SDNode *N) {
@@ -583,9 +675,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_Constant(SDNode *N) {
   // Zero extend things like i1, sign extend everything else.  It shouldn't
   // matter in theory which one we pick, but this tends to give better code?
   unsigned Opc = VT.isByteSized() ? ISD::SIGN_EXTEND : ISD::ZERO_EXTEND;
-  SDValue Result = DAG.getNode(Opc, dl,
-                               TLI.getTypeToTransformTo(*DAG.getContext(), VT),
-                               SDValue(N, 0));
+  SDValue Result = DAG.getNode(
+      Opc, dl, TLI.getTypeToTransformTo(*DAG.getContext(), VT), SDValue(N, 0));
   assert(isa<ConstantSDNode>(Result) && "Didn't constant fold ext?");
   return Result;
 }
@@ -698,8 +789,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_EXTRACT_VECTOR_ELT(SDNode *N) {
 
   // If the input also needs to be promoted, do that first so we can get a
   // good idea for the output type.
-  if (TLI.getTypeAction(*DAG.getContext(), Op0.getValueType())
-      == TargetLowering::TypePromoteInteger) {
+  if (TLI.getTypeAction(*DAG.getContext(), Op0.getValueType()) ==
+      TargetLowering::TypePromoteInteger) {
     SDValue In = GetPromotedInteger(Op0);
 
     // If the new type is larger than NVT, use it. We probably won't need to
@@ -746,8 +837,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_FP_TO_XINT(SDNode *N) {
     // use the new one.
     ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
   } else if (NewOpc == ISD::VP_FP_TO_SINT || NewOpc == ISD::VP_FP_TO_UINT) {
-    Res = DAG.getNode(NewOpc, dl, NVT, {N->getOperand(0), N->getOperand(1),
-                      N->getOperand(2)});
+    Res = DAG.getNode(NewOpc, dl, NVT,
+                      {N->getOperand(0), N->getOperand(1), N->getOperand(2)});
   } else {
     Res = DAG.getNode(NewOpc, dl, NVT, N->getOperand(0));
   }
@@ -756,7 +847,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_FP_TO_XINT(SDNode *N) {
   // (eg: because the value being converted is too big), then the result of the
   // original operation was undefined anyway, so the assert is still correct.
   //
-  // NOTE: fp-to-uint to fp-to-sint promotion guarantees zero extend. For example:
+  // NOTE: fp-to-uint to fp-to-sint promotion guarantees zero extend. For
+  // example:
   //   before legalization: fp-to-uint16, 65534. -> 0xfffe
   //   after legalization: fp-to-sint32, 65534. -> 0x0000fffe
   return DAG.getNode((N->getOpcode() == ISD::FP_TO_UINT ||
@@ -800,8 +892,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_INT_EXTEND(SDNode *N) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDLoc dl(N);
 
-  if (getTypeAction(N->getOperand(0).getValueType())
-      == TargetLowering::TypePromoteInteger) {
+  if (getTypeAction(N->getOperand(0).getValueType()) ==
+      TargetLowering::TypePromoteInteger) {
     SDValue Res = GetPromotedInteger(N->getOperand(0));
     assert(Res.getValueType().bitsLE(NVT) && "Extension doesn't make sense!");
 
@@ -833,7 +925,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_LOAD(LoadSDNode *N) {
   assert(ISD::isUNINDEXEDLoad(N) && "Indexed load during type legalization!");
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   ISD::LoadExtType ExtType =
-    ISD::isNON_EXTLoad(N) ? ISD::EXTLOAD : N->getExtensionType();
+      ISD::isNON_EXTLoad(N) ? ISD::EXTLOAD : N->getExtensionType();
   SDLoc dl(N);
   SDValue Res = DAG.getExtLoad(ExtType, dl, NVT, N->getChain(), N->getBasePtr(),
                                N->getMemoryVT(), N->getMemOperand());
@@ -853,11 +945,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MLOAD(MaskedLoadSDNode *N) {
     ExtType = ISD::EXTLOAD;
 
   SDLoc dl(N);
-  SDValue Res = DAG.getMaskedLoad(NVT, dl, N->getChain(), N->getBasePtr(),
-                                  N->getOffset(), N->getMask(), ExtPassThru,
-                                  N->getMemoryVT(), N->getMemOperand(),
-                                  N->getAddressingMode(), ExtType,
-                                  N->isExpandingLoad());
+  SDValue Res = DAG.getMaskedLoad(
+      NVT, dl, N->getChain(), N->getBasePtr(), N->getOffset(), N->getMask(),
+      ExtPassThru, N->getMemoryVT(), N->getMemOperand(), N->getAddressingMode(),
+      ExtType, N->isExpandingLoad());
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -867,7 +958,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MLOAD(MaskedLoadSDNode *N) {
 SDValue DAGTypeLegalizer::PromoteIntRes_MGATHER(MaskedGatherSDNode *N) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDValue ExtPassThru = GetPromotedInteger(N->getPassThru());
-  assert(NVT == ExtPassThru.getValueType() &&
+  assert(
+      NVT == ExtPassThru.getValueType() &&
       "Gather result type and the passThru argument type should be the same");
 
   ISD::LoadExtType ExtType = N->getExtensionType();
@@ -875,12 +967,11 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MGATHER(MaskedGatherSDNode *N) {
     ExtType = ISD::EXTLOAD;
 
   SDLoc dl(N);
-  SDValue Ops[] = {N->getChain(), ExtPassThru, N->getMask(), N->getBasePtr(),
-                   N->getIndex(), N->getScale() };
-  SDValue Res = DAG.getMaskedGather(DAG.getVTList(NVT, MVT::Other),
-                                    N->getMemoryVT(), dl, Ops,
-                                    N->getMemOperand(), N->getIndexType(),
-                                    ExtType);
+  SDValue Ops[] = {N->getChain(),   ExtPassThru,   N->getMask(),
+                   N->getBasePtr(), N->getIndex(), N->getScale()};
+  SDValue Res =
+      DAG.getMaskedGather(DAG.getVTList(NVT, MVT::Other), N->getMemoryVT(), dl,
+                          Ops, N->getMemOperand(), N->getIndexType(), ExtType);
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -894,7 +985,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_Overflow(SDNode *N) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(1));
   EVT VT = N->getValueType(0);
   EVT SVT = getSetCCResultType(VT);
-  SDValue Ops[3] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[3] = {N->getOperand(0), N->getOperand(1)};
   unsigned NumOps = N->getNumOperands();
   assert(NumOps <= 3 && "Too many operands");
   if (NumOps == 3)
@@ -1037,9 +1128,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_MULFIX(SDNode *N) {
                      N->getOperand(2));
 }
 
-static SDValue SaturateWidenedDIVFIX(SDValue V, SDLoc &dl,
-                                     unsigned SatW, bool Signed,
-                                     const TargetLowering &TLI,
+static SDValue SaturateWidenedDIVFIX(SDValue V, SDLoc &dl, unsigned SatW,
+                                     bool Signed, const TargetLowering &TLI,
                                      SelectionDAG &DAG) {
   EVT VT = V.getValueType();
   unsigned VTW = VT.getScalarSizeInBits();
@@ -1047,21 +1137,20 @@ static SDValue SaturateWidenedDIVFIX(SDValue V, SDLoc &dl,
   if (!Signed) {
     // Saturate to the unsigned maximum by getting the minimum of V and the
     // maximum.
-    return DAG.getNode(ISD::UMIN, dl, VT, V,
-                       DAG.getConstant(APInt::getLowBitsSet(VTW, SatW),
-                                       dl, VT));
+    return DAG.getNode(
+        ISD::UMIN, dl, VT, V,
+        DAG.getConstant(APInt::getLowBitsSet(VTW, SatW), dl, VT));
   }
 
   // Saturate to the signed maximum (the low SatW - 1 bits) by taking the
   // signed minimum of it and V.
   V = DAG.getNode(ISD::SMIN, dl, VT, V,
-                  DAG.getConstant(APInt::getLowBitsSet(VTW, SatW - 1),
-                                  dl, VT));
+                  DAG.getConstant(APInt::getLowBitsSet(VTW, SatW - 1), dl, VT));
   // Saturate to the signed minimum (the high SatW + 1 bits) by taking the
   // signed maximum of it and V.
-  V = DAG.getNode(ISD::SMAX, dl, VT, V,
-                  DAG.getConstant(APInt::getHighBitsSet(VTW, VTW - SatW + 1),
-                                  dl, VT));
+  V = DAG.getNode(
+      ISD::SMAX, dl, VT, V,
+      DAG.getConstant(APInt::getHighBitsSet(VTW, VTW - SatW + 1), dl, VT));
   return V;
 }
 
@@ -1070,22 +1159,22 @@ static SDValue earlyExpandDIVFIX(SDNode *N, SDValue LHS, SDValue RHS,
                                  SelectionDAG &DAG, unsigned SatW = 0) {
   EVT VT = LHS.getValueType();
   unsigned VTSize = VT.getScalarSizeInBits();
-  bool Signed = N->getOpcode() == ISD::SDIVFIX ||
-                N->getOpcode() == ISD::SDIVFIXSAT;
-  bool Saturating = N->getOpcode() == ISD::SDIVFIXSAT ||
-                    N->getOpcode() == ISD::UDIVFIXSAT;
+  bool Signed =
+      N->getOpcode() == ISD::SDIVFIX || N->getOpcode() == ISD::SDIVFIXSAT;
+  bool Saturating =
+      N->getOpcode() == ISD::SDIVFIXSAT || N->getOpcode() == ISD::UDIVFIXSAT;
 
   SDLoc dl(N);
   // Widen the types by a factor of two. This is guaranteed to expand, since it
   // will always have enough high bits in the LHS to shift into.
   EVT WideVT = EVT::getIntegerVT(*DAG.getContext(), VTSize * 2);
   if (VT.isVector())
-    WideVT = EVT::getVectorVT(*DAG.getContext(), WideVT,
-                              VT.getVectorElementCount());
+    WideVT =
+        EVT::getVectorVT(*DAG.getContext(), WideVT, VT.getVectorElementCount());
   LHS = DAG.getExtOrTrunc(Signed, LHS, dl, WideVT);
   RHS = DAG.getExtOrTrunc(Signed, RHS, dl, WideVT);
-  SDValue Res = TLI.expandFixedPointDiv(N->getOpcode(), dl, LHS, RHS, Scale,
-                                        DAG);
+  SDValue Res =
+      TLI.expandFixedPointDiv(N->getOpcode(), dl, LHS, RHS, Scale, DAG);
   assert(Res && "Expanding DIVFIX with wide type failed?");
   if (Saturating) {
     // If the caller has told us to saturate at something less, use that width
@@ -1093,8 +1182,8 @@ static SDValue earlyExpandDIVFIX(SDNode *N, SDValue LHS, SDValue RHS,
     // what we just widened!
     assert(SatW <= VTSize &&
            "Tried to saturate to more than the original type?");
-    Res = SaturateWidenedDIVFIX(Res, dl, SatW == 0 ? VTSize : SatW, Signed,
-                                TLI, DAG);
+    Res = SaturateWidenedDIVFIX(Res, dl, SatW == 0 ? VTSize : SatW, Signed, TLI,
+                                DAG);
   }
   return DAG.getZExtOrTrunc(Res, dl, VT);
 }
@@ -1102,10 +1191,10 @@ static SDValue earlyExpandDIVFIX(SDNode *N, SDValue LHS, SDValue RHS,
 SDValue DAGTypeLegalizer::PromoteIntRes_DIVFIX(SDNode *N) {
   SDLoc dl(N);
   SDValue Op1Promoted, Op2Promoted;
-  bool Signed = N->getOpcode() == ISD::SDIVFIX ||
-                N->getOpcode() == ISD::SDIVFIXSAT;
-  bool Saturating = N->getOpcode() == ISD::SDIVFIXSAT ||
-                    N->getOpcode() == ISD::UDIVFIXSAT;
+  bool Signed =
+      N->getOpcode() == ISD::SDIVFIX || N->getOpcode() == ISD::SDIVFIXSAT;
+  bool Saturating =
+      N->getOpcode() == ISD::SDIVFIXSAT || N->getOpcode() == ISD::UDIVFIXSAT;
   if (Signed) {
     Op1Promoted = SExtPromotedInteger(N->getOperand(0));
     Op2Promoted = SExtPromotedInteger(N->getOperand(1));
@@ -1139,18 +1228,17 @@ SDValue DAGTypeLegalizer::PromoteIntRes_DIVFIX(SDNode *N) {
 
   // See if we can perform the division in this type without expanding.
   if (SDValue Res = TLI.expandFixedPointDiv(N->getOpcode(), dl, Op1Promoted,
-                                        Op2Promoted, Scale, DAG)) {
+                                            Op2Promoted, Scale, DAG)) {
     if (Saturating)
-      Res = SaturateWidenedDIVFIX(Res, dl,
-                                  N->getValueType(0).getScalarSizeInBits(),
-                                  Signed, TLI, DAG);
+      Res = SaturateWidenedDIVFIX(
+          Res, dl, N->getValueType(0).getScalarSizeInBits(), Signed, TLI, DAG);
     return Res;
   }
   // If we cannot, expand it to twice the type width. If we are saturating, give
   // it the original width as a saturating width so we don't need to emit
   // two saturations.
   return earlyExpandDIVFIX(N, Op1Promoted, Op2Promoted, Scale, TLI, DAG,
-                            N->getValueType(0).getScalarSizeInBits());
+                           N->getValueType(0).getScalarSizeInBits());
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_SADDSUBO(SDNode *N, unsigned ResNo) {
@@ -1171,8 +1259,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_SADDSUBO(SDNode *N, unsigned ResNo) {
 
   // Calculate the overflow flag: sign extend the arithmetic result from
   // the original type.
-  SDValue Ofl = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, NVT, Res,
-                            DAG.getValueType(OVT));
+  SDValue Ofl =
+      DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, NVT, Res, DAG.getValueType(OVT));
   // Overflowed if and only if this is not equal to Res.
   Ofl = DAG.getSetCC(dl, N->getValueType(1), Ofl, Res, ISD::SETNE);
 
@@ -1198,9 +1286,9 @@ SDValue DAGTypeLegalizer::PromoteIntRes_Select(SDNode *N) {
 SDValue DAGTypeLegalizer::PromoteIntRes_SELECT_CC(SDNode *N) {
   SDValue LHS = GetPromotedInteger(N->getOperand(2));
   SDValue RHS = GetPromotedInteger(N->getOperand(3));
-  return DAG.getNode(ISD::SELECT_CC, SDLoc(N),
-                     LHS.getValueType(), N->getOperand(0),
-                     N->getOperand(1), LHS, RHS, N->getOperand(4));
+  return DAG.getNode(ISD::SELECT_CC, SDLoc(N), LHS.getValueType(),
+                     N->getOperand(0), N->getOperand(1), LHS, RHS,
+                     N->getOperand(4));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_SETCC(SDNode *N) {
@@ -1231,8 +1319,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_SETCC(SDNode *N) {
   SDValue SetCC;
   if (N->isStrictFPOpcode()) {
     SDVTList VTs = DAG.getVTList({SVT, MVT::Other});
-    SDValue Opers[] = {N->getOperand(0), N->getOperand(1),
-                       N->getOperand(2), N->getOperand(3)};
+    SDValue Opers[] = {N->getOperand(0), N->getOperand(1), N->getOperand(2),
+                       N->getOperand(3)};
     SetCC = DAG.getNode(N->getOpcode(), dl, VTs, Opers, N->getFlags());
     // Legalize the chain result - switch anything that used the old chain to
     // use the new one.
@@ -1278,8 +1366,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_SHL(SDNode *N) {
 
 SDValue DAGTypeLegalizer::PromoteIntRes_SIGN_EXTEND_INREG(SDNode *N) {
   SDValue Op = GetPromotedInteger(N->getOperand(0));
-  return DAG.getNode(ISD::SIGN_EXTEND_INREG, SDLoc(N),
-                     Op.getValueType(), Op, N->getOperand(1));
+  return DAG.getNode(ISD::SIGN_EXTEND_INREG, SDLoc(N), Op.getValueType(), Op,
+                     N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_SimpleIntBinOp(SDNode *N) {
@@ -1325,8 +1413,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_UMINUMAX(SDNode *N) {
   // whatever is best for the target.
   SDValue LHS = SExtOrZExtPromotedInteger(N->getOperand(0));
   SDValue RHS = SExtOrZExtPromotedInteger(N->getOperand(1));
-  return DAG.getNode(N->getOpcode(), SDLoc(N),
-                     LHS.getValueType(), LHS, RHS);
+  return DAG.getNode(N->getOpcode(), SDLoc(N), LHS.getValueType(), LHS, RHS);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_SRA(SDNode *N) {
@@ -1470,7 +1557,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_TRUNCATE(SDNode *N) {
   SDLoc dl(N);
 
   switch (getTypeAction(InOp.getValueType())) {
-  default: llvm_unreachable("Unknown type action!");
+  default:
+    llvm_unreachable("Unknown type action!");
   case TargetLowering::TypeLegal:
   case TargetLowering::TypeExpandInteger:
     Res = InOp;
@@ -1654,9 +1742,9 @@ SDValue DAGTypeLegalizer::PromoteIntRes_XMULO(SDNode *N, unsigned ResNo) {
     SDValue Hi =
         DAG.getNode(ISD::SRL, DL, Mul.getValueType(), Mul,
                     DAG.getShiftAmountConstant(Shift, Mul.getValueType(), DL));
-    Overflow = DAG.getSetCC(DL, N->getValueType(1), Hi,
-                            DAG.getConstant(0, DL, Hi.getValueType()),
-                            ISD::SETNE);
+    Overflow =
+        DAG.getSetCC(DL, N->getValueType(1), Hi,
+                     DAG.getConstant(0, DL, Hi.getValueType()), ISD::SETNE);
   } else {
     // Signed overflow occurred if the high part does not sign extend the low.
     SDValue SExt = DAG.getNode(ISD::SIGN_EXTEND_INREG, DL, Mul.getValueType(),
@@ -1675,8 +1763,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_XMULO(SDNode *N, unsigned ResNo) {
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_UNDEF(SDNode *N) {
-  return DAG.getUNDEF(TLI.getTypeToTransformTo(*DAG.getContext(),
-                                               N->getValueType(0)));
+  return DAG.getUNDEF(
+      TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0)));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_VSCALE(SDNode *N) {
@@ -1688,7 +1776,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_VSCALE(SDNode *N) {
 
 SDValue DAGTypeLegalizer::PromoteIntRes_VAARG(SDNode *N) {
   SDValue Chain = N->getOperand(0); // Get the chain.
-  SDValue Ptr = N->getOperand(1); // Get the pointer.
+  SDValue Ptr = N->getOperand(1);   // Get the pointer.
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
 
@@ -1744,24 +1832,41 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
   }
 
   switch (N->getOpcode()) {
-    default:
-  #ifndef NDEBUG
+  default:
+#ifndef NDEBUG
     dbgs() << "PromoteIntegerOperand Op #" << OpNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
-  #endif
+    N->dump(&DAG);
+    dbgs() << "\n";
+#endif
     report_fatal_error("Do not know how to promote this operator's operand!");
 
-  case ISD::ANY_EXTEND:   Res = PromoteIntOp_ANY_EXTEND(N); break;
+  case ISD::ANY_EXTEND:
+    Res = PromoteIntOp_ANY_EXTEND(N);
+    break;
   case ISD::ATOMIC_STORE:
     Res = PromoteIntOp_ATOMIC_STORE(cast<AtomicSDNode>(N));
     break;
-  case ISD::BITCAST:      Res = PromoteIntOp_BITCAST(N); break;
-  case ISD::BR_CC:        Res = PromoteIntOp_BR_CC(N, OpNo); break;
-  case ISD::BRCOND:       Res = PromoteIntOp_BRCOND(N, OpNo); break;
-  case ISD::BUILD_PAIR:   Res = PromoteIntOp_BUILD_PAIR(N); break;
-  case ISD::BUILD_VECTOR: Res = PromoteIntOp_BUILD_VECTOR(N); break;
-  case ISD::CONCAT_VECTORS: Res = PromoteIntOp_CONCAT_VECTORS(N); break;
-  case ISD::EXTRACT_VECTOR_ELT: Res = PromoteIntOp_EXTRACT_VECTOR_ELT(N); break;
+  case ISD::BITCAST:
+    Res = PromoteIntOp_BITCAST(N);
+    break;
+  case ISD::BR_CC:
+    Res = PromoteIntOp_BR_CC(N, OpNo);
+    break;
+  case ISD::BRCOND:
+    Res = PromoteIntOp_BRCOND(N, OpNo);
+    break;
+  case ISD::BUILD_PAIR:
+    Res = PromoteIntOp_BUILD_PAIR(N);
+    break;
+  case ISD::BUILD_VECTOR:
+    Res = PromoteIntOp_BUILD_VECTOR(N);
+    break;
+  case ISD::CONCAT_VECTORS:
+    Res = PromoteIntOp_CONCAT_VECTORS(N);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    Res = PromoteIntOp_EXTRACT_VECTOR_ELT(N);
+    break;
   case ISD::INSERT_VECTOR_ELT:
     Res = PromoteIntOp_INSERT_VECTOR_ELT(N, OpNo);
     break;
@@ -1770,55 +1875,98 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
     Res = PromoteIntOp_ScalarOp(N);
     break;
   case ISD::VSELECT:
-  case ISD::SELECT:       Res = PromoteIntOp_SELECT(N, OpNo); break;
-  case ISD::SELECT_CC:    Res = PromoteIntOp_SELECT_CC(N, OpNo); break;
+  case ISD::SELECT:
+    Res = PromoteIntOp_SELECT(N, OpNo);
+    break;
+  case ISD::SELECT_CC:
+    Res = PromoteIntOp_SELECT_CC(N, OpNo);
+    break;
   case ISD::VP_SETCC:
-  case ISD::SETCC:        Res = PromoteIntOp_SETCC(N, OpNo); break;
-  case ISD::SIGN_EXTEND:  Res = PromoteIntOp_SIGN_EXTEND(N); break;
-  case ISD::VP_SIGN_EXTEND: Res = PromoteIntOp_VP_SIGN_EXTEND(N); break;
+  case ISD::SETCC:
+    Res = PromoteIntOp_SETCC(N, OpNo);
+    break;
+  case ISD::SIGN_EXTEND:
+    Res = PromoteIntOp_SIGN_EXTEND(N);
+    break;
+  case ISD::VP_SIGN_EXTEND:
+    Res = PromoteIntOp_VP_SIGN_EXTEND(N);
+    break;
   case ISD::VP_SINT_TO_FP:
-  case ISD::SINT_TO_FP:   Res = PromoteIntOp_SINT_TO_FP(N); break;
-  case ISD::STRICT_SINT_TO_FP: Res = PromoteIntOp_STRICT_SINT_TO_FP(N); break;
-  case ISD::STORE:        Res = PromoteIntOp_STORE(cast<StoreSDNode>(N),
-                                                   OpNo); break;
-  case ISD::MSTORE:       Res = PromoteIntOp_MSTORE(cast<MaskedStoreSDNode>(N),
-                                                    OpNo); break;
-  case ISD::MLOAD:        Res = PromoteIntOp_MLOAD(cast<MaskedLoadSDNode>(N),
-                                                    OpNo); break;
-  case ISD::MGATHER:  Res = PromoteIntOp_MGATHER(cast<MaskedGatherSDNode>(N),
-                                                 OpNo); break;
-  case ISD::MSCATTER: Res = PromoteIntOp_MSCATTER(cast<MaskedScatterSDNode>(N),
-                                                  OpNo); break;
+  case ISD::SINT_TO_FP:
+    Res = PromoteIntOp_SINT_TO_FP(N);
+    break;
+  case ISD::STRICT_SINT_TO_FP:
+    Res = PromoteIntOp_STRICT_SINT_TO_FP(N);
+    break;
+  case ISD::STORE:
+    Res = PromoteIntOp_STORE(cast<StoreSDNode>(N), OpNo);
+    break;
+  case ISD::MSTORE:
+    Res = PromoteIntOp_MSTORE(cast<MaskedStoreSDNode>(N), OpNo);
+    break;
+  case ISD::MLOAD:
+    Res = PromoteIntOp_MLOAD(cast<MaskedLoadSDNode>(N), OpNo);
+    break;
+  case ISD::MGATHER:
+    Res = PromoteIntOp_MGATHER(cast<MaskedGatherSDNode>(N), OpNo);
+    break;
+  case ISD::MSCATTER:
+    Res = PromoteIntOp_MSCATTER(cast<MaskedScatterSDNode>(N), OpNo);
+    break;
   case ISD::VP_TRUNCATE:
-  case ISD::TRUNCATE:     Res = PromoteIntOp_TRUNCATE(N); break;
+  case ISD::TRUNCATE:
+    Res = PromoteIntOp_TRUNCATE(N);
+    break;
   case ISD::BF16_TO_FP:
   case ISD::FP16_TO_FP:
   case ISD::VP_UINT_TO_FP:
-  case ISD::UINT_TO_FP:   Res = PromoteIntOp_UINT_TO_FP(N); break;
-  case ISD::STRICT_UINT_TO_FP:  Res = PromoteIntOp_STRICT_UINT_TO_FP(N); break;
-  case ISD::ZERO_EXTEND:  Res = PromoteIntOp_ZERO_EXTEND(N); break;
-  case ISD::VP_ZERO_EXTEND: Res = PromoteIntOp_VP_ZERO_EXTEND(N); break;
-  case ISD::EXTRACT_SUBVECTOR: Res = PromoteIntOp_EXTRACT_SUBVECTOR(N); break;
-  case ISD::INSERT_SUBVECTOR: Res = PromoteIntOp_INSERT_SUBVECTOR(N); break;
+  case ISD::UINT_TO_FP:
+    Res = PromoteIntOp_UINT_TO_FP(N);
+    break;
+  case ISD::STRICT_UINT_TO_FP:
+    Res = PromoteIntOp_STRICT_UINT_TO_FP(N);
+    break;
+  case ISD::ZERO_EXTEND:
+    Res = PromoteIntOp_ZERO_EXTEND(N);
+    break;
+  case ISD::VP_ZERO_EXTEND:
+    Res = PromoteIntOp_VP_ZERO_EXTEND(N);
+    break;
+  case ISD::EXTRACT_SUBVECTOR:
+    Res = PromoteIntOp_EXTRACT_SUBVECTOR(N);
+    break;
+  case ISD::INSERT_SUBVECTOR:
+    Res = PromoteIntOp_INSERT_SUBVECTOR(N);
+    break;
 
   case ISD::SHL:
   case ISD::SRA:
   case ISD::SRL:
   case ISD::ROTL:
-  case ISD::ROTR: Res = PromoteIntOp_Shift(N); break;
+  case ISD::ROTR:
+    Res = PromoteIntOp_Shift(N);
+    break;
 
   case ISD::FSHL:
-  case ISD::FSHR: Res = PromoteIntOp_FunnelShift(N); break;
+  case ISD::FSHR:
+    Res = PromoteIntOp_FunnelShift(N);
+    break;
 
   case ISD::SADDO_CARRY:
   case ISD::SSUBO_CARRY:
   case ISD::UADDO_CARRY:
-  case ISD::USUBO_CARRY: Res = PromoteIntOp_ADDSUBO_CARRY(N, OpNo); break;
+  case ISD::USUBO_CARRY:
+    Res = PromoteIntOp_ADDSUBO_CARRY(N, OpNo);
+    break;
 
   case ISD::FRAMEADDR:
-  case ISD::RETURNADDR: Res = PromoteIntOp_FRAMERETURNADDR(N); break;
+  case ISD::RETURNADDR:
+    Res = PromoteIntOp_FRAMERETURNADDR(N);
+    break;
 
-  case ISD::PREFETCH: Res = PromoteIntOp_PREFETCH(N, OpNo); break;
+  case ISD::PREFETCH:
+    Res = PromoteIntOp_PREFETCH(N, OpNo);
+    break;
 
   case ISD::SMULFIX:
   case ISD::SMULFIXSAT:
@@ -1827,11 +1975,15 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
   case ISD::SDIVFIX:
   case ISD::SDIVFIXSAT:
   case ISD::UDIVFIX:
-  case ISD::UDIVFIXSAT: Res = PromoteIntOp_FIX(N); break;
+  case ISD::UDIVFIXSAT:
+    Res = PromoteIntOp_FIX(N);
+    break;
   case ISD::FPOWI:
   case ISD::STRICT_FPOWI:
   case ISD::FLDEXP:
-  case ISD::STRICT_FLDEXP: Res = PromoteIntOp_ExpOp(N); break;
+  case ISD::STRICT_FLDEXP:
+    Res = PromoteIntOp_ExpOp(N);
+    break;
   case ISD::VECREDUCE_ADD:
   case ISD::VECREDUCE_MUL:
   case ISD::VECREDUCE_AND:
@@ -1840,7 +1992,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
   case ISD::VECREDUCE_SMAX:
   case ISD::VECREDUCE_SMIN:
   case ISD::VECREDUCE_UMAX:
-  case ISD::VECREDUCE_UMIN: Res = PromoteIntOp_VECREDUCE(N); break;
+  case ISD::VECREDUCE_UMIN:
+    Res = PromoteIntOp_VECREDUCE(N);
+    break;
   case ISD::VP_REDUCE_ADD:
   case ISD::VP_REDUCE_MUL:
   case ISD::VP_REDUCE_AND:
@@ -1853,7 +2007,9 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
     Res = PromoteIntOp_VP_REDUCE(N, OpNo);
     break;
 
-  case ISD::SET_ROUNDING: Res = PromoteIntOp_SET_ROUNDING(N); break;
+  case ISD::SET_ROUNDING:
+    Res = PromoteIntOp_SET_ROUNDING(N);
+    break;
   case ISD::STACKMAP:
     Res = PromoteIntOp_STACKMAP(N, OpNo);
     break;
@@ -1867,7 +2023,8 @@ bool DAGTypeLegalizer::PromoteIntegerOperand(SDNode *N, unsigned OpNo) {
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
@@ -1914,10 +2071,8 @@ void DAGTypeLegalizer::PromoteSetCCOperands(SDValue &LHS, SDValue &RHS,
     // The target would prefer to promote the comparison operand with sign
     // extension. Honor that unless the promoted values are already zero
     // extended.
-    unsigned OpLEffectiveBits =
-        DAG.computeKnownBits(OpL).countMaxActiveBits();
-    unsigned OpREffectiveBits =
-        DAG.computeKnownBits(OpR).countMaxActiveBits();
+    unsigned OpLEffectiveBits = DAG.computeKnownBits(OpL).countMaxActiveBits();
+    unsigned OpREffectiveBits = DAG.computeKnownBits(OpR).countMaxActiveBits();
     if (OpLEffectiveBits <= LHS.getScalarValueSizeInBits() &&
         OpREffectiveBits <= RHS.getScalarValueSizeInBits()) {
       LHS = OpL;
@@ -1976,8 +2131,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_BR_CC(SDNode *N, unsigned OpNo) {
 
   // The chain (Op#0), CC (#1) and basic block destination (Op#4) are always
   // legal types.
-  return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                N->getOperand(1), LHS, RHS, N->getOperand(4)),
+  return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0), N->getOperand(1),
+                                        LHS, RHS, N->getOperand(4)),
                  0);
 }
 
@@ -1988,8 +2143,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_BRCOND(SDNode *N, unsigned OpNo) {
   SDValue Cond = PromoteTargetBoolean(N->getOperand(1), MVT::Other);
 
   // The chain (Op#0) and basic block destination (Op#2) are always legal types.
-  return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0), Cond,
-                                        N->getOperand(2)), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, N->getOperand(0), Cond, N->getOperand(2)), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_BUILD_PAIR(SDNode *N) {
@@ -2019,7 +2174,7 @@ SDValue DAGTypeLegalizer::PromoteIntOp_BUILD_VECTOR(SDNode *N) {
   // vector element type.  Check that any extra bits introduced will be
   // truncated away.
   assert(N->getOperand(0).getValueSizeInBits() >=
-         N->getValueType(0).getScalarSizeInBits() &&
+             N->getValueType(0).getScalarSizeInBits() &&
          "Type of inserted value narrower than vector element type!");
 
   SmallVector<SDValue, 16> NewOps;
@@ -2037,11 +2192,11 @@ SDValue DAGTypeLegalizer::PromoteIntOp_INSERT_VECTOR_ELT(SDNode *N,
 
     // Check that any extra bits introduced will be truncated away.
     assert(N->getOperand(1).getValueSizeInBits() >=
-           N->getValueType(0).getScalarSizeInBits() &&
+               N->getValueType(0).getScalarSizeInBits() &&
            "Type of inserted value narrower than vector element type!");
     return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                  GetPromotedInteger(N->getOperand(1)),
-                                  N->getOperand(2)),
+                                          GetPromotedInteger(N->getOperand(1)),
+                                          N->getOperand(2)),
                    0);
   }
 
@@ -2050,15 +2205,15 @@ SDValue DAGTypeLegalizer::PromoteIntOp_INSERT_VECTOR_ELT(SDNode *N,
   // Promote the index.
   SDValue Idx = DAG.getZExtOrTrunc(N->getOperand(2), SDLoc(N),
                                    TLI.getVectorIdxTy(DAG.getDataLayout()));
-  return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                N->getOperand(1), Idx), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, N->getOperand(0), N->getOperand(1), Idx), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_ScalarOp(SDNode *N) {
   // Integer SPLAT_VECTOR/SCALAR_TO_VECTOR operands are implicitly truncated,
   // so just promote the operand in place.
-  return SDValue(DAG.UpdateNodeOperands(N,
-                                GetPromotedInteger(N->getOperand(0))), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, GetPromotedInteger(N->getOperand(0))), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_SELECT(SDNode *N, unsigned OpNo) {
@@ -2068,15 +2223,15 @@ SDValue DAGTypeLegalizer::PromoteIntOp_SELECT(SDNode *N, unsigned OpNo) {
 
   if (N->getOpcode() == ISD::VSELECT)
     if (SDValue Res = WidenVSELECTMask(N))
-      return DAG.getNode(N->getOpcode(), SDLoc(N), N->getValueType(0),
-                         Res, N->getOperand(1), N->getOperand(2));
+      return DAG.getNode(N->getOpcode(), SDLoc(N), N->getValueType(0), Res,
+                         N->getOperand(1), N->getOperand(2));
 
   // Promote all the way up to the canonical SetCC type.
   EVT OpVT = N->getOpcode() == ISD::SELECT ? OpTy.getScalarType() : OpTy;
   Cond = PromoteTargetBoolean(Cond, OpVT);
 
-  return SDValue(DAG.UpdateNodeOperands(N, Cond, N->getOperand(1),
-                                        N->getOperand(2)), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, Cond, N->getOperand(1), N->getOperand(2)), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_SELECT_CC(SDNode *N, unsigned OpNo) {
@@ -2088,7 +2243,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_SELECT_CC(SDNode *N, unsigned OpNo) {
 
   // The CC (#4) and the possible return values (#2 and #3) have legal types.
   return SDValue(DAG.UpdateNodeOperands(N, LHS, RHS, N->getOperand(2),
-                                N->getOperand(3), N->getOperand(4)), 0);
+                                        N->getOperand(3), N->getOperand(4)),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_SETCC(SDNode *N, unsigned OpNo) {
@@ -2111,20 +2267,22 @@ SDValue DAGTypeLegalizer::PromoteIntOp_SETCC(SDNode *N, unsigned OpNo) {
 
 SDValue DAGTypeLegalizer::PromoteIntOp_Shift(SDNode *N) {
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                ZExtPromotedInteger(N->getOperand(1))), 0);
+                                        ZExtPromotedInteger(N->getOperand(1))),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_FunnelShift(SDNode *N) {
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0), N->getOperand(1),
-                                ZExtPromotedInteger(N->getOperand(2))), 0);
+                                        ZExtPromotedInteger(N->getOperand(2))),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_SIGN_EXTEND(SDNode *N) {
   SDValue Op = GetPromotedInteger(N->getOperand(0));
   SDLoc dl(N);
   Op = DAG.getNode(ISD::ANY_EXTEND, dl, N->getValueType(0), Op);
-  return DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Op.getValueType(),
-                     Op, DAG.getValueType(N->getOperand(0).getValueType()));
+  return DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Op.getValueType(), Op,
+                     DAG.getValueType(N->getOperand(0).getValueType()));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_VP_SIGN_EXTEND(SDNode *N) {
@@ -2150,25 +2308,26 @@ SDValue DAGTypeLegalizer::PromoteIntOp_SINT_TO_FP(SDNode *N) {
                                           SExtPromotedInteger(N->getOperand(0)),
                                           N->getOperand(1), N->getOperand(2)),
                    0);
-  return SDValue(DAG.UpdateNodeOperands(N,
-                                SExtPromotedInteger(N->getOperand(0))), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, SExtPromotedInteger(N->getOperand(0))), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_STRICT_SINT_TO_FP(SDNode *N) {
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                SExtPromotedInteger(N->getOperand(1))), 0);
+                                        SExtPromotedInteger(N->getOperand(1))),
+                 0);
 }
 
-SDValue DAGTypeLegalizer::PromoteIntOp_STORE(StoreSDNode *N, unsigned OpNo){
+SDValue DAGTypeLegalizer::PromoteIntOp_STORE(StoreSDNode *N, unsigned OpNo) {
   assert(ISD::isUNINDEXEDStore(N) && "Indexed store during type legalization!");
   SDValue Ch = N->getChain(), Ptr = N->getBasePtr();
   SDLoc dl(N);
 
-  SDValue Val = GetPromotedInteger(N->getValue());  // Get promoted value.
+  SDValue Val = GetPromotedInteger(N->getValue()); // Get promoted value.
 
   // Truncate the value and store the result.
-  return DAG.getTruncStore(Ch, dl, Val, Ptr,
-                           N->getMemoryVT(), N->getMemOperand());
+  return DAG.getTruncStore(Ch, dl, Val, Ptr, N->getMemoryVT(),
+                           N->getMemOperand());
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_MSTORE(MaskedStoreSDNode *N,
@@ -2279,13 +2438,14 @@ SDValue DAGTypeLegalizer::PromoteIntOp_UINT_TO_FP(SDNode *N) {
                                           ZExtPromotedInteger(N->getOperand(0)),
                                           N->getOperand(1), N->getOperand(2)),
                    0);
-  return SDValue(DAG.UpdateNodeOperands(N,
-                                ZExtPromotedInteger(N->getOperand(0))), 0);
+  return SDValue(
+      DAG.UpdateNodeOperands(N, ZExtPromotedInteger(N->getOperand(0))), 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_STRICT_UINT_TO_FP(SDNode *N) {
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                ZExtPromotedInteger(N->getOperand(1))), 0);
+                                        ZExtPromotedInteger(N->getOperand(1))),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::PromoteIntOp_ZERO_EXTEND(SDNode *N) {
@@ -2576,44 +2736,95 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
   default:
 #ifndef NDEBUG
     dbgs() << "ExpandIntegerResult #" << ResNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
+    N->dump(&DAG);
+    dbgs() << "\n";
 #endif
     report_fatal_error("Do not know how to expand the result of this "
                        "operator!");
 
-  case ISD::ARITH_FENCE:  SplitRes_ARITH_FENCE(N, Lo, Hi); break;
-  case ISD::MERGE_VALUES: SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi); break;
-  case ISD::SELECT:       SplitRes_Select(N, Lo, Hi); break;
-  case ISD::SELECT_CC:    SplitRes_SELECT_CC(N, Lo, Hi); break;
-  case ISD::UNDEF:        SplitRes_UNDEF(N, Lo, Hi); break;
-  case ISD::FREEZE:       SplitRes_FREEZE(N, Lo, Hi); break;
-
-  case ISD::BITCAST:            ExpandRes_BITCAST(N, Lo, Hi); break;
-  case ISD::BUILD_PAIR:         ExpandRes_BUILD_PAIR(N, Lo, Hi); break;
-  case ISD::EXTRACT_ELEMENT:    ExpandRes_EXTRACT_ELEMENT(N, Lo, Hi); break;
-  case ISD::EXTRACT_VECTOR_ELT: ExpandRes_EXTRACT_VECTOR_ELT(N, Lo, Hi); break;
-  case ISD::VAARG:              ExpandRes_VAARG(N, Lo, Hi); break;
-
-  case ISD::ANY_EXTEND:  ExpandIntRes_ANY_EXTEND(N, Lo, Hi); break;
-  case ISD::AssertSext:  ExpandIntRes_AssertSext(N, Lo, Hi); break;
-  case ISD::AssertZext:  ExpandIntRes_AssertZext(N, Lo, Hi); break;
-  case ISD::BITREVERSE:  ExpandIntRes_BITREVERSE(N, Lo, Hi); break;
-  case ISD::BSWAP:       ExpandIntRes_BSWAP(N, Lo, Hi); break;
-  case ISD::PARITY:      ExpandIntRes_PARITY(N, Lo, Hi); break;
-  case ISD::Constant:    ExpandIntRes_Constant(N, Lo, Hi); break;
-  case ISD::ABS:         ExpandIntRes_ABS(N, Lo, Hi); break;
+  case ISD::ARITH_FENCE:
+    SplitRes_ARITH_FENCE(N, Lo, Hi);
+    break;
+  case ISD::MERGE_VALUES:
+    SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi);
+    break;
+  case ISD::SELECT:
+    SplitRes_Select(N, Lo, Hi);
+    break;
+  case ISD::SELECT_CC:
+    SplitRes_SELECT_CC(N, Lo, Hi);
+    break;
+  case ISD::UNDEF:
+    SplitRes_UNDEF(N, Lo, Hi);
+    break;
+  case ISD::FREEZE:
+    SplitRes_FREEZE(N, Lo, Hi);
+    break;
+
+  case ISD::BITCAST:
+    ExpandRes_BITCAST(N, Lo, Hi);
+    break;
+  case ISD::BUILD_PAIR:
+    ExpandRes_BUILD_PAIR(N, Lo, Hi);
+    break;
+  case ISD::EXTRACT_ELEMENT:
+    ExpandRes_EXTRACT_ELEMENT(N, Lo, Hi);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    ExpandRes_EXTRACT_VECTOR_ELT(N, Lo, Hi);
+    break;
+  case ISD::VAARG:
+    ExpandRes_VAARG(N, Lo, Hi);
+    break;
+
+  case ISD::ANY_EXTEND:
+    ExpandIntRes_ANY_EXTEND(N, Lo, Hi);
+    break;
+  case ISD::AssertSext:
+    ExpandIntRes_AssertSext(N, Lo, Hi);
+    break;
+  case ISD::AssertZext:
+    ExpandIntRes_AssertZext(N, Lo, Hi);
+    break;
+  case ISD::BITREVERSE:
+    ExpandIntRes_BITREVERSE(N, Lo, Hi);
+    break;
+  case ISD::BSWAP:
+    ExpandIntRes_BSWAP(N, Lo, Hi);
+    break;
+  case ISD::PARITY:
+    ExpandIntRes_PARITY(N, Lo, Hi);
+    break;
+  case ISD::Constant:
+    ExpandIntRes_Constant(N, Lo, Hi);
+    break;
+  case ISD::ABS:
+    ExpandIntRes_ABS(N, Lo, Hi);
+    break;
   case ISD::CTLZ_ZERO_UNDEF:
-  case ISD::CTLZ:        ExpandIntRes_CTLZ(N, Lo, Hi); break;
-  case ISD::CTPOP:       ExpandIntRes_CTPOP(N, Lo, Hi); break;
+  case ISD::CTLZ:
+    ExpandIntRes_CTLZ(N, Lo, Hi);
+    break;
+  case ISD::CTPOP:
+    ExpandIntRes_CTPOP(N, Lo, Hi);
+    break;
   case ISD::CTTZ_ZERO_UNDEF:
-  case ISD::CTTZ:        ExpandIntRes_CTTZ(N, Lo, Hi); break;
-  case ISD::GET_ROUNDING:ExpandIntRes_GET_ROUNDING(N, Lo, Hi); break;
+  case ISD::CTTZ:
+    ExpandIntRes_CTTZ(N, Lo, Hi);
+    break;
+  case ISD::GET_ROUNDING:
+    ExpandIntRes_GET_ROUNDING(N, Lo, Hi);
+    break;
   case ISD::STRICT_FP_TO_SINT:
   case ISD::FP_TO_SINT:
   case ISD::STRICT_FP_TO_UINT:
-  case ISD::FP_TO_UINT:  ExpandIntRes_FP_TO_XINT(N, Lo, Hi); break;
+  case ISD::FP_TO_UINT:
+    ExpandIntRes_FP_TO_XINT(N, Lo, Hi);
+    break;
   case ISD::FP_TO_SINT_SAT:
-  case ISD::FP_TO_UINT_SAT: ExpandIntRes_FP_TO_XINT_SAT(N, Lo, Hi); break;
+  case ISD::FP_TO_UINT_SAT:
+    ExpandIntRes_FP_TO_XINT_SAT(N, Lo, Hi);
+    break;
   case ISD::STRICT_LROUND:
   case ISD::STRICT_LRINT:
   case ISD::LROUND:
@@ -2621,19 +2832,45 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::STRICT_LLROUND:
   case ISD::STRICT_LLRINT:
   case ISD::LLROUND:
-  case ISD::LLRINT:      ExpandIntRes_XROUND_XRINT(N, Lo, Hi); break;
-  case ISD::LOAD:        ExpandIntRes_LOAD(cast<LoadSDNode>(N), Lo, Hi); break;
-  case ISD::MUL:         ExpandIntRes_MUL(N, Lo, Hi); break;
-  case ISD::READCYCLECOUNTER: ExpandIntRes_READCYCLECOUNTER(N, Lo, Hi); break;
-  case ISD::SDIV:        ExpandIntRes_SDIV(N, Lo, Hi); break;
-  case ISD::SIGN_EXTEND: ExpandIntRes_SIGN_EXTEND(N, Lo, Hi); break;
-  case ISD::SIGN_EXTEND_INREG: ExpandIntRes_SIGN_EXTEND_INREG(N, Lo, Hi); break;
-  case ISD::SREM:        ExpandIntRes_SREM(N, Lo, Hi); break;
-  case ISD::TRUNCATE:    ExpandIntRes_TRUNCATE(N, Lo, Hi); break;
-  case ISD::UDIV:        ExpandIntRes_UDIV(N, Lo, Hi); break;
-  case ISD::UREM:        ExpandIntRes_UREM(N, Lo, Hi); break;
-  case ISD::ZERO_EXTEND: ExpandIntRes_ZERO_EXTEND(N, Lo, Hi); break;
-  case ISD::ATOMIC_LOAD: ExpandIntRes_ATOMIC_LOAD(N, Lo, Hi); break;
+  case ISD::LLRINT:
+    ExpandIntRes_XROUND_XRINT(N, Lo, Hi);
+    break;
+  case ISD::LOAD:
+    ExpandIntRes_LOAD(cast<LoadSDNode>(N), Lo, Hi);
+    break;
+  case ISD::MUL:
+    ExpandIntRes_MUL(N, Lo, Hi);
+    break;
+  case ISD::READCYCLECOUNTER:
+    ExpandIntRes_READCYCLECOUNTER(N, Lo, Hi);
+    break;
+  case ISD::SDIV:
+    ExpandIntRes_SDIV(N, Lo, Hi);
+    break;
+  case ISD::SIGN_EXTEND:
+    ExpandIntRes_SIGN_EXTEND(N, Lo, Hi);
+    break;
+  case ISD::SIGN_EXTEND_INREG:
+    ExpandIntRes_SIGN_EXTEND_INREG(N, Lo, Hi);
+    break;
+  case ISD::SREM:
+    ExpandIntRes_SREM(N, Lo, Hi);
+    break;
+  case ISD::TRUNCATE:
+    ExpandIntRes_TRUNCATE(N, Lo, Hi);
+    break;
+  case ISD::UDIV:
+    ExpandIntRes_UDIV(N, Lo, Hi);
+    break;
+  case ISD::UREM:
+    ExpandIntRes_UREM(N, Lo, Hi);
+    break;
+  case ISD::ZERO_EXTEND:
+    ExpandIntRes_ZERO_EXTEND(N, Lo, Hi);
+    break;
+  case ISD::ATOMIC_LOAD:
+    ExpandIntRes_ATOMIC_LOAD(N, Lo, Hi);
+    break;
 
   case ISD::ATOMIC_LOAD_ADD:
   case ISD::ATOMIC_LOAD_SUB:
@@ -2656,10 +2893,10 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS: {
     AtomicSDNode *AN = cast<AtomicSDNode>(N);
     SDVTList VTs = DAG.getVTList(N->getValueType(0), MVT::Other);
-    SDValue Tmp = DAG.getAtomicCmpSwap(
-        ISD::ATOMIC_CMP_SWAP, SDLoc(N), AN->getMemoryVT(), VTs,
-        N->getOperand(0), N->getOperand(1), N->getOperand(2), N->getOperand(3),
-        AN->getMemOperand());
+    SDValue Tmp = DAG.getAtomicCmpSwap(ISD::ATOMIC_CMP_SWAP, SDLoc(N),
+                                       AN->getMemoryVT(), VTs, N->getOperand(0),
+                                       N->getOperand(1), N->getOperand(2),
+                                       N->getOperand(3), AN->getMemOperand());
 
     // Expanding to the strong ATOMIC_CMP_SWAP node means we can determine
     // success simply by comparing the loaded value against the ingoing
@@ -2675,56 +2912,86 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
 
   case ISD::AND:
   case ISD::OR:
-  case ISD::XOR: ExpandIntRes_Logical(N, Lo, Hi); break;
+  case ISD::XOR:
+    ExpandIntRes_Logical(N, Lo, Hi);
+    break;
 
   case ISD::UMAX:
   case ISD::SMAX:
   case ISD::UMIN:
-  case ISD::SMIN: ExpandIntRes_MINMAX(N, Lo, Hi); break;
+  case ISD::SMIN:
+    ExpandIntRes_MINMAX(N, Lo, Hi);
+    break;
 
   case ISD::ADD:
-  case ISD::SUB: ExpandIntRes_ADDSUB(N, Lo, Hi); break;
+  case ISD::SUB:
+    ExpandIntRes_ADDSUB(N, Lo, Hi);
+    break;
 
   case ISD::ADDC:
-  case ISD::SUBC: ExpandIntRes_ADDSUBC(N, Lo, Hi); break;
+  case ISD::SUBC:
+    ExpandIntRes_ADDSUBC(N, Lo, Hi);
+    break;
 
   case ISD::ADDE:
-  case ISD::SUBE: ExpandIntRes_ADDSUBE(N, Lo, Hi); break;
+  case ISD::SUBE:
+    ExpandIntRes_ADDSUBE(N, Lo, Hi);
+    break;
 
   case ISD::UADDO_CARRY:
-  case ISD::USUBO_CARRY: ExpandIntRes_UADDSUBO_CARRY(N, Lo, Hi); break;
+  case ISD::USUBO_CARRY:
+    ExpandIntRes_UADDSUBO_CARRY(N, Lo, Hi);
+    break;
 
   case ISD::SADDO_CARRY:
-  case ISD::SSUBO_CARRY: ExpandIntRes_SADDSUBO_CARRY(N, Lo, Hi); break;
+  case ISD::SSUBO_CARRY:
+    ExpandIntRes_SADDSUBO_CARRY(N, Lo, Hi);
+    break;
 
   case ISD::SHL:
   case ISD::SRA:
-  case ISD::SRL: ExpandIntRes_Shift(N, Lo, Hi); break;
+  case ISD::SRL:
+    ExpandIntRes_Shift(N, Lo, Hi);
+    break;
 
   case ISD::SADDO:
-  case ISD::SSUBO: ExpandIntRes_SADDSUBO(N, Lo, Hi); break;
+  case ISD::SSUBO:
+    ExpandIntRes_SADDSUBO(N, Lo, Hi);
+    break;
   case ISD::UADDO:
-  case ISD::USUBO: ExpandIntRes_UADDSUBO(N, Lo, Hi); break;
+  case ISD::USUBO:
+    ExpandIntRes_UADDSUBO(N, Lo, Hi);
+    break;
   case ISD::UMULO:
-  case ISD::SMULO: ExpandIntRes_XMULO(N, Lo, Hi); break;
+  case ISD::SMULO:
+    ExpandIntRes_XMULO(N, Lo, Hi);
+    break;
 
   case ISD::SADDSAT:
   case ISD::UADDSAT:
   case ISD::SSUBSAT:
-  case ISD::USUBSAT: ExpandIntRes_ADDSUBSAT(N, Lo, Hi); break;
+  case ISD::USUBSAT:
+    ExpandIntRes_ADDSUBSAT(N, Lo, Hi);
+    break;
 
   case ISD::SSHLSAT:
-  case ISD::USHLSAT: ExpandIntRes_SHLSAT(N, Lo, Hi); break;
+  case ISD::USHLSAT:
+    ExpandIntRes_SHLSAT(N, Lo, Hi);
+    break;
 
   case ISD::SMULFIX:
   case ISD::SMULFIXSAT:
   case ISD::UMULFIX:
-  case ISD::UMULFIXSAT: ExpandIntRes_MULFIX(N, Lo, Hi); break;
+  case ISD::UMULFIXSAT:
+    ExpandIntRes_MULFIX(N, Lo, Hi);
+    break;
 
   case ISD::SDIVFIX:
   case ISD::SDIVFIXSAT:
   case ISD::UDIVFIX:
-  case ISD::UDIVFIXSAT: ExpandIntRes_DIVFIX(N, Lo, Hi); break;
+  case ISD::UDIVFIXSAT:
+    ExpandIntRes_DIVFIX(N, Lo, Hi);
+    break;
 
   case ISD::VECREDUCE_ADD:
   case ISD::VECREDUCE_MUL:
@@ -2734,7 +3001,9 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
   case ISD::VECREDUCE_SMAX:
   case ISD::VECREDUCE_SMIN:
   case ISD::VECREDUCE_UMAX:
-  case ISD::VECREDUCE_UMIN: ExpandIntRes_VECREDUCE(N, Lo, Hi); break;
+  case ISD::VECREDUCE_UMIN:
+    ExpandIntRes_VECREDUCE(N, Lo, Hi);
+    break;
 
   case ISD::ROTL:
   case ISD::ROTR:
@@ -2757,7 +3026,7 @@ void DAGTypeLegalizer::ExpandIntegerResult(SDNode *N, unsigned ResNo) {
 }
 
 /// Lower an atomic node to the appropriate builtin call.
-std::pair <SDValue, SDValue> DAGTypeLegalizer::ExpandAtomic(SDNode *Node) {
+std::pair<SDValue, SDValue> DAGTypeLegalizer::ExpandAtomic(SDNode *Node) {
   unsigned Opc = Node->getOpcode();
   MVT VT = cast<AtomicSDNode>(Node)->getMemoryVT().getSimpleVT();
   AtomicOrdering order = cast<AtomicSDNode>(Node)->getMergedOrdering();
@@ -2807,18 +3076,18 @@ void DAGTypeLegalizer::ExpandShiftByConstant(SDNode *N, const APInt &Amt,
       Lo = Hi = DAG.getConstant(0, DL, NVT);
     } else if (Amt.ugt(NVTBits)) {
       Lo = DAG.getConstant(0, DL, NVT);
-      Hi = DAG.getNode(ISD::SHL, DL,
-                       NVT, InL, DAG.getConstant(Amt - NVTBits, DL, ShTy));
+      Hi = DAG.getNode(ISD::SHL, DL, NVT, InL,
+                       DAG.getConstant(Amt - NVTBits, DL, ShTy));
     } else if (Amt == NVTBits) {
       Lo = DAG.getConstant(0, DL, NVT);
       Hi = InL;
     } else {
       Lo = DAG.getNode(ISD::SHL, DL, NVT, InL, DAG.getConstant(Amt, DL, ShTy));
-      Hi = DAG.getNode(ISD::OR, DL, NVT,
-                       DAG.getNode(ISD::SHL, DL, NVT, InH,
-                                   DAG.getConstant(Amt, DL, ShTy)),
-                       DAG.getNode(ISD::SRL, DL, NVT, InL,
-                                   DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
+      Hi = DAG.getNode(
+          ISD::OR, DL, NVT,
+          DAG.getNode(ISD::SHL, DL, NVT, InH, DAG.getConstant(Amt, DL, ShTy)),
+          DAG.getNode(ISD::SRL, DL, NVT, InL,
+                      DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
     }
     return;
   }
@@ -2827,18 +3096,18 @@ void DAGTypeLegalizer::ExpandShiftByConstant(SDNode *N, const APInt &Amt,
     if (Amt.uge(VTBits)) {
       Lo = Hi = DAG.getConstant(0, DL, NVT);
     } else if (Amt.ugt(NVTBits)) {
-      Lo = DAG.getNode(ISD::SRL, DL,
-                       NVT, InH, DAG.getConstant(Amt - NVTBits, DL, ShTy));
+      Lo = DAG.getNode(ISD::SRL, DL, NVT, InH,
+                       DAG.getConstant(Amt - NVTBits, DL, ShTy));
       Hi = DAG.getConstant(0, DL, NVT);
     } else if (Amt == NVTBits) {
       Lo = InH;
       Hi = DAG.getConstant(0, DL, NVT);
     } else {
-      Lo = DAG.getNode(ISD::OR, DL, NVT,
-                       DAG.getNode(ISD::SRL, DL, NVT, InL,
-                                   DAG.getConstant(Amt, DL, ShTy)),
-                       DAG.getNode(ISD::SHL, DL, NVT, InH,
-                                   DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
+      Lo = DAG.getNode(
+          ISD::OR, DL, NVT,
+          DAG.getNode(ISD::SRL, DL, NVT, InL, DAG.getConstant(Amt, DL, ShTy)),
+          DAG.getNode(ISD::SHL, DL, NVT, InH,
+                      DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
       Hi = DAG.getNode(ISD::SRL, DL, NVT, InH, DAG.getConstant(Amt, DL, ShTy));
     }
     return;
@@ -2858,11 +3127,11 @@ void DAGTypeLegalizer::ExpandShiftByConstant(SDNode *N, const APInt &Amt,
     Hi = DAG.getNode(ISD::SRA, DL, NVT, InH,
                      DAG.getConstant(NVTBits - 1, DL, ShTy));
   } else {
-    Lo = DAG.getNode(ISD::OR, DL, NVT,
-                     DAG.getNode(ISD::SRL, DL, NVT, InL,
-                                 DAG.getConstant(Amt, DL, ShTy)),
-                     DAG.getNode(ISD::SHL, DL, NVT, InH,
-                                 DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
+    Lo = DAG.getNode(
+        ISD::OR, DL, NVT,
+        DAG.getNode(ISD::SRL, DL, NVT, InL, DAG.getConstant(Amt, DL, ShTy)),
+        DAG.getNode(ISD::SHL, DL, NVT, InH,
+                    DAG.getConstant(-Amt + NVTBits, DL, ShTy)));
     Hi = DAG.getNode(ISD::SRA, DL, NVT, InH, DAG.getConstant(Amt, DL, ShTy));
   }
 }
@@ -2871,8 +3140,8 @@ void DAGTypeLegalizer::ExpandShiftByConstant(SDNode *N, const APInt &Amt,
 /// this shift based on knowledge of the high bit of the shift amount.  If we
 /// can tell this, we know that it is >= 32 or < 32, without knowing the actual
 /// shift amount.
-bool DAGTypeLegalizer::
-ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
+bool DAGTypeLegalizer::ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo,
+                                                     SDValue &Hi) {
   SDValue Amt = N->getOperand(1);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   EVT ShTy = Amt.getValueType();
@@ -2886,7 +3155,7 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
   KnownBits Known = DAG.computeKnownBits(N->getOperand(1));
 
   // If we don't know anything about the high bits, exit.
-  if (((Known.Zero|Known.One) & HighBitMask) == 0)
+  if (((Known.Zero | Known.One) & HighBitMask) == 0)
     return false;
 
   // Get the incoming operand to be shifted.
@@ -2901,7 +3170,8 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
                       DAG.getConstant(~HighBitMask, dl, ShTy));
 
     switch (N->getOpcode()) {
-    default: llvm_unreachable("Unknown shift");
+    default:
+      llvm_unreachable("Unknown shift");
     case ISD::SHL:
       Lo = DAG.getConstant(0, dl, NVT);              // Low part is zero.
       Hi = DAG.getNode(ISD::SHL, dl, NVT, InL, Amt); // High part from Lo part.
@@ -2911,7 +3181,7 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
       Lo = DAG.getNode(ISD::SRL, dl, NVT, InH, Amt); // Lo part from Hi part.
       return true;
     case ISD::SRA:
-      Hi = DAG.getNode(ISD::SRA, dl, NVT, InH,       // Sign extend high part.
+      Hi = DAG.getNode(ISD::SRA, dl, NVT, InH, // Sign extend high part.
                        DAG.getConstant(NVTBits - 1, dl, ShTy));
       Lo = DAG.getNode(ISD::SRA, dl, NVT, InH, Amt); // Lo part from Hi part.
       return true;
@@ -2929,10 +3199,17 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
 
     unsigned Op1, Op2;
     switch (N->getOpcode()) {
-    default: llvm_unreachable("Unknown shift");
-    case ISD::SHL:  Op1 = ISD::SHL; Op2 = ISD::SRL; break;
+    default:
+      llvm_unreachable("Unknown shift");
+    case ISD::SHL:
+      Op1 = ISD::SHL;
+      Op2 = ISD::SRL;
+      break;
     case ISD::SRL:
-    case ISD::SRA:  Op1 = ISD::SRL; Op2 = ISD::SHL; break;
+    case ISD::SRA:
+      Op1 = ISD::SRL;
+      Op2 = ISD::SHL;
+      break;
     }
 
     // When shifting right the arithmetic for Lo and Hi is swapped.
@@ -2946,7 +3223,8 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
     SDValue Sh2 = DAG.getNode(Op2, dl, NVT, Sh1, Amt2);
 
     Lo = DAG.getNode(N->getOpcode(), dl, NVT, InL, Amt);
-    Hi = DAG.getNode(ISD::OR, dl, NVT, DAG.getNode(Op1, dl, NVT, InH, Amt),Sh2);
+    Hi =
+        DAG.getNode(ISD::OR, dl, NVT, DAG.getNode(Op1, dl, NVT, InH, Amt), Sh2);
 
     if (N->getOpcode() != ISD::SHL)
       std::swap(Hi, Lo);
@@ -2958,8 +3236,8 @@ ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
 
 /// ExpandShiftWithUnknownAmountBit - Fully general expansion of integer shift
 /// of any size.
-bool DAGTypeLegalizer::
-ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
+bool DAGTypeLegalizer::ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo,
+                                                       SDValue &Hi) {
   SDValue Amt = N->getOperand(1);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   EVT ShTy = Amt.getValueType();
@@ -2975,21 +3253,21 @@ ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDValue NVBitsNode = DAG.getConstant(NVTBits, dl, ShTy);
   SDValue AmtExcess = DAG.getNode(ISD::SUB, dl, ShTy, Amt, NVBitsNode);
   SDValue AmtLack = DAG.getNode(ISD::SUB, dl, ShTy, NVBitsNode, Amt);
-  SDValue isShort = DAG.getSetCC(dl, getSetCCResultType(ShTy),
-                                 Amt, NVBitsNode, ISD::SETULT);
-  SDValue isZero = DAG.getSetCC(dl, getSetCCResultType(ShTy),
-                                Amt, DAG.getConstant(0, dl, ShTy),
-                                ISD::SETEQ);
+  SDValue isShort =
+      DAG.getSetCC(dl, getSetCCResultType(ShTy), Amt, NVBitsNode, ISD::SETULT);
+  SDValue isZero = DAG.getSetCC(dl, getSetCCResultType(ShTy), Amt,
+                                DAG.getConstant(0, dl, ShTy), ISD::SETEQ);
 
   SDValue LoS, HiS, LoL, HiL;
   switch (N->getOpcode()) {
-  default: llvm_unreachable("Unknown shift");
+  default:
+    llvm_unreachable("Unknown shift");
   case ISD::SHL:
     // Short: ShAmt < NVTBits
     LoS = DAG.getNode(ISD::SHL, dl, NVT, InL, Amt);
-    HiS = DAG.getNode(ISD::OR, dl, NVT,
-                      DAG.getNode(ISD::SHL, dl, NVT, InH, Amt),
-                      DAG.getNode(ISD::SRL, dl, NVT, InL, AmtLack));
+    HiS =
+        DAG.getNode(ISD::OR, dl, NVT, DAG.getNode(ISD::SHL, dl, NVT, InH, Amt),
+                    DAG.getNode(ISD::SRL, dl, NVT, InL, AmtLack));
 
     // Long: ShAmt >= NVTBits
     LoL = DAG.getConstant(0, dl, NVT);                    // Lo part is zero.
@@ -3002,11 +3280,11 @@ ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
   case ISD::SRL:
     // Short: ShAmt < NVTBits
     HiS = DAG.getNode(ISD::SRL, dl, NVT, InH, Amt);
-    LoS = DAG.getNode(ISD::OR, dl, NVT,
-                      DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
-    // FIXME: If Amt is zero, the following shift generates an undefined result
-    // on some architectures.
-                      DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
+    LoS =
+        DAG.getNode(ISD::OR, dl, NVT, DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
+                    // FIXME: If Amt is zero, the following shift generates an
+                    // undefined result on some architectures.
+                    DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
 
     // Long: ShAmt >= NVTBits
     HiL = DAG.getConstant(0, dl, NVT);                    // Hi part is zero.
@@ -3019,12 +3297,12 @@ ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
   case ISD::SRA:
     // Short: ShAmt < NVTBits
     HiS = DAG.getNode(ISD::SRA, dl, NVT, InH, Amt);
-    LoS = DAG.getNode(ISD::OR, dl, NVT,
-                      DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
-                      DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
+    LoS =
+        DAG.getNode(ISD::OR, dl, NVT, DAG.getNode(ISD::SRL, dl, NVT, InL, Amt),
+                    DAG.getNode(ISD::SHL, dl, NVT, InH, AmtLack));
 
     // Long: ShAmt >= NVTBits
-    HiL = DAG.getNode(ISD::SRA, dl, NVT, InH,             // Sign of Hi part.
+    HiL = DAG.getNode(ISD::SRA, dl, NVT, InH, // Sign of Hi part.
                       DAG.getConstant(NVTBits - 1, dl, ShTy));
     LoL = DAG.getNode(ISD::SRA, dl, NVT, InH, AmtExcess); // Lo from Hi part.
 
@@ -3038,20 +3316,21 @@ ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi) {
 static std::pair<ISD::CondCode, ISD::NodeType> getExpandedMinMaxOps(int Op) {
 
   switch (Op) {
-    default: llvm_unreachable("invalid min/max opcode");
-    case ISD::SMAX:
-      return std::make_pair(ISD::SETGT, ISD::UMAX);
-    case ISD::UMAX:
-      return std::make_pair(ISD::SETUGT, ISD::UMAX);
-    case ISD::SMIN:
-      return std::make_pair(ISD::SETLT, ISD::UMIN);
-    case ISD::UMIN:
-      return std::make_pair(ISD::SETULT, ISD::UMIN);
+  default:
+    llvm_unreachable("invalid min/max opcode");
+  case ISD::SMAX:
+    return std::make_pair(ISD::SETGT, ISD::UMAX);
+  case ISD::UMAX:
+    return std::make_pair(ISD::SETUGT, ISD::UMAX);
+  case ISD::SMIN:
+    return std::make_pair(ISD::SETLT, ISD::UMIN);
+  case ISD::UMIN:
+    return std::make_pair(ISD::SETULT, ISD::UMIN);
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_MINMAX(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_MINMAX(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
   SDLoc DL(N);
 
   SDValue LHS = N->getOperand(0);
@@ -3102,8 +3381,8 @@ void DAGTypeLegalizer::ExpandIntRes_MINMAX(SDNode *N,
   // The high half of MIN/MAX is always just the MIN/MAX of the
   // high halves of the operands.  Expand this way if it appears profitable.
   if (RHSVal && (N->getOpcode() == ISD::UMIN || N->getOpcode() == ISD::UMAX) &&
-                 (RHSVal->countLeadingOnes() >= NumHalfBits ||
-                  RHSVal->countLeadingZeros() >= NumHalfBits)) {
+      (RHSVal->countLeadingOnes() >= NumHalfBits ||
+       RHSVal->countLeadingZeros() >= NumHalfBits)) {
     SDValue LHSL, LHSH, RHSL, RHSH;
     GetExpandedInteger(LHS, LHSL, LHSH);
     GetExpandedInteger(RHS, RHSL, RHSH);
@@ -3134,7 +3413,8 @@ void DAGTypeLegalizer::ExpandIntRes_MINMAX(SDNode *N,
   // the compare.
   ISD::CondCode Pred;
   switch (N->getOpcode()) {
-  default: llvm_unreachable("How did we get here?");
+  default:
+    llvm_unreachable("How did we get here?");
   case ISD::SMAX:
     if (RHSVal && RHSVal->countTrailingZeros() >= NumHalfBits)
       Pred = ISD::SETGE;
@@ -3167,8 +3447,8 @@ void DAGTypeLegalizer::ExpandIntRes_MINMAX(SDNode *N,
   SplitInteger(Result, Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
   SDLoc dl(N);
   // Expand the subcomponents.
   SDValue LHSL, LHSH, RHSL, RHSH;
@@ -3176,8 +3456,8 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
   GetExpandedInteger(N->getOperand(1), RHSL, RHSH);
 
   EVT NVT = LHSL.getValueType();
-  SDValue LoOps[2] = { LHSL, RHSL };
-  SDValue HiOps[3] = { LHSH, RHSH };
+  SDValue LoOps[2] = {LHSL, RHSL};
+  SDValue HiOps[3] = {LHSH, RHSH};
 
   bool HasOpCarry = TLI.isOperationLegalOrCustom(
       N->getOpcode() == ISD::ADD ? ISD::UADDO_CARRY : ISD::USUBO_CARRY,
@@ -3205,10 +3485,9 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
   // ADDC/ADDE/SUBC/SUBE.  The problem is that these operations generate
   // a carry of type MVT::Glue, but there doesn't seem to be any way to
   // generate a value of this type in the expanded code sequence.
-  bool hasCarry =
-    TLI.isOperationLegalOrCustom(N->getOpcode() == ISD::ADD ?
-                                   ISD::ADDC : ISD::SUBC,
-                                 TLI.getTypeToExpandTo(*DAG.getContext(), NVT));
+  bool hasCarry = TLI.isOperationLegalOrCustom(
+      N->getOpcode() == ISD::ADD ? ISD::ADDC : ISD::SUBC,
+      TLI.getTypeToExpandTo(*DAG.getContext(), NVT));
 
   if (hasCarry) {
     SDVTList VTList = DAG.getVTList(NVT, MVT::Glue);
@@ -3224,10 +3503,9 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
     return;
   }
 
-  bool hasOVF =
-    TLI.isOperationLegalOrCustom(N->getOpcode() == ISD::ADD ?
-                                   ISD::UADDO : ISD::USUBO,
-                                 TLI.getTypeToExpandTo(*DAG.getContext(), NVT));
+  bool hasOVF = TLI.isOperationLegalOrCustom(
+      N->getOpcode() == ISD::ADD ? ISD::UADDO : ISD::USUBO,
+      TLI.getTypeToExpandTo(*DAG.getContext(), NVT));
   TargetLoweringBase::BooleanContent BoolType = TLI.getBooleanContents(NVT);
 
   if (hasOVF) {
@@ -3247,7 +3525,8 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
 
     switch (BoolType) {
     case TargetLoweringBase::UndefinedBooleanContent:
-      OVF = DAG.getNode(ISD::AND, dl, OvfVT, DAG.getConstant(1, dl, OvfVT), OVF);
+      OVF =
+          DAG.getNode(ISD::AND, dl, OvfVT, DAG.getConstant(1, dl, OvfVT), OVF);
       [[fallthrough]];
     case TargetLoweringBase::ZeroOrOneBooleanContent:
       OVF = DAG.getZExtOrTrunc(OVF, dl, NVT);
@@ -3277,15 +3556,15 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
         Cmp = DAG.getSetCC(dl, getSetCCResultType(NVT), LoOps[0],
                            DAG.getConstant(0, dl, NVT), ISD::SETNE);
     } else
-      Cmp = DAG.getSetCC(dl, getSetCCResultType(NVT), Lo, LoOps[0],
-                         ISD::SETULT);
+      Cmp =
+          DAG.getSetCC(dl, getSetCCResultType(NVT), Lo, LoOps[0], ISD::SETULT);
 
     SDValue Carry;
     if (BoolType == TargetLoweringBase::ZeroOrOneBooleanContent)
       Carry = DAG.getZExtOrTrunc(Cmp, dl, NVT);
     else
       Carry = DAG.getSelect(dl, NVT, Cmp, DAG.getConstant(1, dl, NVT),
-                             DAG.getConstant(0, dl, NVT));
+                            DAG.getConstant(0, dl, NVT));
 
     if (isAllOnesConstant(LoOps[1]) && isAllOnesConstant(HiOps[1]))
       Hi = DAG.getNode(ISD::SUB, dl, NVT, HiOps[0], Carry);
@@ -3294,9 +3573,8 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
   } else {
     Lo = DAG.getNode(ISD::SUB, dl, NVT, LoOps);
     Hi = DAG.getNode(ISD::SUB, dl, NVT, ArrayRef(HiOps, 2));
-    SDValue Cmp =
-      DAG.getSetCC(dl, getSetCCResultType(LoOps[0].getValueType()),
-                   LoOps[0], LoOps[1], ISD::SETULT);
+    SDValue Cmp = DAG.getSetCC(dl, getSetCCResultType(LoOps[0].getValueType()),
+                               LoOps[0], LoOps[1], ISD::SETULT);
 
     SDValue Borrow;
     if (BoolType == TargetLoweringBase::ZeroOrOneBooleanContent)
@@ -3309,16 +3587,16 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUB(SDNode *N,
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_ADDSUBC(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ADDSUBC(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
   // Expand the subcomponents.
   SDValue LHSL, LHSH, RHSL, RHSH;
   SDLoc dl(N);
   GetExpandedInteger(N->getOperand(0), LHSL, LHSH);
   GetExpandedInteger(N->getOperand(1), RHSL, RHSH);
   SDVTList VTList = DAG.getVTList(LHSL.getValueType(), MVT::Glue);
-  SDValue LoOps[2] = { LHSL, RHSL };
-  SDValue HiOps[3] = { LHSH, RHSH };
+  SDValue LoOps[2] = {LHSL, RHSL};
+  SDValue HiOps[3] = {LHSH, RHSH};
 
   if (N->getOpcode() == ISD::ADDC) {
     Lo = DAG.getNode(ISD::ADDC, dl, VTList, LoOps);
@@ -3335,16 +3613,16 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUBC(SDNode *N,
   ReplaceValueWith(SDValue(N, 1), Hi.getValue(1));
 }
 
-void DAGTypeLegalizer::ExpandIntRes_ADDSUBE(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ADDSUBE(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
   // Expand the subcomponents.
   SDValue LHSL, LHSH, RHSL, RHSH;
   SDLoc dl(N);
   GetExpandedInteger(N->getOperand(0), LHSL, LHSH);
   GetExpandedInteger(N->getOperand(1), RHSL, RHSH);
   SDVTList VTList = DAG.getVTList(LHSL.getValueType(), MVT::Glue);
-  SDValue LoOps[3] = { LHSL, RHSL, N->getOperand(2) };
-  SDValue HiOps[3] = { LHSH, RHSH };
+  SDValue LoOps[3] = {LHSL, RHSL, N->getOperand(2)};
+  SDValue HiOps[3] = {LHSH, RHSH};
 
   Lo = DAG.getNode(N->getOpcode(), dl, VTList, LoOps);
   HiOps[2] = Lo.getValue(1);
@@ -3355,8 +3633,8 @@ void DAGTypeLegalizer::ExpandIntRes_ADDSUBE(SDNode *N,
   ReplaceValueWith(SDValue(N, 1), Hi.getValue(1));
 }
 
-void DAGTypeLegalizer::ExpandIntRes_UADDSUBO(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_UADDSUBO(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
   SDValue LHS = N->getOperand(0);
   SDValue RHS = N->getOperand(1);
   SDLoc dl(N);
@@ -3365,19 +3643,19 @@ void DAGTypeLegalizer::ExpandIntRes_UADDSUBO(SDNode *N,
 
   unsigned CarryOp, NoCarryOp;
   ISD::CondCode Cond;
-  switch(N->getOpcode()) {
-    case ISD::UADDO:
-      CarryOp = ISD::UADDO_CARRY;
-      NoCarryOp = ISD::ADD;
-      Cond = ISD::SETULT;
-      break;
-    case ISD::USUBO:
-      CarryOp = ISD::USUBO_CARRY;
-      NoCarryOp = ISD::SUB;
-      Cond = ISD::SETUGT;
-      break;
-    default:
-      llvm_unreachable("Node has unexpected Opcode");
+  switch (N->getOpcode()) {
+  case ISD::UADDO:
+    CarryOp = ISD::UADDO_CARRY;
+    NoCarryOp = ISD::ADD;
+    Cond = ISD::SETULT;
+    break;
+  case ISD::USUBO:
+    CarryOp = ISD::USUBO_CARRY;
+    NoCarryOp = ISD::SUB;
+    Cond = ISD::SETUGT;
+    break;
+  default:
+    llvm_unreachable("Node has unexpected Opcode");
   }
 
   bool HasCarryOp = TLI.isOperationLegalOrCustom(
@@ -3389,8 +3667,8 @@ void DAGTypeLegalizer::ExpandIntRes_UADDSUBO(SDNode *N,
     GetExpandedInteger(LHS, LHSL, LHSH);
     GetExpandedInteger(RHS, RHSL, RHSH);
     SDVTList VTList = DAG.getVTList(LHSL.getValueType(), N->getValueType(1));
-    SDValue LoOps[2] = { LHSL, RHSL };
-    SDValue HiOps[3] = { LHSH, RHSH };
+    SDValue LoOps[2] = {LHSL, RHSL};
+    SDValue HiOps[3] = {LHSH, RHSH};
 
     Lo = DAG.getNode(N->getOpcode(), dl, VTList, LoOps);
     HiOps[2] = Lo.getValue(1);
@@ -3434,8 +3712,8 @@ void DAGTypeLegalizer::ExpandIntRes_UADDSUBO_CARRY(SDNode *N, SDValue &Lo,
   GetExpandedInteger(N->getOperand(0), LHSL, LHSH);
   GetExpandedInteger(N->getOperand(1), RHSL, RHSH);
   SDVTList VTList = DAG.getVTList(LHSL.getValueType(), N->getValueType(1));
-  SDValue LoOps[3] = { LHSL, RHSL, N->getOperand(2) };
-  SDValue HiOps[3] = { LHSH, RHSH, SDValue() };
+  SDValue LoOps[3] = {LHSL, RHSL, N->getOperand(2)};
+  SDValue HiOps[3] = {LHSH, RHSH, SDValue()};
 
   Lo = DAG.getNode(N->getOpcode(), dl, VTList, LoOps);
   HiOps[2] = Lo.getValue(1);
@@ -3446,8 +3724,8 @@ void DAGTypeLegalizer::ExpandIntRes_UADDSUBO_CARRY(SDNode *N, SDValue &Lo,
   ReplaceValueWith(SDValue(N, 1), Hi.getValue(1));
 }
 
-void DAGTypeLegalizer::ExpandIntRes_SADDSUBO_CARRY(SDNode *N,
-                                                   SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SADDSUBO_CARRY(SDNode *N, SDValue &Lo,
+                                                   SDValue &Hi) {
   // Expand the subcomponents.
   SDValue LHSL, LHSH, RHSL, RHSH;
   SDLoc dl(N);
@@ -3458,28 +3736,28 @@ void DAGTypeLegalizer::ExpandIntRes_SADDSUBO_CARRY(SDNode *N,
   // We need to use an unsigned carry op for the lo part.
   unsigned CarryOp =
       N->getOpcode() == ISD::SADDO_CARRY ? ISD::UADDO_CARRY : ISD::USUBO_CARRY;
-  Lo = DAG.getNode(CarryOp, dl, VTList, { LHSL, RHSL, N->getOperand(2) });
-  Hi = DAG.getNode(N->getOpcode(), dl, VTList, { LHSH, RHSH, Lo.getValue(1) });
+  Lo = DAG.getNode(CarryOp, dl, VTList, {LHSL, RHSL, N->getOperand(2)});
+  Hi = DAG.getNode(N->getOpcode(), dl, VTList, {LHSH, RHSH, Lo.getValue(1)});
 
   // Legalized the flag result - switch anything that used the old flag to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Hi.getValue(1));
 }
 
-void DAGTypeLegalizer::ExpandIntRes_ANY_EXTEND(SDNode *N,
-                                               SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ANY_EXTEND(SDNode *N, SDValue &Lo,
+                                               SDValue &Hi) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDLoc dl(N);
   SDValue Op = N->getOperand(0);
   if (Op.getValueType().bitsLE(NVT)) {
     // The low part is any extension of the input (which degenerates to a copy).
     Lo = DAG.getNode(ISD::ANY_EXTEND, dl, NVT, Op);
-    Hi = DAG.getUNDEF(NVT);   // The high part is undefined.
+    Hi = DAG.getUNDEF(NVT); // The high part is undefined.
   } else {
     // For example, extension of an i48 to an i64.  The operand type necessarily
     // promotes to the result type, so will end up being expanded too.
     assert(getTypeAction(Op.getValueType()) ==
-           TargetLowering::TypePromoteInteger &&
+               TargetLowering::TypePromoteInteger &&
            "Only know how to promote this result!");
     SDValue Res = GetPromotedInteger(Op);
     assert(Res.getValueType() == N->getValueType(0) &&
@@ -3489,8 +3767,8 @@ void DAGTypeLegalizer::ExpandIntRes_ANY_EXTEND(SDNode *N,
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_AssertSext(SDNode *N,
-                                               SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_AssertSext(SDNode *N, SDValue &Lo,
+                                               SDValue &Hi) {
   SDLoc dl(N);
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
   EVT NVT = Lo.getValueType();
@@ -3511,8 +3789,8 @@ void DAGTypeLegalizer::ExpandIntRes_AssertSext(SDNode *N,
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_AssertZext(SDNode *N,
-                                               SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_AssertZext(SDNode *N, SDValue &Lo,
+                                               SDValue &Hi) {
   SDLoc dl(N);
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
   EVT NVT = Lo.getValueType();
@@ -3531,18 +3809,17 @@ void DAGTypeLegalizer::ExpandIntRes_AssertZext(SDNode *N,
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_BITREVERSE(SDNode *N,
-                                               SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_BITREVERSE(SDNode *N, SDValue &Lo,
+                                               SDValue &Hi) {
   SDLoc dl(N);
-  GetExpandedInteger(N->getOperand(0), Hi, Lo);  // Note swapped operands.
+  GetExpandedInteger(N->getOperand(0), Hi, Lo); // Note swapped operands.
   Lo = DAG.getNode(ISD::BITREVERSE, dl, Lo.getValueType(), Lo);
   Hi = DAG.getNode(ISD::BITREVERSE, dl, Hi.getValueType(), Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_BSWAP(SDNode *N,
-                                          SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_BSWAP(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDLoc dl(N);
-  GetExpandedInteger(N->getOperand(0), Hi, Lo);  // Note swapped operands.
+  GetExpandedInteger(N->getOperand(0), Hi, Lo); // Note swapped operands.
   Lo = DAG.getNode(ISD::BSWAP, dl, Lo.getValueType(), Lo);
   Hi = DAG.getNode(ISD::BSWAP, dl, Hi.getValueType(), Hi);
 }
@@ -3558,8 +3835,8 @@ void DAGTypeLegalizer::ExpandIntRes_PARITY(SDNode *N, SDValue &Lo,
   Hi = DAG.getConstant(0, dl, NVT);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_Constant(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_Constant(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   unsigned NBitWidth = NVT.getSizeInBits();
   auto Constant = cast<ConstantSDNode>(N);
@@ -3609,8 +3886,7 @@ void DAGTypeLegalizer::ExpandIntRes_ABS(SDNode *N, SDValue &Lo, SDValue &Hi) {
 
   // abs(HiLo) -> (Hi < 0 ? -HiLo : HiLo)
   EVT VT = N->getValueType(0);
-  SDValue Neg = DAG.getNode(ISD::SUB, dl, VT,
-                            DAG.getConstant(0, dl, VT), N0);
+  SDValue Neg = DAG.getNode(ISD::SUB, dl, VT, DAG.getConstant(0, dl, VT), N0);
   SDValue NegLo, NegHi;
   SplitInteger(Neg, NegLo, NegHi);
 
@@ -3620,8 +3896,7 @@ void DAGTypeLegalizer::ExpandIntRes_ABS(SDNode *N, SDValue &Lo, SDValue &Hi) {
   Hi = DAG.getSelect(dl, NVT, HiIsNeg, NegHi, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_CTLZ(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_CTLZ(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDLoc dl(N);
   // ctlz (HiLo) -> Hi != 0 ? ctlz(Hi) : (ctlz(Lo)+32)
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
@@ -3633,15 +3908,14 @@ void DAGTypeLegalizer::ExpandIntRes_CTLZ(SDNode *N,
   SDValue LoLZ = DAG.getNode(N->getOpcode(), dl, NVT, Lo);
   SDValue HiLZ = DAG.getNode(ISD::CTLZ_ZERO_UNDEF, dl, NVT, Hi);
 
-  Lo = DAG.getSelect(dl, NVT, HiNotZero, HiLZ,
-                     DAG.getNode(ISD::ADD, dl, NVT, LoLZ,
-                                 DAG.getConstant(NVT.getSizeInBits(), dl,
-                                                 NVT)));
+  Lo =
+      DAG.getSelect(dl, NVT, HiNotZero, HiLZ,
+                    DAG.getNode(ISD::ADD, dl, NVT, LoLZ,
+                                DAG.getConstant(NVT.getSizeInBits(), dl, NVT)));
   Hi = DAG.getConstant(0, dl, NVT);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_CTPOP(SDNode *N,
-                                          SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_CTPOP(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDLoc dl(N);
   // ctpop(HiLo) -> ctpop(Hi)+ctpop(Lo)
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
@@ -3651,8 +3925,7 @@ void DAGTypeLegalizer::ExpandIntRes_CTPOP(SDNode *N,
   Hi = DAG.getConstant(0, dl, NVT);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_CTTZ(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_CTTZ(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDLoc dl(N);
   // cttz (HiLo) -> Lo != 0 ? cttz(Lo) : (cttz(Hi)+32)
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
@@ -3664,15 +3937,15 @@ void DAGTypeLegalizer::ExpandIntRes_CTTZ(SDNode *N,
   SDValue LoLZ = DAG.getNode(ISD::CTTZ_ZERO_UNDEF, dl, NVT, Lo);
   SDValue HiLZ = DAG.getNode(N->getOpcode(), dl, NVT, Hi);
 
-  Lo = DAG.getSelect(dl, NVT, LoNotZero, LoLZ,
-                     DAG.getNode(ISD::ADD, dl, NVT, HiLZ,
-                                 DAG.getConstant(NVT.getSizeInBits(), dl,
-                                                 NVT)));
+  Lo =
+      DAG.getSelect(dl, NVT, LoNotZero, LoLZ,
+                    DAG.getNode(ISD::ADD, dl, NVT, HiLZ,
+                                DAG.getConstant(NVT.getSizeInBits(), dl, NVT)));
   Hi = DAG.getConstant(0, dl, NVT);
 }
 
 void DAGTypeLegalizer::ExpandIntRes_GET_ROUNDING(SDNode *N, SDValue &Lo,
-                                               SDValue &Hi) {
+                                                 SDValue &Hi) {
   SDLoc dl(N);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   unsigned NBitWidth = NVT.getSizeInBits();
@@ -3733,8 +4006,8 @@ void DAGTypeLegalizer::ExpandIntRes_FP_TO_XINT(SDNode *N, SDValue &Lo,
   assert(LC != RTLIB::UNKNOWN_LIBCALL && "Unexpected fp-to-xint conversion!");
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setSExt(true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, VT, Op,
-                                                    CallOptions, dl, Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, VT, Op, CallOptions, dl, Chain);
   SplitInteger(Tmp.first, Lo, Hi);
 
   if (IsStrict)
@@ -3766,8 +4039,7 @@ void DAGTypeLegalizer::ExpandIntRes_XROUND_XRINT(SDNode *N, SDValue &Lo,
   }
 
   RTLIB::Libcall LC = RTLIB::UNKNOWN_LIBCALL;
-  if (N->getOpcode() == ISD::LROUND ||
-      N->getOpcode() == ISD::STRICT_LROUND) {
+  if (N->getOpcode() == ISD::LROUND || N->getOpcode() == ISD::STRICT_LROUND) {
     if (VT == MVT::f32)
       LC = RTLIB::LROUND_F32;
     else if (VT == MVT::f64)
@@ -3793,7 +4065,7 @@ void DAGTypeLegalizer::ExpandIntRes_XROUND_XRINT(SDNode *N, SDValue &Lo,
       LC = RTLIB::LRINT_PPCF128;
     assert(LC != RTLIB::UNKNOWN_LIBCALL && "Unexpected lrint input type!");
   } else if (N->getOpcode() == ISD::LLROUND ||
-      N->getOpcode() == ISD::STRICT_LLROUND) {
+             N->getOpcode() == ISD::STRICT_LLROUND) {
     if (VT == MVT::f32)
       LC = RTLIB::LLROUND_F32;
     else if (VT == MVT::f64)
@@ -3825,17 +4097,16 @@ void DAGTypeLegalizer::ExpandIntRes_XROUND_XRINT(SDNode *N, SDValue &Lo,
 
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setSExt(true);
-  std::pair<SDValue, SDValue> Tmp = TLI.makeLibCall(DAG, LC, RetVT,
-                                                    Op, CallOptions, dl,
-                                                    Chain);
+  std::pair<SDValue, SDValue> Tmp =
+      TLI.makeLibCall(DAG, LC, RetVT, Op, CallOptions, dl, Chain);
   SplitInteger(Tmp.first, Lo, Hi);
 
   if (N->isStrictFPOpcode())
     ReplaceValueWith(SDValue(N, 1), Tmp.second);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N, SDValue &Lo,
+                                         SDValue &Hi) {
   if (N->isAtomic()) {
     // It's typical to have larger CAS than atomic load instructions.
     SDLoc dl(N);
@@ -3843,8 +4114,7 @@ void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
     SDVTList VTs = DAG.getVTList(VT, MVT::i1, MVT::Other);
     SDValue Zero = DAG.getConstant(0, dl, VT);
     SDValue Swap = DAG.getAtomicCmpSwap(
-        ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl,
-        VT, VTs, N->getOperand(0),
+        ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, VT, VTs, N->getOperand(0),
         N->getOperand(1), Zero, Zero, N->getMemOperand());
     ReplaceValueWith(SDValue(N, 0), Swap.getValue(0));
     ReplaceValueWith(SDValue(N, 1), Swap.getValue(2));
@@ -3860,7 +4130,7 @@ void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
 
   EVT VT = N->getValueType(0);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
-  SDValue Ch  = N->getChain();
+  SDValue Ch = N->getChain();
   SDValue Ptr = N->getBasePtr();
   ISD::LoadExtType ExtType = N->getExtensionType();
   MachineMemOperand::Flags MMOFlags = N->getMemOperand()->getFlags();
@@ -3899,11 +4169,11 @@ void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
                      N->getOriginalAlign(), MMOFlags, AAInfo);
 
     unsigned ExcessBits =
-      N->getMemoryVT().getSizeInBits() - NVT.getSizeInBits();
+        N->getMemoryVT().getSizeInBits() - NVT.getSizeInBits();
     EVT NEVT = EVT::getIntegerVT(*DAG.getContext(), ExcessBits);
 
     // Increment the pointer to the other half.
-    unsigned IncrementSize = NVT.getSizeInBits()/8;
+    unsigned IncrementSize = NVT.getSizeInBits() / 8;
     Ptr = DAG.getMemBasePlusOffset(Ptr, TypeSize::Fixed(IncrementSize), dl);
     Hi = DAG.getExtLoad(ExtType, dl, NVT, Ch, Ptr,
                         N->getPointerInfo().getWithOffset(IncrementSize), NEVT,
@@ -3918,8 +4188,8 @@ void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
     // the cost of some bit-fiddling.
     EVT MemVT = N->getMemoryVT();
     unsigned EBytes = MemVT.getStoreSize();
-    unsigned IncrementSize = NVT.getSizeInBits()/8;
-    unsigned ExcessBits = (EBytes - IncrementSize)*8;
+    unsigned IncrementSize = NVT.getSizeInBits() / 8;
+    unsigned ExcessBits = (EBytes - IncrementSize) * 8;
 
     // Load both the high bits and maybe some of the low bits.
     Hi = DAG.getExtLoad(ExtType, dl, NVT, Ch, Ptr, N->getPointerInfo(),
@@ -3960,8 +4230,8 @@ void DAGTypeLegalizer::ExpandIntRes_LOAD(LoadSDNode *N,
   ReplaceValueWith(SDValue(N, 1), Ch);
 }
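
As an aside for readers following the patch: on the common (little-endian, non-extending) path, the expansion above amounts to two half-width loads, with the pointer for the high half bumped by `NVT.getSizeInBits()/8`. A minimal sketch, assuming a little-endian host and using illustrative names that are not LLVM APIs:

```cpp
#include <cstdint>
#include <cstring>
#include <cassert> // for the sanity checks accompanying this sketch

// Hypothetical stand-in for what ExpandIntRes_LOAD emits for an i128 load on
// a 64-bit NVT: load the low half at the base pointer, then load the high
// half at base + IncrementSize, where IncrementSize = NVT.getSizeInBits()/8.
void expandLoad128(const unsigned char *Ptr, uint64_t &Lo, uint64_t &Hi) {
  const unsigned IncrementSize = 64 / 8; // NVT.getSizeInBits() / 8
  std::memcpy(&Lo, Ptr, sizeof Lo);                 // low half at base
  std::memcpy(&Hi, Ptr + IncrementSize, sizeof Hi); // high half at base + 8
}
```

The big-endian branch in the real code swaps which half sits at the base pointer; this sketch only covers the little-endian case shown in the hunk.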
 
-void DAGTypeLegalizer::ExpandIntRes_Logical(SDNode *N,
-                                            SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_Logical(SDNode *N, SDValue &Lo,
+                                            SDValue &Hi) {
   SDLoc dl(N);
   SDValue LL, LH, RL, RH;
   GetExpandedInteger(N->getOperand(0), LL, LH);
@@ -3970,8 +4240,7 @@ void DAGTypeLegalizer::ExpandIntRes_Logical(SDNode *N,
   Hi = DAG.getNode(N->getOpcode(), dl, LL.getValueType(), LH, RH);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N,
-                                        SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
   SDLoc dl(N);
@@ -3981,8 +4250,8 @@ void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N,
   GetExpandedInteger(N->getOperand(1), RL, RH);
 
   if (TLI.expandMUL(N, Lo, Hi, NVT, DAG,
-                    TargetLowering::MulExpansionKind::OnlyLegalOrCustom,
-                    LL, LH, RL, RH))
+                    TargetLowering::MulExpansionKind::OnlyLegalOrCustom, LL, LH,
+                    RL, RH))
     return;
 
   // If nothing else, we can make a libcall.
@@ -4003,8 +4272,8 @@ void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N,
     // 4.3.1).
     unsigned Bits = NVT.getSizeInBits();
     unsigned HalfBits = Bits >> 1;
-    SDValue Mask = DAG.getConstant(APInt::getLowBitsSet(Bits, HalfBits), dl,
-                                   NVT);
+    SDValue Mask =
+        DAG.getConstant(APInt::getLowBitsSet(Bits, HalfBits), dl, NVT);
     SDValue LLL = DAG.getNode(ISD::AND, dl, NVT, LL, Mask);
     SDValue RLL = DAG.getNode(ISD::AND, dl, NVT, RL, Mask);
 
@@ -4025,9 +4294,9 @@ void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N,
                             DAG.getNode(ISD::MUL, dl, NVT, LLL, RLH), UL);
     SDValue VH = DAG.getNode(ISD::SRL, dl, NVT, V, Shift);
 
-    SDValue W = DAG.getNode(ISD::ADD, dl, NVT,
-                            DAG.getNode(ISD::MUL, dl, NVT, LLH, RLH),
-                            DAG.getNode(ISD::ADD, dl, NVT, UH, VH));
+    SDValue W =
+        DAG.getNode(ISD::ADD, dl, NVT, DAG.getNode(ISD::MUL, dl, NVT, LLH, RLH),
+                    DAG.getNode(ISD::ADD, dl, NVT, UH, VH));
     Lo = DAG.getNode(ISD::ADD, dl, NVT, TL,
                      DAG.getNode(ISD::SHL, dl, NVT, V, Shift));
 
@@ -4038,11 +4307,11 @@ void DAGTypeLegalizer::ExpandIntRes_MUL(SDNode *N,
     return;
   }
 
-  SDValue Ops[2] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[2] = {N->getOperand(0), N->getOperand(1)};
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setSExt(true);
-  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first,
-               Lo, Hi);
+  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+               Hi);
 }
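
The half-word decomposition that the comment in this hunk attributes to Knuth (TAOCP vol. 2, section 4.3.1) can be sketched outside SelectionDAG as plain scalar code. `mulLoHi64` is a hypothetical name, not an LLVM API; the `T`/`U`/`V` temporaries mirror the SDValues of the same names in the patched function:

```cpp
#include <cstdint>
#include <cassert> // for the sanity checks accompanying this sketch

// Expand a 64x64 -> 128 multiply out of 32-bit half products, the same
// grade-school scheme ExpandIntRes_MUL builds when no MULHU/UMUL_LOHI is
// legal: split each operand at HalfBits, form the four partial products,
// and propagate the carries through U and V.
void mulLoHi64(uint64_t L, uint64_t R, uint64_t &Lo, uint64_t &Hi) {
  const unsigned HalfBits = 32;
  const uint64_t Mask = 0xFFFFFFFFull; // APInt::getLowBitsSet(64, 32)
  uint64_t LL = L & Mask, LH = L >> HalfBits;
  uint64_t RL = R & Mask, RH = R >> HalfBits;

  uint64_t T = LL * RL; // low x low, cannot overflow 64 bits
  uint64_t TL = T & Mask, TH = T >> HalfBits;

  uint64_t U = LH * RL + TH; // first cross product plus carry-in
  uint64_t UL = U & Mask, UH = U >> HalfBits;

  uint64_t V = LL * RH + UL; // second cross product plus carry-in
  uint64_t VH = V >> HalfBits;

  Hi = LH * RH + UH + VH;     // corresponds to W in the DAG code
  Lo = (V << HalfBits) | TL;  // TL < 2^32, so OR is carry-free
}
```

Each intermediate sum stays below 2^64 (at most (2^32-1)^2 + (2^32-1)), which is why the DAG version needs no extra overflow handling on this path.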
 
 void DAGTypeLegalizer::ExpandIntRes_READCYCLECOUNTER(SDNode *N, SDValue &Lo,
@@ -4081,10 +4350,10 @@ void DAGTypeLegalizer::ExpandIntRes_MULFIX(SDNode *N, SDValue &Lo,
   SDValue LHS = N->getOperand(0);
   SDValue RHS = N->getOperand(1);
   uint64_t Scale = N->getConstantOperandVal(2);
-  bool Saturating = (N->getOpcode() == ISD::SMULFIXSAT ||
-                     N->getOpcode() == ISD::UMULFIXSAT);
-  bool Signed = (N->getOpcode() == ISD::SMULFIX ||
-                 N->getOpcode() == ISD::SMULFIXSAT);
+  bool Saturating =
+      (N->getOpcode() == ISD::SMULFIXSAT || N->getOpcode() == ISD::UMULFIXSAT);
+  bool Signed =
+      (N->getOpcode() == ISD::SMULFIX || N->getOpcode() == ISD::SMULFIXSAT);
 
   // Handle special case when scale is equal to zero.
   if (!Scale) {
@@ -4269,8 +4538,8 @@ void DAGTypeLegalizer::ExpandIntRes_MULFIX(SDNode *N, SDValue &Lo,
     // This is similar to the case when we saturate if Scale < NVTSize, but we
     // only need to check HH.
     unsigned OverflowBits = VTSize - Scale + 1;
-    SDValue HHHiMask = DAG.getConstant(
-        APInt::getHighBitsSet(NVTSize, OverflowBits), dl, NVT);
+    SDValue HHHiMask =
+        DAG.getConstant(APInt::getHighBitsSet(NVTSize, OverflowBits), dl, NVT);
     SDValue HHLoMask = DAG.getConstant(
         APInt::getLowBitsSet(NVTSize, NVTSize - OverflowBits), dl, NVT);
     SatMax = DAG.getSetCC(dl, BoolNVT, ResultHH, HHLoMask, ISD::SETGT);
@@ -4303,8 +4572,8 @@ void DAGTypeLegalizer::ExpandIntRes_DIVFIX(SDNode *N, SDValue &Lo,
   SplitInteger(Res, Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_SADDSUBO(SDNode *Node,
-                                             SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SADDSUBO(SDNode *Node, SDValue &Lo,
+                                             SDValue &Hi) {
   assert((Node->getOpcode() == ISD::SADDO || Node->getOpcode() == ISD::SSUBO) &&
          "Node has unexpected Opcode");
   SDValue LHS = Node->getOperand(0);
@@ -4327,15 +4596,15 @@ void DAGTypeLegalizer::ExpandIntRes_SADDSUBO(SDNode *Node,
     SDVTList VTList = DAG.getVTList(LHSL.getValueType(), Node->getValueType(1));
 
     Lo = DAG.getNode(IsAdd ? ISD::UADDO : ISD::USUBO, dl, VTList, {LHSL, RHSL});
-    Hi = DAG.getNode(CarryOp, dl, VTList, { LHSH, RHSH, Lo.getValue(1) });
+    Hi = DAG.getNode(CarryOp, dl, VTList, {LHSH, RHSH, Lo.getValue(1)});
 
     Ovf = Hi.getValue(1);
   } else {
     // Expand the result by simply replacing it with the equivalent
     // non-overflow-checking operation.
-    SDValue Sum = DAG.getNode(Node->getOpcode() == ISD::SADDO ?
-                              ISD::ADD : ISD::SUB, dl, LHS.getValueType(),
-                              LHS, RHS);
+    SDValue Sum =
+        DAG.getNode(Node->getOpcode() == ISD::SADDO ? ISD::ADD : ISD::SUB, dl,
+                    LHS.getValueType(), LHS, RHS);
     SplitInteger(Sum, Lo, Hi);
 
     // Compute the overflow.
@@ -4376,11 +4645,10 @@ void DAGTypeLegalizer::ExpandIntRes_SADDSUBO(SDNode *Node,
   ReplaceValueWith(SDValue(Node, 1), Ovf);
 }
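
For context on the "Compute the overflow" fallback path reformatted above (taken when no carry-setting opcodes are legal): after performing the plain ADD/SUB, signed overflow is recovered purely from the sign bits. A scalar sketch under illustrative names, not the LLVM API:

```cpp
#include <cstdint>
#include <cassert> // for the sanity checks accompanying this sketch

// Mirror of the non-carry path in ExpandIntRes_SADDSUBO for SADDO: do the
// wrapping add, then flag overflow iff both operands share a sign and the
// sum's sign differs from it. (L ^ Sum) & (R ^ Sum) has its sign bit set
// exactly in that case.
bool saddOverflow64(int64_t L, int64_t R, int64_t &Sum) {
  Sum = (int64_t)((uint64_t)L + (uint64_t)R); // wrapping, no UB
  return ((L ^ Sum) & (R ^ Sum)) < 0;
}
```

The SSUBO variant differs only in the sign predicate (overflow iff the operands' signs differ and the result's sign differs from the minuend's).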
 
-void DAGTypeLegalizer::ExpandIntRes_SDIV(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SDIV(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
-  SDValue Ops[2] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[2] = {N->getOperand(0), N->getOperand(1)};
 
   if (TLI.getOperationAction(ISD::SDIVREM, VT) == TargetLowering::Custom) {
     SDValue Res = DAG.getNode(ISD::SDIVREM, dl, DAG.getVTList(VT, VT), Ops);
@@ -4401,7 +4669,8 @@ void DAGTypeLegalizer::ExpandIntRes_SDIV(SDNode *N,
 
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setSExt(true);
-  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo, Hi);
+  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+               Hi);
 }
 
 void DAGTypeLegalizer::ExpandIntRes_ShiftThroughStack(SDNode *N, SDValue &Lo,
@@ -4509,8 +4778,7 @@ void DAGTypeLegalizer::ExpandIntRes_ShiftThroughStack(SDNode *N, SDValue &Lo,
   SplitInteger(Res, Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N,
-                                          SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
 
@@ -4541,8 +4809,8 @@ void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N,
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
   TargetLowering::LegalizeAction Action = TLI.getOperationAction(PartsOpc, NVT);
   const bool LegalOrCustom =
-    (Action == TargetLowering::Legal && TLI.isTypeLegal(NVT)) ||
-    Action == TargetLowering::Custom;
+      (Action == TargetLowering::Legal && TLI.isTypeLegal(NVT)) ||
+      Action == TargetLowering::Custom;
 
   unsigned ExpansionFactor = 1;
   // That VT->NVT expansion is one step. But will we re-expand NVT?
@@ -4575,7 +4843,7 @@ void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N,
     if (ShiftOp.getValueType() != ShiftTy)
       ShiftOp = DAG.getZExtOrTrunc(ShiftOp, dl, ShiftTy);
 
-    SDValue Ops[] = { LHSL, LHSH, ShiftOp };
+    SDValue Ops[] = {LHSL, LHSH, ShiftOp};
     Lo = DAG.getNode(PartsOpc, dl, DAG.getVTList(VT, VT), Ops);
     Hi = Lo.getValue(1);
     return;
@@ -4624,7 +4892,8 @@ void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N,
     SDValue Ops[2] = {N->getOperand(0), ShAmt};
     TargetLowering::MakeLibCallOptions CallOptions;
     CallOptions.setSExt(isSigned);
-    SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo, Hi);
+    SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+                 Hi);
     return;
   }
 
@@ -4632,8 +4901,8 @@ void DAGTypeLegalizer::ExpandIntRes_Shift(SDNode *N,
     llvm_unreachable("Unsupported shift!");
 }
 
-void DAGTypeLegalizer::ExpandIntRes_SIGN_EXTEND(SDNode *N,
-                                                SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SIGN_EXTEND(SDNode *N, SDValue &Lo,
+                                                SDValue &Hi) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDLoc dl(N);
   SDValue Op = N->getOperand(0);
@@ -4649,7 +4918,7 @@ void DAGTypeLegalizer::ExpandIntRes_SIGN_EXTEND(SDNode *N,
     // For example, extension of an i48 to an i64.  The operand type necessarily
     // promotes to the result type, so will end up being expanded too.
     assert(getTypeAction(Op.getValueType()) ==
-           TargetLowering::TypePromoteInteger &&
+               TargetLowering::TypePromoteInteger &&
            "Only know how to promote this result!");
     SDValue Res = GetPromotedInteger(Op);
     assert(Res.getValueType() == N->getValueType(0) &&
@@ -4657,14 +4926,14 @@ void DAGTypeLegalizer::ExpandIntRes_SIGN_EXTEND(SDNode *N,
     // Split the promoted operand.  This will simplify when it is expanded.
     SplitInteger(Res, Lo, Hi);
     unsigned ExcessBits = Op.getValueSizeInBits() - NVT.getSizeInBits();
-    Hi = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Hi.getValueType(), Hi,
-                     DAG.getValueType(EVT::getIntegerVT(*DAG.getContext(),
-                                                        ExcessBits)));
+    Hi = DAG.getNode(
+        ISD::SIGN_EXTEND_INREG, dl, Hi.getValueType(), Hi,
+        DAG.getValueType(EVT::getIntegerVT(*DAG.getContext(), ExcessBits)));
   }
 }
 
-void DAGTypeLegalizer::
-ExpandIntRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo,
+                                                      SDValue &Hi) {
   SDLoc dl(N);
   GetExpandedInteger(N->getOperand(0), Lo, Hi);
   EVT EVT = cast<VTSDNode>(N->getOperand(1))->getVT();
@@ -4683,17 +4952,16 @@ ExpandIntRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo, SDValue &Hi) {
     // For example, extension of an i48 to an i64.  Leave the low part alone,
     // sext_inreg the high part.
     unsigned ExcessBits = EVT.getSizeInBits() - Lo.getValueSizeInBits();
-    Hi = DAG.getNode(ISD::SIGN_EXTEND_INREG, dl, Hi.getValueType(), Hi,
-                     DAG.getValueType(EVT::getIntegerVT(*DAG.getContext(),
-                                                        ExcessBits)));
+    Hi = DAG.getNode(
+        ISD::SIGN_EXTEND_INREG, dl, Hi.getValueType(), Hi,
+        DAG.getValueType(EVT::getIntegerVT(*DAG.getContext(), ExcessBits)));
   }
 }
 
-void DAGTypeLegalizer::ExpandIntRes_SREM(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_SREM(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
-  SDValue Ops[2] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[2] = {N->getOperand(0), N->getOperand(1)};
 
   if (TLI.getOperationAction(ISD::SDIVREM, VT) == TargetLowering::Custom) {
     SDValue Res = DAG.getNode(ISD::SDIVREM, dl, DAG.getVTList(VT, VT), Ops);
@@ -4714,11 +4982,12 @@ void DAGTypeLegalizer::ExpandIntRes_SREM(SDNode *N,
 
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setSExt(true);
-  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo, Hi);
+  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+               Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_TRUNCATE(SDNode *N,
-                                             SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_TRUNCATE(SDNode *N, SDValue &Lo,
+                                             SDValue &Hi) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDLoc dl(N);
   Lo = DAG.getNode(ISD::TRUNCATE, dl, NVT, N->getOperand(0));
@@ -4729,8 +4998,7 @@ void DAGTypeLegalizer::ExpandIntRes_TRUNCATE(SDNode *N,
   Hi = DAG.getNode(ISD::TRUNCATE, dl, NVT, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_XMULO(SDNode *N,
-                                          SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_XMULO(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
 
@@ -4758,9 +5026,10 @@ void DAGTypeLegalizer::ExpandIntRes_XMULO(SDNode *N,
     SDVTList VTHalfWithO = DAG.getVTList(HalfVT, BitVT);
 
     SDValue HalfZero = DAG.getConstant(0, dl, HalfVT);
-    SDValue Overflow = DAG.getNode(ISD::AND, dl, BitVT,
-      DAG.getSetCC(dl, BitVT, LHSHigh, HalfZero, ISD::SETNE),
-      DAG.getSetCC(dl, BitVT, RHSHigh, HalfZero, ISD::SETNE));
+    SDValue Overflow =
+        DAG.getNode(ISD::AND, dl, BitVT,
+                    DAG.getSetCC(dl, BitVT, LHSHigh, HalfZero, ISD::SETNE),
+                    DAG.getSetCC(dl, BitVT, RHSHigh, HalfZero, ISD::SETNE));
 
     SDValue One = DAG.getNode(ISD::UMULO, dl, VTHalfWithO, LHSHigh, RHSLow);
     Overflow = DAG.getNode(ISD::OR, dl, BitVT, Overflow, One.getValue(1));
@@ -4777,8 +5046,8 @@ void DAGTypeLegalizer::ExpandIntRes_XMULO(SDNode *N,
     // Many backends understand this pattern and will convert into LOHI
     // themselves, if applicable.
     SDValue Three = DAG.getNode(ISD::MUL, dl, VT,
-      DAG.getNode(ISD::ZERO_EXTEND, dl, VT, LHSLow),
-      DAG.getNode(ISD::ZERO_EXTEND, dl, VT, RHSLow));
+                                DAG.getNode(ISD::ZERO_EXTEND, dl, VT, LHSLow),
+                                DAG.getNode(ISD::ZERO_EXTEND, dl, VT, RHSLow));
     SplitInteger(Three, Lo, Hi);
 
     Hi = DAG.getNode(ISD::UADDO, dl, VTHalfWithO, Hi, HighSum);
@@ -4861,17 +5130,15 @@ void DAGTypeLegalizer::ExpandIntRes_XMULO(SDNode *N,
   SDValue Temp2 =
       DAG.getLoad(PtrVT, dl, CallInfo.second, Temp, MachinePointerInfo());
   SDValue Ofl = DAG.getSetCC(dl, N->getValueType(1), Temp2,
-                             DAG.getConstant(0, dl, PtrVT),
-                             ISD::SETNE);
+                             DAG.getConstant(0, dl, PtrVT), ISD::SETNE);
   // Use the overflow from the libcall everywhere.
   ReplaceValueWith(SDValue(N, 1), Ofl);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_UDIV(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_UDIV(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
-  SDValue Ops[2] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[2] = {N->getOperand(0), N->getOperand(1)};
 
   if (TLI.getOperationAction(ISD::UDIVREM, VT) == TargetLowering::Custom) {
     SDValue Res = DAG.getNode(ISD::UDIVREM, dl, DAG.getVTList(VT, VT), Ops);
@@ -4907,14 +5174,14 @@ void DAGTypeLegalizer::ExpandIntRes_UDIV(SDNode *N,
   assert(LC != RTLIB::UNKNOWN_LIBCALL && "Unsupported UDIV!");
 
   TargetLowering::MakeLibCallOptions CallOptions;
-  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo, Hi);
+  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+               Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_UREM(SDNode *N,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_UREM(SDNode *N, SDValue &Lo, SDValue &Hi) {
   EVT VT = N->getValueType(0);
   SDLoc dl(N);
-  SDValue Ops[2] = { N->getOperand(0), N->getOperand(1) };
+  SDValue Ops[2] = {N->getOperand(0), N->getOperand(1)};
 
   if (TLI.getOperationAction(ISD::UDIVREM, VT) == TargetLowering::Custom) {
     SDValue Res = DAG.getNode(ISD::UDIVREM, dl, DAG.getVTList(VT, VT), Ops);
@@ -4950,23 +5217,24 @@ void DAGTypeLegalizer::ExpandIntRes_UREM(SDNode *N,
   assert(LC != RTLIB::UNKNOWN_LIBCALL && "Unsupported UREM!");
 
   TargetLowering::MakeLibCallOptions CallOptions;
-  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo, Hi);
+  SplitInteger(TLI.makeLibCall(DAG, LC, VT, Ops, CallOptions, dl).first, Lo,
+               Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_ZERO_EXTEND(SDNode *N,
-                                                SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ZERO_EXTEND(SDNode *N, SDValue &Lo,
+                                                SDValue &Hi) {
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDLoc dl(N);
   SDValue Op = N->getOperand(0);
   if (Op.getValueType().bitsLE(NVT)) {
     // The low part is zero extension of the input (degenerates to a copy).
     Lo = DAG.getNode(ISD::ZERO_EXTEND, dl, NVT, N->getOperand(0));
-    Hi = DAG.getConstant(0, dl, NVT);   // The high part is just a zero.
+    Hi = DAG.getConstant(0, dl, NVT); // The high part is just a zero.
   } else {
     // For example, extension of an i48 to an i64.  The operand type necessarily
     // promotes to the result type, so will end up being expanded too.
     assert(getTypeAction(Op.getValueType()) ==
-           TargetLowering::TypePromoteInteger &&
+               TargetLowering::TypePromoteInteger &&
            "Only know how to promote this result!");
     SDValue Res = GetPromotedInteger(Op);
     assert(Res.getValueType() == N->getValueType(0) &&
@@ -4974,14 +5242,13 @@ void DAGTypeLegalizer::ExpandIntRes_ZERO_EXTEND(SDNode *N,
     // Split the promoted operand.  This will simplify when it is expanded.
     SplitInteger(Res, Lo, Hi);
     unsigned ExcessBits = Op.getValueSizeInBits() - NVT.getSizeInBits();
-    Hi = DAG.getZeroExtendInReg(Hi, dl,
-                                EVT::getIntegerVT(*DAG.getContext(),
-                                                  ExcessBits));
+    Hi = DAG.getZeroExtendInReg(
+        Hi, dl, EVT::getIntegerVT(*DAG.getContext(), ExcessBits));
   }
 }
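
The two branches of the zero-extension expansion above reduce to a small scalar computation; a sketch for an i128 result on a 64-bit NVT, with hypothetical names:

```cpp
#include <cstdint>
#include <cassert> // for the sanity checks accompanying this sketch

// Illustrative expansion of zext iSrcBits -> i128. When the source fits in
// the low word, Lo is a copy and Hi is just zero; when SrcBits is in
// (64, 128), the promoted value is split and the high word keeps only
// ExcessBits = SrcBits - 64 bits (the getZeroExtendInReg step).
void expandZext(unsigned SrcBits, uint64_t PromotedLo, uint64_t PromotedHi,
                uint64_t &Lo, uint64_t &Hi) {
  const unsigned NVTBits = 64;
  if (SrcBits <= NVTBits) {
    Lo = PromotedLo;
    Hi = 0; // the high part is just a zero
  } else {
    unsigned ExcessBits = SrcBits - NVTBits; // strictly between 0 and 64
    Lo = PromotedLo;
    Hi = PromotedHi & ((1ull << ExcessBits) - 1); // zero-extend in register
  }
}
```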
 
-void DAGTypeLegalizer::ExpandIntRes_ATOMIC_LOAD(SDNode *N,
-                                                SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_ATOMIC_LOAD(SDNode *N, SDValue &Lo,
+                                                SDValue &Hi) {
   SDLoc dl(N);
   EVT VT = cast<AtomicSDNode>(N)->getMemoryVT();
   SDVTList VTs = DAG.getVTList(VT, MVT::i1, MVT::Other);
@@ -4995,16 +5262,16 @@ void DAGTypeLegalizer::ExpandIntRes_ATOMIC_LOAD(SDNode *N,
   ReplaceValueWith(SDValue(N, 1), Swap.getValue(2));
 }
 
-void DAGTypeLegalizer::ExpandIntRes_VECREDUCE(SDNode *N,
-                                              SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_VECREDUCE(SDNode *N, SDValue &Lo,
+                                              SDValue &Hi) {
   // TODO For VECREDUCE_(AND|OR|XOR) we could split the vector and calculate
   // both halves independently.
   SDValue Res = TLI.expandVecReduce(N, DAG);
   SplitInteger(Res, Lo, Hi);
 }
 
-void DAGTypeLegalizer::ExpandIntRes_Rotate(SDNode *N,
-                                           SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::ExpandIntRes_Rotate(SDNode *N, SDValue &Lo,
+                                           SDValue &Hi) {
   // Delegate to funnel-shift expansion.
   SDLoc DL(N);
   unsigned Opcode = N->getOpcode() == ISD::ROTL ? ISD::FSHL : ISD::FSHR;
@@ -5079,38 +5346,71 @@ bool DAGTypeLegalizer::ExpandIntegerOperand(SDNode *N, unsigned OpNo) {
 
   switch (N->getOpcode()) {
   default:
-  #ifndef NDEBUG
+#ifndef NDEBUG
     dbgs() << "ExpandIntegerOperand Op #" << OpNo << ": ";
-    N->dump(&DAG); dbgs() << "\n";
-  #endif
+    N->dump(&DAG);
+    dbgs() << "\n";
+#endif
     report_fatal_error("Do not know how to expand this operator's operand!");
 
-  case ISD::BITCAST:           Res = ExpandOp_BITCAST(N); break;
-  case ISD::BR_CC:             Res = ExpandIntOp_BR_CC(N); break;
-  case ISD::BUILD_VECTOR:      Res = ExpandOp_BUILD_VECTOR(N); break;
-  case ISD::EXTRACT_ELEMENT:   Res = ExpandOp_EXTRACT_ELEMENT(N); break;
-  case ISD::INSERT_VECTOR_ELT: Res = ExpandOp_INSERT_VECTOR_ELT(N); break;
-  case ISD::SCALAR_TO_VECTOR:  Res = ExpandOp_SCALAR_TO_VECTOR(N); break;
-  case ISD::SPLAT_VECTOR:      Res = ExpandIntOp_SPLAT_VECTOR(N); break;
-  case ISD::SELECT_CC:         Res = ExpandIntOp_SELECT_CC(N); break;
-  case ISD::SETCC:             Res = ExpandIntOp_SETCC(N); break;
-  case ISD::SETCCCARRY:        Res = ExpandIntOp_SETCCCARRY(N); break;
+  case ISD::BITCAST:
+    Res = ExpandOp_BITCAST(N);
+    break;
+  case ISD::BR_CC:
+    Res = ExpandIntOp_BR_CC(N);
+    break;
+  case ISD::BUILD_VECTOR:
+    Res = ExpandOp_BUILD_VECTOR(N);
+    break;
+  case ISD::EXTRACT_ELEMENT:
+    Res = ExpandOp_EXTRACT_ELEMENT(N);
+    break;
+  case ISD::INSERT_VECTOR_ELT:
+    Res = ExpandOp_INSERT_VECTOR_ELT(N);
+    break;
+  case ISD::SCALAR_TO_VECTOR:
+    Res = ExpandOp_SCALAR_TO_VECTOR(N);
+    break;
+  case ISD::SPLAT_VECTOR:
+    Res = ExpandIntOp_SPLAT_VECTOR(N);
+    break;
+  case ISD::SELECT_CC:
+    Res = ExpandIntOp_SELECT_CC(N);
+    break;
+  case ISD::SETCC:
+    Res = ExpandIntOp_SETCC(N);
+    break;
+  case ISD::SETCCCARRY:
+    Res = ExpandIntOp_SETCCCARRY(N);
+    break;
   case ISD::STRICT_SINT_TO_FP:
   case ISD::SINT_TO_FP:
   case ISD::STRICT_UINT_TO_FP:
-  case ISD::UINT_TO_FP:        Res = ExpandIntOp_XINT_TO_FP(N); break;
-  case ISD::STORE:   Res = ExpandIntOp_STORE(cast<StoreSDNode>(N), OpNo); break;
-  case ISD::TRUNCATE:          Res = ExpandIntOp_TRUNCATE(N); break;
+  case ISD::UINT_TO_FP:
+    Res = ExpandIntOp_XINT_TO_FP(N);
+    break;
+  case ISD::STORE:
+    Res = ExpandIntOp_STORE(cast<StoreSDNode>(N), OpNo);
+    break;
+  case ISD::TRUNCATE:
+    Res = ExpandIntOp_TRUNCATE(N);
+    break;
 
   case ISD::SHL:
   case ISD::SRA:
   case ISD::SRL:
   case ISD::ROTL:
-  case ISD::ROTR:              Res = ExpandIntOp_Shift(N); break;
+  case ISD::ROTR:
+    Res = ExpandIntOp_Shift(N);
+    break;
   case ISD::RETURNADDR:
-  case ISD::FRAMEADDR:         Res = ExpandIntOp_RETURNADDR(N); break;
+  case ISD::FRAMEADDR:
+    Res = ExpandIntOp_RETURNADDR(N);
+    break;
 
-  case ISD::ATOMIC_STORE:      Res = ExpandIntOp_ATOMIC_STORE(N); break;
+  case ISD::ATOMIC_STORE:
+    Res = ExpandIntOp_ATOMIC_STORE(N);
+    break;
   case ISD::STACKMAP:
     Res = ExpandIntOp_STACKMAP(N, OpNo);
     break;
@@ -5124,7 +5424,8 @@ bool DAGTypeLegalizer::ExpandIntegerOperand(SDNode *N, unsigned OpNo) {
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
@@ -5176,15 +5477,24 @@ void DAGTypeLegalizer::IntegerExpandSetCCOperands(SDValue &NewLHS,
   // FIXME: This generated code sucks.
   ISD::CondCode LowCC;
   switch (CCCode) {
-  default: llvm_unreachable("Unknown integer setcc!");
+  default:
+    llvm_unreachable("Unknown integer setcc!");
   case ISD::SETLT:
-  case ISD::SETULT: LowCC = ISD::SETULT; break;
+  case ISD::SETULT:
+    LowCC = ISD::SETULT;
+    break;
   case ISD::SETGT:
-  case ISD::SETUGT: LowCC = ISD::SETUGT; break;
+  case ISD::SETUGT:
+    LowCC = ISD::SETUGT;
+    break;
   case ISD::SETLE:
-  case ISD::SETULE: LowCC = ISD::SETULE; break;
+  case ISD::SETULE:
+    LowCC = ISD::SETULE;
+    break;
   case ISD::SETGE:
-  case ISD::SETUGE: LowCC = ISD::SETUGE; break;
+  case ISD::SETUGE:
+    LowCC = ISD::SETUGE;
+    break;
   }
 
   // LoCmp = lo(op1) < lo(op2)   // Always unsigned comparison
@@ -5248,11 +5558,24 @@ void DAGTypeLegalizer::IntegerExpandSetCCOperands(SDValue &NewLHS,
     // operands and condition code.
     bool FlipOperands = false;
     switch (CCCode) {
-    case ISD::SETGT:  CCCode = ISD::SETLT;  FlipOperands = true; break;
-    case ISD::SETUGT: CCCode = ISD::SETULT; FlipOperands = true; break;
-    case ISD::SETLE:  CCCode = ISD::SETGE;  FlipOperands = true; break;
-    case ISD::SETULE: CCCode = ISD::SETUGE; FlipOperands = true; break;
-    default: break;
+    case ISD::SETGT:
+      CCCode = ISD::SETLT;
+      FlipOperands = true;
+      break;
+    case ISD::SETUGT:
+      CCCode = ISD::SETULT;
+      FlipOperands = true;
+      break;
+    case ISD::SETLE:
+      CCCode = ISD::SETGE;
+      FlipOperands = true;
+      break;
+    case ISD::SETULE:
+      CCCode = ISD::SETUGE;
+      FlipOperands = true;
+      break;
+    default:
+      break;
     }
     if (FlipOperands) {
       std::swap(LHSLo, RHSLo);
@@ -5265,9 +5588,9 @@ void DAGTypeLegalizer::IntegerExpandSetCCOperands(SDValue &NewLHS,
     EVT LoVT = LHSLo.getValueType();
     SDVTList VTList = DAG.getVTList(LoVT, getSetCCResultType(LoVT));
     SDValue LowCmp = DAG.getNode(ISD::USUBO, dl, VTList, LHSLo, RHSLo);
-    SDValue Res = DAG.getNode(ISD::SETCCCARRY, dl, getSetCCResultType(HiVT),
-                              LHSHi, RHSHi, LowCmp.getValue(1),
-                              DAG.getCondCode(CCCode));
+    SDValue Res =
+        DAG.getNode(ISD::SETCCCARRY, dl, getSetCCResultType(HiVT), LHSHi, RHSHi,
+                    LowCmp.getValue(1), DAG.getCondCode(CCCode));
     NewLHS = Res;
     NewRHS = SDValue();
     return;
@@ -5296,8 +5619,9 @@ SDValue DAGTypeLegalizer::ExpandIntOp_BR_CC(SDNode *N) {
 
   // Update N to have the operands specified.
   return SDValue(DAG.UpdateNodeOperands(N, N->getOperand(0),
-                                DAG.getCondCode(CCCode), NewLHS, NewRHS,
-                                N->getOperand(4)), 0);
+                                        DAG.getCondCode(CCCode), NewLHS, NewRHS,
+                                        N->getOperand(4)),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::ExpandIntOp_SELECT_CC(SDNode *N) {
@@ -5313,9 +5637,10 @@ SDValue DAGTypeLegalizer::ExpandIntOp_SELECT_CC(SDNode *N) {
   }
 
   // Update N to have the operands specified.
-  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS,
-                                N->getOperand(2), N->getOperand(3),
-                                DAG.getCondCode(CCCode)), 0);
+  return SDValue(DAG.UpdateNodeOperands(N, NewLHS, NewRHS, N->getOperand(2),
+                                        N->getOperand(3),
+                                        DAG.getCondCode(CCCode)),
+                 0);
 }
 
 SDValue DAGTypeLegalizer::ExpandIntOp_SETCC(SDNode *N) {
@@ -5408,11 +5733,9 @@ SDValue DAGTypeLegalizer::ExpandIntOp_STORE(StoreSDNode *N, unsigned OpNo) {
   if (N->isAtomic()) {
     // It's typical to have larger CAS than atomic store instructions.
     SDLoc dl(N);
-    SDValue Swap = DAG.getAtomic(ISD::ATOMIC_SWAP, dl,
-                                 N->getMemoryVT(),
-                                 N->getOperand(0), N->getOperand(2),
-                                 N->getOperand(1),
-                                 N->getMemOperand());
+    SDValue Swap =
+        DAG.getAtomic(ISD::ATOMIC_SWAP, dl, N->getMemoryVT(), N->getOperand(0),
+                      N->getOperand(2), N->getOperand(1), N->getMemOperand());
     return Swap.getValue(1);
   }
   if (ISD::isNormalStore(N))
@@ -5423,7 +5746,7 @@ SDValue DAGTypeLegalizer::ExpandIntOp_STORE(StoreSDNode *N, unsigned OpNo) {
 
   EVT VT = N->getOperand(1).getValueType();
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), VT);
-  SDValue Ch  = N->getChain();
+  SDValue Ch = N->getChain();
   SDValue Ptr = N->getBasePtr();
   MachineMemOperand::Flags MMOFlags = N->getMemOperand()->getFlags();
   AAMDNodes AAInfo = N->getAAInfo();
@@ -5447,11 +5770,11 @@ SDValue DAGTypeLegalizer::ExpandIntOp_STORE(StoreSDNode *N, unsigned OpNo) {
                       N->getOriginalAlign(), MMOFlags, AAInfo);
 
     unsigned ExcessBits =
-      N->getMemoryVT().getSizeInBits() - NVT.getSizeInBits();
+        N->getMemoryVT().getSizeInBits() - NVT.getSizeInBits();
     EVT NEVT = EVT::getIntegerVT(*DAG.getContext(), ExcessBits);
 
     // Increment the pointer to the other half.
-    unsigned IncrementSize = NVT.getSizeInBits()/8;
+    unsigned IncrementSize = NVT.getSizeInBits() / 8;
     Ptr = DAG.getObjectPtrOffset(dl, Ptr, TypeSize::Fixed(IncrementSize));
     Hi = DAG.getTruncStore(Ch, dl, Hi, Ptr,
                            N->getPointerInfo().getWithOffset(IncrementSize),
@@ -5465,10 +5788,10 @@ SDValue DAGTypeLegalizer::ExpandIntOp_STORE(StoreSDNode *N, unsigned OpNo) {
 
   EVT ExtVT = N->getMemoryVT();
   unsigned EBytes = ExtVT.getStoreSize();
-  unsigned IncrementSize = NVT.getSizeInBits()/8;
-  unsigned ExcessBits = (EBytes - IncrementSize)*8;
-  EVT HiVT = EVT::getIntegerVT(*DAG.getContext(),
-                               ExtVT.getSizeInBits() - ExcessBits);
+  unsigned IncrementSize = NVT.getSizeInBits() / 8;
+  unsigned ExcessBits = (EBytes - IncrementSize) * 8;
+  EVT HiVT =
+      EVT::getIntegerVT(*DAG.getContext(), ExtVT.getSizeInBits() - ExcessBits);
 
   if (ExcessBits < NVT.getSizeInBits()) {
     // Transfer high bits from the top of Lo to the bottom of Hi.
@@ -5533,14 +5856,15 @@ SDValue DAGTypeLegalizer::PromoteIntRes_VECTOR_SPLICE(SDNode *N) {
   return DAG.getNode(ISD::VECTOR_SPLICE, dl, OutVT, V0, V1, N->getOperand(2));
 }
 
-SDValue DAGTypeLegalizer::PromoteIntRes_VECTOR_INTERLEAVE_DEINTERLEAVE(SDNode *N) {
+SDValue
+DAGTypeLegalizer::PromoteIntRes_VECTOR_INTERLEAVE_DEINTERLEAVE(SDNode *N) {
   SDLoc dl(N);
 
   SDValue V0 = GetPromotedInteger(N->getOperand(0));
   SDValue V1 = GetPromotedInteger(N->getOperand(1));
   EVT ResVT = V0.getValueType();
-  SDValue Res = DAG.getNode(N->getOpcode(), dl,
-                            DAG.getVTList(ResVT, ResVT), V0, V1);
+  SDValue Res =
+      DAG.getNode(N->getOpcode(), dl, DAG.getVTList(ResVT, ResVT), V0, V1);
   SetPromotedInteger(SDValue(N, 0), Res.getValue(0));
   SetPromotedInteger(SDValue(N, 1), Res.getValue(1));
   return SDValue();
@@ -5590,7 +5914,7 @@ SDValue DAGTypeLegalizer::PromoteIntRes_EXTRACT_SUBVECTOR(SDNode *N) {
     // Otherwise, use the BUILD_VECTOR approach below
     if (getTypeAction(InVT) == TargetLowering::TypePromoteInteger) {
       // Collect the (promoted) operands
-      SDValue Ops[] = { GetPromotedInteger(InOp0), BaseIdx };
+      SDValue Ops[] = {GetPromotedInteger(InOp0), BaseIdx};
 
       EVT PromEltVT = Ops[0].getValueType().getVectorElementType();
       assert(PromEltVT.bitsLE(NOutVTElem) &&
@@ -5617,10 +5941,11 @@ SDValue DAGTypeLegalizer::PromoteIntRes_EXTRACT_SUBVECTOR(SDNode *N) {
   for (unsigned i = 0; i != OutNumElems; ++i) {
 
     // Extract the element from the original vector.
-    SDValue Index = DAG.getNode(ISD::ADD, dl, BaseIdx.getValueType(),
-      BaseIdx, DAG.getConstant(i, dl, BaseIdx.getValueType()));
-    SDValue Ext = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
-      InVT.getVectorElementType(), N->getOperand(0), Index);
+    SDValue Index = DAG.getNode(ISD::ADD, dl, BaseIdx.getValueType(), BaseIdx,
+                                DAG.getConstant(i, dl, BaseIdx.getValueType()));
+    SDValue Ext =
+        DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, InVT.getVectorElementType(),
+                    N->getOperand(0), Index);
 
     SDValue Op = DAG.getAnyExtOrTrunc(Ext, dl, NOutVTElem);
     // Insert the converted element to the new vector.
@@ -5680,7 +6005,8 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BUILD_VECTOR(SDNode *N) {
   assert(NOutVT.isVector() && "This type must be promoted to a vector type");
   unsigned NumElems = N->getNumOperands();
   EVT NOutVTElem = NOutVT.getVectorElementType();
-  TargetLoweringBase::BooleanContent NOutBoolType = TLI.getBooleanContents(NOutVT);
+  TargetLoweringBase::BooleanContent NOutBoolType =
+      TLI.getBooleanContents(NOutVT);
   unsigned NOutExtOpc = TargetLowering::getExtendForContent(NOutBoolType);
   SDLoc dl(N);
 
@@ -5695,9 +6021,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_BUILD_VECTOR(SDNode *N) {
     // (v?i16 = BV <i32>, <i32>, ...), and we can't any_extend <i32> to <i16>.
     if (OpVT.bitsLT(NOutVTElem)) {
       unsigned ExtOpc = ISD::ANY_EXTEND;
-      // Attempt to extend constant bool vectors to match target's BooleanContent.
-      // While not necessary, this improves chances of the constant correctly
-      // folding with compare results (e.g. for NOT patterns).
+      // Attempt to extend constant bool vectors to match target's
+      // BooleanContent. While not necessary, this improves chances of the
+      // constant correctly folding with compare results (e.g. for NOT
+      // patterns).
       if (OpVT == MVT::i1 && Op.getOpcode() == ISD::Constant)
         ExtOpc = NOutExtOpc;
       Op = DAG.getNode(ExtOpc, dl, NOutVTElem, Op);
@@ -5817,22 +6144,22 @@ SDValue DAGTypeLegalizer::PromoteIntRes_EXTEND_VECTOR_INREG(SDNode *N) {
   // appropriately (ZERO_EXTEND or SIGN_EXTEND) from the original pre-promotion
   // type, and then construct a new *_EXTEND_VECTOR_INREG node to the promote-to
   // type..
-  if (getTypeAction(N->getOperand(0).getValueType())
-      == TargetLowering::TypePromoteInteger) {
+  if (getTypeAction(N->getOperand(0).getValueType()) ==
+      TargetLowering::TypePromoteInteger) {
     SDValue Promoted;
 
-    switch(N->getOpcode()) {
-      case ISD::SIGN_EXTEND_VECTOR_INREG:
-        Promoted = SExtPromotedInteger(N->getOperand(0));
-        break;
-      case ISD::ZERO_EXTEND_VECTOR_INREG:
-        Promoted = ZExtPromotedInteger(N->getOperand(0));
-        break;
-      case ISD::ANY_EXTEND_VECTOR_INREG:
-        Promoted = GetPromotedInteger(N->getOperand(0));
-        break;
-      default:
-        llvm_unreachable("Node has unexpected Opcode");
+    switch (N->getOpcode()) {
+    case ISD::SIGN_EXTEND_VECTOR_INREG:
+      Promoted = SExtPromotedInteger(N->getOperand(0));
+      break;
+    case ISD::ZERO_EXTEND_VECTOR_INREG:
+      Promoted = ZExtPromotedInteger(N->getOperand(0));
+      break;
+    case ISD::ANY_EXTEND_VECTOR_INREG:
+      Promoted = GetPromotedInteger(N->getOperand(0));
+      break;
+    default:
+      llvm_unreachable("Node has unexpected Opcode");
     }
     return DAG.getNode(N->getOpcode(), dl, NVT, Promoted);
   }
@@ -5851,10 +6178,10 @@ SDValue DAGTypeLegalizer::PromoteIntRes_INSERT_VECTOR_ELT(SDNode *N) {
   SDLoc dl(N);
   SDValue V0 = GetPromotedInteger(N->getOperand(0));
 
-  SDValue ConvElem = DAG.getNode(ISD::ANY_EXTEND, dl,
-    NOutVTElem, N->getOperand(1));
-  return DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NOutVT,
-    V0, ConvElem, N->getOperand(2));
+  SDValue ConvElem =
+      DAG.getNode(ISD::ANY_EXTEND, dl, NOutVTElem, N->getOperand(1));
+  return DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NOutVT, V0, ConvElem,
+                     N->getOperand(2));
 }
 
 SDValue DAGTypeLegalizer::PromoteIntRes_VECREDUCE(SDNode *N) {
@@ -5881,7 +6208,7 @@ SDValue DAGTypeLegalizer::PromoteIntOp_EXTRACT_VECTOR_ELT(SDNode *N) {
   SDValue V1 = DAG.getZExtOrTrunc(N->getOperand(1), dl,
                                   TLI.getVectorIdxTy(DAG.getDataLayout()));
   SDValue Ext = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl,
-    V0->getValueType(0).getScalarType(), V0, V1);
+                            V0->getValueType(0).getScalarType(), V0, V1);
 
   // EXTRACT_VECTOR_ELT can return types which are wider than the incoming
   // element types. If this is the case then we need to expand the outgoing
@@ -5910,7 +6237,8 @@ SDValue DAGTypeLegalizer::PromoteIntOp_EXTRACT_SUBVECTOR(SDNode *N) {
   MVT InVT = V0.getValueType().getSimpleVT();
   MVT OutVT = MVT::getVectorVT(InVT.getVectorElementType(),
                                N->getValueType(0).getVectorNumElements());
-  SDValue Ext = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, OutVT, V0, N->getOperand(1));
+  SDValue Ext =
+      DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, OutVT, V0, N->getOperand(1));
   return DAG.getNode(ISD::TRUNCATE, dl, N->getValueType(0), Ext);
 }
 
@@ -5944,7 +6272,7 @@ SDValue DAGTypeLegalizer::PromoteIntOp_CONCAT_VECTORS(SDNode *N) {
     EVT SclrTy = Incoming->getValueType(0).getVectorElementType();
     unsigned NumElem = Incoming->getValueType(0).getVectorNumElements();
 
-    for (unsigned i=0; i<NumElem; ++i) {
+    for (unsigned i = 0; i < NumElem; ++i) {
       // Extract element from incoming vector
       SDValue Ex = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, SclrTy, Incoming,
                                DAG.getVectorIdxConstant(i, dl));
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
index 328939e44dcb660..423325ec321f7d7 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.cpp
@@ -22,8 +22,8 @@ using namespace llvm;
 
 #define DEBUG_TYPE "legalize-types"
 
-static cl::opt<bool>
-EnableExpensiveChecks("enable-legalize-types-checking", cl::Hidden);
+static cl::opt<bool> EnableExpensiveChecks("enable-legalize-types-checking",
+                                           cl::Hidden);
 
 /// Do extensive, expensive, basic correctness checking.
 void DAGTypeLegalizer::PerformExpensiveChecks() {
@@ -70,7 +70,7 @@ void DAGTypeLegalizer::PerformExpensiveChecks() {
   // another node, and that new node was not seen by the LegalizeTypes machinery
   // (for example because it was created but not used).  In general, we cannot
   // distinguish between new nodes and deleted nodes.
-  SmallVector<SDNode*, 16> NewNodes;
+  SmallVector<SDNode *, 16> NewNodes;
   for (SDNode &Node : DAG.allnodes()) {
     // Remember nodes marked NewNode - they are subject to extra checking below.
     if (Node.getNodeId() == NewNode)
@@ -297,104 +297,105 @@ bool DAGTypeLegalizer::run() {
       }
     }
 
-ScanOperands:
+  ScanOperands:
     // Scan the operand list for the node, handling any nodes with operands that
     // are illegal.
     {
-    unsigned NumOperands = N->getNumOperands();
-    bool NeedsReanalyzing = false;
-    unsigned i;
-    for (i = 0; i != NumOperands; ++i) {
-      if (IgnoreNodeResults(N->getOperand(i).getNode()))
-        continue;
-
-      const auto &Op = N->getOperand(i);
-      LLVM_DEBUG(dbgs() << "Analyzing operand: "; Op.dump(&DAG));
-      EVT OpVT = Op.getValueType();
-      switch (getTypeAction(OpVT)) {
-      case TargetLowering::TypeLegal:
-        LLVM_DEBUG(dbgs() << "Legal operand\n");
-        continue;
-      case TargetLowering::TypeScalarizeScalableVector:
-        report_fatal_error(
-            "Scalarization of scalable vectors is not supported.");
-      // The following calls must either replace all of the node's results
-      // using ReplaceValueWith, and return "false"; or update the node's
-      // operands in place, and return "true".
-      case TargetLowering::TypePromoteInteger:
-        NeedsReanalyzing = PromoteIntegerOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeExpandInteger:
-        NeedsReanalyzing = ExpandIntegerOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeSoftenFloat:
-        NeedsReanalyzing = SoftenFloatOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeExpandFloat:
-        NeedsReanalyzing = ExpandFloatOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeScalarizeVector:
-        NeedsReanalyzing = ScalarizeVectorOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeSplitVector:
-        NeedsReanalyzing = SplitVectorOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeWidenVector:
-        NeedsReanalyzing = WidenVectorOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypePromoteFloat:
-        NeedsReanalyzing = PromoteFloatOperand(N, i);
-        Changed = true;
-        break;
-      case TargetLowering::TypeSoftPromoteHalf:
-        NeedsReanalyzing = SoftPromoteHalfOperand(N, i);
-        Changed = true;
+      unsigned NumOperands = N->getNumOperands();
+      bool NeedsReanalyzing = false;
+      unsigned i;
+      for (i = 0; i != NumOperands; ++i) {
+        if (IgnoreNodeResults(N->getOperand(i).getNode()))
+          continue;
+
+        const auto &Op = N->getOperand(i);
+        LLVM_DEBUG(dbgs() << "Analyzing operand: "; Op.dump(&DAG));
+        EVT OpVT = Op.getValueType();
+        switch (getTypeAction(OpVT)) {
+        case TargetLowering::TypeLegal:
+          LLVM_DEBUG(dbgs() << "Legal operand\n");
+          continue;
+        case TargetLowering::TypeScalarizeScalableVector:
+          report_fatal_error(
+              "Scalarization of scalable vectors is not supported.");
+        // The following calls must either replace all of the node's results
+        // using ReplaceValueWith, and return "false"; or update the node's
+        // operands in place, and return "true".
+        case TargetLowering::TypePromoteInteger:
+          NeedsReanalyzing = PromoteIntegerOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeExpandInteger:
+          NeedsReanalyzing = ExpandIntegerOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeSoftenFloat:
+          NeedsReanalyzing = SoftenFloatOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeExpandFloat:
+          NeedsReanalyzing = ExpandFloatOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeScalarizeVector:
+          NeedsReanalyzing = ScalarizeVectorOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeSplitVector:
+          NeedsReanalyzing = SplitVectorOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeWidenVector:
+          NeedsReanalyzing = WidenVectorOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypePromoteFloat:
+          NeedsReanalyzing = PromoteFloatOperand(N, i);
+          Changed = true;
+          break;
+        case TargetLowering::TypeSoftPromoteHalf:
+          NeedsReanalyzing = SoftPromoteHalfOperand(N, i);
+          Changed = true;
+          break;
+        }
         break;
       }
-      break;
-    }
 
-    // The sub-method updated N in place.  Check to see if any operands are new,
-    // and if so, mark them.  If the node needs revisiting, don't add all users
-    // to the worklist etc.
-    if (NeedsReanalyzing) {
-      assert(N->getNodeId() == ReadyToProcess && "Node ID recalculated?");
-
-      N->setNodeId(NewNode);
-      // Recompute the NodeId and correct processed operands, adding the node to
-      // the worklist if ready.
-      SDNode *M = AnalyzeNewNode(N);
-      if (M == N)
-        // The node didn't morph - nothing special to do, it will be revisited.
+      // The sub-method updated N in place.  Check to see if any operands are
+      // new, and if so, mark them.  If the node needs revisiting, don't add all
+      // users to the worklist etc.
+      if (NeedsReanalyzing) {
+        assert(N->getNodeId() == ReadyToProcess && "Node ID recalculated?");
+
+        N->setNodeId(NewNode);
+        // Recompute the NodeId and correct processed operands, adding the node
+        // to the worklist if ready.
+        SDNode *M = AnalyzeNewNode(N);
+        if (M == N)
+          // The node didn't morph - nothing special to do, it will be
+          // revisited.
+          continue;
+
+        // The node morphed - this is equivalent to legalizing by replacing
+        // every value of N with the corresponding value of M.  So do that now.
+        assert(N->getNumValues() == M->getNumValues() &&
+               "Node morphing changed the number of results!");
+        for (unsigned i = 0, e = N->getNumValues(); i != e; ++i)
+          // Replacing the value takes care of remapping the new value.
+          ReplaceValueWith(SDValue(N, i), SDValue(M, i));
+        assert(N->getNodeId() == NewNode && "Unexpected node state!");
+        // The node continues to live on as part of the NewNode fungus that
+        // grows on top of the useful nodes.  Nothing more needs to be done
+        // with it - move on to the next node.
         continue;
+      }
 
-      // The node morphed - this is equivalent to legalizing by replacing every
-      // value of N with the corresponding value of M.  So do that now.
-      assert(N->getNumValues() == M->getNumValues() &&
-             "Node morphing changed the number of results!");
-      for (unsigned i = 0, e = N->getNumValues(); i != e; ++i)
-        // Replacing the value takes care of remapping the new value.
-        ReplaceValueWith(SDValue(N, i), SDValue(M, i));
-      assert(N->getNodeId() == NewNode && "Unexpected node state!");
-      // The node continues to live on as part of the NewNode fungus that
-      // grows on top of the useful nodes.  Nothing more needs to be done
-      // with it - move on to the next node.
-      continue;
-    }
-
-    if (i == NumOperands) {
-      LLVM_DEBUG(dbgs() << "Legally typed node: "; N->dump(&DAG);
-                 dbgs() << "\n");
-    }
+      if (i == NumOperands) {
+        LLVM_DEBUG(dbgs() << "Legally typed node: "; N->dump(&DAG);
+                   dbgs() << "\n");
+      }
     }
-NodeDone:
+  NodeDone:
 
     // If we reach here, the node was processed, potentially creating new nodes.
     // Mark it as processed and add its users to the worklist as appropriate.
@@ -407,10 +408,10 @@ bool DAGTypeLegalizer::run() {
       // This node has two options: it can either be a new node or its Node ID
       // may be a count of the number of operands it has that are not ready.
       if (NodeId > 0) {
-        User->setNodeId(NodeId-1);
+        User->setNodeId(NodeId - 1);
 
         // If this was the last use it was waiting on, add it to the ready list.
-        if (NodeId-1 == ReadyToProcess)
+        if (NodeId - 1 == ReadyToProcess)
           Worklist.push_back(User);
         continue;
       }
@@ -472,19 +473,20 @@ bool DAGTypeLegalizer::run() {
       }
 
     if (Node.getNodeId() != Processed) {
-       if (Node.getNodeId() == NewNode)
-         dbgs() << "New node not analyzed?\n";
-       else if (Node.getNodeId() == Unanalyzed)
-         dbgs() << "Unanalyzed node not noticed?\n";
-       else if (Node.getNodeId() > 0)
-         dbgs() << "Operand not processed?\n";
-       else if (Node.getNodeId() == ReadyToProcess)
-         dbgs() << "Not added to worklist?\n";
-       Failed = true;
+      if (Node.getNodeId() == NewNode)
+        dbgs() << "New node not analyzed?\n";
+      else if (Node.getNodeId() == Unanalyzed)
+        dbgs() << "Unanalyzed node not noticed?\n";
+      else if (Node.getNodeId() > 0)
+        dbgs() << "Operand not processed?\n";
+      else if (Node.getNodeId() == ReadyToProcess)
+        dbgs() << "Not added to worklist?\n";
+      Failed = true;
     }
 
     if (Failed) {
-      Node.dump(&DAG); dbgs() << "\n";
+      Node.dump(&DAG);
+      dbgs() << "\n";
       llvm_unreachable(nullptr);
     }
   }
@@ -596,51 +598,51 @@ void DAGTypeLegalizer::RemapId(TableId &Id) {
 }
 
 namespace {
-  /// This class is a DAGUpdateListener that listens for updates to nodes and
-  /// recomputes their ready state.
-  class NodeUpdateListener : public SelectionDAG::DAGUpdateListener {
-    DAGTypeLegalizer &DTL;
-    SmallSetVector<SDNode*, 16> &NodesToAnalyze;
-  public:
-    explicit NodeUpdateListener(DAGTypeLegalizer &dtl,
-                                SmallSetVector<SDNode*, 16> &nta)
-      : SelectionDAG::DAGUpdateListener(dtl.getDAG()),
-        DTL(dtl), NodesToAnalyze(nta) {}
-
-    void NodeDeleted(SDNode *N, SDNode *E) override {
-      assert(N->getNodeId() != DAGTypeLegalizer::ReadyToProcess &&
-             N->getNodeId() != DAGTypeLegalizer::Processed &&
-             "Invalid node ID for RAUW deletion!");
-      // It is possible, though rare, for the deleted node N to occur as a
-      // target in a map, so note the replacement N -> E in ReplacedValues.
-      assert(E && "Node not replaced?");
-      DTL.NoteDeletion(N, E);
-
-      // In theory the deleted node could also have been scheduled for analysis.
-      // So remove it from the set of nodes which will be analyzed.
-      NodesToAnalyze.remove(N);
-
-      // In general nothing needs to be done for E, since it didn't change but
-      // only gained new uses.  However N -> E was just added to ReplacedValues,
-      // and the result of a ReplacedValues mapping is not allowed to be marked
-      // NewNode.  So if E is marked NewNode, then it needs to be analyzed.
-      if (E->getNodeId() == DAGTypeLegalizer::NewNode)
-        NodesToAnalyze.insert(E);
-    }
-
-    void NodeUpdated(SDNode *N) override {
-      // Node updates can mean pretty much anything.  It is possible that an
-      // operand was set to something already processed (f.e.) in which case
-      // this node could become ready.  Recompute its flags.
-      assert(N->getNodeId() != DAGTypeLegalizer::ReadyToProcess &&
-             N->getNodeId() != DAGTypeLegalizer::Processed &&
-             "Invalid node ID for RAUW deletion!");
-      N->setNodeId(DAGTypeLegalizer::NewNode);
-      NodesToAnalyze.insert(N);
-    }
-  };
-}
+/// This class is a DAGUpdateListener that listens for updates to nodes and
+/// recomputes their ready state.
+class NodeUpdateListener : public SelectionDAG::DAGUpdateListener {
+  DAGTypeLegalizer &DTL;
+  SmallSetVector<SDNode *, 16> &NodesToAnalyze;
+
+public:
+  explicit NodeUpdateListener(DAGTypeLegalizer &dtl,
+                              SmallSetVector<SDNode *, 16> &nta)
+      : SelectionDAG::DAGUpdateListener(dtl.getDAG()), DTL(dtl),
+        NodesToAnalyze(nta) {}
+
+  void NodeDeleted(SDNode *N, SDNode *E) override {
+    assert(N->getNodeId() != DAGTypeLegalizer::ReadyToProcess &&
+           N->getNodeId() != DAGTypeLegalizer::Processed &&
+           "Invalid node ID for RAUW deletion!");
+    // It is possible, though rare, for the deleted node N to occur as a
+    // target in a map, so note the replacement N -> E in ReplacedValues.
+    assert(E && "Node not replaced?");
+    DTL.NoteDeletion(N, E);
+
+    // In theory the deleted node could also have been scheduled for analysis.
+    // So remove it from the set of nodes which will be analyzed.
+    NodesToAnalyze.remove(N);
+
+    // In general nothing needs to be done for E, since it didn't change but
+    // only gained new uses.  However N -> E was just added to ReplacedValues,
+    // and the result of a ReplacedValues mapping is not allowed to be marked
+    // NewNode.  So if E is marked NewNode, then it needs to be analyzed.
+    if (E->getNodeId() == DAGTypeLegalizer::NewNode)
+      NodesToAnalyze.insert(E);
+  }
 
+  void NodeUpdated(SDNode *N) override {
+    // Node updates can mean pretty much anything.  It is possible that an
+    // operand was set to something already processed (f.e.) in which case
+    // this node could become ready.  Recompute its flags.
+    assert(N->getNodeId() != DAGTypeLegalizer::ReadyToProcess &&
+           N->getNodeId() != DAGTypeLegalizer::Processed &&
+           "Invalid node ID for RAUW deletion!");
+    N->setNodeId(DAGTypeLegalizer::NewNode);
+    NodesToAnalyze.insert(N);
+  }
+};
+} // namespace
 
 /// The specified value was legalized to the specified other value.
 /// Update the DAG and NodeIds replacing any uses of From to use To instead.
@@ -652,7 +654,7 @@ void DAGTypeLegalizer::ReplaceValueWith(SDValue From, SDValue To) {
 
   // Anything that used the old node should now use the new one.  Note that this
   // can potentially cause recursive merging.
-  SmallSetVector<SDNode*, 16> NodesToAnalyze;
+  SmallSetVector<SDNode *, 16> NodesToAnalyze;
   NodeUpdateListener NUL(*this, NodesToAnalyze);
   do {
 
@@ -708,7 +710,7 @@ void DAGTypeLegalizer::ReplaceValueWith(SDValue From, SDValue To) {
 
 void DAGTypeLegalizer::SetPromotedInteger(SDValue Op, SDValue Result) {
   assert(Result.getValueType() ==
-         TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
+             TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          "Invalid type for promoted integer");
   AnalyzeNewValue(Result);
 
@@ -736,7 +738,7 @@ void DAGTypeLegalizer::SetSoftenedFloat(SDValue Op, SDValue Result) {
 
 void DAGTypeLegalizer::SetPromotedFloat(SDValue Op, SDValue Result) {
   assert(Result.getValueType() ==
-         TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
+             TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          "Invalid type for promoted float");
   AnalyzeNewValue(Result);
 
@@ -779,10 +781,9 @@ void DAGTypeLegalizer::GetExpandedInteger(SDValue Op, SDValue &Lo,
   Hi = getSDValue(Entry.second);
 }
 
-void DAGTypeLegalizer::SetExpandedInteger(SDValue Op, SDValue Lo,
-                                          SDValue Hi) {
+void DAGTypeLegalizer::SetExpandedInteger(SDValue Op, SDValue Lo, SDValue Hi) {
   assert(Lo.getValueType() ==
-         TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
+             TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          Hi.getValueType() == Lo.getValueType() &&
          "Invalid type for expanded integer");
   // Lo/Hi may have been newly allocated, if so, add nodeid's as relevant.
@@ -808,18 +809,16 @@ void DAGTypeLegalizer::SetExpandedInteger(SDValue Op, SDValue Lo,
   Entry.second = getTableId(Hi);
 }
 
-void DAGTypeLegalizer::GetExpandedFloat(SDValue Op, SDValue &Lo,
-                                        SDValue &Hi) {
+void DAGTypeLegalizer::GetExpandedFloat(SDValue Op, SDValue &Lo, SDValue &Hi) {
   std::pair<TableId, TableId> &Entry = ExpandedFloats[getTableId(Op)];
   assert((Entry.first != 0) && "Operand isn't expanded");
   Lo = getSDValue(Entry.first);
   Hi = getSDValue(Entry.second);
 }
 
-void DAGTypeLegalizer::SetExpandedFloat(SDValue Op, SDValue Lo,
-                                        SDValue Hi) {
+void DAGTypeLegalizer::SetExpandedFloat(SDValue Op, SDValue Lo, SDValue Hi) {
   assert(Lo.getValueType() ==
-         TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
+             TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          Hi.getValueType() == Lo.getValueType() &&
          "Invalid type for expanded float");
   // Lo/Hi may have been newly allocated, if so, add nodeid's as relevant.
@@ -832,8 +831,7 @@ void DAGTypeLegalizer::SetExpandedFloat(SDValue Op, SDValue Lo,
   Entry.second = getTableId(Hi);
 }
 
-void DAGTypeLegalizer::GetSplitVector(SDValue Op, SDValue &Lo,
-                                      SDValue &Hi) {
+void DAGTypeLegalizer::GetSplitVector(SDValue Op, SDValue &Lo, SDValue &Hi) {
   std::pair<TableId, TableId> &Entry = SplitVectors[getTableId(Op)];
   Lo = getSDValue(Entry.first);
   Hi = getSDValue(Entry.second);
@@ -841,8 +839,7 @@ void DAGTypeLegalizer::GetSplitVector(SDValue Op, SDValue &Lo,
   ;
 }
 
-void DAGTypeLegalizer::SetSplitVector(SDValue Op, SDValue Lo,
-                                      SDValue Hi) {
+void DAGTypeLegalizer::SetSplitVector(SDValue Op, SDValue Lo, SDValue Hi) {
   assert(Lo.getValueType().getVectorElementType() ==
              Op.getValueType().getVectorElementType() &&
          Lo.getValueType().getVectorElementCount() * 2 ==
@@ -862,7 +859,7 @@ void DAGTypeLegalizer::SetSplitVector(SDValue Op, SDValue Lo,
 
 void DAGTypeLegalizer::SetWidenedVector(SDValue Op, SDValue Result) {
   assert(Result.getValueType() ==
-         TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
+             TLI.getTypeToTransformTo(*DAG.getContext(), Op.getValueType()) &&
          "Invalid type for widened vector");
   AnalyzeNewValue(Result);
 
@@ -871,7 +868,6 @@ void DAGTypeLegalizer::SetWidenedVector(SDValue Op, SDValue Result) {
   OpIdEntry = getTableId(Result);
 }
 
-
 //===----------------------------------------------------------------------===//
 // Utilities.
 //===----------------------------------------------------------------------===//
@@ -893,8 +889,7 @@ SDValue DAGTypeLegalizer::BitConvertVectorToIntegerVector(SDValue Op) {
                      EVT::getVectorVT(*DAG.getContext(), EltNVT, EltCnt), Op);
 }
 
-SDValue DAGTypeLegalizer::CreateStackStoreLoad(SDValue Op,
-                                               EVT DestVT) {
+SDValue DAGTypeLegalizer::CreateStackStoreLoad(SDValue Op, EVT DestVT) {
   SDLoc dl(Op);
   // Create the stack frame object.  Make sure it is aligned for both
   // the source and destination types.
@@ -945,7 +940,6 @@ bool DAGTypeLegalizer::CustomLowerNode(SDNode *N, EVT VT, bool LegalizeResult) {
   return true;
 }
 
-
 /// Widen the node's results with custom code provided by the target and return
 /// "true", or do nothing and return "false".
 bool DAGTypeLegalizer::CustomWidenLowerNode(SDNode *N, EVT VT) {
@@ -983,8 +977,7 @@ SDValue DAGTypeLegalizer::DisintegrateMERGE_VALUES(SDNode *N, unsigned ResNo) {
 
 /// Use ISD::EXTRACT_ELEMENT nodes to extract the low and high parts of the
 /// given value.
-void DAGTypeLegalizer::GetPairElements(SDValue Pair,
-                                       SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::GetPairElements(SDValue Pair, SDValue &Lo, SDValue &Hi) {
   SDLoc dl(Pair);
   EVT NVT = TLI.getTypeToTransformTo(*DAG.getContext(), Pair.getValueType());
   std::tie(Lo, Hi) = DAG.SplitScalar(Pair, dl, NVT, NVT);
@@ -1018,12 +1011,12 @@ SDValue DAGTypeLegalizer::PromoteTargetBoolean(SDValue Bool, EVT ValVT) {
 }
 
 /// Return the lower LoVT bits of Op in Lo and the upper HiVT bits in Hi.
-void DAGTypeLegalizer::SplitInteger(SDValue Op,
-                                    EVT LoVT, EVT HiVT,
-                                    SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::SplitInteger(SDValue Op, EVT LoVT, EVT HiVT, SDValue &Lo,
+                                    SDValue &Hi) {
   SDLoc dl(Op);
   assert(LoVT.getSizeInBits() + HiVT.getSizeInBits() ==
-         Op.getValueSizeInBits() && "Invalid integer splitting!");
+             Op.getValueSizeInBits() &&
+         "Invalid integer splitting!");
   Lo = DAG.getNode(ISD::TRUNCATE, dl, LoVT, Op);
   unsigned ReqShiftAmountInBits =
       Log2_32_Ceil(Op.getValueType().getSizeInBits());
@@ -1038,14 +1031,12 @@ void DAGTypeLegalizer::SplitInteger(SDValue Op,
 
 /// Return the lower and upper halves of Op's bits in a value type half the
 /// size of Op's.
-void DAGTypeLegalizer::SplitInteger(SDValue Op,
-                                    SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::SplitInteger(SDValue Op, SDValue &Lo, SDValue &Hi) {
   EVT HalfVT =
       EVT::getIntegerVT(*DAG.getContext(), Op.getValueSizeInBits() / 2);
   SplitInteger(Op, HalfVT, HalfVT, Lo, Hi);
 }
 
-
 //===----------------------------------------------------------------------===//
 //  Entry Point
 //===----------------------------------------------------------------------===//
@@ -1055,6 +1046,4 @@ void DAGTypeLegalizer::SplitInteger(SDValue Op,
 ///
 /// Note that this is an involved process that may invalidate pointers into
 /// the graph.
-bool SelectionDAG::LegalizeTypes() {
-  return DAGTypeLegalizer(*this).run();
-}
+bool SelectionDAG::LegalizeTypes() { return DAGTypeLegalizer(*this).run(); }
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
index 62d71d86846c2f8..046442f3ce1e1c9 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypes.h
@@ -30,6 +30,7 @@ namespace llvm {
 class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   const TargetLowering &TLI;
   SelectionDAG &DAG;
+
 public:
   /// This pass uses the NodeId on the SDNodes to hold information about the
   /// state of the node. The enum has all the values.
@@ -50,8 +51,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
     // 1+ - This is a node which has this many unprocessed operands.
   };
-private:
 
+private:
   /// This is a bitvector that contains two bits for each simple value type,
   /// where the two bits correspond to the LegalizeAction enum from
   /// TargetLowering. This can be queried with "getTypeAction(VT)".
@@ -64,7 +65,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
   /// Return true if this type is legal on this target.
   bool isTypeLegal(EVT VT) const {
-    return TLI.getTypeAction(*DAG.getContext(), VT) == TargetLowering::TypeLegal;
+    return TLI.getTypeAction(*DAG.getContext(), VT) ==
+           TargetLowering::TypeLegal;
   }
 
   /// Return true if this is a simple legal type.
@@ -134,7 +136,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
   /// This defines a worklist of nodes to process. In order to be pushed onto
   /// this worklist, all operands of a node must have already been processed.
-  SmallVector<SDNode*, 128> Worklist;
+  SmallVector<SDNode *, 128> Worklist;
 
   TableId getTableId(SDValue V) {
     assert(V.getNode() && "Getting TableId on SDValue()");
@@ -165,8 +167,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
 public:
   explicit DAGTypeLegalizer(SelectionDAG &dag)
-    : TLI(dag.getTargetLoweringInfo()), DAG(dag),
-    ValueTypeActions(TLI.getValueTypeActions()) {
+      : TLI(dag.getTargetLoweringInfo()), DAG(dag),
+        ValueTypeActions(TLI.getValueTypeActions()) {
     static_assert(MVT::LAST_VALUETYPE <= MVT::MAX_ALLOWED_VALUETYPE,
                   "Too many value types for ValueTypeActions to hold!");
   }
@@ -233,8 +235,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
   void ReplaceValueWith(SDValue From, SDValue To);
   void SplitInteger(SDValue Op, SDValue &Lo, SDValue &Hi);
-  void SplitInteger(SDValue Op, EVT LoVT, EVT HiVT,
-                    SDValue &Lo, SDValue &Hi);
+  void SplitInteger(SDValue Op, EVT LoVT, EVT HiVT, SDValue &Lo, SDValue &Hi);
 
   //===--------------------------------------------------------------------===//
   // Integer Promotion Support: LegalizeIntegerTypes.cpp
@@ -411,7 +412,7 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   SDValue PromoteIntOp_PATCHPOINT(SDNode *N, unsigned OpNo);
   SDValue PromoteIntOp_VP_STRIDED(SDNode *N, unsigned OpNo);
 
-  void PromoteSetCCOperands(SDValue &LHS,SDValue &RHS, ISD::CondCode Code);
+  void PromoteSetCCOperands(SDValue &LHS, SDValue &RHS, ISD::CondCode Code);
 
   //===--------------------------------------------------------------------===//
   // Integer Expansion Support: LegalizeIntegerTypes.cpp
@@ -428,62 +429,62 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
   // Integer Result Expansion.
   void ExpandIntegerResult(SDNode *N, unsigned ResNo);
-  void ExpandIntRes_ANY_EXTEND        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_AssertSext        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_AssertZext        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_Constant          (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ABS               (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_CTLZ              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_CTPOP             (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_CTTZ              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_LOAD          (LoadSDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_READCYCLECOUNTER  (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SIGN_EXTEND       (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SIGN_EXTEND_INREG (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_TRUNCATE          (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ZERO_EXTEND       (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_GET_ROUNDING      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_FP_TO_XINT        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_FP_TO_XINT_SAT    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_XROUND_XRINT      (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_Logical           (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ADDSUB            (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ADDSUBC           (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ADDSUBE           (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_UADDSUBO_CARRY    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SADDSUBO_CARRY    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_BITREVERSE        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_BSWAP             (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_PARITY            (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_MUL               (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SDIV              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SREM              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_UDIV              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_UREM              (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ShiftThroughStack (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_Shift             (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_MINMAX            (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_SADDSUBO          (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_UADDSUBO          (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_XMULO             (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_ADDSUBSAT         (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_SHLSAT            (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_MULFIX            (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_DIVFIX            (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_ATOMIC_LOAD       (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_VECREDUCE         (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_Rotate            (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandIntRes_FunnelShift       (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandIntRes_VSCALE            (SDNode *N, SDValue &Lo, SDValue &Hi);
-
-  void ExpandShiftByConstant(SDNode *N, const APInt &Amt,
-                             SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ANY_EXTEND(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_AssertSext(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_AssertZext(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_Constant(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ABS(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_CTLZ(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_CTPOP(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_CTTZ(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_LOAD(LoadSDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_READCYCLECOUNTER(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SIGN_EXTEND(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SIGN_EXTEND_INREG(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_TRUNCATE(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ZERO_EXTEND(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_GET_ROUNDING(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_FP_TO_XINT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_FP_TO_XINT_SAT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_XROUND_XRINT(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_Logical(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ADDSUB(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ADDSUBC(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ADDSUBE(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_UADDSUBO_CARRY(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SADDSUBO_CARRY(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_BITREVERSE(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_BSWAP(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_PARITY(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_MUL(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SDIV(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SREM(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_UDIV(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_UREM(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ShiftThroughStack(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_Shift(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_MINMAX(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_SADDSUBO(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_UADDSUBO(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_XMULO(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_ADDSUBSAT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_SHLSAT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_MULFIX(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_DIVFIX(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_ATOMIC_LOAD(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_VECREDUCE(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_Rotate(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandIntRes_FunnelShift(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandIntRes_VSCALE(SDNode *N, SDValue &Lo, SDValue &Hi);
+
+  void ExpandShiftByConstant(SDNode *N, const APInt &Amt, SDValue &Lo,
+                             SDValue &Hi);
   bool ExpandShiftWithKnownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi);
   bool ExpandShiftWithUnknownAmountBit(SDNode *N, SDValue &Lo, SDValue &Hi);
 
@@ -618,44 +619,44 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   // Float Result Expansion.
   void ExpandFloatResult(SDNode *N, unsigned ResNo);
   void ExpandFloatRes_ConstantFP(SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_Unary(SDNode *N, RTLIB::Libcall LC,
-                            SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_Binary(SDNode *N, RTLIB::Libcall LC,
-                             SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FABS      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FMINNUM   (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FMAXNUM   (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FADD      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FCBRT     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FCEIL     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FCOPYSIGN (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FCOS      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FDIV      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FEXP      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FEXP2     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FEXP10    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FFLOOR    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FLOG      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FLOG2     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FLOG10    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FMA       (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FMUL      (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_Unary(SDNode *N, RTLIB::Libcall LC, SDValue &Lo,
+                            SDValue &Hi);
+  void ExpandFloatRes_Binary(SDNode *N, RTLIB::Libcall LC, SDValue &Lo,
+                             SDValue &Hi);
+  void ExpandFloatRes_FABS(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FMINNUM(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FMAXNUM(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FADD(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FCBRT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FCEIL(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FCOPYSIGN(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FCOS(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FDIV(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FEXP(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FEXP2(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FEXP10(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FFLOOR(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FLOG(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FLOG2(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FLOG10(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FMA(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FMUL(SDNode *N, SDValue &Lo, SDValue &Hi);
   void ExpandFloatRes_FNEARBYINT(SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FNEG      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FP_EXTEND (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FPOW      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FPOWI     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FLDEXP    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FREEZE    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FREM      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FRINT     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FROUND    (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FNEG(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FP_EXTEND(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FPOW(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FPOWI(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FLDEXP(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FREEZE(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FREM(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FRINT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FROUND(SDNode *N, SDValue &Lo, SDValue &Hi);
   void ExpandFloatRes_FROUNDEVEN(SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FSIN      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FSQRT     (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FSUB      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_FTRUNC    (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandFloatRes_LOAD      (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FSIN(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FSQRT(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FSUB(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_FTRUNC(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandFloatRes_LOAD(SDNode *N, SDValue &Lo, SDValue &Hi);
   void ExpandFloatRes_XINT_TO_FP(SDNode *N, SDValue &Lo, SDValue &Hi);
 
   // Float Operand Expansion.
@@ -858,8 +859,8 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   void SplitVecRes_InregOp(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_ExtVecInRegOp(SDNode *N, SDValue &Lo, SDValue &Hi);
   void SplitVecRes_StrictFPOp(SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitVecRes_OverflowOp(SDNode *N, unsigned ResNo,
-                              SDValue &Lo, SDValue &Hi);
+  void SplitVecRes_OverflowOp(SDNode *N, unsigned ResNo, SDValue &Lo,
+                              SDValue &Hi);
 
   void SplitVecRes_FIX(SDNode *N, SDValue &Lo, SDValue &Hi);
 
@@ -953,27 +954,27 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
 
   // Widen Vector Result Promotion.
   void WidenVectorResult(SDNode *N, unsigned ResNo);
-  SDValue WidenVecRes_MERGE_VALUES(SDNode* N, unsigned ResNo);
-  SDValue WidenVecRes_AssertZext(SDNode* N);
-  SDValue WidenVecRes_BITCAST(SDNode* N);
-  SDValue WidenVecRes_BUILD_VECTOR(SDNode* N);
-  SDValue WidenVecRes_CONCAT_VECTORS(SDNode* N);
-  SDValue WidenVecRes_EXTEND_VECTOR_INREG(SDNode* N);
-  SDValue WidenVecRes_EXTRACT_SUBVECTOR(SDNode* N);
+  SDValue WidenVecRes_MERGE_VALUES(SDNode *N, unsigned ResNo);
+  SDValue WidenVecRes_AssertZext(SDNode *N);
+  SDValue WidenVecRes_BITCAST(SDNode *N);
+  SDValue WidenVecRes_BUILD_VECTOR(SDNode *N);
+  SDValue WidenVecRes_CONCAT_VECTORS(SDNode *N);
+  SDValue WidenVecRes_EXTEND_VECTOR_INREG(SDNode *N);
+  SDValue WidenVecRes_EXTRACT_SUBVECTOR(SDNode *N);
   SDValue WidenVecRes_INSERT_SUBVECTOR(SDNode *N);
-  SDValue WidenVecRes_INSERT_VECTOR_ELT(SDNode* N);
-  SDValue WidenVecRes_LOAD(SDNode* N);
+  SDValue WidenVecRes_INSERT_VECTOR_ELT(SDNode *N);
+  SDValue WidenVecRes_LOAD(SDNode *N);
   SDValue WidenVecRes_VP_LOAD(VPLoadSDNode *N);
   SDValue WidenVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *N);
-  SDValue WidenVecRes_MLOAD(MaskedLoadSDNode* N);
-  SDValue WidenVecRes_MGATHER(MaskedGatherSDNode* N);
-  SDValue WidenVecRes_VP_GATHER(VPGatherSDNode* N);
-  SDValue WidenVecRes_ScalarOp(SDNode* N);
+  SDValue WidenVecRes_MLOAD(MaskedLoadSDNode *N);
+  SDValue WidenVecRes_MGATHER(MaskedGatherSDNode *N);
+  SDValue WidenVecRes_VP_GATHER(VPGatherSDNode *N);
+  SDValue WidenVecRes_ScalarOp(SDNode *N);
   SDValue WidenVecRes_Select(SDNode *N);
   SDValue WidenVSELECTMask(SDNode *N);
-  SDValue WidenVecRes_SELECT_CC(SDNode* N);
-  SDValue WidenVecRes_SETCC(SDNode* N);
-  SDValue WidenVecRes_STRICT_FSETCC(SDNode* N);
+  SDValue WidenVecRes_SELECT_CC(SDNode *N);
+  SDValue WidenVecRes_SETCC(SDNode *N);
+  SDValue WidenVecRes_STRICT_FSETCC(SDNode *N);
   SDValue WidenVecRes_UNDEF(SDNode *N);
   SDValue WidenVecRes_VECTOR_SHUFFLE(ShuffleVectorSDNode *N);
   SDValue WidenVecRes_VECTOR_REVERSE(SDNode *N);
@@ -1001,15 +1002,15 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   SDValue WidenVecOp_EXTRACT_VECTOR_ELT(SDNode *N);
   SDValue WidenVecOp_INSERT_SUBVECTOR(SDNode *N);
   SDValue WidenVecOp_EXTRACT_SUBVECTOR(SDNode *N);
-  SDValue WidenVecOp_STORE(SDNode* N);
+  SDValue WidenVecOp_STORE(SDNode *N);
   SDValue WidenVecOp_VP_STORE(SDNode *N, unsigned OpNo);
   SDValue WidenVecOp_VP_STRIDED_STORE(SDNode *N, unsigned OpNo);
-  SDValue WidenVecOp_MSTORE(SDNode* N, unsigned OpNo);
-  SDValue WidenVecOp_MGATHER(SDNode* N, unsigned OpNo);
-  SDValue WidenVecOp_MSCATTER(SDNode* N, unsigned OpNo);
-  SDValue WidenVecOp_VP_SCATTER(SDNode* N, unsigned OpNo);
-  SDValue WidenVecOp_SETCC(SDNode* N);
-  SDValue WidenVecOp_STRICT_FSETCC(SDNode* N);
+  SDValue WidenVecOp_MSTORE(SDNode *N, unsigned OpNo);
+  SDValue WidenVecOp_MGATHER(SDNode *N, unsigned OpNo);
+  SDValue WidenVecOp_MSCATTER(SDNode *N, unsigned OpNo);
+  SDValue WidenVecOp_VP_SCATTER(SDNode *N, unsigned OpNo);
+  SDValue WidenVecOp_SETCC(SDNode *N);
+  SDValue WidenVecOp_STRICT_FSETCC(SDNode *N);
   SDValue WidenVecOp_VSELECT(SDNode *N);
 
   SDValue WidenVecOp_Convert(SDNode *N);
@@ -1083,14 +1084,14 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
   void GetPairElements(SDValue Pair, SDValue &Lo, SDValue &Hi);
 
   // Generic Result Splitting.
-  void SplitRes_MERGE_VALUES(SDNode *N, unsigned ResNo,
-                             SDValue &Lo, SDValue &Hi);
-  void SplitVecRes_AssertZext  (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitRes_ARITH_FENCE (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitRes_Select      (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitRes_SELECT_CC   (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitRes_UNDEF       (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void SplitRes_FREEZE      (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_MERGE_VALUES(SDNode *N, unsigned ResNo, SDValue &Lo,
+                             SDValue &Hi);
+  void SplitVecRes_AssertZext(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_ARITH_FENCE(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_Select(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_SELECT_CC(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_UNDEF(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void SplitRes_FREEZE(SDNode *N, SDValue &Lo, SDValue &Hi);
 
   //===--------------------------------------------------------------------===//
   // Generic Expansion: LegalizeTypesGeneric.cpp
@@ -1108,29 +1109,28 @@ class LLVM_LIBRARY_VISIBILITY DAGTypeLegalizer {
       GetExpandedFloat(Op, Lo, Hi);
   }
 
-
   /// This function will split the integer \p Op into \p NumElements
   /// operations of type \p EltVT and store them in \p Ops.
   void IntegerToVector(SDValue Op, unsigned NumElements,
                        SmallVectorImpl<SDValue> &Ops, EVT EltVT);
 
   // Generic Result Expansion.
-  void ExpandRes_MERGE_VALUES      (SDNode *N, unsigned ResNo,
-                                    SDValue &Lo, SDValue &Hi);
-  void ExpandRes_BITCAST           (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandRes_BUILD_PAIR        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandRes_EXTRACT_ELEMENT   (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandRes_MERGE_VALUES(SDNode *N, unsigned ResNo, SDValue &Lo,
+                              SDValue &Hi);
+  void ExpandRes_BITCAST(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandRes_BUILD_PAIR(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandRes_EXTRACT_ELEMENT(SDNode *N, SDValue &Lo, SDValue &Hi);
   void ExpandRes_EXTRACT_VECTOR_ELT(SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandRes_NormalLoad        (SDNode *N, SDValue &Lo, SDValue &Hi);
-  void ExpandRes_VAARG             (SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandRes_NormalLoad(SDNode *N, SDValue &Lo, SDValue &Hi);
+  void ExpandRes_VAARG(SDNode *N, SDValue &Lo, SDValue &Hi);
 
   // Generic Operand Expansion.
-  SDValue ExpandOp_BITCAST          (SDNode *N);
-  SDValue ExpandOp_BUILD_VECTOR     (SDNode *N);
-  SDValue ExpandOp_EXTRACT_ELEMENT  (SDNode *N);
+  SDValue ExpandOp_BITCAST(SDNode *N);
+  SDValue ExpandOp_BUILD_VECTOR(SDNode *N);
+  SDValue ExpandOp_EXTRACT_ELEMENT(SDNode *N);
   SDValue ExpandOp_INSERT_VECTOR_ELT(SDNode *N);
-  SDValue ExpandOp_SCALAR_TO_VECTOR (SDNode *N);
-  SDValue ExpandOp_NormalStore      (SDNode *N, unsigned OpNo);
+  SDValue ExpandOp_SCALAR_TO_VECTOR(SDNode *N);
+  SDValue ExpandOp_NormalStore(SDNode *N, unsigned OpNo);
 };
 
 } // end namespace llvm.
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
index 296242c00401c2f..3dfeaf3abf3a615 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeTypesGeneric.cpp
@@ -46,57 +46,57 @@ void DAGTypeLegalizer::ExpandRes_BITCAST(SDNode *N, SDValue &Lo, SDValue &Hi) {
 
   // Handle some special cases efficiently.
   switch (getTypeAction(InVT)) {
-    case TargetLowering::TypeLegal:
-    case TargetLowering::TypePromoteInteger:
-      break;
-    case TargetLowering::TypePromoteFloat:
-    case TargetLowering::TypeSoftPromoteHalf:
-      llvm_unreachable("Bitcast of a promotion-needing float should never need"
-                       "expansion");
-    case TargetLowering::TypeSoftenFloat:
-      SplitInteger(GetSoftenedFloat(InOp), Lo, Hi);
-      Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
-      Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
-      return;
-    case TargetLowering::TypeExpandInteger:
-    case TargetLowering::TypeExpandFloat: {
-      auto &DL = DAG.getDataLayout();
-      // Convert the expanded pieces of the input.
-      GetExpandedOp(InOp, Lo, Hi);
-      if (TLI.hasBigEndianPartOrdering(InVT, DL) !=
-          TLI.hasBigEndianPartOrdering(OutVT, DL))
-        std::swap(Lo, Hi);
-      Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
-      Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
-      return;
-    }
-    case TargetLowering::TypeSplitVector:
-      GetSplitVector(InOp, Lo, Hi);
-      if (TLI.hasBigEndianPartOrdering(OutVT, DAG.getDataLayout()))
-        std::swap(Lo, Hi);
-      Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
-      Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
-      return;
-    case TargetLowering::TypeScalarizeVector:
-      // Convert the element instead.
-      SplitInteger(BitConvertToInteger(GetScalarizedVector(InOp)), Lo, Hi);
-      Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
-      Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
-      return;
-    case TargetLowering::TypeScalarizeScalableVector:
-      report_fatal_error("Scalarization of scalable vectors is not supported.");
-    case TargetLowering::TypeWidenVector: {
-      assert(!(InVT.getVectorNumElements() & 1) && "Unsupported BITCAST");
-      InOp = GetWidenedVector(InOp);
-      EVT LoVT, HiVT;
-      std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(InVT);
-      std::tie(Lo, Hi) = DAG.SplitVector(InOp, dl, LoVT, HiVT);
-      if (TLI.hasBigEndianPartOrdering(OutVT, DAG.getDataLayout()))
-        std::swap(Lo, Hi);
-      Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
-      Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
-      return;
-    }
+  case TargetLowering::TypeLegal:
+  case TargetLowering::TypePromoteInteger:
+    break;
+  case TargetLowering::TypePromoteFloat:
+  case TargetLowering::TypeSoftPromoteHalf:
+    llvm_unreachable("Bitcast of a promotion-needing float should never need "
+                     "expansion");
+  case TargetLowering::TypeSoftenFloat:
+    SplitInteger(GetSoftenedFloat(InOp), Lo, Hi);
+    Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
+    Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
+    return;
+  case TargetLowering::TypeExpandInteger:
+  case TargetLowering::TypeExpandFloat: {
+    auto &DL = DAG.getDataLayout();
+    // Convert the expanded pieces of the input.
+    GetExpandedOp(InOp, Lo, Hi);
+    if (TLI.hasBigEndianPartOrdering(InVT, DL) !=
+        TLI.hasBigEndianPartOrdering(OutVT, DL))
+      std::swap(Lo, Hi);
+    Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
+    Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
+    return;
+  }
+  case TargetLowering::TypeSplitVector:
+    GetSplitVector(InOp, Lo, Hi);
+    if (TLI.hasBigEndianPartOrdering(OutVT, DAG.getDataLayout()))
+      std::swap(Lo, Hi);
+    Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
+    Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
+    return;
+  case TargetLowering::TypeScalarizeVector:
+    // Convert the element instead.
+    SplitInteger(BitConvertToInteger(GetScalarizedVector(InOp)), Lo, Hi);
+    Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
+    Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
+    return;
+  case TargetLowering::TypeScalarizeScalableVector:
+    report_fatal_error("Scalarization of scalable vectors is not supported.");
+  case TargetLowering::TypeWidenVector: {
+    assert(!(InVT.getVectorNumElements() & 1) && "Unsupported BITCAST");
+    InOp = GetWidenedVector(InOp);
+    EVT LoVT, HiVT;
+    std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(InVT);
+    std::tie(Lo, Hi) = DAG.SplitVector(InOp, dl, LoVT, HiVT);
+    if (TLI.hasBigEndianPartOrdering(OutVT, DAG.getDataLayout()))
+      std::swap(Lo, Hi);
+    Lo = DAG.getNode(ISD::BITCAST, dl, NOutVT, Lo);
+    Hi = DAG.getNode(ISD::BITCAST, dl, NOutVT, Hi);
+    return;
+  }
   }
 
   if (InVT.isVector() && OutVT.isInteger()) {
@@ -305,7 +305,6 @@ void DAGTypeLegalizer::ExpandRes_VAARG(SDNode *N, SDValue &Lo, SDValue &Hi) {
   ReplaceValueWith(SDValue(N, 1), Chain);
 }
 
-
 //===--------------------------------------------------------------------===//
 // Generic Operand Expansion.
 //===--------------------------------------------------------------------===//
@@ -379,7 +378,7 @@ SDValue DAGTypeLegalizer::ExpandOp_BUILD_VECTOR(SDNode *N) {
   // Build a vector of twice the length out of the expanded elements.
   // For example <3 x i64> -> <6 x i32>.
   SmallVector<SDValue, 16> NewElts;
-  NewElts.reserve(NumElts*2);
+  NewElts.reserve(NumElts * 2);
 
   for (unsigned i = 0; i < NumElts; ++i) {
     SDValue Lo, Hi;
@@ -418,9 +417,8 @@ SDValue DAGTypeLegalizer::ExpandOp_INSERT_VECTOR_ELT(SDNode *N) {
 
   // Bitconvert to a vector of twice the length with elements of the expanded
   // type, insert the expanded vector elements, and then convert back.
-  EVT NewVecVT = EVT::getVectorVT(*DAG.getContext(), NewEVT, NumElts*2);
-  SDValue NewVec = DAG.getNode(ISD::BITCAST, dl,
-                               NewVecVT, N->getOperand(0));
+  EVT NewVecVT = EVT::getVectorVT(*DAG.getContext(), NewEVT, NumElts * 2);
+  SDValue NewVec = DAG.getNode(ISD::BITCAST, dl, NewVecVT, N->getOperand(0));
 
   SDValue Lo, Hi;
   GetExpandedOp(Val, Lo, Hi);
@@ -430,10 +428,9 @@ SDValue DAGTypeLegalizer::ExpandOp_INSERT_VECTOR_ELT(SDNode *N) {
   SDValue Idx = N->getOperand(2);
   Idx = DAG.getNode(ISD::ADD, dl, Idx.getValueType(), Idx, Idx);
   NewVec = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NewVecVT, NewVec, Lo, Idx);
-  Idx = DAG.getNode(ISD::ADD, dl,
-                    Idx.getValueType(), Idx,
+  Idx = DAG.getNode(ISD::ADD, dl, Idx.getValueType(), Idx,
                     DAG.getConstant(1, dl, Idx.getValueType()));
-  NewVec =  DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NewVecVT, NewVec, Hi, Idx);
+  NewVec = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NewVecVT, NewVec, Hi, Idx);
 
   // Convert the new vector to the old vector type.
   return DAG.getNode(ISD::BITCAST, dl, VecVT, NewVec);
@@ -487,7 +484,6 @@ SDValue DAGTypeLegalizer::ExpandOp_NormalStore(SDNode *N, unsigned OpNo) {
   return DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Lo, Hi);
 }
 
-
 //===--------------------------------------------------------------------===//
 // Generic Result Splitting.
 //===--------------------------------------------------------------------===//
@@ -551,8 +547,7 @@ void DAGTypeLegalizer::SplitRes_Select(SDNode *N, SDValue &Lo, SDValue &Hi) {
   Hi = DAG.getNode(Opcode, dl, LH.getValueType(), CH, LH, RH, EVLHi);
 }
 
-void DAGTypeLegalizer::SplitRes_SELECT_CC(SDNode *N, SDValue &Lo,
-                                          SDValue &Hi) {
+void DAGTypeLegalizer::SplitRes_SELECT_CC(SDNode *N, SDValue &Lo, SDValue &Hi) {
   SDValue LL, LH, RL, RH;
   SDLoc dl(N);
   GetSplitOp(N->getOperand(2), LL, LH);
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
index dec81475f3a88fc..2ab7220e1f664e8 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp
@@ -51,7 +51,7 @@ using namespace llvm;
 namespace {
 
 class VectorLegalizer {
-  SelectionDAG& DAG;
+  SelectionDAG &DAG;
   const TargetLowering &TLI;
   bool Changed = false; // Keep track of whether anything changed
 
@@ -106,9 +106,9 @@ class VectorLegalizer {
 
   /// Implement expansion for ANY_EXTEND_VECTOR_INREG.
   ///
-  /// Shuffles the low lanes of the operand into place and bitcasts to the proper
-  /// type. The contents of the bits in the extended part of each element are
-  /// undef.
+  /// Shuffles the low lanes of the operand into place and bitcasts to the
+  /// proper type. The contents of the bits in the extended part of each element
+  /// are undef.
   SDValue ExpandANY_EXTEND_VECTOR_INREG(SDNode *Node);
 
   /// Implement expansion for SIGN_EXTEND_VECTOR_INREG.
@@ -174,8 +174,8 @@ class VectorLegalizer {
   void PromoteReduction(SDNode *Node, SmallVectorImpl<SDValue> &Results);
 
 public:
-  VectorLegalizer(SelectionDAG& dag) :
-      DAG(dag), TLI(dag.getTargetLoweringInfo()) {}
+  VectorLegalizer(SelectionDAG &dag)
+      : DAG(dag), TLI(dag.getTargetLoweringInfo()) {}
 
   /// Begin legalizer the vector operations in the DAG.
   bool Run();
@@ -187,7 +187,8 @@ bool VectorLegalizer::Run() {
   // Before we start legalizing vector nodes, check if there are any vectors.
   bool HasVectors = false;
   for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
-       E = std::prev(DAG.allnodes_end()); I != std::next(E); ++I) {
+                                       E = std::prev(DAG.allnodes_end());
+       I != std::next(E); ++I) {
     // Check if the values of the nodes contain vectors. We don't need to check
     // the operands because we are going to check their values at some point.
     HasVectors = llvm::any_of(I->values(), [](EVT T) { return T.isVector(); });
@@ -209,7 +210,8 @@ bool VectorLegalizer::Run() {
   // node is only legalized after all of its operands are legalized.
   DAG.AssignTopologicalOrder();
   for (SelectionDAG::allnodes_iterator I = DAG.allnodes_begin(),
-       E = std::prev(DAG.allnodes_end()); I != std::next(E); ++I)
+                                       E = std::prev(DAG.allnodes_end());
+       I != std::next(E); ++I)
     LegalizeOp(SDValue(&*I, 0));
 
   // Finally, it's possible the root changed.  Get the new root.
@@ -252,7 +254,8 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
   // Note that LegalizeOp may be reentered even from single-use nodes, which
   // means that we always must cache transformed nodes.
   DenseMap<SDValue, SDValue>::iterator I = LegalizedNodes.find(Op);
-  if (I != LegalizedNodes.end()) return I->second;
+  if (I != LegalizedNodes.end())
+    return I->second;
 
   // Legalize the operands
   SmallVector<SDValue, 8> Ops;
@@ -322,10 +325,10 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
         TLI.getStrictFPOperationAction(Node->getOpcode(), ValVT) ==
             TargetLowering::Legal) {
       EVT EltVT = ValVT.getVectorElementType();
-      if (TLI.getOperationAction(Node->getOpcode(), EltVT)
-          == TargetLowering::Expand &&
-          TLI.getStrictFPOperationAction(Node->getOpcode(), EltVT)
-          == TargetLowering::Legal)
+      if (TLI.getOperationAction(Node->getOpcode(), EltVT) ==
+              TargetLowering::Expand &&
+          TLI.getStrictFPOperationAction(Node->getOpcode(), EltVT) ==
+              TargetLowering::Legal)
         Action = TargetLowering::Legal;
     }
     break;
@@ -498,7 +501,8 @@ SDValue VectorLegalizer::LegalizeOp(SDValue Op) {
 
   SmallVector<SDValue, 8> ResultVals;
   switch (Action) {
-  default: llvm_unreachable("This action is not supported yet!");
+  default:
+    llvm_unreachable("This action is not supported yet!");
   case TargetLowering::Promote:
     assert((Op.getOpcode() != ISD::LOAD && Op.getOpcode() != ISD::STORE) &&
            "This action is not supported yet!");
@@ -988,7 +992,8 @@ void VectorLegalizer::Expand(SDNode *Node, SmallVectorImpl<SDValue> &Results) {
     break;
   case ISD::FP_TO_SINT_SAT:
   case ISD::FP_TO_UINT_SAT:
-    // Expand the fpsosisat if it is scalable to prevent it from unrolling below.
+    // Expand the FP_TO_*_SAT node if it is scalable to prevent it from
+    // unrolling below.
     if (Node->getValueType(0).isScalableVector()) {
       if (SDValue Expanded = TLI.expandFP_TO_INT_SAT(Node, DAG)) {
         Results.push_back(Expanded);
@@ -1068,8 +1073,8 @@ SDValue VectorLegalizer::ExpandSELECT(SDNode *Node) {
   SDValue Op1 = Node->getOperand(1);
   SDValue Op2 = Node->getOperand(2);
 
-  assert(VT.isVector() && !Mask.getValueType().isVector()
-         && Op1.getValueType() == Op2.getValueType() && "Invalid type");
+  assert(VT.isVector() && !Mask.getValueType().isVector() &&
+         Op1.getValueType() == Op2.getValueType() && "Invalid type");
 
   // If we can't even use the basic vector operations of
   // AND,OR,XOR, we will have to scalarize the op.
@@ -1252,7 +1257,8 @@ SDValue VectorLegalizer::ExpandBSWAP(SDNode *Node) {
   if (TLI.isShuffleMaskLegal(ShuffleMask, ByteVT)) {
     SDLoc DL(Node);
     SDValue Op = DAG.getNode(ISD::BITCAST, DL, ByteVT, Node->getOperand(0));
-    Op = DAG.getVectorShuffle(ByteVT, DL, Op, DAG.getUNDEF(ByteVT), ShuffleMask);
+    Op =
+        DAG.getVectorShuffle(ByteVT, DL, Op, DAG.getUNDEF(ByteVT), ShuffleMask);
     return DAG.getNode(ISD::BITCAST, DL, VT, Op);
   }
 
@@ -1302,8 +1308,8 @@ void VectorLegalizer::ExpandBITREVERSE(SDNode *Node,
           TLI.isOperationLegalOrCustomOrPromote(ISD::OR, ByteVT)))) {
       SDLoc DL(Node);
       SDValue Op = DAG.getNode(ISD::BITCAST, DL, ByteVT, Node->getOperand(0));
-      Op = DAG.getVectorShuffle(ByteVT, DL, Op, DAG.getUNDEF(ByteVT),
-                                BSWAPMask);
+      Op =
+          DAG.getVectorShuffle(ByteVT, DL, Op, DAG.getUNDEF(ByteVT), BSWAPMask);
       Op = DAG.getNode(ISD::BITREVERSE, DL, ByteVT, Op);
       Op = DAG.getNode(ISD::BITCAST, DL, VT, Op);
       Results.push_back(Op);
@@ -1452,7 +1458,8 @@ SDValue VectorLegalizer::ExpandVP_REM(SDNode *Node) {
   // Implement VP_SREM/UREM in terms of VP_SDIV/VP_UDIV, VP_MUL, VP_SUB.
   EVT VT = Node->getValueType(0);
 
-  unsigned DivOpc = Node->getOpcode() == ISD::VP_SREM ? ISD::VP_SDIV : ISD::VP_UDIV;
+  unsigned DivOpc =
+      Node->getOpcode() == ISD::VP_SREM ? ISD::VP_SDIV : ISD::VP_UDIV;
 
   if (!TLI.isOperationLegalOrCustom(DivOpc, VT) ||
       !TLI.isOperationLegalOrCustom(ISD::VP_MUL, VT) ||
@@ -1717,8 +1724,9 @@ void VectorLegalizer::ExpandMULO(SDNode *Node,
 void VectorLegalizer::ExpandFixedPointDiv(SDNode *Node,
                                           SmallVectorImpl<SDValue> &Results) {
   SDNode *N = Node;
-  if (SDValue Expanded = TLI.expandFixedPointDiv(N->getOpcode(), SDLoc(N),
-          N->getOperand(0), N->getOperand(1), N->getConstantOperandVal(2), DAG))
+  if (SDValue Expanded = TLI.expandFixedPointDiv(
+          N->getOpcode(), SDLoc(N), N->getOperand(0), N->getOperand(1),
+          N->getConstantOperandVal(2), DAG))
     Results.push_back(Expanded);
 }
 
@@ -1764,8 +1772,8 @@ void VectorLegalizer::UnrollStrictFPOp(SDNode *Node,
   EVT TmpEltVT = EltVT;
   if (Node->getOpcode() == ISD::STRICT_FSETCC ||
       Node->getOpcode() == ISD::STRICT_FSETCCS)
-    TmpEltVT = TLI.getSetCCResultType(DAG.getDataLayout(),
-                                      *DAG.getContext(), TmpEltVT);
+    TmpEltVT = TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+                                      TmpEltVT);
 
   EVT ValueVTs[] = {TmpEltVT, MVT::Other};
   SDValue Chain = Node->getOperand(0);
@@ -1838,6 +1846,4 @@ SDValue VectorLegalizer::UnrollVSETCC(SDNode *Node) {
   return DAG.getBuildVector(VT, dl, Ops);
 }
 
-bool SelectionDAG::LegalizeVectors() {
-  return VectorLegalizer(*this).Run();
-}
+bool SelectionDAG::LegalizeVectors() { return VectorLegalizer(*this).Run(); }
diff --git a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
index 1bb6fbbf064b931..fb377e16ab44a71 100644
--- a/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/LegalizeVectorTypes.cpp
@@ -52,23 +52,57 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) {
     report_fatal_error("Do not know how to scalarize the result of this "
                        "operator!\n");
 
-  case ISD::MERGE_VALUES:      R = ScalarizeVecRes_MERGE_VALUES(N, ResNo);break;
-  case ISD::BITCAST:           R = ScalarizeVecRes_BITCAST(N); break;
-  case ISD::BUILD_VECTOR:      R = ScalarizeVecRes_BUILD_VECTOR(N); break;
-  case ISD::EXTRACT_SUBVECTOR: R = ScalarizeVecRes_EXTRACT_SUBVECTOR(N); break;
-  case ISD::FP_ROUND:          R = ScalarizeVecRes_FP_ROUND(N); break;
-  case ISD::FPOWI:             R = ScalarizeVecRes_ExpOp(N); break;
-  case ISD::INSERT_VECTOR_ELT: R = ScalarizeVecRes_INSERT_VECTOR_ELT(N); break;
-  case ISD::LOAD:           R = ScalarizeVecRes_LOAD(cast<LoadSDNode>(N));break;
-  case ISD::SCALAR_TO_VECTOR:  R = ScalarizeVecRes_SCALAR_TO_VECTOR(N); break;
-  case ISD::SIGN_EXTEND_INREG: R = ScalarizeVecRes_InregOp(N); break;
-  case ISD::VSELECT:           R = ScalarizeVecRes_VSELECT(N); break;
-  case ISD::SELECT:            R = ScalarizeVecRes_SELECT(N); break;
-  case ISD::SELECT_CC:         R = ScalarizeVecRes_SELECT_CC(N); break;
-  case ISD::SETCC:             R = ScalarizeVecRes_SETCC(N); break;
-  case ISD::UNDEF:             R = ScalarizeVecRes_UNDEF(N); break;
-  case ISD::VECTOR_SHUFFLE:    R = ScalarizeVecRes_VECTOR_SHUFFLE(N); break;
-  case ISD::IS_FPCLASS:        R = ScalarizeVecRes_IS_FPCLASS(N); break;
+  case ISD::MERGE_VALUES:
+    R = ScalarizeVecRes_MERGE_VALUES(N, ResNo);
+    break;
+  case ISD::BITCAST:
+    R = ScalarizeVecRes_BITCAST(N);
+    break;
+  case ISD::BUILD_VECTOR:
+    R = ScalarizeVecRes_BUILD_VECTOR(N);
+    break;
+  case ISD::EXTRACT_SUBVECTOR:
+    R = ScalarizeVecRes_EXTRACT_SUBVECTOR(N);
+    break;
+  case ISD::FP_ROUND:
+    R = ScalarizeVecRes_FP_ROUND(N);
+    break;
+  case ISD::FPOWI:
+    R = ScalarizeVecRes_ExpOp(N);
+    break;
+  case ISD::INSERT_VECTOR_ELT:
+    R = ScalarizeVecRes_INSERT_VECTOR_ELT(N);
+    break;
+  case ISD::LOAD:
+    R = ScalarizeVecRes_LOAD(cast<LoadSDNode>(N));
+    break;
+  case ISD::SCALAR_TO_VECTOR:
+    R = ScalarizeVecRes_SCALAR_TO_VECTOR(N);
+    break;
+  case ISD::SIGN_EXTEND_INREG:
+    R = ScalarizeVecRes_InregOp(N);
+    break;
+  case ISD::VSELECT:
+    R = ScalarizeVecRes_VSELECT(N);
+    break;
+  case ISD::SELECT:
+    R = ScalarizeVecRes_SELECT(N);
+    break;
+  case ISD::SELECT_CC:
+    R = ScalarizeVecRes_SELECT_CC(N);
+    break;
+  case ISD::SETCC:
+    R = ScalarizeVecRes_SETCC(N);
+    break;
+  case ISD::UNDEF:
+    R = ScalarizeVecRes_UNDEF(N);
+    break;
+  case ISD::VECTOR_SHUFFLE:
+    R = ScalarizeVecRes_VECTOR_SHUFFLE(N);
+    break;
+  case ISD::IS_FPCLASS:
+    R = ScalarizeVecRes_IS_FPCLASS(N);
+    break;
   case ISD::ANY_EXTEND_VECTOR_INREG:
   case ISD::SIGN_EXTEND_VECTOR_INREG:
   case ISD::ZERO_EXTEND_VECTOR_INREG:
@@ -207,8 +241,8 @@ void DAGTypeLegalizer::ScalarizeVectorResult(SDNode *N, unsigned ResNo) {
 SDValue DAGTypeLegalizer::ScalarizeVecRes_BinOp(SDNode *N) {
   SDValue LHS = GetScalarizedVector(N->getOperand(0));
   SDValue RHS = GetScalarizedVector(N->getOperand(1));
-  return DAG.getNode(N->getOpcode(), SDLoc(N),
-                     LHS.getValueType(), LHS, RHS, N->getFlags());
+  return DAG.getNode(N->getOpcode(), SDLoc(N), LHS.getValueType(), LHS, RHS,
+                     N->getFlags());
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_TernaryOp(SDNode *N) {
@@ -311,10 +345,11 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_OverflowOp(SDNode *N,
     ScalarRHS = ElemsRHS[0];
   }
 
-  SDVTList ScalarVTs = DAG.getVTList(
-      ResVT.getVectorElementType(), OvVT.getVectorElementType());
-  SDNode *ScalarNode = DAG.getNode(
-      N->getOpcode(), DL, ScalarVTs, ScalarLHS, ScalarRHS).getNode();
+  SDVTList ScalarVTs =
+      DAG.getVTList(ResVT.getVectorElementType(), OvVT.getVectorElementType());
+  SDNode *ScalarNode =
+      DAG.getNode(N->getOpcode(), DL, ScalarVTs, ScalarLHS, ScalarRHS)
+          .getNode();
   ScalarNode->setFlags(N->getFlags());
 
   // Replace the other vector result not being explicitly scalarized here.
@@ -323,8 +358,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_OverflowOp(SDNode *N,
   if (getTypeAction(OtherVT) == TargetLowering::TypeScalarizeVector) {
     SetScalarizedVector(SDValue(N, OtherNo), SDValue(ScalarNode, OtherNo));
   } else {
-    SDValue OtherVal = DAG.getNode(
-        ISD::SCALAR_TO_VECTOR, DL, OtherVT, SDValue(ScalarNode, OtherNo));
+    SDValue OtherVal = DAG.getNode(ISD::SCALAR_TO_VECTOR, DL, OtherVT,
+                                   SDValue(ScalarNode, OtherNo));
     ReplaceValueWith(SDValue(N, OtherNo), OtherVal);
   }
 
@@ -339,13 +374,12 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_MERGE_VALUES(SDNode *N,
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_BITCAST(SDNode *N) {
   SDValue Op = N->getOperand(0);
-  if (Op.getValueType().isVector()
-      && Op.getValueType().getVectorNumElements() == 1
-      && !isSimpleLegalType(Op.getValueType()))
+  if (Op.getValueType().isVector() &&
+      Op.getValueType().getVectorNumElements() == 1 &&
+      !isSimpleLegalType(Op.getValueType()))
     Op = GetScalarizedVector(Op);
   EVT NewVT = N->getValueType(0).getVectorElementType();
-  return DAG.getNode(ISD::BITCAST, SDLoc(N),
-                     NewVT, Op);
+  return DAG.getNode(ISD::BITCAST, SDLoc(N), NewVT, Op);
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_BUILD_VECTOR(SDNode *N) {
@@ -442,8 +476,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_InregOp(SDNode *N) {
   EVT EltVT = N->getValueType(0).getVectorElementType();
   EVT ExtVT = cast<VTSDNode>(N->getOperand(1))->getVT().getVectorElementType();
   SDValue LHS = GetScalarizedVector(N->getOperand(0));
-  return DAG.getNode(N->getOpcode(), SDLoc(N), EltVT,
-                     LHS, DAG.getValueType(ExtVT));
+  return DAG.getNode(N->getOpcode(), SDLoc(N), EltVT, LHS,
+                     DAG.getValueType(ExtVT));
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_VecInregOp(SDNode *N) {
@@ -522,22 +556,22 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_VSELECT(SDNode *N) {
   EVT CondVT = Cond.getValueType();
   if (ScalarBool != VecBool) {
     switch (ScalarBool) {
-      case TargetLowering::UndefinedBooleanContent:
-        break;
-      case TargetLowering::ZeroOrOneBooleanContent:
-        assert(VecBool == TargetLowering::UndefinedBooleanContent ||
-               VecBool == TargetLowering::ZeroOrNegativeOneBooleanContent);
-        // Vector read from all ones, scalar expects a single 1 so mask.
-        Cond = DAG.getNode(ISD::AND, SDLoc(N), CondVT,
-                           Cond, DAG.getConstant(1, SDLoc(N), CondVT));
-        break;
-      case TargetLowering::ZeroOrNegativeOneBooleanContent:
-        assert(VecBool == TargetLowering::UndefinedBooleanContent ||
-               VecBool == TargetLowering::ZeroOrOneBooleanContent);
-        // Vector reads from a one, scalar from all ones so sign extend.
-        Cond = DAG.getNode(ISD::SIGN_EXTEND_INREG, SDLoc(N), CondVT,
-                           Cond, DAG.getValueType(MVT::i1));
-        break;
+    case TargetLowering::UndefinedBooleanContent:
+      break;
+    case TargetLowering::ZeroOrOneBooleanContent:
+      assert(VecBool == TargetLowering::UndefinedBooleanContent ||
+             VecBool == TargetLowering::ZeroOrNegativeOneBooleanContent);
+      // Vector read from all ones, scalar expects a single 1 so mask.
+      Cond = DAG.getNode(ISD::AND, SDLoc(N), CondVT, Cond,
+                         DAG.getConstant(1, SDLoc(N), CondVT));
+      break;
+    case TargetLowering::ZeroOrNegativeOneBooleanContent:
+      assert(VecBool == TargetLowering::UndefinedBooleanContent ||
+             VecBool == TargetLowering::ZeroOrOneBooleanContent);
+      // Vector reads from a one, scalar from all ones so sign extend.
+      Cond = DAG.getNode(ISD::SIGN_EXTEND_INREG, SDLoc(N), CondVT, Cond,
+                         DAG.getValueType(MVT::i1));
+      break;
     }
   }
 
@@ -546,24 +580,21 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_VSELECT(SDNode *N) {
   if (BoolVT.bitsLT(CondVT))
     Cond = DAG.getNode(ISD::TRUNCATE, SDLoc(N), BoolVT, Cond);
 
-  return DAG.getSelect(SDLoc(N),
-                       LHS.getValueType(), Cond, LHS,
+  return DAG.getSelect(SDLoc(N), LHS.getValueType(), Cond, LHS,
                        GetScalarizedVector(N->getOperand(2)));
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_SELECT(SDNode *N) {
   SDValue LHS = GetScalarizedVector(N->getOperand(1));
-  return DAG.getSelect(SDLoc(N),
-                       LHS.getValueType(), N->getOperand(0), LHS,
+  return DAG.getSelect(SDLoc(N), LHS.getValueType(), N->getOperand(0), LHS,
                        GetScalarizedVector(N->getOperand(2)));
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_SELECT_CC(SDNode *N) {
   SDValue LHS = GetScalarizedVector(N->getOperand(2));
   return DAG.getNode(ISD::SELECT_CC, SDLoc(N), LHS.getValueType(),
-                     N->getOperand(0), N->getOperand(1),
-                     LHS, GetScalarizedVector(N->getOperand(3)),
-                     N->getOperand(4));
+                     N->getOperand(0), N->getOperand(1), LHS,
+                     GetScalarizedVector(N->getOperand(3)), N->getOperand(4));
 }
 
 SDValue DAGTypeLegalizer::ScalarizeVecRes_UNDEF(SDNode *N) {
@@ -619,8 +650,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecRes_SETCC(SDNode *N) {
   }
 
   // Turn it into a scalar SETCC.
-  SDValue Res = DAG.getNode(ISD::SETCC, DL, MVT::i1, LHS, RHS,
-                            N->getOperand(2));
+  SDValue Res =
+      DAG.getNode(ISD::SETCC, DL, MVT::i1, LHS, RHS, N->getOperand(2));
   // Vectors may have a different boolean contents to scalars.  Promote the
   // value appropriately.
   ISD::NodeType ExtendCode =
@@ -740,7 +771,8 @@ bool DAGTypeLegalizer::ScalarizeVectorOperand(SDNode *N, unsigned OpNo) {
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
@@ -758,8 +790,7 @@ bool DAGTypeLegalizer::ScalarizeVectorOperand(SDNode *N, unsigned OpNo) {
 /// <1 x ty>. Convert the element instead.
 SDValue DAGTypeLegalizer::ScalarizeVecOp_BITCAST(SDNode *N) {
   SDValue Elt = GetScalarizedVector(N->getOperand(0));
-  return DAG.getNode(ISD::BITCAST, SDLoc(N),
-                     N->getValueType(0), Elt);
+  return DAG.getNode(ISD::BITCAST, SDLoc(N), N->getValueType(0), Elt);
 }
 
 /// If the input is a vector that needs to be scalarized, it must be <1 x ty>.
@@ -782,8 +813,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecOp_UnaryOp_StrictFP(SDNode *N) {
          "Unexpected vector type!");
   SDValue Elt = GetScalarizedVector(N->getOperand(1));
   SDValue Res = DAG.getNode(N->getOpcode(), SDLoc(N),
-                            { N->getValueType(0).getScalarType(), MVT::Other },
-                            { N->getOperand(0), Elt });
+                            {N->getValueType(0).getScalarType(), MVT::Other},
+                            {N->getOperand(0), Elt});
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -845,8 +876,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecOp_VSETCC(SDNode *N) {
   EVT NVT = VT.getVectorElementType();
   SDLoc DL(N);
   // Turn it into a scalar SETCC.
-  SDValue Res = DAG.getNode(ISD::SETCC, DL, MVT::i1, LHS, RHS,
-      N->getOperand(2));
+  SDValue Res =
+      DAG.getNode(ISD::SETCC, DL, MVT::i1, LHS, RHS, N->getOperand(2));
 
   // Vectors may have a different boolean contents to scalars.  Promote the
   // value appropriately.
@@ -860,7 +891,7 @@ SDValue DAGTypeLegalizer::ScalarizeVecOp_VSETCC(SDNode *N) {
 
 /// If the value to store is a vector that needs to be scalarized, it must be
 /// <1 x ty>. Just store the element.
-SDValue DAGTypeLegalizer::ScalarizeVecOp_STORE(StoreSDNode *N, unsigned OpNo){
+SDValue DAGTypeLegalizer::ScalarizeVecOp_STORE(StoreSDNode *N, unsigned OpNo) {
   assert(N->isUnindexed() && "Indexed store of one-element vector?");
   assert(OpNo == 1 && "Do not know how to scalarize this operand!");
   SDLoc dl(N);
@@ -889,14 +920,14 @@ SDValue DAGTypeLegalizer::ScalarizeVecOp_FP_ROUND(SDNode *N, unsigned OpNo) {
   return DAG.getNode(ISD::SCALAR_TO_VECTOR, SDLoc(N), N->getValueType(0), Res);
 }
 
-SDValue DAGTypeLegalizer::ScalarizeVecOp_STRICT_FP_ROUND(SDNode *N, 
+SDValue DAGTypeLegalizer::ScalarizeVecOp_STRICT_FP_ROUND(SDNode *N,
                                                          unsigned OpNo) {
   assert(OpNo == 1 && "Wrong operand for scalarization!");
   SDValue Elt = GetScalarizedVector(N->getOperand(1));
-  SDValue Res = DAG.getNode(ISD::STRICT_FP_ROUND, SDLoc(N),
-                            { N->getValueType(0).getVectorElementType(), 
-                              MVT::Other },
-                            { N->getOperand(0), Elt, N->getOperand(2) });
+  SDValue Res =
+      DAG.getNode(ISD::STRICT_FP_ROUND, SDLoc(N),
+                  {N->getValueType(0).getVectorElementType(), MVT::Other},
+                  {N->getOperand(0), Elt, N->getOperand(2)});
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -953,8 +984,8 @@ SDValue DAGTypeLegalizer::ScalarizeVecOp_VECREDUCE_SEQ(SDNode *N) {
   unsigned BaseOpc = ISD::getVecReduceBaseOpcode(N->getOpcode());
 
   SDValue Op = GetScalarizedVector(VecOp);
-  return DAG.getNode(BaseOpc, SDLoc(N), N->getValueType(0),
-                     AccOp, Op, N->getFlags());
+  return DAG.getNode(BaseOpc, SDLoc(N), N->getValueType(0), AccOp, Op,
+                     N->getFlags());
 }
 
 //===----------------------------------------------------------------------===//
@@ -983,24 +1014,50 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
     report_fatal_error("Do not know how to split the result of this "
                        "operator!\n");
 
-  case ISD::MERGE_VALUES: SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi); break;
-  case ISD::AssertZext:   SplitVecRes_AssertZext(N, Lo, Hi); break;
+  case ISD::MERGE_VALUES:
+    SplitRes_MERGE_VALUES(N, ResNo, Lo, Hi);
+    break;
+  case ISD::AssertZext:
+    SplitVecRes_AssertZext(N, Lo, Hi);
+    break;
   case ISD::VSELECT:
   case ISD::SELECT:
   case ISD::VP_MERGE:
-  case ISD::VP_SELECT:    SplitRes_Select(N, Lo, Hi); break;
-  case ISD::SELECT_CC:    SplitRes_SELECT_CC(N, Lo, Hi); break;
-  case ISD::UNDEF:        SplitRes_UNDEF(N, Lo, Hi); break;
-  case ISD::BITCAST:           SplitVecRes_BITCAST(N, Lo, Hi); break;
-  case ISD::BUILD_VECTOR:      SplitVecRes_BUILD_VECTOR(N, Lo, Hi); break;
-  case ISD::CONCAT_VECTORS:    SplitVecRes_CONCAT_VECTORS(N, Lo, Hi); break;
-  case ISD::EXTRACT_SUBVECTOR: SplitVecRes_EXTRACT_SUBVECTOR(N, Lo, Hi); break;
-  case ISD::INSERT_SUBVECTOR:  SplitVecRes_INSERT_SUBVECTOR(N, Lo, Hi); break;
+  case ISD::VP_SELECT:
+    SplitRes_Select(N, Lo, Hi);
+    break;
+  case ISD::SELECT_CC:
+    SplitRes_SELECT_CC(N, Lo, Hi);
+    break;
+  case ISD::UNDEF:
+    SplitRes_UNDEF(N, Lo, Hi);
+    break;
+  case ISD::BITCAST:
+    SplitVecRes_BITCAST(N, Lo, Hi);
+    break;
+  case ISD::BUILD_VECTOR:
+    SplitVecRes_BUILD_VECTOR(N, Lo, Hi);
+    break;
+  case ISD::CONCAT_VECTORS:
+    SplitVecRes_CONCAT_VECTORS(N, Lo, Hi);
+    break;
+  case ISD::EXTRACT_SUBVECTOR:
+    SplitVecRes_EXTRACT_SUBVECTOR(N, Lo, Hi);
+    break;
+  case ISD::INSERT_SUBVECTOR:
+    SplitVecRes_INSERT_SUBVECTOR(N, Lo, Hi);
+    break;
   case ISD::FPOWI:
   case ISD::FLDEXP:
-  case ISD::FCOPYSIGN:         SplitVecRes_FPOp_MultiType(N, Lo, Hi); break;
-  case ISD::IS_FPCLASS:        SplitVecRes_IS_FPCLASS(N, Lo, Hi); break;
-  case ISD::INSERT_VECTOR_ELT: SplitVecRes_INSERT_VECTOR_ELT(N, Lo, Hi); break;
+  case ISD::FCOPYSIGN:
+    SplitVecRes_FPOp_MultiType(N, Lo, Hi);
+    break;
+  case ISD::IS_FPCLASS:
+    SplitVecRes_IS_FPCLASS(N, Lo, Hi);
+    break;
+  case ISD::INSERT_VECTOR_ELT:
+    SplitVecRes_INSERT_VECTOR_ELT(N, Lo, Hi);
+    break;
   case ISD::SPLAT_VECTOR:
   case ISD::SCALAR_TO_VECTOR:
     SplitVecRes_ScalarOp(N, Lo, Hi);
@@ -1008,7 +1065,9 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::STEP_VECTOR:
     SplitVecRes_STEP_VECTOR(N, Lo, Hi);
     break;
-  case ISD::SIGN_EXTEND_INREG: SplitVecRes_InregOp(N, Lo, Hi); break;
+  case ISD::SIGN_EXTEND_INREG:
+    SplitVecRes_InregOp(N, Lo, Hi);
+    break;
   case ISD::LOAD:
     SplitVecRes_LOAD(cast<LoadSDNode>(N), Lo, Hi);
     break;
@@ -1070,7 +1129,8 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::VP_CTTZ_ZERO_UNDEF:
   case ISD::CTPOP:
   case ISD::VP_CTPOP:
-  case ISD::FABS: case ISD::VP_FABS:
+  case ISD::FABS:
+  case ISD::VP_FABS:
   case ISD::FCEIL:
   case ISD::VP_FCEIL:
   case ISD::FCOS:
@@ -1084,7 +1144,8 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::FLOG2:
   case ISD::FNEARBYINT:
   case ISD::VP_FNEARBYINT:
-  case ISD::FNEG: case ISD::VP_FNEG:
+  case ISD::FNEG:
+  case ISD::VP_FNEG:
   case ISD::FREEZE:
   case ISD::ARITH_FENCE:
   case ISD::FP_EXTEND:
@@ -1102,7 +1163,8 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::FROUNDEVEN:
   case ISD::VP_FROUNDEVEN:
   case ISD::FSIN:
-  case ISD::FSQRT: case ISD::VP_SQRT:
+  case ISD::FSQRT:
+  case ISD::VP_SQRT:
   case ISD::FTRUNC:
   case ISD::VP_FROUNDTOZERO:
   case ISD::SINT_TO_FP:
@@ -1126,35 +1188,59 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
     SplitVecRes_ExtendOp(N, Lo, Hi);
     break;
 
-  case ISD::ADD: case ISD::VP_ADD:
-  case ISD::SUB: case ISD::VP_SUB:
-  case ISD::MUL: case ISD::VP_MUL:
+  case ISD::ADD:
+  case ISD::VP_ADD:
+  case ISD::SUB:
+  case ISD::VP_SUB:
+  case ISD::MUL:
+  case ISD::VP_MUL:
   case ISD::MULHS:
   case ISD::MULHU:
-  case ISD::FADD: case ISD::VP_FADD:
-  case ISD::FSUB: case ISD::VP_FSUB:
-  case ISD::FMUL: case ISD::VP_FMUL:
-  case ISD::FMINNUM: case ISD::VP_FMINNUM:
-  case ISD::FMAXNUM: case ISD::VP_FMAXNUM:
+  case ISD::FADD:
+  case ISD::VP_FADD:
+  case ISD::FSUB:
+  case ISD::VP_FSUB:
+  case ISD::FMUL:
+  case ISD::VP_FMUL:
+  case ISD::FMINNUM:
+  case ISD::VP_FMINNUM:
+  case ISD::FMAXNUM:
+  case ISD::VP_FMAXNUM:
   case ISD::FMINIMUM:
   case ISD::FMAXIMUM:
-  case ISD::SDIV: case ISD::VP_SDIV:
-  case ISD::UDIV: case ISD::VP_UDIV:
-  case ISD::FDIV: case ISD::VP_FDIV:
+  case ISD::SDIV:
+  case ISD::VP_SDIV:
+  case ISD::UDIV:
+  case ISD::VP_UDIV:
+  case ISD::FDIV:
+  case ISD::VP_FDIV:
   case ISD::FPOW:
-  case ISD::AND: case ISD::VP_AND:
-  case ISD::OR: case ISD::VP_OR:
-  case ISD::XOR: case ISD::VP_XOR:
-  case ISD::SHL: case ISD::VP_SHL:
-  case ISD::SRA: case ISD::VP_ASHR:
-  case ISD::SRL: case ISD::VP_LSHR:
-  case ISD::UREM: case ISD::VP_UREM:
-  case ISD::SREM: case ISD::VP_SREM:
-  case ISD::FREM: case ISD::VP_FREM:
-  case ISD::SMIN: case ISD::VP_SMIN:
-  case ISD::SMAX: case ISD::VP_SMAX:
-  case ISD::UMIN: case ISD::VP_UMIN:
-  case ISD::UMAX: case ISD::VP_UMAX:
+  case ISD::AND:
+  case ISD::VP_AND:
+  case ISD::OR:
+  case ISD::VP_OR:
+  case ISD::XOR:
+  case ISD::VP_XOR:
+  case ISD::SHL:
+  case ISD::VP_SHL:
+  case ISD::SRA:
+  case ISD::VP_ASHR:
+  case ISD::SRL:
+  case ISD::VP_LSHR:
+  case ISD::UREM:
+  case ISD::VP_UREM:
+  case ISD::SREM:
+  case ISD::VP_SREM:
+  case ISD::FREM:
+  case ISD::VP_FREM:
+  case ISD::SMIN:
+  case ISD::VP_SMIN:
+  case ISD::SMAX:
+  case ISD::VP_SMAX:
+  case ISD::UMIN:
+  case ISD::VP_UMIN:
+  case ISD::UMAX:
+  case ISD::VP_UMAX:
   case ISD::SADDSAT:
   case ISD::UADDSAT:
   case ISD::SSUBSAT:
@@ -1166,7 +1252,8 @@ void DAGTypeLegalizer::SplitVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::VP_FCOPYSIGN:
     SplitVecRes_BinOp(N, Lo, Hi);
     break;
-  case ISD::FMA: case ISD::VP_FMA:
+  case ISD::FMA:
+  case ISD::VP_FMA:
   case ISD::FSHL:
   case ISD::VP_FSHL:
   case ISD::FSHR:
@@ -1293,8 +1380,10 @@ void DAGTypeLegalizer::SplitVecRes_TernaryOp(SDNode *N, SDValue &Lo,
   const SDNodeFlags Flags = N->getFlags();
   unsigned Opcode = N->getOpcode();
   if (N->getNumOperands() == 3) {
-    Lo = DAG.getNode(Opcode, dl, Op0Lo.getValueType(), Op0Lo, Op1Lo, Op2Lo, Flags);
-    Hi = DAG.getNode(Opcode, dl, Op0Hi.getValueType(), Op0Hi, Op1Hi, Op2Hi, Flags);
+    Lo = DAG.getNode(Opcode, dl, Op0Lo.getValueType(), Op0Lo, Op1Lo, Op2Lo,
+                     Flags);
+    Hi = DAG.getNode(Opcode, dl, Op0Hi.getValueType(), Op0Hi, Op1Hi, Op2Hi,
+                     Flags);
     return;
   }
 
@@ -1395,10 +1484,10 @@ void DAGTypeLegalizer::SplitVecRes_BUILD_VECTOR(SDNode *N, SDValue &Lo,
   SDLoc dl(N);
   std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
   unsigned LoNumElts = LoVT.getVectorNumElements();
-  SmallVector<SDValue, 8> LoOps(N->op_begin(), N->op_begin()+LoNumElts);
+  SmallVector<SDValue, 8> LoOps(N->op_begin(), N->op_begin() + LoNumElts);
   Lo = DAG.getBuildVector(LoVT, dl, LoOps);
 
-  SmallVector<SDValue, 8> HiOps(N->op_begin()+LoNumElts, N->op_end());
+  SmallVector<SDValue, 8> HiOps(N->op_begin() + LoNumElts, N->op_end());
   Hi = DAG.getBuildVector(HiVT, dl, HiOps);
 }
 
@@ -1416,10 +1505,10 @@ void DAGTypeLegalizer::SplitVecRes_CONCAT_VECTORS(SDNode *N, SDValue &Lo,
   EVT LoVT, HiVT;
   std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(N->getValueType(0));
 
-  SmallVector<SDValue, 8> LoOps(N->op_begin(), N->op_begin()+NumSubvectors);
+  SmallVector<SDValue, 8> LoOps(N->op_begin(), N->op_begin() + NumSubvectors);
   Lo = DAG.getNode(ISD::CONCAT_VECTORS, dl, LoVT, LoOps);
 
-  SmallVector<SDValue, 8> HiOps(N->op_begin()+NumSubvectors, N->op_end());
+  SmallVector<SDValue, 8> HiOps(N->op_begin() + NumSubvectors, N->op_end());
   Hi = DAG.getNode(ISD::CONCAT_VECTORS, dl, HiVT, HiOps);
 }
 
@@ -1555,7 +1644,7 @@ void DAGTypeLegalizer::SplitVecRes_InregOp(SDNode *N, SDValue &Lo,
 
   EVT LoVT, HiVT;
   std::tie(LoVT, HiVT) =
-    DAG.GetSplitDestVTs(cast<VTSDNode>(N->getOperand(1))->getVT());
+      DAG.GetSplitDestVTs(cast<VTSDNode>(N->getOperand(1))->getVT());
 
   Lo = DAG.getNode(N->getOpcode(), dl, LHSLo.getValueType(), LHSLo,
                    DAG.getValueType(LoVT));
@@ -1645,8 +1734,8 @@ void DAGTypeLegalizer::SplitVecRes_StrictFPOp(SDNode *N, SDValue &Lo,
 
   // Build a factor node to remember that this Op is independent of the
   // other one.
-  Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other,
-                      Lo.getValue(1), Hi.getValue(1));
+  Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Lo.getValue(1),
+                      Hi.getValue(1));
 
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
@@ -1669,7 +1758,7 @@ SDValue DAGTypeLegalizer::UnrollVectorOp_StrictFP(SDNode *N, unsigned ResNE) {
   else if (NE > ResNE)
     NE = ResNE;
 
-  //The results of each unrolled operation, including the chain.
+  // The results of each unrolled operation, including the chain.
   EVT ChainVTs[] = {EltVT, MVT::Other};
   SmallVector<SDValue, 8> Chains;
 
@@ -1690,8 +1779,8 @@ SDValue DAGTypeLegalizer::UnrollVectorOp_StrictFP(SDNode *N, unsigned ResNE) {
     SDValue Scalar = DAG.getNode(N->getOpcode(), dl, ChainVTs, Operands);
     Scalar.getNode()->setFlags(N->getFlags());
 
-    //Add in the scalar as well as its chain value to the
-    //result vectors.
+    // Add in the scalar as well as its chain value to the
+    // result vectors.
     Scalars.push_back(Scalar);
     Chains.push_back(Scalar.getValue(1));
   }
@@ -1741,12 +1830,12 @@ void DAGTypeLegalizer::SplitVecRes_OverflowOp(SDNode *N, unsigned ResNo,
   unsigned OtherNo = 1 - ResNo;
   EVT OtherVT = N->getValueType(OtherNo);
   if (getTypeAction(OtherVT) == TargetLowering::TypeSplitVector) {
-    SetSplitVector(SDValue(N, OtherNo),
-                   SDValue(LoNode, OtherNo), SDValue(HiNode, OtherNo));
+    SetSplitVector(SDValue(N, OtherNo), SDValue(LoNode, OtherNo),
+                   SDValue(HiNode, OtherNo));
   } else {
-    SDValue OtherVal = DAG.getNode(
-        ISD::CONCAT_VECTORS, dl, OtherVT,
-        SDValue(LoNode, OtherNo), SDValue(HiNode, OtherNo));
+    SDValue OtherVal =
+        DAG.getNode(ISD::CONCAT_VECTORS, dl, OtherVT, SDValue(LoNode, OtherNo),
+                    SDValue(HiNode, OtherNo));
     ReplaceValueWith(SDValue(N, OtherNo), OtherVal);
   }
 }
@@ -1763,8 +1852,8 @@ void DAGTypeLegalizer::SplitVecRes_INSERT_VECTOR_ELT(SDNode *N, SDValue &Lo,
     unsigned IdxVal = CIdx->getZExtValue();
     unsigned LoNumElts = Lo.getValueType().getVectorMinNumElements();
     if (IdxVal < LoNumElts) {
-      Lo = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl,
-                       Lo.getValueType(), Lo, Elt, Idx);
+      Lo = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, Lo.getValueType(), Lo, Elt,
+                       Idx);
       return;
     } else if (!Vec.getValueType().isScalableVector()) {
       Hi = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, Hi.getValueType(), Hi, Elt,
@@ -1808,8 +1897,7 @@ void DAGTypeLegalizer::SplitVecRes_INSERT_VECTOR_ELT(SDNode *N, SDValue &Lo,
   SDValue EltPtr = TLI.getVectorElementPointer(DAG, StackPtr, VecVT, Idx);
   Store = DAG.getTruncStore(
       Store, dl, Elt, EltPtr, MachinePointerInfo::getUnknownStack(MF), EltVT,
-      commonAlignment(SmallestAlign,
-                      EltVT.getFixedSizeInBits() / 8));
+      commonAlignment(SmallestAlign, EltVT.getFixedSizeInBits() / 8));
 
   EVT LoVT, HiVT;
   std::tie(LoVT, HiVT) = DAG.GetSplitDestVTs(VecVT);
@@ -2076,8 +2164,8 @@ void DAGTypeLegalizer::SplitVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *SLD,
   ReplaceValueWith(SDValue(SLD, 1), Ch);
 }
 
-void DAGTypeLegalizer::SplitVecRes_MLOAD(MaskedLoadSDNode *MLD,
-                                         SDValue &Lo, SDValue &Hi) {
+void DAGTypeLegalizer::SplitVecRes_MLOAD(MaskedLoadSDNode *MLD, SDValue &Lo,
+                                         SDValue &Hi) {
   assert(MLD->isUnindexed() && "Indexed masked load during type legalization!");
   EVT LoVT, HiVT;
   SDLoc dl(MLD);
@@ -2157,7 +2245,6 @@ void DAGTypeLegalizer::SplitVecRes_MLOAD(MaskedLoadSDNode *MLD,
   // Legalize the chain result - switch anything that used the old chain to
   // use the new one.
   ReplaceValueWith(SDValue(MLD, 1), Ch);
-
 }
 
 void DAGTypeLegalizer::SplitVecRes_Gather(MemSDNode *N, SDValue &Lo,
@@ -2909,20 +2996,36 @@ bool DAGTypeLegalizer::SplitVectorOperand(SDNode *N, unsigned OpNo) {
                        "operand!\n");
 
   case ISD::VP_SETCC:
-  case ISD::SETCC:             Res = SplitVecOp_VSETCC(N); break;
-  case ISD::BITCAST:           Res = SplitVecOp_BITCAST(N); break;
-  case ISD::EXTRACT_SUBVECTOR: Res = SplitVecOp_EXTRACT_SUBVECTOR(N); break;
-  case ISD::INSERT_SUBVECTOR:  Res = SplitVecOp_INSERT_SUBVECTOR(N, OpNo); break;
-  case ISD::EXTRACT_VECTOR_ELT:Res = SplitVecOp_EXTRACT_VECTOR_ELT(N); break;
-  case ISD::CONCAT_VECTORS:    Res = SplitVecOp_CONCAT_VECTORS(N); break;
+  case ISD::SETCC:
+    Res = SplitVecOp_VSETCC(N);
+    break;
+  case ISD::BITCAST:
+    Res = SplitVecOp_BITCAST(N);
+    break;
+  case ISD::EXTRACT_SUBVECTOR:
+    Res = SplitVecOp_EXTRACT_SUBVECTOR(N);
+    break;
+  case ISD::INSERT_SUBVECTOR:
+    Res = SplitVecOp_INSERT_SUBVECTOR(N, OpNo);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    Res = SplitVecOp_EXTRACT_VECTOR_ELT(N);
+    break;
+  case ISD::CONCAT_VECTORS:
+    Res = SplitVecOp_CONCAT_VECTORS(N);
+    break;
   case ISD::VP_TRUNCATE:
   case ISD::TRUNCATE:
     Res = SplitVecOp_TruncateHelper(N);
     break;
   case ISD::STRICT_FP_ROUND:
   case ISD::VP_FP_ROUND:
-  case ISD::FP_ROUND:          Res = SplitVecOp_FP_ROUND(N); break;
-  case ISD::FCOPYSIGN:         Res = SplitVecOp_FPOpDifferentTypes(N); break;
+  case ISD::FP_ROUND:
+    Res = SplitVecOp_FP_ROUND(N);
+    break;
+  case ISD::FCOPYSIGN:
+    Res = SplitVecOp_FPOpDifferentTypes(N);
+    break;
   case ISD::STORE:
     Res = SplitVecOp_STORE(cast<StoreSDNode>(N), OpNo);
     break;
@@ -3027,7 +3130,8 @@ bool DAGTypeLegalizer::SplitVectorOperand(SDNode *N, unsigned OpNo) {
   }
 
   // If the result is null, the sub-method took care of registering results etc.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
@@ -3039,7 +3143,7 @@ bool DAGTypeLegalizer::SplitVectorOperand(SDNode *N, unsigned OpNo) {
            "Invalid operand expansion");
   else
     assert(Res.getValueType() == N->getValueType(0) && N->getNumValues() == 1 &&
-         "Invalid operand expansion");
+           "Invalid operand expansion");
 
   ReplaceValueWith(SDValue(N, 0), Res);
   return false;
@@ -3072,9 +3176,9 @@ SDValue DAGTypeLegalizer::SplitVecOp_VSELECT(SDNode *N, unsigned OpNo) {
   std::tie(LoMask, HiMask) = DAG.SplitVector(Mask, DL);
 
   SDValue LoSelect =
-    DAG.getNode(ISD::VSELECT, DL, LoOpVT, LoMask, LoOp0, LoOp1);
+      DAG.getNode(ISD::VSELECT, DL, LoOpVT, LoMask, LoOp0, LoOp1);
   SDValue HiSelect =
-    DAG.getNode(ISD::VSELECT, DL, HiOpVT, HiMask, HiOp0, HiOp1);
+      DAG.getNode(ISD::VSELECT, DL, HiOpVT, HiMask, HiOp0, HiOp1);
 
   return DAG.getNode(ISD::CONCAT_VECTORS, DL, Src0VT, LoSelect, HiSelect);
 }
@@ -3159,16 +3263,16 @@ SDValue DAGTypeLegalizer::SplitVecOp_UnaryOp(SDNode *N) {
                                InVT.getVectorElementCount());
 
   if (N->isStrictFPOpcode()) {
-    Lo = DAG.getNode(N->getOpcode(), dl, { OutVT, MVT::Other }, 
-                     { N->getOperand(0), Lo });
-    Hi = DAG.getNode(N->getOpcode(), dl, { OutVT, MVT::Other }, 
-                     { N->getOperand(0), Hi });
+    Lo = DAG.getNode(N->getOpcode(), dl, {OutVT, MVT::Other},
+                     {N->getOperand(0), Lo});
+    Hi = DAG.getNode(N->getOpcode(), dl, {OutVT, MVT::Other},
+                     {N->getOperand(0), Hi});
 
     // Build a factor node to remember that this operation is independent
     // of the other one.
     SDValue Ch = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Lo.getValue(1),
                              Hi.getValue(1));
-  
+
     // Legalize the chain result - switch anything that used the old chain to
     // use the new one.
     ReplaceValueWith(SDValue(N, 1), Ch);
@@ -3302,9 +3406,11 @@ SDValue DAGTypeLegalizer::SplitVecOp_EXTRACT_VECTOR_ELT(SDNode *N) {
     if (IdxVal < LoElts)
       return SDValue(DAG.UpdateNodeOperands(N, Lo, Idx), 0);
     else if (!Vec.getValueType().isScalableVector())
-      return SDValue(DAG.UpdateNodeOperands(N, Hi,
-                                    DAG.getConstant(IdxVal - LoElts, SDLoc(N),
-                                                    Idx.getValueType())), 0);
+      return SDValue(
+          DAG.UpdateNodeOperands(
+              N, Hi,
+              DAG.getConstant(IdxVal - LoElts, SDLoc(N), Idx.getValueType())),
+          0);
   }
 
   // See if the target wants to custom expand this node.
@@ -3339,8 +3445,9 @@ SDValue DAGTypeLegalizer::SplitVecOp_EXTRACT_VECTOR_ELT(SDNode *N) {
   // FIXME: This is to handle i1 vectors with elements promoted to i8.
   // i1 vector handling needs general improvement.
   if (N->getValueType(0).bitsLT(EltVT)) {
-    SDValue Load = DAG.getLoad(EltVT, dl, Store, StackPtr,
-      MachinePointerInfo::getUnknownStack(DAG.getMachineFunction()));
+    SDValue Load = DAG.getLoad(
+        EltVT, dl, Store, StackPtr,
+        MachinePointerInfo::getUnknownStack(DAG.getMachineFunction()));
     return DAG.getZExtOrTrunc(Load, dl, N->getValueType(0));
   }
 
@@ -3525,7 +3632,7 @@ SDValue DAGTypeLegalizer::SplitVecOp_VP_STRIDED_STORE(VPStridedStoreSDNode *N,
 SDValue DAGTypeLegalizer::SplitVecOp_MSTORE(MaskedStoreSDNode *N,
                                             unsigned OpNo) {
   assert(N->isUnindexed() && "Indexed masked store of vector?");
-  SDValue Ch  = N->getChain();
+  SDValue Ch = N->getChain();
   SDValue Ptr = N->getBasePtr();
   SDValue Offset = N->getOffset();
   assert(Offset.isUndef() && "Unexpected indexed masked store offset");
@@ -3690,7 +3797,7 @@ SDValue DAGTypeLegalizer::SplitVecOp_STORE(StoreSDNode *N, unsigned OpNo) {
   SDLoc DL(N);
 
   bool isTruncating = N->isTruncatingStore();
-  SDValue Ch  = N->getChain();
+  SDValue Ch = N->getChain();
   SDValue Ptr = N->getBasePtr();
   EVT MemoryVT = N->getMemoryVT();
   Align Alignment = N->getOriginalAlign();
@@ -3717,8 +3824,8 @@ SDValue DAGTypeLegalizer::SplitVecOp_STORE(StoreSDNode *N, unsigned OpNo) {
   IncrementPointer(N, LoMemVT, MPI, Ptr);
 
   if (isTruncating)
-    Hi = DAG.getTruncStore(Ch, DL, Hi, Ptr, MPI,
-                           HiMemVT, Alignment, MMOFlags, AAInfo);
+    Hi = DAG.getTruncStore(Ch, DL, Hi, Ptr, MPI, HiMemVT, Alignment, MMOFlags,
+                           AAInfo);
   else
     Hi = DAG.getStore(Ch, DL, Hi, Ptr, MPI, Alignment, MMOFlags, AAInfo);
 
@@ -3736,8 +3843,8 @@ SDValue DAGTypeLegalizer::SplitVecOp_CONCAT_VECTORS(SDNode *N) {
   SmallVector<SDValue, 32> Elts;
   EVT EltVT = N->getValueType(0).getVectorElementType();
   for (const SDValue &Op : N->op_values()) {
-    for (unsigned i = 0, e = Op.getValueType().getVectorNumElements();
-         i != e; ++i) {
+    for (unsigned i = 0, e = Op.getValueType().getVectorNumElements(); i != e;
+         ++i) {
       Elts.push_back(DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, EltVT, Op,
                                  DAG.getVectorIdxConstant(i, DL)));
     }
@@ -3782,8 +3889,7 @@ SDValue DAGTypeLegalizer::SplitVecOp_TruncateHelper(SDNode *N) {
   // If the input elements are only 1/2 the width of the result elements,
   // just use the normal splitting. Our trick only work if there's room
   // to split more than once.
-  if (isTypeLegal(LoOutVT) ||
-      InElementSize <= OutElementSize * 2)
+  if (isTypeLegal(LoOutVT) || InElementSize <= OutElementSize * 2)
     return SplitVecOp_UnaryOp(N);
   SDLoc DL(N);
 
@@ -3803,9 +3909,9 @@ SDValue DAGTypeLegalizer::SplitVecOp_TruncateHelper(SDNode *N) {
   //
   // This assumes the number of elements is a power of two; any vector that
   // isn't should be widened, not split.
-  EVT HalfElementVT = IsFloat ?
-    EVT::getFloatingPointVT(InElementSize/2) :
-    EVT::getIntegerVT(*DAG.getContext(), InElementSize/2);
+  EVT HalfElementVT =
+      IsFloat ? EVT::getFloatingPointVT(InElementSize / 2)
+              : EVT::getIntegerVT(*DAG.getContext(), InElementSize / 2);
   EVT HalfVT = EVT::getVectorVT(*DAG.getContext(), HalfElementVT,
                                 NumElements.divideCoefficientBy(2));
 
@@ -3828,8 +3934,8 @@ SDValue DAGTypeLegalizer::SplitVecOp_TruncateHelper(SDNode *N) {
 
   // Concatenate them to get the full intermediate truncation result.
   EVT InterVT = EVT::getVectorVT(*DAG.getContext(), HalfElementVT, NumElements);
-  SDValue InterVec = DAG.getNode(ISD::CONCAT_VECTORS, DL, InterVT, HalfLo,
-                                 HalfHi);
+  SDValue InterVec =
+      DAG.getNode(ISD::CONCAT_VECTORS, DL, InterVT, HalfLo, HalfHi);
   // Now finish up by truncating all the way down to the original result
   // type. This should normally be something that ends up being legal directly,
   // but in theory if a target has very wide vectors and an annoyingly
@@ -3865,7 +3971,7 @@ SDValue DAGTypeLegalizer::SplitVecOp_VSETCC(SDNode *N) {
 
   LLVMContext &Context = *DAG.getContext();
   EVT PartResVT = EVT::getVectorVT(Context, MVT::i1, PartEltCnt);
-  EVT WideResVT = EVT::getVectorVT(Context, MVT::i1, PartEltCnt*2);
+  EVT WideResVT = EVT::getVectorVT(Context, MVT::i1, PartEltCnt * 2);
 
   if (N->getOpcode() == ISD::SETCC) {
     LoRes = DAG.getNode(ISD::SETCC, DL, PartResVT, Lo0, Lo1, N->getOperand(2));
@@ -3889,7 +3995,6 @@ SDValue DAGTypeLegalizer::SplitVecOp_VSETCC(SDNode *N) {
   return DAG.getNode(ExtendCode, DL, N->getValueType(0), Con);
 }
 
-
 SDValue DAGTypeLegalizer::SplitVecOp_FP_ROUND(SDNode *N) {
   // The result has a legal vector type, but the input needs splitting.
   EVT ResVT = N->getValueType(0);
@@ -3902,13 +4007,13 @@ SDValue DAGTypeLegalizer::SplitVecOp_FP_ROUND(SDNode *N) {
                                InVT.getVectorElementCount());
 
   if (N->isStrictFPOpcode()) {
-    Lo = DAG.getNode(N->getOpcode(), DL, { OutVT, MVT::Other }, 
-                     { N->getOperand(0), Lo, N->getOperand(2) });
-    Hi = DAG.getNode(N->getOpcode(), DL, { OutVT, MVT::Other }, 
-                     { N->getOperand(0), Hi, N->getOperand(2) });
+    Lo = DAG.getNode(N->getOpcode(), DL, {OutVT, MVT::Other},
+                     {N->getOperand(0), Lo, N->getOperand(2)});
+    Hi = DAG.getNode(N->getOpcode(), DL, {OutVT, MVT::Other},
+                     {N->getOperand(0), Hi, N->getOperand(2)});
     // Legalize the chain result - switch anything that used the old chain to
     // use the new one.
-    SDValue NewChain = DAG.getNode(ISD::TokenFactor, DL, MVT::Other, 
+    SDValue NewChain = DAG.getNode(ISD::TokenFactor, DL, MVT::Other,
                                    Lo.getValue(1), Hi.getValue(1));
     ReplaceValueWith(SDValue(N, 1), NewChain);
   } else if (N->getOpcode() == ISD::VP_FP_ROUND) {
@@ -4008,33 +4113,57 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
 #endif
     report_fatal_error("Do not know how to widen the result of this operator!");
 
-  case ISD::MERGE_VALUES:      Res = WidenVecRes_MERGE_VALUES(N, ResNo); break;
-  case ISD::AssertZext:        Res = WidenVecRes_AssertZext(N); break;
-  case ISD::BITCAST:           Res = WidenVecRes_BITCAST(N); break;
-  case ISD::BUILD_VECTOR:      Res = WidenVecRes_BUILD_VECTOR(N); break;
-  case ISD::CONCAT_VECTORS:    Res = WidenVecRes_CONCAT_VECTORS(N); break;
+  case ISD::MERGE_VALUES:
+    Res = WidenVecRes_MERGE_VALUES(N, ResNo);
+    break;
+  case ISD::AssertZext:
+    Res = WidenVecRes_AssertZext(N);
+    break;
+  case ISD::BITCAST:
+    Res = WidenVecRes_BITCAST(N);
+    break;
+  case ISD::BUILD_VECTOR:
+    Res = WidenVecRes_BUILD_VECTOR(N);
+    break;
+  case ISD::CONCAT_VECTORS:
+    Res = WidenVecRes_CONCAT_VECTORS(N);
+    break;
   case ISD::INSERT_SUBVECTOR:
     Res = WidenVecRes_INSERT_SUBVECTOR(N);
     break;
-  case ISD::EXTRACT_SUBVECTOR: Res = WidenVecRes_EXTRACT_SUBVECTOR(N); break;
-  case ISD::INSERT_VECTOR_ELT: Res = WidenVecRes_INSERT_VECTOR_ELT(N); break;
-  case ISD::LOAD:              Res = WidenVecRes_LOAD(N); break;
+  case ISD::EXTRACT_SUBVECTOR:
+    Res = WidenVecRes_EXTRACT_SUBVECTOR(N);
+    break;
+  case ISD::INSERT_VECTOR_ELT:
+    Res = WidenVecRes_INSERT_VECTOR_ELT(N);
+    break;
+  case ISD::LOAD:
+    Res = WidenVecRes_LOAD(N);
+    break;
   case ISD::STEP_VECTOR:
   case ISD::SPLAT_VECTOR:
   case ISD::SCALAR_TO_VECTOR:
     Res = WidenVecRes_ScalarOp(N);
     break;
-  case ISD::SIGN_EXTEND_INREG: Res = WidenVecRes_InregOp(N); break;
+  case ISD::SIGN_EXTEND_INREG:
+    Res = WidenVecRes_InregOp(N);
+    break;
   case ISD::VSELECT:
   case ISD::SELECT:
   case ISD::VP_SELECT:
   case ISD::VP_MERGE:
     Res = WidenVecRes_Select(N);
     break;
-  case ISD::SELECT_CC:         Res = WidenVecRes_SELECT_CC(N); break;
+  case ISD::SELECT_CC:
+    Res = WidenVecRes_SELECT_CC(N);
+    break;
   case ISD::VP_SETCC:
-  case ISD::SETCC:             Res = WidenVecRes_SETCC(N); break;
-  case ISD::UNDEF:             Res = WidenVecRes_UNDEF(N); break;
+  case ISD::SETCC:
+    Res = WidenVecRes_SETCC(N);
+    break;
+  case ISD::UNDEF:
+    Res = WidenVecRes_UNDEF(N);
+    break;
   case ISD::VECTOR_SHUFFLE:
     Res = WidenVecRes_VECTOR_SHUFFLE(cast<ShuffleVectorSDNode>(N));
     break;
@@ -4057,25 +4186,40 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
     Res = WidenVecRes_VECTOR_REVERSE(N);
     break;
 
-  case ISD::ADD: case ISD::VP_ADD:
-  case ISD::AND: case ISD::VP_AND:
-  case ISD::MUL: case ISD::VP_MUL:
+  case ISD::ADD:
+  case ISD::VP_ADD:
+  case ISD::AND:
+  case ISD::VP_AND:
+  case ISD::MUL:
+  case ISD::VP_MUL:
   case ISD::MULHS:
   case ISD::MULHU:
-  case ISD::OR: case ISD::VP_OR:
-  case ISD::SUB: case ISD::VP_SUB:
-  case ISD::XOR: case ISD::VP_XOR:
-  case ISD::SHL: case ISD::VP_SHL:
-  case ISD::SRA: case ISD::VP_ASHR:
-  case ISD::SRL: case ISD::VP_LSHR:
-  case ISD::FMINNUM: case ISD::VP_FMINNUM:
-  case ISD::FMAXNUM: case ISD::VP_FMAXNUM:
+  case ISD::OR:
+  case ISD::VP_OR:
+  case ISD::SUB:
+  case ISD::VP_SUB:
+  case ISD::XOR:
+  case ISD::VP_XOR:
+  case ISD::SHL:
+  case ISD::VP_SHL:
+  case ISD::SRA:
+  case ISD::VP_ASHR:
+  case ISD::SRL:
+  case ISD::VP_LSHR:
+  case ISD::FMINNUM:
+  case ISD::VP_FMINNUM:
+  case ISD::FMAXNUM:
+  case ISD::VP_FMAXNUM:
   case ISD::FMINIMUM:
   case ISD::FMAXIMUM:
-  case ISD::SMIN: case ISD::VP_SMIN:
-  case ISD::SMAX: case ISD::VP_SMAX:
-  case ISD::UMIN: case ISD::VP_UMIN:
-  case ISD::UMAX: case ISD::VP_UMAX:
+  case ISD::SMIN:
+  case ISD::VP_SMIN:
+  case ISD::SMAX:
+  case ISD::VP_SMAX:
+  case ISD::UMIN:
+  case ISD::VP_UMIN:
+  case ISD::UMAX:
+  case ISD::VP_UMAX:
   case ISD::UADDSAT:
   case ISD::SADDSAT:
   case ISD::USUBSAT:
@@ -4237,7 +4381,8 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::VP_CTTZ:
   case ISD::CTTZ_ZERO_UNDEF:
   case ISD::VP_CTTZ_ZERO_UNDEF:
-  case ISD::FNEG: case ISD::VP_FNEG:
+  case ISD::FNEG:
+  case ISD::VP_FNEG:
   case ISD::VP_FABS:
   case ISD::VP_SQRT:
   case ISD::VP_FCEIL:
@@ -4252,7 +4397,8 @@ void DAGTypeLegalizer::WidenVectorResult(SDNode *N, unsigned ResNo) {
   case ISD::FCANONICALIZE:
     Res = WidenVecRes_Unary(N);
     break;
-  case ISD::FMA: case ISD::VP_FMA:
+  case ISD::FMA:
+  case ISD::VP_FMA:
   case ISD::FSHL:
   case ISD::VP_FSHL:
   case ISD::FSHR:
@@ -4336,7 +4482,7 @@ static SDValue CollectOpsToWiden(SelectionDAG &DAG, const TargetLowering &TLI,
   //   From the end of ConcatOps, collect elements of the same type and put
   //   them into an op of the next larger supported type
   // }
-  while (ConcatOps[ConcatEnd-1].getValueType() != MaxVT) {
+  while (ConcatOps[ConcatEnd - 1].getValueType() != MaxVT) {
     int Idx = ConcatEnd - 1;
     VT = ConcatOps[Idx--].getValueType();
     while (Idx >= 0 && ConcatOps[Idx].getValueType() == VT)
@@ -4353,16 +4499,16 @@ static SDValue CollectOpsToWiden(SelectionDAG &DAG, const TargetLowering &TLI,
       // Scalar type, create an INSERT_VECTOR_ELEMENT of type NextVT
       SDValue VecOp = DAG.getUNDEF(NextVT);
       unsigned NumToInsert = ConcatEnd - Idx - 1;
-      for (unsigned i = 0, OpIdx = Idx+1; i < NumToInsert; i++, OpIdx++) {
+      for (unsigned i = 0, OpIdx = Idx + 1; i < NumToInsert; i++, OpIdx++) {
         VecOp = DAG.getNode(ISD::INSERT_VECTOR_ELT, dl, NextVT, VecOp,
                             ConcatOps[OpIdx], DAG.getVectorIdxConstant(i, dl));
       }
-      ConcatOps[Idx+1] = VecOp;
+      ConcatOps[Idx + 1] = VecOp;
       ConcatEnd = Idx + 2;
     } else {
       // Vector type, create a CONCAT_VECTORS of type NextVT
       SDValue undefVec = DAG.getUNDEF(VT);
-      unsigned OpsToConcat = NextSize/VT.getVectorNumElements();
+      unsigned OpsToConcat = NextSize / VT.getVectorNumElements();
       SmallVector<SDValue, 16> SubConcatOps(OpsToConcat);
       unsigned RealVals = ConcatEnd - Idx - 1;
       unsigned SubConcatEnd = 0;
@@ -4371,8 +4517,8 @@ static SDValue CollectOpsToWiden(SelectionDAG &DAG, const TargetLowering &TLI,
         SubConcatOps[SubConcatEnd++] = ConcatOps[++Idx];
       while (SubConcatEnd < OpsToConcat)
         SubConcatOps[SubConcatEnd++] = undefVec;
-      ConcatOps[SubConcatIdx] = DAG.getNode(ISD::CONCAT_VECTORS, dl,
-                                            NextVT, SubConcatOps);
+      ConcatOps[SubConcatIdx] =
+          DAG.getNode(ISD::CONCAT_VECTORS, dl, NextVT, SubConcatOps);
       ConcatEnd = SubConcatIdx + 1;
     }
   }
@@ -4385,8 +4531,9 @@ static SDValue CollectOpsToWiden(SelectionDAG &DAG, const TargetLowering &TLI,
   }
 
   // add undefs of size MaxVT until ConcatOps grows to length of WidenVT
-  unsigned NumOps = WidenVT.getVectorNumElements()/MaxVT.getVectorNumElements();
-  if (NumOps != ConcatEnd ) {
+  unsigned NumOps =
+      WidenVT.getVectorNumElements() / MaxVT.getVectorNumElements();
+  if (NumOps != ConcatEnd) {
     SDValue UndefVal = DAG.getUNDEF(MaxVT);
     for (unsigned j = ConcatEnd; j < NumOps; ++j)
       ConcatOps[j] = UndefVal;
@@ -4430,8 +4577,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_BinaryCanTrap(SDNode *N) {
   unsigned CurNumElts = N->getValueType(0).getVectorNumElements();
 
   SmallVector<SDValue, 16> ConcatOps(CurNumElts);
-  unsigned ConcatEnd = 0;  // Current ConcatOps index.
-  int Idx = 0;        // Current Idx into input vectors.
+  unsigned ConcatEnd = 0; // Current ConcatOps index.
+  int Idx = 0;            // Current Idx into input vectors.
 
   // NumElts := greatest legal vector size (at most WidenVT)
   // while (orig. vector has unhandled elements) {
@@ -4459,8 +4606,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_BinaryCanTrap(SDNode *N) {
                                    InOp1, DAG.getVectorIdxConstant(Idx, dl));
         SDValue EOp2 = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, WidenEltVT,
                                    InOp2, DAG.getVectorIdxConstant(Idx, dl));
-        ConcatOps[ConcatEnd++] = DAG.getNode(Opcode, dl, WidenEltVT,
-                                             EOp1, EOp2, Flags);
+        ConcatOps[ConcatEnd++] =
+            DAG.getNode(Opcode, dl, WidenEltVT, EOp1, EOp2, Flags);
       }
       CurNumElts = 0;
     }
@@ -4480,7 +4627,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_StrictFP(SDNode *N) {
   case ISD::STRICT_FP_TO_UINT:
   case ISD::STRICT_SINT_TO_FP:
   case ISD::STRICT_UINT_TO_FP:
-   return WidenVecRes_Convert_StrictFP(N);
+    return WidenVecRes_Convert_StrictFP(N);
   default:
     break;
   }
@@ -4509,8 +4656,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_StrictFP(SDNode *N) {
 
   SmallVector<SDValue, 16> ConcatOps(CurNumElts);
   SmallVector<SDValue, 16> Chains;
-  unsigned ConcatEnd = 0;  // Current ConcatOps index.
-  int Idx = 0;        // Current Idx into input vectors.
+  unsigned ConcatEnd = 0; // Current ConcatOps index.
+  int Idx = 0;            // Current Idx into input vectors.
 
   // The Chain is the first operand.
   InOps.push_back(N->getOperand(0));
@@ -4588,7 +4735,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_StrictFP(SDNode *N) {
           EOps.push_back(Op);
         }
 
-        EVT WidenVT[] = {WidenEltVT, MVT::Other}; 
+        EVT WidenVT[] = {WidenEltVT, MVT::Other};
         SDValue Oper = DAG.getNode(Opcode, dl, WidenVT, EOps);
         ConcatOps[ConcatEnd++] = Oper;
         Chains.push_back(Oper.getValue(1));
@@ -4618,30 +4765,27 @@ SDValue DAGTypeLegalizer::WidenVecRes_OverflowOp(SDNode *N, unsigned ResNo) {
   // TODO: This might result in a widen/split loop.
   if (ResNo == 0) {
     WideResVT = TLI.getTypeToTransformTo(*DAG.getContext(), ResVT);
-    WideOvVT = EVT::getVectorVT(
-        *DAG.getContext(), OvVT.getVectorElementType(),
-        WideResVT.getVectorNumElements());
+    WideOvVT = EVT::getVectorVT(*DAG.getContext(), OvVT.getVectorElementType(),
+                                WideResVT.getVectorNumElements());
 
     WideLHS = GetWidenedVector(N->getOperand(0));
     WideRHS = GetWidenedVector(N->getOperand(1));
   } else {
     WideOvVT = TLI.getTypeToTransformTo(*DAG.getContext(), OvVT);
-    WideResVT = EVT::getVectorVT(
-        *DAG.getContext(), ResVT.getVectorElementType(),
-        WideOvVT.getVectorNumElements());
+    WideResVT =
+        EVT::getVectorVT(*DAG.getContext(), ResVT.getVectorElementType(),
+                         WideOvVT.getVectorNumElements());
 
     SDValue Zero = DAG.getVectorIdxConstant(0, DL);
-    WideLHS = DAG.getNode(
-        ISD::INSERT_SUBVECTOR, DL, WideResVT, DAG.getUNDEF(WideResVT),
-        N->getOperand(0), Zero);
-    WideRHS = DAG.getNode(
-        ISD::INSERT_SUBVECTOR, DL, WideResVT, DAG.getUNDEF(WideResVT),
-        N->getOperand(1), Zero);
+    WideLHS = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, WideResVT,
+                          DAG.getUNDEF(WideResVT), N->getOperand(0), Zero);
+    WideRHS = DAG.getNode(ISD::INSERT_SUBVECTOR, DL, WideResVT,
+                          DAG.getUNDEF(WideResVT), N->getOperand(1), Zero);
   }
 
   SDVTList WideVTs = DAG.getVTList(WideResVT, WideOvVT);
-  SDNode *WideNode = DAG.getNode(
-      N->getOpcode(), DL, WideVTs, WideLHS, WideRHS).getNode();
+  SDNode *WideNode =
+      DAG.getNode(N->getOpcode(), DL, WideVTs, WideLHS, WideRHS).getNode();
 
   // Replace the other vector result not being explicitly widened here.
   unsigned OtherNo = 1 - ResNo;
@@ -4650,8 +4794,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_OverflowOp(SDNode *N, unsigned ResNo) {
     SetWidenedVector(SDValue(N, OtherNo), SDValue(WideNode, OtherNo));
   } else {
     SDValue Zero = DAG.getVectorIdxConstant(0, DL);
-    SDValue OtherVal = DAG.getNode(
-        ISD::EXTRACT_SUBVECTOR, DL, OtherVT, SDValue(WideNode, OtherNo), Zero);
+    SDValue OtherVal = DAG.getNode(ISD::EXTRACT_SUBVECTOR, DL, OtherVT,
+                                   SDValue(WideNode, OtherNo), Zero);
     ReplaceValueWith(SDValue(N, OtherNo), OtherVal);
   }
 
@@ -4676,7 +4820,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_Convert(SDNode *N) {
   if (N->getOpcode() == ISD::ZERO_EXTEND &&
       getTypeAction(InVT) == TargetLowering::TypePromoteInteger &&
       TLI.getTypeToTransformTo(Ctx, InVT).getScalarSizeInBits() !=
-      WidenVT.getScalarSizeInBits()) {
+          WidenVT.getScalarSizeInBits()) {
     InOp = ZExtPromotedInteger(InOp);
     InVT = InOp.getValueType();
     if (WidenVT.getScalarSizeInBits() < InVT.getScalarSizeInBits())
@@ -4749,7 +4893,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_Convert(SDNode *N) {
   // Use the original element count so we don't do more scalar opts than
   // necessary.
   unsigned MinElts = N->getValueType(0).getVectorNumElements();
-  for (unsigned i=0; i < MinElts; ++i) {
+  for (unsigned i = 0; i < MinElts; ++i) {
     SDValue Val = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, InEltVT, InOp,
                               DAG.getVectorIdxConstant(i, DL));
     if (N->getNumOperands() == 1)
@@ -4805,7 +4949,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_Convert_StrictFP(SDNode *N) {
   // Use the original element count so we don't do more scalar opts than
   // necessary.
   unsigned MinElts = N->getValueType(0).getVectorNumElements();
-  for (unsigned i=0; i < MinElts; ++i) {
+  for (unsigned i = 0; i < MinElts; ++i) {
     NewOps[1] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, InEltVT, InOp,
                             DAG.getVectorIdxConstant(i, DL));
     Ops[i] = DAG.getNode(Opcode, DL, EltVTs, NewOps);
@@ -4918,13 +5062,13 @@ SDValue DAGTypeLegalizer::WidenVecRes_Unary(SDNode *N) {
 
 SDValue DAGTypeLegalizer::WidenVecRes_InregOp(SDNode *N) {
   EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
-  EVT ExtVT = EVT::getVectorVT(*DAG.getContext(),
-                               cast<VTSDNode>(N->getOperand(1))->getVT()
-                                 .getVectorElementType(),
-                               WidenVT.getVectorNumElements());
+  EVT ExtVT = EVT::getVectorVT(
+      *DAG.getContext(),
+      cast<VTSDNode>(N->getOperand(1))->getVT().getVectorElementType(),
+      WidenVT.getVectorNumElements());
   SDValue WidenLHS = GetWidenedVector(N->getOperand(0));
-  return DAG.getNode(N->getOpcode(), SDLoc(N),
-                     WidenVT, WidenLHS, DAG.getValueType(ExtVT));
+  return DAG.getNode(N->getOpcode(), SDLoc(N), WidenVT, WidenLHS,
+                     DAG.getValueType(ExtVT));
 }
 
 SDValue DAGTypeLegalizer::WidenVecRes_MERGE_VALUES(SDNode *N, unsigned ResNo) {
@@ -4963,7 +5107,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_BITCAST(SDNode *N) {
         EVT ShiftAmtTy = TLI.getShiftAmountTy(NInVT, DAG.getDataLayout());
         assert(ShiftAmt < WidenVT.getSizeInBits() && "Too large shift amount!");
         NInOp = DAG.getNode(ISD::SHL, dl, NInVT, NInOp,
-                           DAG.getConstant(ShiftAmt, dl, ShiftAmtTy));
+                            DAG.getConstant(ShiftAmt, dl, ShiftAmtTy));
       }
       return DAG.getNode(ISD::BITCAST, dl, WidenVT, NInOp);
     }
@@ -5083,7 +5227,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS(SDNode *N) {
       unsigned NumConcat = WidenNumElts / NumInElts;
       SDValue UndefVal = DAG.getUNDEF(InVT);
       SmallVector<SDValue, 16> Ops(NumConcat);
-      for (unsigned i=0; i < NumOperands; ++i)
+      for (unsigned i = 0; i < NumOperands; ++i)
         Ops[i] = N->getOperand(i);
       for (unsigned i = NumOperands; i != NumConcat; ++i)
         Ops[i] = UndefVal;
@@ -5094,7 +5238,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS(SDNode *N) {
     if (WidenVT == TLI.getTypeToTransformTo(*DAG.getContext(), InVT)) {
       // The inputs and the result are widen to the same value.
       unsigned i;
-      for (i=1; i < NumOperands; ++i)
+      for (i = 1; i < NumOperands; ++i)
         if (!N->getOperand(i).isUndef())
           break;
 
@@ -5115,10 +5259,9 @@ SDValue DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS(SDNode *N) {
           MaskOps[i] = i;
           MaskOps[i + NumInElts] = i + WidenNumElts;
         }
-        return DAG.getVectorShuffle(WidenVT, dl,
-                                    GetWidenedVector(N->getOperand(0)),
-                                    GetWidenedVector(N->getOperand(1)),
-                                    MaskOps);
+        return DAG.getVectorShuffle(
+            WidenVT, dl, GetWidenedVector(N->getOperand(0)),
+            GetWidenedVector(N->getOperand(1)), MaskOps);
       }
     }
   }
@@ -5132,7 +5275,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_CONCAT_VECTORS(SDNode *N) {
   EVT EltVT = WidenVT.getVectorElementType();
   SmallVector<SDValue, 16> Ops(WidenNumElts);
   unsigned Idx = 0;
-  for (unsigned i=0; i < NumOperands; ++i) {
+  for (unsigned i = 0; i < NumOperands; ++i) {
     SDValue InOp = N->getOperand(i);
     if (InputWidened)
       InOp = GetWidenedVector(InOp);
@@ -5241,9 +5384,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_AssertZext(SDNode *N) {
 
 SDValue DAGTypeLegalizer::WidenVecRes_INSERT_VECTOR_ELT(SDNode *N) {
   SDValue InOp = GetWidenedVector(N->getOperand(0));
-  return DAG.getNode(ISD::INSERT_VECTOR_ELT, SDLoc(N),
-                     InOp.getValueType(), InOp,
-                     N->getOperand(1), N->getOperand(2));
+  return DAG.getNode(ISD::INSERT_VECTOR_ELT, SDLoc(N), InOp.getValueType(),
+                     InOp, N->getOperand(1), N->getOperand(2));
 }
 
 SDValue DAGTypeLegalizer::WidenVecRes_LOAD(SDNode *N) {
@@ -5376,7 +5518,7 @@ SDValue DAGTypeLegalizer::WidenVecRes_VP_STRIDED_LOAD(VPStridedLoadSDNode *N) {
 
 SDValue DAGTypeLegalizer::WidenVecRes_MLOAD(MaskedLoadSDNode *N) {
 
-  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(),N->getValueType(0));
+  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
   SDValue Mask = N->getMask();
   EVT MaskVT = Mask.getValueType();
   SDValue PassThru = GetWidenedVector(N->getPassThru());
@@ -5384,9 +5526,9 @@ SDValue DAGTypeLegalizer::WidenVecRes_MLOAD(MaskedLoadSDNode *N) {
   SDLoc dl(N);
 
   // The mask should be widened as well
-  EVT WideMaskVT = EVT::getVectorVT(*DAG.getContext(),
-                                    MaskVT.getVectorElementType(),
-                                    WidenVT.getVectorNumElements());
+  EVT WideMaskVT =
+      EVT::getVectorVT(*DAG.getContext(), MaskVT.getVectorElementType(),
+                       WidenVT.getVectorNumElements());
   Mask = ModifyToType(Mask, WideMaskVT, true);
 
   SDValue Res = DAG.getMaskedLoad(
@@ -5410,19 +5552,18 @@ SDValue DAGTypeLegalizer::WidenVecRes_MGATHER(MaskedGatherSDNode *N) {
   SDLoc dl(N);
 
   // The mask should be widened as well
-  EVT WideMaskVT = EVT::getVectorVT(*DAG.getContext(),
-                                    MaskVT.getVectorElementType(),
-                                    WideVT.getVectorNumElements());
+  EVT WideMaskVT =
+      EVT::getVectorVT(*DAG.getContext(), MaskVT.getVectorElementType(),
+                       WideVT.getVectorNumElements());
   Mask = ModifyToType(Mask, WideMaskVT, true);
 
   // Widen the Index operand
   SDValue Index = N->getIndex();
-  EVT WideIndexVT = EVT::getVectorVT(*DAG.getContext(),
-                                     Index.getValueType().getScalarType(),
-                                     NumElts);
+  EVT WideIndexVT = EVT::getVectorVT(
+      *DAG.getContext(), Index.getValueType().getScalarType(), NumElts);
   Index = ModifyToType(Index, WideIndexVT);
-  SDValue Ops[] = { N->getChain(), PassThru, Mask, N->getBasePtr(), Index,
-                    Scale };
+  SDValue Ops[] = {N->getChain(),   PassThru, Mask,
+                   N->getBasePtr(), Index,    Scale};
 
   // Widen the MemoryType
   EVT WideMemVT = EVT::getVectorVT(*DAG.getContext(),
@@ -5536,11 +5677,10 @@ SDValue DAGTypeLegalizer::convertMask(SDValue InMask, EVT MaskVT,
   for (unsigned i = 0, e = InMask->getNumOperands(); i < e; ++i)
     Ops.push_back(InMask->getOperand(i));
   if (InMask->isStrictFPOpcode()) {
-    Mask = DAG.getNode(InMask->getOpcode(), SDLoc(InMask),
-                       { MaskVT, MVT::Other }, Ops);
+    Mask = DAG.getNode(InMask->getOpcode(), SDLoc(InMask), {MaskVT, MVT::Other},
+                       Ops);
     ReplaceValueWith(InMask.getValue(1), Mask.getValue(1));
-  }
-  else
+  } else
     Mask = DAG.getNode(InMask->getOpcode(), SDLoc(InMask), MaskVT, Ops);
 
   // If MaskVT has smaller or bigger elements than ToMaskVT, a vector sign
@@ -5704,7 +5844,8 @@ SDValue DAGTypeLegalizer::WidenVecRes_Select(SDNode *N) {
     if (SDValue WideCond = WidenVSELECTMask(N)) {
       SDValue InOp1 = GetWidenedVector(N->getOperand(1));
       SDValue InOp2 = GetWidenedVector(N->getOperand(2));
-      assert(InOp1.getValueType() == WidenVT && InOp2.getValueType() == WidenVT);
+      assert(InOp1.getValueType() == WidenVT &&
+             InOp2.getValueType() == WidenVT);
       return DAG.getNode(Opcode, SDLoc(N), WidenVT, WideCond, InOp1, InOp2);
     }
 
@@ -5740,14 +5881,14 @@ SDValue DAGTypeLegalizer::WidenVecRes_Select(SDNode *N) {
 SDValue DAGTypeLegalizer::WidenVecRes_SELECT_CC(SDNode *N) {
   SDValue InOp1 = GetWidenedVector(N->getOperand(2));
   SDValue InOp2 = GetWidenedVector(N->getOperand(3));
-  return DAG.getNode(ISD::SELECT_CC, SDLoc(N),
-                     InOp1.getValueType(), N->getOperand(0),
-                     N->getOperand(1), InOp1, InOp2, N->getOperand(4));
+  return DAG.getNode(ISD::SELECT_CC, SDLoc(N), InOp1.getValueType(),
+                     N->getOperand(0), N->getOperand(1), InOp1, InOp2,
+                     N->getOperand(4));
 }
 
 SDValue DAGTypeLegalizer::WidenVecRes_UNDEF(SDNode *N) {
- EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
- return DAG.getUNDEF(WidenVT);
+  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(), N->getValueType(0));
+  return DAG.getUNDEF(WidenVT);
 }
 
 SDValue DAGTypeLegalizer::WidenVecRes_VECTOR_SHUFFLE(ShuffleVectorSDNode *N) {
@@ -5939,27 +6080,59 @@ bool DAGTypeLegalizer::WidenVectorOperand(SDNode *N, unsigned OpNo) {
 #endif
     report_fatal_error("Do not know how to widen this operator's operand!");
 
-  case ISD::BITCAST:            Res = WidenVecOp_BITCAST(N); break;
-  case ISD::CONCAT_VECTORS:     Res = WidenVecOp_CONCAT_VECTORS(N); break;
-  case ISD::INSERT_SUBVECTOR:   Res = WidenVecOp_INSERT_SUBVECTOR(N); break;
-  case ISD::EXTRACT_SUBVECTOR:  Res = WidenVecOp_EXTRACT_SUBVECTOR(N); break;
-  case ISD::EXTRACT_VECTOR_ELT: Res = WidenVecOp_EXTRACT_VECTOR_ELT(N); break;
-  case ISD::STORE:              Res = WidenVecOp_STORE(N); break;
-  case ISD::VP_STORE:           Res = WidenVecOp_VP_STORE(N, OpNo); break;
+  case ISD::BITCAST:
+    Res = WidenVecOp_BITCAST(N);
+    break;
+  case ISD::CONCAT_VECTORS:
+    Res = WidenVecOp_CONCAT_VECTORS(N);
+    break;
+  case ISD::INSERT_SUBVECTOR:
+    Res = WidenVecOp_INSERT_SUBVECTOR(N);
+    break;
+  case ISD::EXTRACT_SUBVECTOR:
+    Res = WidenVecOp_EXTRACT_SUBVECTOR(N);
+    break;
+  case ISD::EXTRACT_VECTOR_ELT:
+    Res = WidenVecOp_EXTRACT_VECTOR_ELT(N);
+    break;
+  case ISD::STORE:
+    Res = WidenVecOp_STORE(N);
+    break;
+  case ISD::VP_STORE:
+    Res = WidenVecOp_VP_STORE(N, OpNo);
+    break;
   case ISD::EXPERIMENTAL_VP_STRIDED_STORE:
     Res = WidenVecOp_VP_STRIDED_STORE(N, OpNo);
     break;
-  case ISD::MSTORE:             Res = WidenVecOp_MSTORE(N, OpNo); break;
-  case ISD::MGATHER:            Res = WidenVecOp_MGATHER(N, OpNo); break;
-  case ISD::MSCATTER:           Res = WidenVecOp_MSCATTER(N, OpNo); break;
-  case ISD::VP_SCATTER:         Res = WidenVecOp_VP_SCATTER(N, OpNo); break;
-  case ISD::SETCC:              Res = WidenVecOp_SETCC(N); break;
+  case ISD::MSTORE:
+    Res = WidenVecOp_MSTORE(N, OpNo);
+    break;
+  case ISD::MGATHER:
+    Res = WidenVecOp_MGATHER(N, OpNo);
+    break;
+  case ISD::MSCATTER:
+    Res = WidenVecOp_MSCATTER(N, OpNo);
+    break;
+  case ISD::VP_SCATTER:
+    Res = WidenVecOp_VP_SCATTER(N, OpNo);
+    break;
+  case ISD::SETCC:
+    Res = WidenVecOp_SETCC(N);
+    break;
   case ISD::STRICT_FSETCC:
-  case ISD::STRICT_FSETCCS:     Res = WidenVecOp_STRICT_FSETCC(N); break;
-  case ISD::VSELECT:            Res = WidenVecOp_VSELECT(N); break;
+  case ISD::STRICT_FSETCCS:
+    Res = WidenVecOp_STRICT_FSETCC(N);
+    break;
+  case ISD::VSELECT:
+    Res = WidenVecOp_VSELECT(N);
+    break;
   case ISD::FLDEXP:
-  case ISD::FCOPYSIGN:          Res = WidenVecOp_UnrollVectorOp(N); break;
-  case ISD::IS_FPCLASS:         Res = WidenVecOp_IS_FPCLASS(N); break;
+  case ISD::FCOPYSIGN:
+    Res = WidenVecOp_UnrollVectorOp(N);
+    break;
+  case ISD::IS_FPCLASS:
+    Res = WidenVecOp_IS_FPCLASS(N);
+    break;
 
   case ISD::ANY_EXTEND:
   case ISD::SIGN_EXTEND:
@@ -6029,14 +6202,14 @@ bool DAGTypeLegalizer::WidenVectorOperand(SDNode *N, unsigned OpNo) {
   }
 
   // If Res is null, the sub-method took care of registering the result.
-  if (!Res.getNode()) return false;
+  if (!Res.getNode())
+    return false;
 
   // If the result is N, the sub-method updated N in place.  Tell the legalizer
   // core about this.
   if (Res.getNode() == N)
     return true;
 
-
   if (N->isStrictFPOpcode())
     assert(Res.getValueType() == N->getValueType(0) && N->getNumValues() == 2 &&
            "Invalid operand expansion");
@@ -6157,17 +6330,17 @@ SDValue DAGTypeLegalizer::WidenVecOp_Convert(SDNode *N) {
 
   // See if a widened result type would be legal, if so widen the node.
   // FIXME: This isn't safe for StrictFP. Other optimization here is needed.
-  EVT WideVT = EVT::getVectorVT(*DAG.getContext(), EltVT,
-                                InVT.getVectorElementCount());
+  EVT WideVT =
+      EVT::getVectorVT(*DAG.getContext(), EltVT, InVT.getVectorElementCount());
   if (TLI.isTypeLegal(WideVT) && !N->isStrictFPOpcode()) {
     SDValue Res;
     if (N->isStrictFPOpcode()) {
       if (Opcode == ISD::STRICT_FP_ROUND)
-        Res = DAG.getNode(Opcode, dl, { WideVT, MVT::Other },
-                          { N->getOperand(0), InOp, N->getOperand(2) });
+        Res = DAG.getNode(Opcode, dl, {WideVT, MVT::Other},
+                          {N->getOperand(0), InOp, N->getOperand(2)});
       else
-        Res = DAG.getNode(Opcode, dl, { WideVT, MVT::Other },
-                          { N->getOperand(0), InOp });
+        Res = DAG.getNode(Opcode, dl, {WideVT, MVT::Other},
+                          {N->getOperand(0), InOp});
       // Legalize the chain result - switch anything that used the old chain to
       // use the new one.
       ReplaceValueWith(SDValue(N, 1), Res.getValue(1));
@@ -6189,10 +6362,10 @@ SDValue DAGTypeLegalizer::WidenVecOp_Convert(SDNode *N) {
   if (N->isStrictFPOpcode()) {
     SmallVector<SDValue, 4> NewOps(N->op_begin(), N->op_end());
     SmallVector<SDValue, 32> OpChains;
-    for (unsigned i=0; i < NumElts; ++i) {
+    for (unsigned i = 0; i < NumElts; ++i) {
       NewOps[1] = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, dl, InEltVT, InOp,
                               DAG.getVectorIdxConstant(i, dl));
-      Ops[i] = DAG.getNode(Opcode, dl, { EltVT, MVT::Other }, NewOps);
+      Ops[i] = DAG.getNode(Opcode, dl, {EltVT, MVT::Other}, NewOps);
       OpChains.push_back(Ops[i].getValue(1));
     }
     SDValue NewChain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, OpChains);
@@ -6299,7 +6472,7 @@ SDValue DAGTypeLegalizer::WidenVecOp_CONCAT_VECTORS(SDNode *N) {
   unsigned NumInElts = InVT.getVectorNumElements();
 
   unsigned Idx = 0;
-  for (unsigned i=0; i < NumOperands; ++i) {
+  for (unsigned i = 0; i < NumOperands; ++i) {
     SDValue InOp = N->getOperand(i);
     assert(getTypeAction(InOp.getValueType()) ==
                TargetLowering::TypeWidenVector &&
@@ -6353,14 +6526,14 @@ SDValue DAGTypeLegalizer::WidenVecOp_INSERT_SUBVECTOR(SDNode *N) {
 
 SDValue DAGTypeLegalizer::WidenVecOp_EXTRACT_SUBVECTOR(SDNode *N) {
   SDValue InOp = GetWidenedVector(N->getOperand(0));
-  return DAG.getNode(ISD::EXTRACT_SUBVECTOR, SDLoc(N),
-                     N->getValueType(0), InOp, N->getOperand(1));
+  return DAG.getNode(ISD::EXTRACT_SUBVECTOR, SDLoc(N), N->getValueType(0), InOp,
+                     N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::WidenVecOp_EXTRACT_VECTOR_ELT(SDNode *N) {
   SDValue InOp = GetWidenedVector(N->getOperand(0));
-  return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SDLoc(N),
-                     N->getValueType(0), InOp, N->getOperand(1));
+  return DAG.getNode(ISD::EXTRACT_VECTOR_ELT, SDLoc(N), N->getValueType(0),
+                     InOp, N->getOperand(1));
 }
 
 SDValue DAGTypeLegalizer::WidenVecOp_STORE(SDNode *N) {
@@ -6497,9 +6670,9 @@ SDValue DAGTypeLegalizer::WidenVecOp_MSTORE(SDNode *N, unsigned OpNo) {
 
     // The mask should be widened as well.
     EVT WideVT = StVal.getValueType();
-    EVT WideMaskVT = EVT::getVectorVT(*DAG.getContext(),
-                                      MaskVT.getVectorElementType(),
-                                      WideVT.getVectorNumElements());
+    EVT WideMaskVT =
+        EVT::getVectorVT(*DAG.getContext(), MaskVT.getVectorElementType(),
+                         WideVT.getVectorNumElements());
     Mask = ModifyToType(Mask, WideMaskVT, true);
   } else {
     // Widen the mask.
@@ -6507,14 +6680,14 @@ SDValue DAGTypeLegalizer::WidenVecOp_MSTORE(SDNode *N, unsigned OpNo) {
     Mask = ModifyToType(Mask, WideMaskVT, true);
 
     EVT ValueVT = StVal.getValueType();
-    EVT WideVT = EVT::getVectorVT(*DAG.getContext(),
-                                  ValueVT.getVectorElementType(),
-                                  WideMaskVT.getVectorNumElements());
+    EVT WideVT =
+        EVT::getVectorVT(*DAG.getContext(), ValueVT.getVectorElementType(),
+                         WideMaskVT.getVectorNumElements());
     StVal = ModifyToType(StVal, WideVT);
   }
 
   assert(Mask.getValueType().getVectorNumElements() ==
-         StVal.getValueType().getVectorNumElements() &&
+             StVal.getValueType().getVectorNumElements() &&
          "Mask and data vectors should have the same number of elements");
   return DAG.getMaskedStore(MST->getChain(), dl, StVal, MST->getBasePtr(),
                             MST->getOffset(), Mask, MST->getMemoryVT(),
@@ -6533,8 +6706,8 @@ SDValue DAGTypeLegalizer::WidenVecOp_MGATHER(SDNode *N, unsigned OpNo) {
   SDValue Index = GetWidenedVector(MG->getIndex());
 
   SDLoc dl(N);
-  SDValue Ops[] = {MG->getChain(), DataOp, Mask, MG->getBasePtr(), Index,
-                   Scale};
+  SDValue Ops[] = {MG->getChain(),   DataOp, Mask,
+                   MG->getBasePtr(), Index,  Scale};
   SDValue Res = DAG.getMaskedGather(MG->getVTList(), MG->getMemoryVT(), dl, Ops,
                                     MG->getMemOperand(), MG->getIndexType(),
                                     MG->getExtensionType());
@@ -6576,8 +6749,8 @@ SDValue DAGTypeLegalizer::WidenVecOp_MSCATTER(SDNode *N, unsigned OpNo) {
   } else
     llvm_unreachable("Can't widen this operand of mscatter");
 
-  SDValue Ops[] = {MSC->getChain(), DataOp, Mask, MSC->getBasePtr(), Index,
-                   Scale};
+  SDValue Ops[] = {MSC->getChain(),   DataOp, Mask,
+                   MSC->getBasePtr(), Index,  Scale};
   return DAG.getMaskedScatter(DAG.getVTList(MVT::Other), WideMemVT, SDLoc(N),
                               Ops, MSC->getMemOperand(), MSC->getIndexType(),
                               MSC->isTruncatingStore());
@@ -6629,12 +6802,11 @@ SDValue DAGTypeLegalizer::WidenVecOp_SETCC(SDNode *N) {
     SVT = EVT::getVectorVT(*DAG.getContext(), MVT::i1,
                            SVT.getVectorElementCount());
 
-  SDValue WideSETCC = DAG.getNode(ISD::SETCC, SDLoc(N),
-                                  SVT, InOp0, InOp1, N->getOperand(2));
+  SDValue WideSETCC =
+      DAG.getNode(ISD::SETCC, SDLoc(N), SVT, InOp0, InOp1, N->getOperand(2));
 
   // Extract the needed results from the result vector.
-  EVT ResVT = EVT::getVectorVT(*DAG.getContext(),
-                               SVT.getVectorElementType(),
+  EVT ResVT = EVT::getVectorVT(*DAG.getContext(), SVT.getVectorElementType(),
                                VT.getVectorElementCount());
   SDValue CC = DAG.getNode(ISD::EXTRACT_SUBVECTOR, dl, ResVT, WideSETCC,
                            DAG.getVectorIdxConstant(0, dl));
@@ -6803,7 +6975,7 @@ static std::optional<EVT> findMemType(SelectionDAG &DAG,
   const bool Scalable = WidenVT.isScalableVector();
   unsigned WidenWidth = WidenVT.getSizeInBits().getKnownMinValue();
   unsigned WidenEltWidth = WidenEltVT.getSizeInBits();
-  unsigned AlignInBits = Align*8;
+  unsigned AlignInBits = Align * 8;
 
   // If we have one element to load/store, return it.
   EVT RetVT = WidenEltVT;
@@ -6823,8 +6995,8 @@ static std::optional<EVT> findMemType(SelectionDAG &DAG,
            Action == TargetLowering::TypePromoteInteger) &&
           (WidenWidth % MemVTWidth) == 0 &&
           isPowerOf2_32(WidenWidth / MemVTWidth) &&
-          (MemVTWidth <= Width ||
-           (Align!=0 && MemVTWidth<=AlignInBits && MemVTWidth<=Width+WidenEx))) {
+          (MemVTWidth <= Width || (Align != 0 && MemVTWidth <= AlignInBits &&
+                                   MemVTWidth <= Width + WidenEx))) {
         if (MemVTWidth == WidenWidth)
           return MemVT;
         RetVT = MemVT;
@@ -6846,8 +7018,8 @@ static std::optional<EVT> findMemType(SelectionDAG &DAG,
         WidenEltVT == MemVT.getVectorElementType() &&
         (WidenWidth % MemVTWidth) == 0 &&
         isPowerOf2_32(WidenWidth / MemVTWidth) &&
-        (MemVTWidth <= Width ||
-         (Align!=0 && MemVTWidth<=AlignInBits && MemVTWidth<=Width+WidenEx))) {
+        (MemVTWidth <= Width || (Align != 0 && MemVTWidth <= AlignInBits &&
+                                 MemVTWidth <= Width + WidenEx))) {
       if (RetVT.getFixedSizeInBits() < MemVTWidth || MemVT == WidenVT)
         return MemVT;
     }
@@ -6865,7 +7037,7 @@ static std::optional<EVT> findMemType(SelectionDAG &DAG,
 //  VecTy: Resulting Vector type
 //  LDOps: Load operators to build a vector type
 //  [Start,End) the list of loads to use.
-static SDValue BuildVectorFromScalar(SelectionDAG& DAG, EVT VecTy,
+static SDValue BuildVectorFromScalar(SelectionDAG &DAG, EVT VecTy,
                                      SmallVectorImpl<SDValue> &LdOps,
                                      unsigned Start, unsigned End) {
   SDLoc dl(LdOps[Start]);
@@ -6875,7 +7047,8 @@ static SDValue BuildVectorFromScalar(SelectionDAG& DAG, EVT VecTy,
   EVT NewVecVT = EVT::getVectorVT(*DAG.getContext(), LdTy, NumElts);
 
   unsigned Idx = 1;
-  SDValue VecOp = DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, NewVecVT,LdOps[Start]);
+  SDValue VecOp =
+      DAG.getNode(ISD::SCALAR_TO_VECTOR, dl, NewVecVT, LdOps[Start]);
 
   for (unsigned i = Start + 1; i != End; ++i) {
     EVT NewLdTy = LdOps[i].getValueType();
@@ -6899,8 +7072,9 @@ SDValue DAGTypeLegalizer::GenWidenVectorLoads(SmallVectorImpl<SDValue> &LdChain,
   // The routine chops the vector into the largest vector loads with the same
   // element type or scalar loads and then recombines it to the widen vector
   // type.
-  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(),LD->getValueType(0));
-  EVT LdVT    = LD->getMemoryVT();
+  EVT WidenVT =
+      TLI.getTypeToTransformTo(*DAG.getContext(), LD->getValueType(0));
+  EVT LdVT = LD->getMemoryVT();
   SDLoc dl(LD);
   assert(LdVT.isVector() && WidenVT.isVector());
   assert(LdVT.isScalableVector() == WidenVT.isScalableVector());
@@ -7017,7 +7191,7 @@ SDValue DAGTypeLegalizer::GenWidenVectorLoads(SmallVectorImpl<SDValue> &LdChain,
   int Idx = End;
   EVT LdTy = LdOps[i].getValueType();
   // First, combine the scalar loads to a vector.
-  if (!LdTy.isVector())  {
+  if (!LdTy.isVector()) {
     for (--i; i >= 0; --i) {
       LdTy = LdOps[i].getValueType();
       if (LdTy.isVector())
@@ -7039,13 +7213,13 @@ SDValue DAGTypeLegalizer::GenWidenVectorLoads(SmallVectorImpl<SDValue> &LdChain,
           NewLdTySize.getKnownMinValue() / LdTySize.getKnownMinValue();
       SmallVector<SDValue, 16> WidenOps(NumOps);
       unsigned j = 0;
-      for (; j != End-Idx; ++j)
-        WidenOps[j] = ConcatOps[Idx+j];
+      for (; j != End - Idx; ++j)
+        WidenOps[j] = ConcatOps[Idx + j];
       for (; j != NumOps; ++j)
         WidenOps[j] = DAG.getUNDEF(LdTy);
 
-      ConcatOps[End-1] = DAG.getNode(ISD::CONCAT_VECTORS, dl, NewLdTy,
-                                     WidenOps);
+      ConcatOps[End - 1] =
+          DAG.getNode(ISD::CONCAT_VECTORS, dl, NewLdTy, WidenOps);
       Idx = End - 1;
       LdTy = NewLdTy;
     }
@@ -7063,8 +7237,8 @@ SDValue DAGTypeLegalizer::GenWidenVectorLoads(SmallVectorImpl<SDValue> &LdChain,
   SDValue UndefVal = DAG.getUNDEF(LdTy);
   {
     unsigned i = 0;
-    for (; i != End-Idx; ++i)
-      WidenOps[i] = ConcatOps[Idx+i];
+    for (; i != End - Idx; ++i)
+      WidenOps[i] = ConcatOps[Idx + i];
     for (; i != NumOps; ++i)
       WidenOps[i] = UndefVal;
   }
@@ -7077,8 +7251,9 @@ DAGTypeLegalizer::GenWidenVectorExtLoads(SmallVectorImpl<SDValue> &LdChain,
                                          ISD::LoadExtType ExtType) {
   // For extension loads, it may not be more efficient to chop up the vector
   // and then extend it. Instead, we unroll the load and build a new vector.
-  EVT WidenVT = TLI.getTypeToTransformTo(*DAG.getContext(),LD->getValueType(0));
-  EVT LdVT    = LD->getMemoryVT();
+  EVT WidenVT =
+      TLI.getTypeToTransformTo(*DAG.getContext(), LD->getValueType(0));
+  EVT LdVT = LD->getMemoryVT();
   SDLoc dl(LD);
   assert(LdVT.isVector() && WidenVT.isVector());
   assert(LdVT.isScalableVector() == WidenVT.isScalableVector());
@@ -7106,7 +7281,7 @@ DAGTypeLegalizer::GenWidenVectorExtLoads(SmallVectorImpl<SDValue> &LdChain,
                      LdEltVT, LD->getOriginalAlign(), MMOFlags, AAInfo);
   LdChain.push_back(Ops[0].getValue(1));
   unsigned i = 0, Offset = Increment;
-  for (i=1; i < NumElts; ++i, Offset += Increment) {
+  for (i = 1; i < NumElts; ++i, Offset += Increment) {
     SDValue NewBasePtr =
         DAG.getObjectPtrOffset(dl, BasePtr, TypeSize::Fixed(Offset));
     Ops[i] = DAG.getExtLoad(ExtType, dl, EltVT, Chain, NewBasePtr,
@@ -7128,11 +7303,11 @@ bool DAGTypeLegalizer::GenWidenVectorStores(SmallVectorImpl<SDValue> &StChain,
   // The strategy assumes that we can efficiently store power-of-two widths.
   // The routine chops the vector into the largest vector stores with the same
   // element type or scalar stores.
-  SDValue  Chain = ST->getChain();
-  SDValue  BasePtr = ST->getBasePtr();
+  SDValue Chain = ST->getChain();
+  SDValue BasePtr = ST->getBasePtr();
   MachineMemOperand::Flags MMOFlags = ST->getMemOperand()->getFlags();
   AAMDNodes AAInfo = ST->getAAInfo();
-  SDValue  ValOp = GetWidenedVector(ST->getValue());
+  SDValue ValOp = GetWidenedVector(ST->getValue());
   SDLoc dl(ST);
 
   EVT StVT = ST->getMemoryVT();
@@ -7145,7 +7320,7 @@ bool DAGTypeLegalizer::GenWidenVectorStores(SmallVectorImpl<SDValue> &StChain,
   assert(StVT.isScalableVector() == ValVT.isScalableVector() &&
          "Mismatch between store and value types");
 
-  int Idx = 0;          // current index to store
+  int Idx = 0; // current index to store
 
   MachinePointerInfo MPI = ST->getPointerInfo();
   uint64_t ScaledOffset = 0;
@@ -7239,8 +7414,8 @@ SDValue DAGTypeLegalizer::ModifyToType(SDValue InOp, EVT NVT,
   if (WidenEC.hasKnownScalarFactor(InEC)) {
     unsigned NumConcat = WidenEC.getKnownScalarFactor(InEC);
     SmallVector<SDValue, 16> Ops(NumConcat);
-    SDValue FillVal = FillWithZeroes ? DAG.getConstant(0, dl, InVT) :
-      DAG.getUNDEF(InVT);
+    SDValue FillVal =
+        FillWithZeroes ? DAG.getConstant(0, dl, InVT) : DAG.getUNDEF(InVT);
     Ops[0] = InOp;
     for (unsigned i = 1; i != NumConcat; ++i)
       Ops[i] = FillVal;
diff --git a/llvm/lib/CodeGen/SelectionDAG/ResourcePriorityQueue.cpp b/llvm/lib/CodeGen/SelectionDAG/ResourcePriorityQueue.cpp
index e0e8d503ca92a51..85db2c79d9279a3 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ResourcePriorityQueue.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/ResourcePriorityQueue.cpp
@@ -63,8 +63,7 @@ ResourcePriorityQueue::ResourcePriorityQueue(SelectionDAGISel *IS)
   HorizontalVerticalBalance = 0;
 }
 
-unsigned
-ResourcePriorityQueue::numberRCValPredInSU(SUnit *SU, unsigned RCId) {
+unsigned ResourcePriorityQueue::numberRCValPredInSU(SUnit *SU, unsigned RCId) {
   unsigned NumberDeps = 0;
   for (SDep &Pred : SU->Preds) {
     if (Pred.isCtrl())
@@ -79,20 +78,26 @@ ResourcePriorityQueue::numberRCValPredInSU(SUnit *SU, unsigned RCId) {
     // If value is passed to CopyToReg, it is probably
     // live outside BB.
     switch (ScegN->getOpcode()) {
-      default:  break;
-      case ISD::TokenFactor:    break;
-      case ISD::CopyFromReg:    NumberDeps++;  break;
-      case ISD::CopyToReg:      break;
-      case ISD::INLINEASM:      break;
-      case ISD::INLINEASM_BR:   break;
+    default:
+      break;
+    case ISD::TokenFactor:
+      break;
+    case ISD::CopyFromReg:
+      NumberDeps++;
+      break;
+    case ISD::CopyToReg:
+      break;
+    case ISD::INLINEASM:
+      break;
+    case ISD::INLINEASM_BR:
+      break;
     }
     if (!ScegN->isMachineOpcode())
       continue;
 
     for (unsigned i = 0, e = ScegN->getNumValues(); i != e; ++i) {
       MVT VT = ScegN->getSimpleValueType(i);
-      if (TLI->isTypeLegal(VT)
-          && (TLI->getRegClassFor(VT)->getID() == RCId)) {
+      if (TLI->isTypeLegal(VT) && (TLI->getRegClassFor(VT)->getID() == RCId)) {
         NumberDeps++;
         break;
       }
@@ -101,8 +106,7 @@ ResourcePriorityQueue::numberRCValPredInSU(SUnit *SU, unsigned RCId) {
   return NumberDeps;
 }
 
-unsigned ResourcePriorityQueue::numberRCValSuccInSU(SUnit *SU,
-                                                    unsigned RCId) {
+unsigned ResourcePriorityQueue::numberRCValSuccInSU(SUnit *SU, unsigned RCId) {
   unsigned NumberDeps = 0;
   for (const SDep &Succ : SU->Succs) {
     if (Succ.isCtrl())
@@ -116,12 +120,19 @@ unsigned ResourcePriorityQueue::numberRCValSuccInSU(SUnit *SU,
     // If value is passed to CopyToReg, it is probably
     // live outside BB.
     switch (ScegN->getOpcode()) {
-      default:  break;
-      case ISD::TokenFactor:    break;
-      case ISD::CopyFromReg:    break;
-      case ISD::CopyToReg:      NumberDeps++;  break;
-      case ISD::INLINEASM:      break;
-      case ISD::INLINEASM_BR:   break;
+    default:
+      break;
+    case ISD::TokenFactor:
+      break;
+    case ISD::CopyFromReg:
+      break;
+    case ISD::CopyToReg:
+      NumberDeps++;
+      break;
+    case ISD::INLINEASM:
+      break;
+    case ISD::INLINEASM_BR:
+      break;
     }
     if (!ScegN->isMachineOpcode())
       continue;
@@ -129,8 +140,7 @@ unsigned ResourcePriorityQueue::numberRCValSuccInSU(SUnit *SU,
     for (unsigned i = 0, e = ScegN->getNumOperands(); i != e; ++i) {
       const SDValue &Op = ScegN->getOperand(i);
       MVT VT = Op.getNode()->getSimpleValueType(Op.getResNo());
-      if (TLI->isTypeLegal(VT)
-          && (TLI->getRegClassFor(VT)->getID() == RCId)) {
+      if (TLI->isTypeLegal(VT) && (TLI->getRegClassFor(VT)->getID() == RCId)) {
         NumberDeps++;
         break;
       }
@@ -188,22 +198,25 @@ bool resource_sort::operator()(const SUnit *LHS, const SUnit *RHS) const {
   // The most important heuristic is scheduling the critical path.
   unsigned LHSLatency = PQ->getLatency(LHSNum);
   unsigned RHSLatency = PQ->getLatency(RHSNum);
-  if (LHSLatency < RHSLatency) return true;
-  if (LHSLatency > RHSLatency) return false;
+  if (LHSLatency < RHSLatency)
+    return true;
+  if (LHSLatency > RHSLatency)
+    return false;
 
   // After that, if two nodes have identical latencies, look to see if one will
   // unblock more other nodes than the other.
   unsigned LHSBlocked = PQ->getNumSolelyBlockNodes(LHSNum);
   unsigned RHSBlocked = PQ->getNumSolelyBlockNodes(RHSNum);
-  if (LHSBlocked < RHSBlocked) return true;
-  if (LHSBlocked > RHSBlocked) return false;
+  if (LHSBlocked < RHSBlocked)
+    return true;
+  if (LHSBlocked > RHSBlocked)
+    return false;
 
   // Finally, just to provide a stable ordering, use the node number as a
   // deciding factor.
   return LHSNum < RHSNum;
 }
 
-
 /// getSingleUnscheduledPred - If there is exactly one unscheduled predecessor
 /// of SU, return it, otherwise return null.
 SUnit *ResourcePriorityQueue::getSingleUnscheduledPred(SUnit *SU) {
@@ -249,16 +262,16 @@ bool ResourcePriorityQueue::isResourceAvailable(SUnit *SU) {
   if (SU->getNode()->isMachineOpcode())
     switch (SU->getNode()->getMachineOpcode()) {
     default:
-      if (!ResourcesModel->canReserveResources(&TII->get(
-          SU->getNode()->getMachineOpcode())))
-           return false;
+      if (!ResourcesModel->canReserveResources(
+              &TII->get(SU->getNode()->getMachineOpcode())))
+        return false;
       break;
     case TargetOpcode::EXTRACT_SUBREG:
     case TargetOpcode::INSERT_SUBREG:
     case TargetOpcode::SUBREG_TO_REG:
     case TargetOpcode::REG_SEQUENCE:
     case TargetOpcode::IMPLICIT_DEF:
-        break;
+      break;
     }
 
   // Now see if there are no other dependencies
@@ -289,8 +302,8 @@ void ResourcePriorityQueue::reserveResources(SUnit *SU) {
   if (SU->getNode() && SU->getNode()->isMachineOpcode()) {
     switch (SU->getNode()->getMachineOpcode()) {
     default:
-      ResourcesModel->reserveResources(&TII->get(
-        SU->getNode()->getMachineOpcode()));
+      ResourcesModel->reserveResources(
+          &TII->get(SU->getNode()->getMachineOpcode()));
       break;
     case TargetOpcode::EXTRACT_SUBREG:
     case TargetOpcode::INSERT_SUBREG:
@@ -323,22 +336,21 @@ int ResourcePriorityQueue::rawRegPressureDelta(SUnit *SU, unsigned RCId) {
 
   // Gen estimate.
   for (unsigned i = 0, e = SU->getNode()->getNumValues(); i != e; ++i) {
-      MVT VT = SU->getNode()->getSimpleValueType(i);
-      if (TLI->isTypeLegal(VT)
-          && TLI->getRegClassFor(VT)
-          && TLI->getRegClassFor(VT)->getID() == RCId)
-        RegBalance += numberRCValSuccInSU(SU, RCId);
+    MVT VT = SU->getNode()->getSimpleValueType(i);
+    if (TLI->isTypeLegal(VT) && TLI->getRegClassFor(VT) &&
+        TLI->getRegClassFor(VT)->getID() == RCId)
+      RegBalance += numberRCValSuccInSU(SU, RCId);
   }
   // Kill estimate.
   for (unsigned i = 0, e = SU->getNode()->getNumOperands(); i != e; ++i) {
-      const SDValue &Op = SU->getNode()->getOperand(i);
-      MVT VT = Op.getNode()->getSimpleValueType(Op.getResNo());
-      if (isa<ConstantSDNode>(Op.getNode()))
-        continue;
+    const SDValue &Op = SU->getNode()->getOperand(i);
+    MVT VT = Op.getNode()->getSimpleValueType(Op.getResNo());
+    if (isa<ConstantSDNode>(Op.getNode()))
+      continue;
 
-      if (TLI->isTypeLegal(VT) && TLI->getRegClassFor(VT)
-          && TLI->getRegClassFor(VT)->getID() == RCId)
-        RegBalance -= numberRCValPredInSU(SU, RCId);
+    if (TLI->isTypeLegal(VT) && TLI->getRegClassFor(VT) &&
+        TLI->getRegClassFor(VT)->getID() == RCId)
+      RegBalance -= numberRCValPredInSU(SU, RCId);
   }
   return RegBalance;
 }
@@ -358,13 +370,12 @@ int ResourcePriorityQueue::regPressureDelta(SUnit *SU, bool RawPressure) {
   if (RawPressure) {
     for (const TargetRegisterClass *RC : TRI->regclasses())
       RegBalance += rawRegPressureDelta(SU, RC->getID());
-  }
-  else {
+  } else {
     for (const TargetRegisterClass *RC : TRI->regclasses()) {
-      if ((RegPressure[RC->getID()] +
-           rawRegPressureDelta(SU, RC->getID()) > 0) &&
-          (RegPressure[RC->getID()] +
-           rawRegPressureDelta(SU, RC->getID())  >= RegLimit[RC->getID()]))
+      if ((RegPressure[RC->getID()] + rawRegPressureDelta(SU, RC->getID()) >
+           0) &&
+          (RegPressure[RC->getID()] + rawRegPressureDelta(SU, RC->getID()) >=
+           RegLimit[RC->getID()]))
         RegBalance += rawRegPressureDelta(SU, RC->getID());
     }
   }
@@ -410,7 +421,7 @@ int ResourcePriorityQueue::SUSchedulingCost(SUnit *SU) {
 
     // Consider change to reg pressure from scheduling
     // this SU.
-    ResCount -= (regPressureDelta(SU,true) * ScaleOne);
+    ResCount -= (regPressureDelta(SU, true) * ScaleOne);
   }
   // Default heuristic, greeady and
   // critical path driven.
@@ -434,11 +445,11 @@ int ResourcePriorityQueue::SUSchedulingCost(SUnit *SU) {
     if (N->isMachineOpcode()) {
       const MCInstrDesc &TID = TII->get(N->getMachineOpcode());
       if (TID.isCall())
-        ResCount += (PriorityTwo + (ScaleThree*N->getNumValues()));
-    }
-    else
+        ResCount += (PriorityTwo + (ScaleThree * N->getNumValues()));
+    } else
       switch (N->getOpcode()) {
-      default:  break;
+      default:
+        break;
       case ISD::TokenFactor:
       case ISD::CopyFromReg:
       case ISD::CopyToReg:
@@ -454,7 +465,6 @@ int ResourcePriorityQueue::SUSchedulingCost(SUnit *SU) {
   return ResCount;
 }
 
-
 /// Main resource tracking point.
 void ResourcePriorityQueue::scheduledNode(SUnit *SU) {
   // Use NULL entry as an event marker to reset
@@ -487,10 +497,10 @@ void ResourcePriorityQueue::scheduledNode(SUnit *SU) {
       if (TLI->isTypeLegal(VT)) {
         const TargetRegisterClass *RC = TLI->getRegClassFor(VT);
         if (RC) {
-          if (RegPressure[RC->getID()] >
-            (numberRCValPredInSU(SU, RC->getID())))
+          if (RegPressure[RC->getID()] > (numberRCValPredInSU(SU, RC->getID())))
             RegPressure[RC->getID()] -= numberRCValPredInSU(SU, RC->getID());
-          else RegPressure[RC->getID()] = 0;
+          else
+            RegPressure[RC->getID()] = 0;
         }
       }
     }
@@ -521,8 +531,7 @@ void ResourcePriorityQueue::scheduledNode(SUnit *SU) {
     else
       ParallelLiveRanges = 0;
 
-  }
-  else
+  } else
     ParallelLiveRanges += SU->NumRegDefsLeft;
 
   // Track parallel live chains.
@@ -531,7 +540,7 @@ void ResourcePriorityQueue::scheduledNode(SUnit *SU) {
 }
 
 void ResourcePriorityQueue::initNumRegDefsLeft(SUnit *SU) {
-  unsigned  NodeNumDefs = 0;
+  unsigned NodeNumDefs = 0;
   for (SDNode *N = SU->getNode(); N; N = N->getGluedNode())
     if (N->isMachineOpcode()) {
       const MCInstrDesc &TID = TII->get(N->getMachineOpcode());
@@ -541,17 +550,17 @@ void ResourcePriorityQueue::initNumRegDefsLeft(SUnit *SU) {
         break;
       }
       NodeNumDefs = std::min(N->getNumValues(), TID.getNumDefs());
-    }
-    else
-      switch(N->getOpcode()) {
-        default:     break;
-        case ISD::CopyFromReg:
-          NodeNumDefs++;
-          break;
-        case ISD::INLINEASM:
-        case ISD::INLINEASM_BR:
-          NodeNumDefs++;
-          break;
+    } else
+      switch (N->getOpcode()) {
+      default:
+        break;
+      case ISD::CopyFromReg:
+        NodeNumDefs++;
+        break;
+      case ISD::INLINEASM:
+      case ISD::INLINEASM_BR:
+        NodeNumDefs++;
+        break;
       }
 
   SU->NumRegDefsLeft = NodeNumDefs;
@@ -564,7 +573,8 @@ void ResourcePriorityQueue::initNumRegDefsLeft(SUnit *SU) {
 /// scheduled will make this node available, so it is better than some other
 /// node of the same priority that will not make a node available.
 void ResourcePriorityQueue::adjustPriorityOfUnscheduledPreds(SUnit *SU) {
-  if (SU->isAvailable) return;  // All preds scheduled.
+  if (SU->isAvailable)
+    return; // All preds scheduled.
 
   SUnit *OnlyAvailablePred = getSingleUnscheduledPred(SU);
   if (!OnlyAvailablePred || !OnlyAvailablePred->isAvailable)
@@ -579,7 +589,6 @@ void ResourcePriorityQueue::adjustPriorityOfUnscheduledPreds(SUnit *SU) {
   push(OnlyAvailablePred);
 }
 
-
 /// Main access point - returns next instructions
 /// to be placed in scheduling sequence.
 SUnit *ResourcePriorityQueue::pop() {
@@ -613,7 +622,6 @@ SUnit *ResourcePriorityQueue::pop() {
   return V;
 }
 
-
 void ResourcePriorityQueue::remove(SUnit *SU) {
   assert(!Queue.empty() && "Queue is empty!");
   std::vector<SUnit *>::iterator I = find(Queue, SU);
diff --git a/llvm/lib/CodeGen/SelectionDAG/SDNodeDbgValue.h b/llvm/lib/CodeGen/SelectionDAG/SDNodeDbgValue.h
index c31b971e7fc3d2e..7e29cd8159cd499 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SDNodeDbgValue.h
+++ b/llvm/lib/CodeGen/SelectionDAG/SDNodeDbgValue.h
@@ -132,7 +132,6 @@ class SDDbgOperand {
 /// We do not use SDValue here to avoid including its header.
 class SDDbgValue {
 public:
-
 private:
   // SDDbgValues are allocated by a BumpPtrAllocator, which means the destructor
   // may not be called; therefore all member arrays must also be allocated by
@@ -259,6 +258,6 @@ class SDDbgLabel {
   unsigned getOrder() const { return Order; }
 };
 
-} // end llvm namespace
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
index 5b01743d23e0abf..499c915b87518f1 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGFast.cpp
@@ -27,36 +27,34 @@ using namespace llvm;
 
 #define DEBUG_TYPE "pre-RA-sched"
 
-STATISTIC(NumUnfolds,    "Number of nodes unfolded");
-STATISTIC(NumDups,       "Number of duplicated nodes");
-STATISTIC(NumPRCopies,   "Number of physical copies");
-
-static RegisterScheduler
-  fastDAGScheduler("fast", "Fast suboptimal list scheduling",
-                   createFastDAGScheduler);
-static RegisterScheduler
-  linearizeDAGScheduler("linearize", "Linearize DAG, no scheduling",
-                        createDAGLinearizer);
+STATISTIC(NumUnfolds, "Number of nodes unfolded");
+STATISTIC(NumDups, "Number of duplicated nodes");
+STATISTIC(NumPRCopies, "Number of physical copies");
 
+static RegisterScheduler fastDAGScheduler("fast",
+                                          "Fast suboptimal list scheduling",
+                                          createFastDAGScheduler);
+static RegisterScheduler linearizeDAGScheduler("linearize",
+                                               "Linearize DAG, no scheduling",
+                                               createDAGLinearizer);
 
 namespace {
-  /// FastPriorityQueue - A degenerate priority queue that considers
-  /// all nodes to have the same priority.
-  ///
-  struct FastPriorityQueue {
-    SmallVector<SUnit *, 16> Queue;
+/// FastPriorityQueue - A degenerate priority queue that considers
+/// all nodes to have the same priority.
+///
+struct FastPriorityQueue {
+  SmallVector<SUnit *, 16> Queue;
 
-    bool empty() const { return Queue.empty(); }
+  bool empty() const { return Queue.empty(); }
 
-    void push(SUnit *U) {
-      Queue.push_back(U);
-    }
+  void push(SUnit *U) { Queue.push_back(U); }
 
-    SUnit *pop() {
-      if (empty()) return nullptr;
-      return Queue.pop_back_val();
-    }
-  };
+  SUnit *pop() {
+    if (empty())
+      return nullptr;
+    return Queue.pop_back_val();
+  }
+};
 
 //===----------------------------------------------------------------------===//
 /// ScheduleDAGFast - The actual "fast" list scheduler implementation.
@@ -70,44 +68,37 @@ class ScheduleDAGFast : public ScheduleDAGSDNodes {
   /// that are "live". These nodes must be scheduled before any other nodes that
   /// modifies the registers can be scheduled.
   unsigned NumLiveRegs = 0u;
-  std::vector<SUnit*> LiveRegDefs;
+  std::vector<SUnit *> LiveRegDefs;
   std::vector<unsigned> LiveRegCycles;
 
 public:
-  ScheduleDAGFast(MachineFunction &mf)
-    : ScheduleDAGSDNodes(mf) {}
+  ScheduleDAGFast(MachineFunction &mf) : ScheduleDAGSDNodes(mf) {}
 
   void Schedule() override;
 
   /// AddPred - adds a predecessor edge to SUnit SU.
   /// This returns true if this is a new predecessor.
-  void AddPred(SUnit *SU, const SDep &D) {
-    SU->addPred(D);
-  }
+  void AddPred(SUnit *SU, const SDep &D) { SU->addPred(D); }
 
   /// RemovePred - removes a predecessor edge from SUnit SU.
   /// This returns true if an edge was removed.
-  void RemovePred(SUnit *SU, const SDep &D) {
-    SU->removePred(D);
-  }
+  void RemovePred(SUnit *SU, const SDep &D) { SU->removePred(D); }
 
 private:
   void ReleasePred(SUnit *SU, SDep *PredEdge);
   void ReleasePredecessors(SUnit *SU, unsigned CurCycle);
-  void ScheduleNodeBottomUp(SUnit*, unsigned);
-  SUnit *CopyAndMoveSuccessors(SUnit*);
-  void InsertCopiesAndMoveSuccs(SUnit*, unsigned,
-                                const TargetRegisterClass*,
-                                const TargetRegisterClass*,
-                                SmallVectorImpl<SUnit*>&);
-  bool DelayForLiveRegsBottomUp(SUnit*, SmallVectorImpl<unsigned>&);
+  void ScheduleNodeBottomUp(SUnit *, unsigned);
+  SUnit *CopyAndMoveSuccessors(SUnit *);
+  void InsertCopiesAndMoveSuccs(SUnit *, unsigned, const TargetRegisterClass *,
+                                const TargetRegisterClass *,
+                                SmallVectorImpl<SUnit *> &);
+  bool DelayForLiveRegsBottomUp(SUnit *, SmallVectorImpl<unsigned> &);
   void ListScheduleBottomUp();
 
   /// forceUnitLatencies - The fast scheduler doesn't care about real latencies.
   bool forceUnitLatencies() const override { return true; }
 };
-}  // end anonymous namespace
-
+} // end anonymous namespace
 
 /// Schedule - Schedule the DAG using list scheduling.
 void ScheduleDAGFast::Schedule() {
@@ -227,7 +218,7 @@ SUnit *ScheduleDAGFast::CopyAndMoveSuccessors(SUnit *SU) {
   }
 
   if (TryUnfold) {
-    SmallVector<SDNode*, 2> NewNodes;
+    SmallVector<SDNode *, 2> NewNodes;
     if (!TII->unfoldMemoryOperand(*DAG, N, NewNodes))
       return nullptr;
 
@@ -240,7 +231,7 @@ SUnit *ScheduleDAGFast::CopyAndMoveSuccessors(SUnit *SU) {
     unsigned OldNumVals = SU->getNode()->getNumValues();
     for (unsigned i = 0; i != NumVals; ++i)
       DAG->ReplaceAllUsesOfValueWith(SDValue(SU->getNode(), i), SDValue(N, i));
-    DAG->ReplaceAllUsesOfValueWith(SDValue(SU->getNode(), OldNumVals-1),
+    DAG->ReplaceAllUsesOfValueWith(SDValue(SU->getNode(), OldNumVals - 1),
                                    SDValue(LoadNode, 1));
 
     SUnit *NewSU = newSUnit(N);
@@ -373,10 +364,9 @@ SUnit *ScheduleDAGFast::CopyAndMoveSuccessors(SUnit *SU) {
 
 /// InsertCopiesAndMoveSuccs - Insert register copies and move all
 /// scheduled successors of the given SUnit to the last copy.
-void ScheduleDAGFast::InsertCopiesAndMoveSuccs(SUnit *SU, unsigned Reg,
-                                              const TargetRegisterClass *DestRC,
-                                              const TargetRegisterClass *SrcRC,
-                                              SmallVectorImpl<SUnit*> &Copies) {
+void ScheduleDAGFast::InsertCopiesAndMoveSuccs(
+    SUnit *SU, unsigned Reg, const TargetRegisterClass *DestRC,
+    const TargetRegisterClass *SrcRC, SmallVectorImpl<SUnit *> &Copies) {
   SUnit *CopyFromSU = newSUnit(static_cast<SDNode *>(nullptr));
   CopyFromSU->CopySrcRC = SrcRC;
   CopyFromSU->CopyDstRC = DestRC;
@@ -473,8 +463,8 @@ static bool CheckForLiveRegDef(SUnit *SU, unsigned Reg,
 /// scheduling of the given node to satisfy live physical register dependencies.
 /// If the specific node is the last one that's available to schedule, do
 /// whatever is necessary (i.e. backtracking or cloning) to make it possible.
-bool ScheduleDAGFast::DelayForLiveRegsBottomUp(SUnit *SU,
-                                              SmallVectorImpl<unsigned> &LRegs){
+bool ScheduleDAGFast::DelayForLiveRegsBottomUp(
+    SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
   if (NumLiveRegs == 0)
     return false;
 
@@ -482,8 +472,8 @@ bool ScheduleDAGFast::DelayForLiveRegsBottomUp(SUnit *SU,
   // If this node would clobber any "live" register, then it's not ready.
   for (SDep &Pred : SU->Preds) {
     if (Pred.isAssignedRegDep()) {
-      CheckForLiveRegDef(Pred.getSUnit(), Pred.getReg(), LiveRegDefs,
-                         RegAdded, LRegs, TRI);
+      CheckForLiveRegDef(Pred.getSUnit(), Pred.getReg(), LiveRegDefs, RegAdded,
+                         LRegs, TRI);
     }
   }
 
@@ -492,12 +482,12 @@ bool ScheduleDAGFast::DelayForLiveRegsBottomUp(SUnit *SU,
         Node->getOpcode() == ISD::INLINEASM_BR) {
       // Inline asm can clobber physical defs.
       unsigned NumOps = Node->getNumOperands();
-      if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
-        --NumOps;  // Ignore the glue operand.
+      if (Node->getOperand(NumOps - 1).getValueType() == MVT::Glue)
+        --NumOps; // Ignore the glue operand.
 
       for (unsigned i = InlineAsm::Op_FirstOperand; i != NumOps;) {
         unsigned Flags =
-          cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
+            cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
         unsigned NumVals = InlineAsm::getNumOperandRegisters(Flags);
 
         ++i; // Skip the ID value.
@@ -533,7 +523,6 @@ bool ScheduleDAGFast::DelayForLiveRegsBottomUp(SUnit *SU,
   return !LRegs.empty();
 }
 
-
 /// ListScheduleBottomUp - The main loop of list scheduling for bottom-up
 /// schedulers.
 void ScheduleDAGFast::ListScheduleBottomUp() {
@@ -552,8 +541,8 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
 
   // While Available queue is not empty, grab the node with the highest
   // priority. If it is not ready put it back.  Schedule the node.
-  SmallVector<SUnit*, 4> NotReady;
-  DenseMap<SUnit*, SmallVector<unsigned, 4> > LRegsMap;
+  SmallVector<SUnit *, 4> NotReady;
+  DenseMap<SUnit *, SmallVector<unsigned, 4>> LRegsMap;
   Sequence.reserve(SUnits.size());
   while (!AvailableQueue.empty()) {
     bool Delayed = false;
@@ -566,7 +555,7 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
       Delayed = true;
       LRegsMap.insert(std::make_pair(CurSU, LRegs));
 
-      CurSU->isPending = true;  // This SU is not in AvailableQueue right now.
+      CurSU->isPending = true; // This SU is not in AvailableQueue right now.
       NotReady.push_back(CurSU);
       CurSU = AvailableQueue.pop();
     }
@@ -585,8 +574,7 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
         unsigned Reg = LRegs[0];
         SUnit *LRDef = LiveRegDefs[Reg];
         MVT VT = getPhysicalRegisterVT(LRDef->getNode(), Reg, TII);
-        const TargetRegisterClass *RC =
-          TRI->getMinimalPhysRegClass(Reg, VT);
+        const TargetRegisterClass *RC = TRI->getMinimalPhysRegClass(Reg, VT);
         const TargetRegisterClass *DestRC = TRI->getCrossCopyRegClass(RC);
 
         // If cross copy register class is the same as RC, then it must be
@@ -605,7 +593,7 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
         }
         if (!NewDef) {
           // Issue copies, these can be expensive cross register class copies.
-          SmallVector<SUnit*, 2> Copies;
+          SmallVector<SUnit *, 2> Copies;
           InsertCopiesAndMoveSuccs(LRDef, Reg, DestRC, RC, Copies);
           LLVM_DEBUG(dbgs() << "Adding an edge from SU # " << TrySU->NodeNum
                             << " to SU #" << Copies.front()->NodeNum << "\n");
@@ -622,7 +610,8 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
       }
 
       if (!CurSU) {
-        llvm_unreachable("Unable to resolve live physical register dependencies!");
+        llvm_unreachable(
+            "Unable to resolve live physical register dependencies!");
       }
     }
 
@@ -648,7 +637,6 @@ void ScheduleDAGFast::ListScheduleBottomUp() {
 #endif
 }
 
-
 namespace {
 //===----------------------------------------------------------------------===//
 // ScheduleDAGLinearize - No scheduling scheduler, it simply linearize the
@@ -662,11 +650,11 @@ class ScheduleDAGLinearize : public ScheduleDAGSDNodes {
   void Schedule() override;
 
   MachineBasicBlock *
-    EmitSchedule(MachineBasicBlock::iterator &InsertPos) override;
+  EmitSchedule(MachineBasicBlock::iterator &InsertPos) override;
 
 private:
-  std::vector<SDNode*> Sequence;
-  DenseMap<SDNode*, SDNode*> GluedMap;  // Cache glue to its user
+  std::vector<SDNode *> Sequence;
+  DenseMap<SDNode *, SDNode *> GluedMap; // Cache glue to its user
 
   void ScheduleNode(SDNode *N);
 };
@@ -689,7 +677,7 @@ void ScheduleDAGLinearize::ScheduleNode(SDNode *N) {
   if (unsigned NumLeft = NumOps) {
     SDNode *GluedOpN = nullptr;
     do {
-      const SDValue &Op = N->getOperand(NumLeft-1);
+      const SDValue &Op = N->getOperand(NumLeft - 1);
       SDNode *OpN = Op.getNode();
 
       if (NumLeft == NumOps && Op.getValueType() == MVT::Glue) {
@@ -705,7 +693,7 @@ void ScheduleDAGLinearize::ScheduleNode(SDNode *N) {
         // Glue operand is already scheduled.
         continue;
 
-      DenseMap<SDNode*, SDNode*>::iterator DI = GluedMap.find(OpN);
+      DenseMap<SDNode *, SDNode *>::iterator DI = GluedMap.find(OpN);
       if (DI != GluedMap.end() && DI->second != N)
         // Users of glues are counted against the glued users.
         OpN = DI->second;
@@ -730,7 +718,7 @@ static SDNode *findGluedUser(SDNode *N) {
 void ScheduleDAGLinearize::Schedule() {
   LLVM_DEBUG(dbgs() << "********** DAG Linearization **********\n");
 
-  SmallVector<SDNode*, 8> Glues;
+  SmallVector<SDNode *, 8> Glues;
   unsigned DAGSize = 0;
   for (SDNode &Node : DAG->allnodes()) {
     SDNode *N = &Node;
@@ -739,8 +727,8 @@ void ScheduleDAGLinearize::Schedule() {
     unsigned Degree = N->use_size();
     N->setNodeId(Degree);
     unsigned NumVals = N->getNumValues();
-    if (NumVals && N->getValueType(NumVals-1) == MVT::Glue &&
-        N->hasAnyUseOfValue(NumVals-1)) {
+    if (NumVals && N->getValueType(NumVals - 1) == MVT::Glue &&
+        N->hasAnyUseOfValue(NumVals - 1)) {
       SDNode *User = findGluedUser(N);
       if (User) {
         Glues.push_back(N);
@@ -773,7 +761,7 @@ void ScheduleDAGLinearize::Schedule() {
   ScheduleNode(DAG->getRoot().getNode());
 }
 
-MachineBasicBlock*
+MachineBasicBlock *
 ScheduleDAGLinearize::EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
   InstrEmitter Emitter(DAG->getTarget(), BB, InsertPos);
   DenseMap<SDValue, Register> VRBaseMap;
@@ -783,7 +771,7 @@ ScheduleDAGLinearize::EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
   unsigned NumNodes = Sequence.size();
   MachineBasicBlock *BB = Emitter.getBlock();
   for (unsigned i = 0; i != NumNodes; ++i) {
-    SDNode *N = Sequence[NumNodes-i-1];
+    SDNode *N = Sequence[NumNodes - i - 1];
     LLVM_DEBUG(N->dump(DAG));
     Emitter.EmitNode(N, false, false, VRBaseMap);
 
@@ -808,12 +796,12 @@ ScheduleDAGLinearize::EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
 //                         Public Constructor Functions
 //===----------------------------------------------------------------------===//
 
-llvm::ScheduleDAGSDNodes *
-llvm::createFastDAGScheduler(SelectionDAGISel *IS, CodeGenOpt::Level) {
+llvm::ScheduleDAGSDNodes *llvm::createFastDAGScheduler(SelectionDAGISel *IS,
+                                                       CodeGenOpt::Level) {
   return new ScheduleDAGFast(*IS->MF);
 }
 
-llvm::ScheduleDAGSDNodes *
-llvm::createDAGLinearizer(SelectionDAGISel *IS, CodeGenOpt::Level) {
+llvm::ScheduleDAGSDNodes *llvm::createDAGLinearizer(SelectionDAGISel *IS,
+                                                    CodeGenOpt::Level) {
   return new ScheduleDAGLinearize(*IS->MF);
 }
diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
index 458f50c54824e99..8a8e13569ae5b6d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
@@ -62,72 +62,73 @@ using namespace llvm;
 #define DEBUG_TYPE "pre-RA-sched"
 
 STATISTIC(NumBacktracks, "Number of times scheduler backtracked");
-STATISTIC(NumUnfolds,    "Number of nodes unfolded");
-STATISTIC(NumDups,       "Number of duplicated nodes");
-STATISTIC(NumPRCopies,   "Number of physical register copies");
+STATISTIC(NumUnfolds, "Number of nodes unfolded");
+STATISTIC(NumDups, "Number of duplicated nodes");
+STATISTIC(NumPRCopies, "Number of physical register copies");
 
 static RegisterScheduler
-  burrListDAGScheduler("list-burr",
-                       "Bottom-up register reduction list scheduling",
-                       createBURRListDAGScheduler);
+    burrListDAGScheduler("list-burr",
+                         "Bottom-up register reduction list scheduling",
+                         createBURRListDAGScheduler);
 
 static RegisterScheduler
-  sourceListDAGScheduler("source",
-                         "Similar to list-burr but schedules in source "
-                         "order when possible",
-                         createSourceListDAGScheduler);
+    sourceListDAGScheduler("source",
+                           "Similar to list-burr but schedules in source "
+                           "order when possible",
+                           createSourceListDAGScheduler);
 
-static RegisterScheduler
-  hybridListDAGScheduler("list-hybrid",
-                         "Bottom-up register pressure aware list scheduling "
-                         "which tries to balance latency and register pressure",
-                         createHybridListDAGScheduler);
+static RegisterScheduler hybridListDAGScheduler(
+    "list-hybrid",
+    "Bottom-up register pressure aware list scheduling "
+    "which tries to balance latency and register pressure",
+    createHybridListDAGScheduler);
 
 static RegisterScheduler
-  ILPListDAGScheduler("list-ilp",
-                      "Bottom-up register pressure aware list scheduling "
-                      "which tries to balance ILP and register pressure",
-                      createILPListDAGScheduler);
+    ILPListDAGScheduler("list-ilp",
+                        "Bottom-up register pressure aware list scheduling "
+                        "which tries to balance ILP and register pressure",
+                        createILPListDAGScheduler);
 
 static cl::opt<bool> DisableSchedCycles(
-  "disable-sched-cycles", cl::Hidden, cl::init(false),
-  cl::desc("Disable cycle-level precision during preRA scheduling"));
+    "disable-sched-cycles", cl::Hidden, cl::init(false),
+    cl::desc("Disable cycle-level precision during preRA scheduling"));
 
 // Temporary sched=list-ilp flags until the heuristics are robust.
 // Some options are also available under sched=list-hybrid.
 static cl::opt<bool> DisableSchedRegPressure(
-  "disable-sched-reg-pressure", cl::Hidden, cl::init(false),
-  cl::desc("Disable regpressure priority in sched=list-ilp"));
+    "disable-sched-reg-pressure", cl::Hidden, cl::init(false),
+    cl::desc("Disable regpressure priority in sched=list-ilp"));
 static cl::opt<bool> DisableSchedLiveUses(
-  "disable-sched-live-uses", cl::Hidden, cl::init(true),
-  cl::desc("Disable live use priority in sched=list-ilp"));
+    "disable-sched-live-uses", cl::Hidden, cl::init(true),
+    cl::desc("Disable live use priority in sched=list-ilp"));
 static cl::opt<bool> DisableSchedVRegCycle(
-  "disable-sched-vrcycle", cl::Hidden, cl::init(false),
-  cl::desc("Disable virtual register cycle interference checks"));
-static cl::opt<bool> DisableSchedPhysRegJoin(
-  "disable-sched-physreg-join", cl::Hidden, cl::init(false),
-  cl::desc("Disable physreg def-use affinity"));
-static cl::opt<bool> DisableSchedStalls(
-  "disable-sched-stalls", cl::Hidden, cl::init(true),
-  cl::desc("Disable no-stall priority in sched=list-ilp"));
+    "disable-sched-vrcycle", cl::Hidden, cl::init(false),
+    cl::desc("Disable virtual register cycle interference checks"));
+static cl::opt<bool>
+    DisableSchedPhysRegJoin("disable-sched-physreg-join", cl::Hidden,
+                            cl::init(false),
+                            cl::desc("Disable physreg def-use affinity"));
+static cl::opt<bool>
+    DisableSchedStalls("disable-sched-stalls", cl::Hidden, cl::init(true),
+                       cl::desc("Disable no-stall priority in sched=list-ilp"));
 static cl::opt<bool> DisableSchedCriticalPath(
-  "disable-sched-critical-path", cl::Hidden, cl::init(false),
-  cl::desc("Disable critical path priority in sched=list-ilp"));
+    "disable-sched-critical-path", cl::Hidden, cl::init(false),
+    cl::desc("Disable critical path priority in sched=list-ilp"));
 static cl::opt<bool> DisableSchedHeight(
-  "disable-sched-height", cl::Hidden, cl::init(false),
-  cl::desc("Disable scheduled-height priority in sched=list-ilp"));
-static cl::opt<bool> Disable2AddrHack(
-  "disable-2addr-hack", cl::Hidden, cl::init(true),
-  cl::desc("Disable scheduler's two-address hack"));
+    "disable-sched-height", cl::Hidden, cl::init(false),
+    cl::desc("Disable scheduled-height priority in sched=list-ilp"));
+static cl::opt<bool>
+    Disable2AddrHack("disable-2addr-hack", cl::Hidden, cl::init(true),
+                     cl::desc("Disable scheduler's two-address hack"));
 
 static cl::opt<int> MaxReorderWindow(
-  "max-sched-reorder", cl::Hidden, cl::init(6),
-  cl::desc("Number of instructions to allow ahead of the critical path "
-           "in sched=list-ilp"));
+    "max-sched-reorder", cl::Hidden, cl::init(6),
+    cl::desc("Number of instructions to allow ahead of the critical path "
+             "in sched=list-ilp"));
 
-static cl::opt<unsigned> AvgIPC(
-  "sched-avg-ipc", cl::Hidden, cl::init(1),
-  cl::desc("Average inst/cycle whan no target itinerary exists."));
+static cl::opt<unsigned>
+    AvgIPC("sched-avg-ipc", cl::Hidden, cl::init(1),
+           cl::desc("Average inst/cycle whan no target itinerary exists."));
 
 namespace {
 
@@ -166,12 +167,12 @@ class ScheduleDAGRRList : public ScheduleDAGSDNodes {
   /// that are "live". These nodes must be scheduled before any other nodes that
   /// modifies the registers can be scheduled.
   unsigned NumLiveRegs = 0u;
-  std::unique_ptr<SUnit*[]> LiveRegDefs;
-  std::unique_ptr<SUnit*[]> LiveRegGens;
+  std::unique_ptr<SUnit *[]> LiveRegDefs;
+  std::unique_ptr<SUnit *[]> LiveRegGens;
 
   // Collect interferences between physical register use/defs.
   // Each interference is an SUnit and set of physical registers.
-  SmallVector<SUnit*, 4> Interferences;
+  SmallVector<SUnit *, 4> Interferences;
 
   using LRegsMapT = DenseMap<SUnit *, SmallVector<unsigned, 4>>;
 
@@ -183,15 +184,14 @@ class ScheduleDAGRRList : public ScheduleDAGSDNodes {
 
   // Hack to keep track of the inverse of FindCallSeqStart without more crazy
   // DAG crawling.
-  DenseMap<SUnit*, SUnit*> CallSeqEndForStart;
+  DenseMap<SUnit *, SUnit *> CallSeqEndForStart;
 
 public:
   ScheduleDAGRRList(MachineFunction &mf, bool needlatency,
                     SchedulingPriorityQueue *availqueue,
                     CodeGenOpt::Level OptLevel)
-    : ScheduleDAGSDNodes(mf),
-      NeedLatency(needlatency), AvailableQueue(availqueue),
-      Topo(SUnits, nullptr) {
+      : ScheduleDAGSDNodes(mf), NeedLatency(needlatency),
+        AvailableQueue(availqueue), Topo(SUnits, nullptr) {
     const TargetSubtargetInfo &STI = mf.getSubtarget();
     if (DisableSchedCycles || !NeedLatency)
       HazardRec = new ScheduleHazardRecognizer();
@@ -246,7 +246,7 @@ class ScheduleDAGRRList : public ScheduleDAGSDNodes {
 private:
   bool isReady(SUnit *SU) {
     return DisableSchedCycles || !AvailableQueue->hasReadyFilter() ||
-      AvailableQueue->isReady(SU);
+           AvailableQueue->isReady(SU);
   }
 
   void ReleasePred(SUnit *SU, const SDep *PredEdge);
@@ -255,18 +255,17 @@ class ScheduleDAGRRList : public ScheduleDAGSDNodes {
   void AdvanceToCycle(unsigned NextCycle);
   void AdvancePastStalls(SUnit *SU);
   void EmitNode(SUnit *SU);
-  void ScheduleNodeBottomUp(SUnit*);
+  void ScheduleNodeBottomUp(SUnit *);
   void CapturePred(SDep *PredEdge);
-  void UnscheduleNodeBottomUp(SUnit*);
+  void UnscheduleNodeBottomUp(SUnit *);
   void RestoreHazardCheckerBottomUp();
-  void BacktrackBottomUp(SUnit*, SUnit*);
+  void BacktrackBottomUp(SUnit *, SUnit *);
   SUnit *TryUnfoldSU(SUnit *);
-  SUnit *CopyAndMoveSuccessors(SUnit*);
-  void InsertCopiesAndMoveSuccs(SUnit*, unsigned,
-                                const TargetRegisterClass*,
-                                const TargetRegisterClass*,
-                                SmallVectorImpl<SUnit*>&);
-  bool DelayForLiveRegsBottomUp(SUnit*, SmallVectorImpl<unsigned>&);
+  SUnit *CopyAndMoveSuccessors(SUnit *);
+  void InsertCopiesAndMoveSuccs(SUnit *, unsigned, const TargetRegisterClass *,
+                                const TargetRegisterClass *,
+                                SmallVectorImpl<SUnit *> &);
+  bool DelayForLiveRegsBottomUp(SUnit *, SmallVectorImpl<unsigned> &);
 
   void releaseInterferences(unsigned Reg = 0);
 
@@ -295,12 +294,10 @@ class ScheduleDAGRRList : public ScheduleDAGSDNodes {
 
   /// forceUnitLatencies - Register-pressure-reducing scheduling doesn't
   /// need actual latency information but the hybrid scheduler does.
-  bool forceUnitLatencies() const override {
-    return !NeedLatency;
-  }
+  bool forceUnitLatencies() const override { return !NeedLatency; }
 };
 
-}  // end anonymous namespace
+} // end anonymous namespace
 
 static constexpr unsigned RegSequenceCost = 1;
 
@@ -309,11 +306,9 @@ static constexpr unsigned RegSequenceCost = 1;
 /// but for untyped values (MVT::Untyped) it means inspecting the node's
 /// opcode to determine what register class is being generated.
 static void GetCostForDef(const ScheduleDAGSDNodes::RegDefIter &RegDefPos,
-                          const TargetLowering *TLI,
-                          const TargetInstrInfo *TII,
-                          const TargetRegisterInfo *TRI,
-                          unsigned &RegClass, unsigned &Cost,
-                          const MachineFunction &MF) {
+                          const TargetLowering *TLI, const TargetInstrInfo *TII,
+                          const TargetRegisterInfo *TRI, unsigned &RegClass,
+                          unsigned &Cost, const MachineFunction &MF) {
   MVT VT = RegDefPos.GetValue();
 
   // Special handling for untyped values.  These values can only come from
@@ -332,7 +327,8 @@ static void GetCostForDef(const ScheduleDAGSDNodes::RegDefIter &RegDefPos,
 
     unsigned Opcode = Node->getMachineOpcode();
     if (Opcode == TargetOpcode::REG_SEQUENCE) {
-      unsigned DstRCIdx = cast<ConstantSDNode>(Node->getOperand(0))->getZExtValue();
+      unsigned DstRCIdx =
+          cast<ConstantSDNode>(Node->getOperand(0))->getZExtValue();
       const TargetRegisterClass *RC = TRI->getRegClass(DstRCIdx);
       RegClass = RC->getID();
       Cost = RegSequenceCost;
@@ -365,8 +361,8 @@ void ScheduleDAGRRList::Schedule() {
   NumLiveRegs = 0;
   // Allocate slots for each physical register, plus one for a special register
   // to track the virtual resource of a calling sequence.
-  LiveRegDefs.reset(new SUnit*[TRI->getNumRegs() + 1]());
-  LiveRegGens.reset(new SUnit*[TRI->getNumRegs() + 1]());
+  LiveRegDefs.reset(new SUnit *[TRI->getNumRegs() + 1]());
+  LiveRegGens.reset(new SUnit *[TRI->getNumRegs() + 1]());
   CallSeqEndForStart.clear();
   assert(Interferences.empty() && LRegsMap.empty() && "stale Interferences");
 
@@ -440,8 +436,7 @@ void ScheduleDAGRRList::ReleasePred(SUnit *SU, const SDep *PredEdge) {
 
 /// IsChainDependent - Test if Outer is reachable from Inner through
 /// chain dependencies.
-static bool IsChainDependent(SDNode *Outer, SDNode *Inner,
-                             unsigned NestLevel,
+static bool IsChainDependent(SDNode *Outer, SDNode *Inner, unsigned NestLevel,
                              const TargetInstrInfo *TII) {
   SDNode *N = Outer;
   while (true) {
@@ -488,9 +483,8 @@ static bool IsChainDependent(SDNode *Outer, SDNode *Inner,
 ///
 /// TODO: It would be better to give CALLSEQ_END an explicit operand to point
 /// to the corresponding CALLSEQ_BEGIN to avoid needing to search for it.
-static SDNode *
-FindCallSeqStart(SDNode *N, unsigned &NestLevel, unsigned &MaxNest,
-                 const TargetInstrInfo *TII) {
+static SDNode *FindCallSeqStart(SDNode *N, unsigned &NestLevel,
+                                unsigned &MaxNest, const TargetInstrInfo *TII) {
   while (true) {
     // For a TokenFactor, examine each operand. There may be multiple ways
     // to get to the CALLSEQ_BEGIN, but we need to find the path with the
@@ -501,8 +495,8 @@ FindCallSeqStart(SDNode *N, unsigned &NestLevel, unsigned &MaxNest,
       for (const SDValue &Op : N->op_values()) {
         unsigned MyNestLevel = NestLevel;
         unsigned MyMaxNest = MaxNest;
-        if (SDNode *New = FindCallSeqStart(Op.getNode(),
-                                           MyNestLevel, MyMaxNest, TII))
+        if (SDNode *New =
+                FindCallSeqStart(Op.getNode(), MyNestLevel, MyMaxNest, TII))
           if (!Best || (MyMaxNest > BestMaxNest)) {
             Best = New;
             BestMaxNest = MyMaxNest;
@@ -563,7 +557,8 @@ void ScheduleDAGRRList::ReleasePredecessors(SUnit *SU) {
       // expensive to copy the register. Make sure nothing that can
       // clobber the register is scheduled between the predecessor and
       // this node.
-      SUnit *RegDef = LiveRegDefs[Pred.getReg()]; (void)RegDef;
+      SUnit *RegDef = LiveRegDefs[Pred.getReg()];
+      (void)RegDef;
       assert((!RegDef || RegDef == SU || RegDef == Pred.getSUnit()) &&
              "interference on register dependence");
       LiveRegDefs[Pred.getReg()] = Pred.getSUnit();
@@ -618,13 +613,14 @@ void ScheduleDAGRRList::ReleasePending() {
 
     if (PendingQueue[i]->isAvailable) {
       if (!isReady(PendingQueue[i]))
-          continue;
+        continue;
       AvailableQueue->push(PendingQueue[i]);
     }
     PendingQueue[i]->isPending = false;
     PendingQueue[i] = PendingQueue.back();
     PendingQueue.pop_back();
-    --i; --e;
+    --i;
+    --e;
   }
 }
 
@@ -638,8 +634,7 @@ void ScheduleDAGRRList::AdvanceToCycle(unsigned NextCycle) {
   if (!HazardRec->isEnabled()) {
     // Bypass lots of virtual calls in case of long latency.
     CurCycle = NextCycle;
-  }
-  else {
+  } else {
     for (; CurCycle != NextCycle; ++CurCycle) {
       HazardRec->RecedeCycle();
     }
@@ -681,7 +676,7 @@ void ScheduleDAGRRList::AdvancePastStalls(SUnit *SU) {
   int Stalls = 0;
   while (true) {
     ScheduleHazardRecognizer::HazardType HT =
-      HazardRec->getHazardType(SU, -Stalls);
+        HazardRec->getHazardType(SU, -Stalls);
 
     if (HT == ScheduleHazardRecognizer::NoHazard)
       break;
@@ -811,8 +806,8 @@ void ScheduleDAGRRList::ScheduleNodeBottomUp(SUnit *SU) {
   if (HazardRec->isEnabled() || AvgIPC > 1) {
     if (SU->getNode() && SU->getNode()->isMachineOpcode())
       ++IssueCount;
-    if ((HazardRec->isEnabled() && HazardRec->atIssueLimit())
-        || (!HazardRec->isEnabled() && IssueCount == AvgIPC))
+    if ((HazardRec->isEnabled() && HazardRec->atIssueLimit()) ||
+        (!HazardRec->isEnabled() && IssueCount == AvgIPC))
       AdvanceToCycle(CurCycle + 1);
   }
 }
@@ -841,7 +836,7 @@ void ScheduleDAGRRList::UnscheduleNodeBottomUp(SUnit *SU) {
 
   for (SDep &Pred : SU->Preds) {
     CapturePred(&Pred);
-    if (Pred.isAssignedRegDep() && SU == LiveRegGens[Pred.getReg()]){
+    if (Pred.isAssignedRegDep() && SU == LiveRegGens[Pred.getReg()]) {
       assert(NumLiveRegs > 0 && "NumLiveRegs is already zero!");
       assert(LiveRegDefs[Pred.getReg()] == Pred.getSUnit() &&
              "Physical register dependency violated?");
@@ -918,8 +913,7 @@ void ScheduleDAGRRList::UnscheduleNodeBottomUp(SUnit *SU) {
     // Don't make available until backtracking is complete.
     SU->isPending = true;
     PendingQueue.push_back(SU);
-  }
-  else {
+  } else {
     AvailableQueue->push(SU);
   }
   AvailableQueue->unscheduledNode(SU);
@@ -930,8 +924,8 @@ void ScheduleDAGRRList::UnscheduleNodeBottomUp(SUnit *SU) {
 void ScheduleDAGRRList::RestoreHazardCheckerBottomUp() {
   HazardRec->Reset();
 
-  unsigned LookAhead = std::min((unsigned)Sequence.size(),
-                                HazardRec->getMaxLookAhead());
+  unsigned LookAhead =
+      std::min((unsigned)Sequence.size(), HazardRec->getMaxLookAhead());
   if (LookAhead == 0)
     return;
 
@@ -1142,8 +1136,7 @@ SUnit *ScheduleDAGRRList::CopyAndMoveSuccessors(SUnit *SU) {
   LLVM_DEBUG(dbgs() << "Considering duplicating the SU\n");
   LLVM_DEBUG(dumpNode(*SU));
 
-  if (N->getGluedNode() &&
-      !TII->canCopyGluedNodeDuringSchedule(N)) {
+  if (N->getGluedNode() && !TII->canCopyGluedNodeDuringSchedule(N)) {
     LLVM_DEBUG(
         dbgs()
         << "Giving up because it has incoming glue and the target does not "
@@ -1222,10 +1215,9 @@ SUnit *ScheduleDAGRRList::CopyAndMoveSuccessors(SUnit *SU) {
 
 /// InsertCopiesAndMoveSuccs - Insert register copies and move all
 /// scheduled successors of the given SUnit to the last copy.
-void ScheduleDAGRRList::InsertCopiesAndMoveSuccs(SUnit *SU, unsigned Reg,
-                                              const TargetRegisterClass *DestRC,
-                                              const TargetRegisterClass *SrcRC,
-                                              SmallVectorImpl<SUnit*> &Copies) {
+void ScheduleDAGRRList::InsertCopiesAndMoveSuccs(
+    SUnit *SU, unsigned Reg, const TargetRegisterClass *DestRC,
+    const TargetRegisterClass *SrcRC, SmallVectorImpl<SUnit *> &Copies) {
   SUnit *CopyFromSU = CreateNewSUnit(nullptr);
   CopyFromSU->CopySrcRC = SrcRC;
   CopyFromSU->CopyDstRC = DestRC;
@@ -1246,8 +1238,7 @@ void ScheduleDAGRRList::InsertCopiesAndMoveSuccs(SUnit *SU, unsigned Reg,
       D.setSUnit(CopyToSU);
       AddPredQueued(SuccSU, D);
       DelDeps.emplace_back(SuccSU, Succ);
-    }
-    else {
+    } else {
       // Avoid scheduling the def-side copy before other successors. Otherwise,
       // we could introduce another physreg interference on the copy and
       // continue inserting copies indefinitely.
@@ -1306,10 +1297,12 @@ static void CheckForLiveRegDef(SUnit *SU, unsigned Reg, SUnit **LiveRegDefs,
   for (MCRegAliasIterator AliasI(Reg, TRI, true); AliasI.isValid(); ++AliasI) {
 
     // Check if Ref is live.
-    if (!LiveRegDefs[*AliasI]) continue;
+    if (!LiveRegDefs[*AliasI])
+      continue;
 
     // Allow multiple uses of the same def.
-    if (LiveRegDefs[*AliasI] == SU) continue;
+    if (LiveRegDefs[*AliasI] == SU)
+      continue;
 
     // Allow multiple uses of same def
     if (Node && LiveRegDefs[*AliasI]->getNode() == Node)
@@ -1325,14 +1318,17 @@ static void CheckForLiveRegDef(SUnit *SU, unsigned Reg, SUnit **LiveRegDefs,
 /// CheckForLiveRegDefMasked - Check for any live physregs that are clobbered
 /// by RegMask, and add them to LRegs.
 static void CheckForLiveRegDefMasked(SUnit *SU, const uint32_t *RegMask,
-                                     ArrayRef<SUnit*> LiveRegDefs,
+                                     ArrayRef<SUnit *> LiveRegDefs,
                                      SmallSet<unsigned, 4> &RegAdded,
                                      SmallVectorImpl<unsigned> &LRegs) {
   // Look at all live registers. Skip Reg0 and the special CallResource.
-  for (unsigned i = 1, e = LiveRegDefs.size()-1; i != e; ++i) {
-    if (!LiveRegDefs[i]) continue;
-    if (LiveRegDefs[i] == SU) continue;
-    if (!MachineOperand::clobbersPhysReg(RegMask, i)) continue;
+  for (unsigned i = 1, e = LiveRegDefs.size() - 1; i != e; ++i) {
+    if (!LiveRegDefs[i])
+      continue;
+    if (LiveRegDefs[i] == SU)
+      continue;
+    if (!MachineOperand::clobbersPhysReg(RegMask, i))
+      continue;
     if (RegAdded.insert(i).second)
       LRegs.push_back(i);
   }
@@ -1350,8 +1346,8 @@ static const uint32_t *getNodeRegMask(const SDNode *N) {
 /// scheduling of the given node to satisfy live physical register dependencies.
 /// If the specific node is the last one that's available to schedule, do
 /// whatever is necessary (i.e. backtracking or cloning) to make it possible.
-bool ScheduleDAGRRList::
-DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
+bool ScheduleDAGRRList::DelayForLiveRegsBottomUp(
+    SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
   if (NumLiveRegs == 0)
     return false;
 
@@ -1371,12 +1367,12 @@ DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
         Node->getOpcode() == ISD::INLINEASM_BR) {
       // Inline asm can clobber physical defs.
       unsigned NumOps = Node->getNumOperands();
-      if (Node->getOperand(NumOps-1).getValueType() == MVT::Glue)
-        --NumOps;  // Ignore the glue operand.
+      if (Node->getOperand(NumOps - 1).getValueType() == MVT::Glue)
+        --NumOps; // Ignore the glue operand.
 
       for (unsigned i = InlineAsm::Op_FirstOperand; i != NumOps;) {
         unsigned Flags =
-          cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
+            cast<ConstantSDNode>(Node->getOperand(i))->getZExtValue();
         unsigned NumVals = InlineAsm::getNumOperandRegisters(Flags);
 
         ++i; // Skip the ID value.
@@ -1387,7 +1383,8 @@ DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
           for (; NumVals; --NumVals, ++i) {
             Register Reg = cast<RegisterSDNode>(Node->getOperand(i))->getReg();
             if (Reg.isPhysical())
-              CheckForLiveRegDef(SU, Reg, LiveRegDefs.get(), RegAdded, LRegs, TRI);
+              CheckForLiveRegDef(SU, Reg, LiveRegDefs.get(), RegAdded, LRegs,
+                                 TRI);
           }
         } else
           i += NumVals;
@@ -1434,7 +1431,8 @@ DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
       // handle it in the same way as an ImplicitDef.
       for (unsigned i = 0; i < MCID.getNumDefs(); ++i)
         if (MCID.operands()[i].isOptionalDef()) {
-          const SDValue &OptionalDef = Node->getOperand(i - Node->getNumValues());
+          const SDValue &OptionalDef =
+              Node->getOperand(i - Node->getNumValues());
           Register Reg = cast<RegisterSDNode>(OptionalDef)->getReg();
           CheckForLiveRegDef(SU, Reg, LiveRegDefs.get(), RegAdded, LRegs, TRI);
         }
@@ -1449,7 +1447,7 @@ DelayForLiveRegsBottomUp(SUnit *SU, SmallVectorImpl<unsigned> &LRegs) {
 void ScheduleDAGRRList::releaseInterferences(unsigned Reg) {
   // Add the nodes that aren't ready back onto the available list.
   for (unsigned i = Interferences.size(); i > 0; --i) {
-    SUnit *SU = Interferences[i-1];
+    SUnit *SU = Interferences[i - 1];
     LRegsMapT::iterator LRegsPos = LRegsMap.find(SU);
     if (Reg) {
       SmallVectorImpl<unsigned> &LRegs = LRegsPos->second;
@@ -1465,7 +1463,7 @@ void ScheduleDAGRRList::releaseInterferences(unsigned Reg) {
       AvailableQueue->push(SU);
     }
     if (i < Interferences.size())
-      Interferences[i-1] = Interferences.back();
+      Interferences[i - 1] = Interferences.back();
     Interferences.pop_back();
     LRegsMap.erase(LRegsPos);
   }
@@ -1488,10 +1486,9 @@ SUnit *ScheduleDAGRRList::PickNodeToScheduleBottomUp() {
                  dbgs() << " SU #" << CurSU->NodeNum << '\n');
       auto [LRegsIter, LRegsInserted] = LRegsMap.try_emplace(CurSU, LRegs);
       if (LRegsInserted) {
-        CurSU->isPending = true;  // This SU is not in AvailableQueue right now.
+        CurSU->isPending = true; // This SU is not in AvailableQueue right now.
         Interferences.push_back(CurSU);
-      }
-      else {
+      } else {
         assert(CurSU->isPending && "Interferences are pending");
         // Update the interference with current live regs.
         LRegsIter->second = LRegs;
@@ -1524,7 +1521,7 @@ SUnit *ScheduleDAGRRList::PickNodeToScheduleBottomUp() {
         LiveCycle = BtSU->getHeight();
       }
     }
-    if (!WillCreateCycle(TrySU, BtSU))  {
+    if (!WillCreateCycle(TrySU, BtSU)) {
       // BacktrackBottomUp mutates Interferences!
       BacktrackBottomUp(TrySU, BtSU);
 
@@ -1568,8 +1565,7 @@ SUnit *ScheduleDAGRRList::PickNodeToScheduleBottomUp() {
     unsigned Reg = LRegs[0];
     SUnit *LRDef = LiveRegDefs[Reg];
     MVT VT = getPhysicalRegisterVT(LRDef->getNode(), Reg, TII);
-    const TargetRegisterClass *RC =
-      TRI->getMinimalPhysRegClass(Reg, VT);
+    const TargetRegisterClass *RC = TRI->getMinimalPhysRegClass(Reg, VT);
     const TargetRegisterClass *DestRC = TRI->getCrossCopyRegClass(RC);
 
     // If cross copy register class is the same as RC, then it must be possible
@@ -1587,7 +1583,7 @@ SUnit *ScheduleDAGRRList::PickNodeToScheduleBottomUp() {
     }
     if (!NewDef) {
       // Issue copies, these can be expensive cross register class copies.
-      SmallVector<SUnit*, 2> Copies;
+      SmallVector<SUnit *, 2> Copies;
       InsertCopiesAndMoveSuccs(LRDef, Reg, DestRC, RC, Copies);
       LLVM_DEBUG(dbgs() << "    Adding an edge from SU #" << TrySU->NodeNum
                         << " to SU #" << Copies.front()->NodeNum << "\n");
@@ -1656,17 +1652,16 @@ namespace {
 class RegReductionPQBase;
 
 struct queue_sort {
-  bool isReady(SUnit* SU, unsigned CurCycle) const { return true; }
+  bool isReady(SUnit *SU, unsigned CurCycle) const { return true; }
 };
 
 #ifndef NDEBUG
-template<class SF>
-struct reverse_sort : public queue_sort {
+template <class SF> struct reverse_sort : public queue_sort {
   SF &SortFunc;
 
   reverse_sort(SF &sf) : SortFunc(sf) {}
 
-  bool operator()(SUnit* left, SUnit* right) const {
+  bool operator()(SUnit *left, SUnit *right) const {
     // reverse left/right rather than simply !SortFunc(left, right)
     // to expose different paths in the comparison logic.
     return SortFunc(right, left);
@@ -1677,38 +1672,29 @@ struct reverse_sort : public queue_sort {
 /// bu_ls_rr_sort - Priority function for bottom up register pressure
 // reduction scheduler.
 struct bu_ls_rr_sort : public queue_sort {
-  enum {
-    IsBottomUp = true,
-    HasReadyFilter = false
-  };
+  enum { IsBottomUp = true, HasReadyFilter = false };
 
   RegReductionPQBase *SPQ;
 
   bu_ls_rr_sort(RegReductionPQBase *spq) : SPQ(spq) {}
 
-  bool operator()(SUnit* left, SUnit* right) const;
+  bool operator()(SUnit *left, SUnit *right) const;
 };
 
 // src_ls_rr_sort - Priority function for source order scheduler.
 struct src_ls_rr_sort : public queue_sort {
-  enum {
-    IsBottomUp = true,
-    HasReadyFilter = false
-  };
+  enum { IsBottomUp = true, HasReadyFilter = false };
 
   RegReductionPQBase *SPQ;
 
   src_ls_rr_sort(RegReductionPQBase *spq) : SPQ(spq) {}
 
-  bool operator()(SUnit* left, SUnit* right) const;
+  bool operator()(SUnit *left, SUnit *right) const;
 };
 
 // hybrid_ls_rr_sort - Priority function for hybrid scheduler.
 struct hybrid_ls_rr_sort : public queue_sort {
-  enum {
-    IsBottomUp = true,
-    HasReadyFilter = false
-  };
+  enum { IsBottomUp = true, HasReadyFilter = false };
 
   RegReductionPQBase *SPQ;
 
@@ -1716,16 +1702,13 @@ struct hybrid_ls_rr_sort : public queue_sort {
 
   bool isReady(SUnit *SU, unsigned CurCycle) const;
 
-  bool operator()(SUnit* left, SUnit* right) const;
+  bool operator()(SUnit *left, SUnit *right) const;
 };
 
 // ilp_ls_rr_sort - Priority function for ILP (instruction level parallelism)
 // scheduler.
 struct ilp_ls_rr_sort : public queue_sort {
-  enum {
-    IsBottomUp = true,
-    HasReadyFilter = false
-  };
+  enum { IsBottomUp = true, HasReadyFilter = false };
 
   RegReductionPQBase *SPQ;
 
@@ -1733,7 +1716,7 @@ struct ilp_ls_rr_sort : public queue_sort {
 
   bool isReady(SUnit *SU, unsigned CurCycle) const;
 
-  bool operator()(SUnit* left, SUnit* right) const;
+  bool operator()(SUnit *left, SUnit *right) const;
 };
 
 class RegReductionPQBase : public SchedulingPriorityQueue {
@@ -1763,15 +1746,11 @@ class RegReductionPQBase : public SchedulingPriorityQueue {
   std::vector<unsigned> RegLimit;
 
 public:
-  RegReductionPQBase(MachineFunction &mf,
-                     bool hasReadyFilter,
-                     bool tracksrp,
-                     bool srcorder,
-                     const TargetInstrInfo *tii,
-                     const TargetRegisterInfo *tri,
-                     const TargetLowering *tli)
-    : SchedulingPriorityQueue(hasReadyFilter), TracksRegPressure(tracksrp),
-      SrcOrder(srcorder), MF(mf), TII(tii), TRI(tri), TLI(tli) {
+  RegReductionPQBase(MachineFunction &mf, bool hasReadyFilter, bool tracksrp,
+                     bool srcorder, const TargetInstrInfo *tii,
+                     const TargetRegisterInfo *tri, const TargetLowering *tli)
+      : SchedulingPriorityQueue(hasReadyFilter), TracksRegPressure(tracksrp),
+        SrcOrder(srcorder), MF(mf), TII(tii), TRI(tri), TLI(tli) {
     if (TracksRegPressure) {
       unsigned NumRC = TRI->getNumRegClasses();
       RegLimit.resize(NumRC);
@@ -1787,7 +1766,7 @@ class RegReductionPQBase : public SchedulingPriorityQueue {
     scheduleDAG = scheduleDag;
   }
 
-  ScheduleHazardRecognizer* getHazardRec() {
+  ScheduleHazardRecognizer *getHazardRec() {
     return scheduleDAG->getHazardRec();
   }
 
@@ -1806,7 +1785,8 @@ class RegReductionPQBase : public SchedulingPriorityQueue {
   unsigned getNodePriority(const SUnit *SU) const;
 
   unsigned getNodeOrdering(const SUnit *SU) const {
-    if (!SU->getNode()) return 0;
+    if (!SU->getNode())
+      return 0;
 
     return SU->getNode()->getIROrder();
   }
@@ -1850,7 +1830,7 @@ class RegReductionPQBase : public SchedulingPriorityQueue {
   void CalculateSethiUllmanNumbers();
 };
 
-template<class SF>
+template <class SF>
 static SUnit *popFromQueueImpl(std::vector<SUnit *> &Q, SF &Picker) {
   unsigned BestIdx = 0;
   // Only compute the cost for the first 1000 items in the queue, to avoid
@@ -1866,7 +1846,7 @@ static SUnit *popFromQueueImpl(std::vector<SUnit *> &Q, SF &Picker) {
   return V;
 }
 
-template<class SF>
+template <class SF>
 SUnit *popFromQueue(std::vector<SUnit *> &Q, SF &Picker, ScheduleDAG *DAG) {
 #ifndef NDEBUG
   if (DAG->StressSched) {
@@ -1885,20 +1865,18 @@ SUnit *popFromQueue(std::vector<SUnit *> &Q, SF &Picker, ScheduleDAG *DAG) {
 // This is a SchedulingPriorityQueue that schedules using Sethi Ullman numbers
 // to reduce register pressure.
 //
-template<class SF>
+template <class SF>
 class RegReductionPriorityQueue : public RegReductionPQBase {
   SF Picker;
 
 public:
-  RegReductionPriorityQueue(MachineFunction &mf,
-                            bool tracksrp,
-                            bool srcorder,
+  RegReductionPriorityQueue(MachineFunction &mf, bool tracksrp, bool srcorder,
                             const TargetInstrInfo *tii,
                             const TargetRegisterInfo *tri,
                             const TargetLowering *tli)
-    : RegReductionPQBase(mf, SF::HasReadyFilter, tracksrp, srcorder,
-                         tii, tri, tli),
-      Picker(this) {}
+      : RegReductionPQBase(mf, SF::HasReadyFilter, tracksrp, srcorder, tii, tri,
+                           tli),
+        Picker(this) {}
 
   bool isBottomUp() const override { return SF::IsBottomUp; }
 
@@ -1907,7 +1885,8 @@ class RegReductionPriorityQueue : public RegReductionPQBase {
   }
 
   SUnit *pop() override {
-    if (Queue.empty()) return nullptr;
+    if (Queue.empty())
+      return nullptr;
 
     SUnit *V = popFromQueue(Queue, Picker, scheduleDAG);
     V->NodeQueueId = 0;
@@ -1955,8 +1934,8 @@ static int checkSpecialNodes(const SUnit *left, const SUnit *right) {
 
 /// CalcNodeSethiUllmanNumber - Compute Sethi Ullman number.
 /// Smaller number is the higher priority.
-static unsigned
-CalcNodeSethiUllmanNumber(const SUnit *SU, std::vector<unsigned> &SUNumbers) {
+static unsigned CalcNodeSethiUllmanNumber(const SUnit *SU,
+                                          std::vector<unsigned> &SUNumbers) {
   if (SUNumbers[SU->NodeNum] != 0)
     return SUNumbers[SU->NodeNum];
 
@@ -1976,7 +1955,8 @@ CalcNodeSethiUllmanNumber(const SUnit *SU, std::vector<unsigned> &SUNumbers) {
     // Try to find a non-evaluated pred and push it into the processing stack.
     for (unsigned P = Temp.PredsProcessed; P < TempSU->Preds.size(); ++P) {
       auto &Pred = TempSU->Preds[P];
-      if (Pred.isCtrl()) continue;  // ignore chain preds
+      if (Pred.isCtrl())
+        continue; // ignore chain preds
       SUnit *PredSU = Pred.getSUnit();
       if (SUNumbers[PredSU->NodeNum] == 0) {
 #ifndef NDEBUG
@@ -1999,7 +1979,8 @@ CalcNodeSethiUllmanNumber(const SUnit *SU, std::vector<unsigned> &SUNumbers) {
     unsigned SethiUllmanNumber = 0;
     unsigned Extra = 0;
     for (const SDep &Pred : TempSU->Preds) {
-      if (Pred.isCtrl()) continue;  // ignore chain preds
+      if (Pred.isCtrl())
+        continue; // ignore chain preds
       SUnit *PredSU = Pred.getSUnit();
       unsigned PredSethiUllman = SUNumbers[PredSU->NodeNum];
       assert(PredSethiUllman > 0 && "We should have evaluated this pred!");
@@ -2033,7 +2014,7 @@ void RegReductionPQBase::CalculateSethiUllmanNumbers() {
 void RegReductionPQBase::addNode(const SUnit *SU) {
   unsigned SUSize = SethiUllmanNumbers.size();
   if (SUnits->size() > SUSize)
-    SethiUllmanNumbers.resize(SUSize*2, 0);
+    SethiUllmanNumbers.resize(SUSize * 2, 0);
   CalcNodeSethiUllmanNumber(SU, SethiUllmanNumbers);
 }
 
@@ -2052,8 +2033,7 @@ unsigned RegReductionPQBase::getNodePriority(const SUnit *SU) const {
     // avoid spilling.
     return 0;
   if (Opc == TargetOpcode::EXTRACT_SUBREG ||
-      Opc == TargetOpcode::SUBREG_TO_REG ||
-      Opc == TargetOpcode::INSERT_SUBREG)
+      Opc == TargetOpcode::SUBREG_TO_REG || Opc == TargetOpcode::INSERT_SUBREG)
     // EXTRACT_SUBREG, INSERT_SUBREG, and SUBREG_TO_REG nodes should be
     // close to their uses to facilitate coalescing.
     return 0;
@@ -2090,7 +2070,8 @@ LLVM_DUMP_METHOD void RegReductionPQBase::dumpRegPressure() const {
   for (const TargetRegisterClass *RC : TRI->regclasses()) {
     unsigned Id = RC->getID();
     unsigned RP = RegPressure[Id];
-    if (!RP) continue;
+    if (!RP)
+      continue;
     LLVM_DEBUG(dbgs() << TRI->getRegClassName(RC) << ": " << RP << " / "
                       << RegLimit[Id] << '\n');
   }
@@ -2233,7 +2214,7 @@ void RegReductionPQBase::scheduledNode(SUnit *SU) {
 
   // We should have this assert, but there may be dead SDNodes that never
   // materialize as SUnits, so they don't appear to generate liveness.
-  //assert(SU->NumRegDefsLeft == 0 && "not all regdefs have scheduled uses");
+  // assert(SU->NumRegDefsLeft == 0 && "not all regdefs have scheduled uses");
   int SkipRegDefs = (int)SU->NumRegDefsLeft;
   for (ScheduleDAGSDNodes::RegDefIter RegDefPos(SU, scheduleDAG);
        RegDefPos.IsValid(); RegDefPos.Advance(), --SkipRegDefs) {
@@ -2247,8 +2228,7 @@ void RegReductionPQBase::scheduledNode(SUnit *SU) {
       LLVM_DEBUG(dbgs() << "  SU(" << SU->NodeNum
                         << ") has too many regdefs\n");
       RegPressure[RCId] = 0;
-    }
-    else {
+    } else {
       RegPressure[RCId] -= Cost;
     }
   }
@@ -2260,7 +2240,8 @@ void RegReductionPQBase::unscheduledNode(SUnit *SU) {
     return;
 
   const SDNode *N = SU->getNode();
-  if (!N) return;
+  if (!N)
+    return;
 
   if (!N->isMachineOpcode()) {
     if (N->getOpcode() != ISD::CopyToReg)
@@ -2270,8 +2251,7 @@ void RegReductionPQBase::unscheduledNode(SUnit *SU) {
     if (Opc == TargetOpcode::EXTRACT_SUBREG ||
         Opc == TargetOpcode::INSERT_SUBREG ||
         Opc == TargetOpcode::SUBREG_TO_REG ||
-        Opc == TargetOpcode::REG_SEQUENCE ||
-        Opc == TargetOpcode::IMPLICIT_DEF)
+        Opc == TargetOpcode::REG_SEQUENCE || Opc == TargetOpcode::IMPLICIT_DEF)
       return;
   }
 
@@ -2354,13 +2334,14 @@ void RegReductionPQBase::unscheduledNode(SUnit *SU) {
 static unsigned closestSucc(const SUnit *SU) {
   unsigned MaxHeight = 0;
   for (const SDep &Succ : SU->Succs) {
-    if (Succ.isCtrl()) continue;  // ignore chain succs
+    if (Succ.isCtrl())
+      continue; // ignore chain succs
     unsigned Height = Succ.getSUnit()->getHeight();
     // If there are bunch of CopyToRegs stacked up, they should be considered
     // to be at the same position.
     if (Succ.getSUnit()->getNode() &&
         Succ.getSUnit()->getNode()->getOpcode() == ISD::CopyToReg)
-      Height = closestSucc(Succ.getSUnit())+1;
+      Height = closestSucc(Succ.getSUnit()) + 1;
     if (Height > MaxHeight)
       MaxHeight = Height;
   }
@@ -2372,7 +2353,8 @@ static unsigned closestSucc(const SUnit *SU) {
 static unsigned calcMaxScratches(const SUnit *SU) {
   unsigned Scratches = 0;
   for (const SDep &Pred : SU->Preds) {
-    if (Pred.isCtrl()) continue;  // ignore chain preds
+    if (Pred.isCtrl())
+      continue; // ignore chain preds
     Scratches++;
   }
   return Scratches;
@@ -2383,7 +2365,8 @@ static unsigned calcMaxScratches(const SUnit *SU) {
 static bool hasOnlyLiveInOpers(const SUnit *SU) {
   bool RetVal = false;
   for (const SDep &Pred : SU->Preds) {
-    if (Pred.isCtrl()) continue;
+    if (Pred.isCtrl())
+      continue;
     const SUnit *PredSU = Pred.getSUnit();
     if (PredSU->getNode() &&
         PredSU->getNode()->getOpcode() == ISD::CopyFromReg) {
@@ -2405,7 +2388,8 @@ static bool hasOnlyLiveInOpers(const SUnit *SU) {
 static bool hasOnlyLiveOutUses(const SUnit *SU) {
   bool RetVal = false;
   for (const SDep &Succ : SU->Succs) {
-    if (Succ.isCtrl()) continue;
+    if (Succ.isCtrl())
+      continue;
     const SUnit *SuccSU = Succ.getSUnit();
     if (SuccSU->getNode() && SuccSU->getNode()->getOpcode() == ISD::CopyToReg) {
       Register Reg =
@@ -2442,7 +2426,8 @@ static void initVRegCycle(SUnit *SU) {
   SU->isVRegCycle = true;
 
   for (const SDep &Pred : SU->Preds) {
-    if (Pred.isCtrl()) continue;
+    if (Pred.isCtrl())
+      continue;
     Pred.getSUnit()->isVRegCycle = true;
   }
 }
@@ -2454,7 +2439,8 @@ static void resetVRegCycle(SUnit *SU) {
     return;
 
   for (const SDep &Pred : SU->Preds) {
-    if (Pred.isCtrl()) continue;  // ignore chain preds
+    if (Pred.isCtrl())
+      continue; // ignore chain preds
     SUnit *PredSU = Pred.getSUnit();
     if (PredSU->isVRegCycle) {
       assert(PredSU->getNode()->getOpcode() == ISD::CopyFromReg &&
@@ -2472,7 +2458,8 @@ static bool hasVRegCycleUse(const SUnit *SU) {
     return false;
 
   for (const SDep &Pred : SU->Preds) {
-    if (Pred.isCtrl()) continue;  // ignore chain preds
+    if (Pred.isCtrl())
+      continue; // ignore chain preds
     if (Pred.getSUnit()->isVRegCycle &&
         Pred.getSUnit()->getNode()->getOpcode() == ISD::CopyFromReg) {
       LLVM_DEBUG(dbgs() << "  VReg cycle use: SU (" << SU->NodeNum << ")\n");
@@ -2486,9 +2473,10 @@ static bool hasVRegCycleUse(const SUnit *SU) {
 //
 // Note: The ScheduleHazardRecognizer interface requires a non-const SU.
 static bool BUHasStall(SUnit *SU, int Height, RegReductionPQBase *SPQ) {
-  if ((int)SPQ->getCurCycle() < Height) return true;
-  if (SPQ->getHazardRec()->getHazardType(SU, 0)
-      != ScheduleHazardRecognizer::NoHazard)
+  if ((int)SPQ->getCurCycle() < Height)
+    return true;
+  if (SPQ->getHazardRec()->getHazardType(SU, 0) !=
+      ScheduleHazardRecognizer::NoHazard)
     return true;
   return false;
 }
@@ -2505,9 +2493,9 @@ static int BUCompareLatency(SUnit *left, SUnit *right, bool checkPref,
   int RHeight = (int)right->getHeight() + RPenalty;
 
   bool LStall = (!checkPref || left->SchedulingPref == Sched::ILP) &&
-    BUHasStall(left, LHeight, SPQ);
+                BUHasStall(left, LHeight, SPQ);
   bool RStall = (!checkPref || right->SchedulingPref == Sched::ILP) &&
-    BUHasStall(right, RHeight, SPQ);
+                BUHasStall(right, RHeight, SPQ);
 
   // If scheduling one of the node will cause a pipeline stall, delay it.
   // If scheduling either one of the node will cause a pipeline stall, sort
@@ -2555,10 +2543,10 @@ static bool BURRSort(SUnit *left, SUnit *right, RegReductionPQBase *SPQ) {
     bool LHasPhysReg = left->hasPhysRegDefs;
     bool RHasPhysReg = right->hasPhysRegDefs;
     if (LHasPhysReg != RHasPhysReg) {
-      #ifndef NDEBUG
-      static const char *const PhysRegMsg[] = { " has no physreg",
-                                                " defines a physreg" };
-      #endif
+#ifndef NDEBUG
+      static const char *const PhysRegMsg[] = {" has no physreg",
+                                               " defines a physreg"};
+#endif
       LLVM_DEBUG(dbgs() << "  SU (" << left->NodeNum << ") "
                         << PhysRegMsg[LHasPhysReg] << " SU(" << right->NodeNum
                         << ") " << PhysRegMsg[RHasPhysReg] << "\n");
@@ -2630,13 +2618,11 @@ static bool BURRSort(SUnit *left, SUnit *right, RegReductionPQBase *SPQ) {
     return (left->NodeQueueId > right->NodeQueueId);
 
   // Do not compare latencies when one or both of the nodes are calls.
-  if (!DisableSchedCycles &&
-      !(left->isCall || right->isCall)) {
+  if (!DisableSchedCycles && !(left->isCall || right->isCall)) {
     int result = BUCompareLatency(left, right, false /*checkPref*/, SPQ);
     if (result != 0)
       return result > 0;
-  }
-  else {
+  } else {
     if (left->getHeight() != right->getHeight())
       return left->getHeight() > right->getHeight();
 
@@ -2680,12 +2666,14 @@ bool src_ls_rr_sort::operator()(SUnit *left, SUnit *right) const {
 bool hybrid_ls_rr_sort::isReady(SUnit *SU, unsigned CurCycle) const {
   static const unsigned ReadyDelay = 3;
 
-  if (SPQ->MayReduceRegPressure(SU)) return true;
+  if (SPQ->MayReduceRegPressure(SU))
+    return true;
 
-  if (SU->getHeight() > (CurCycle + ReadyDelay)) return false;
+  if (SU->getHeight() > (CurCycle + ReadyDelay))
+    return false;
 
-  if (SPQ->getHazardRec()->getHazardType(SU, -ReadyDelay)
-      != ScheduleHazardRecognizer::NoHazard)
+  if (SPQ->getHazardRec()->getHazardType(SU, -ReadyDelay) !=
+      ScheduleHazardRecognizer::NoHazard)
     return false;
 
   return true;
@@ -2708,8 +2696,7 @@ bool hybrid_ls_rr_sort::operator()(SUnit *left, SUnit *right) const {
     LLVM_DEBUG(dbgs() << "  pressure SU(" << left->NodeNum << ") > SU("
                       << right->NodeNum << ")\n");
     return true;
-  }
-  else if (!LHigh && RHigh) {
+  } else if (!LHigh && RHigh) {
     LLVM_DEBUG(dbgs() << "  pressure SU(" << right->NodeNum << ") > SU("
                       << left->NodeNum << ")\n");
     return false;
@@ -2725,10 +2712,11 @@ bool hybrid_ls_rr_sort::operator()(SUnit *left, SUnit *right) const {
 // Schedule as many instructions in each cycle as possible. So don't make an
 // instruction available unless it is ready in the current cycle.
 bool ilp_ls_rr_sort::isReady(SUnit *SU, unsigned CurCycle) const {
-  if (SU->getHeight() > CurCycle) return false;
+  if (SU->getHeight() > CurCycle)
+    return false;
 
-  if (SPQ->getHazardRec()->getHazardType(SU, 0)
-      != ScheduleHazardRecognizer::NoHazard)
+  if (SPQ->getHazardRec()->getHazardType(SU, 0) !=
+      ScheduleHazardRecognizer::NoHazard)
     return false;
 
   return true;
@@ -2742,8 +2730,7 @@ static bool canEnableCoalescing(SUnit *SU) {
     return true;
 
   if (Opc == TargetOpcode::EXTRACT_SUBREG ||
-      Opc == TargetOpcode::SUBREG_TO_REG ||
-      Opc == TargetOpcode::INSERT_SUBREG)
+      Opc == TargetOpcode::SUBREG_TO_REG || Opc == TargetOpcode::INSERT_SUBREG)
     // EXTRACT_SUBREG, INSERT_SUBREG, and SUBREG_TO_REG nodes should be
     // close to their uses to facilitate coalescing.
     return true;
@@ -2782,8 +2769,10 @@ bool ilp_ls_rr_sort::operator()(SUnit *left, SUnit *right) const {
   if (!DisableSchedRegPressure && (LPDiff > 0 || RPDiff > 0)) {
     bool LReduce = canEnableCoalescing(left);
     bool RReduce = canEnableCoalescing(right);
-    if (LReduce && !RReduce) return false;
-    if (RReduce && !LReduce) return true;
+    if (LReduce && !RReduce)
+      return false;
+    if (RReduce && !LReduce)
+      return true;
   }
 
   if (!DisableSchedLiveUses && (LLiveUses != RLiveUses)) {
@@ -2847,7 +2836,7 @@ bool RegReductionPQBase::canClobber(const SUnit *SU, const SUnit *Op) {
     unsigned NumRes = MCID.getNumDefs();
     unsigned NumOps = MCID.getNumOperands() - NumRes;
     for (unsigned i = 0; i != NumOps; ++i) {
-      if (MCID.getOperandConstraint(i+NumRes, MCOI::TIED_TO) != -1) {
+      if (MCID.getOperandConstraint(i + NumRes, MCOI::TIED_TO) != -1) {
         SDNode *DU = SU->getNode()->getOperand(i).getNode();
         if (DU->getNodeId() != -1 &&
             Op->OrigNode == &(*SUnits)[DU->getNodeId()])
@@ -3028,7 +3017,8 @@ void RegReductionPQBase::PrescheduleNodesWithMultipleUses() {
     // Perform checks on the successors of PredSU.
     for (const SDep &PredSucc : PredSU->Succs) {
       SUnit *PredSuccSU = PredSucc.getSUnit();
-      if (PredSuccSU == &SU) continue;
+      if (PredSuccSU == &SU)
+        continue;
       // If PredSU has another successor with no data successors, for
       // now don't attempt to choose either over the other.
       if (PredSuccSU->NumSuccs == 0)
@@ -3087,7 +3077,7 @@ void RegReductionPQBase::AddPseudoTwoAddrDeps() {
     unsigned NumRes = MCID.getNumDefs();
     unsigned NumOps = MCID.getNumOperands() - NumRes;
     for (unsigned j = 0; j != NumOps; ++j) {
-      if (MCID.getOperandConstraint(j+NumRes, MCOI::TIED_TO) == -1)
+      if (MCID.getOperandConstraint(j + NumRes, MCOI::TIED_TO) == -1)
         continue;
       SDNode *DU = SU.getNode()->getOperand(j).getNode();
       if (DU->getNodeId() == -1)
@@ -3113,7 +3103,7 @@ void RegReductionPQBase::AddPseudoTwoAddrDeps() {
         while (SuccSU->Succs.size() == 1 &&
                SuccSU->getNode()->isMachineOpcode() &&
                SuccSU->getNode()->getMachineOpcode() ==
-                 TargetOpcode::COPY_TO_REGCLASS)
+                   TargetOpcode::COPY_TO_REGCLASS)
           SuccSU = SuccSU->Succs.front().getSUnit();
         // Don't constrain non-instruction nodes.
         if (!SuccSU->getNode() || !SuccSU->getNode()->isMachineOpcode())
@@ -3158,7 +3148,7 @@ llvm::createBURRListDAGScheduler(SelectionDAGISel *IS,
   const TargetRegisterInfo *TRI = STI.getRegisterInfo();
 
   BURegReductionPriorityQueue *PQ =
-    new BURegReductionPriorityQueue(*IS->MF, false, false, TII, TRI, nullptr);
+      new BURegReductionPriorityQueue(*IS->MF, false, false, TII, TRI, nullptr);
   ScheduleDAGRRList *SD = new ScheduleDAGRRList(*IS->MF, false, PQ, OptLevel);
   PQ->setScheduleDAG(SD);
   return SD;
@@ -3172,7 +3162,7 @@ llvm::createSourceListDAGScheduler(SelectionDAGISel *IS,
   const TargetRegisterInfo *TRI = STI.getRegisterInfo();
 
   SrcRegReductionPriorityQueue *PQ =
-    new SrcRegReductionPriorityQueue(*IS->MF, false, true, TII, TRI, nullptr);
+      new SrcRegReductionPriorityQueue(*IS->MF, false, true, TII, TRI, nullptr);
   ScheduleDAGRRList *SD = new ScheduleDAGRRList(*IS->MF, false, PQ, OptLevel);
   PQ->setScheduleDAG(SD);
   return SD;
@@ -3187,7 +3177,7 @@ llvm::createHybridListDAGScheduler(SelectionDAGISel *IS,
   const TargetLowering *TLI = IS->TLI;
 
   HybridBURRPriorityQueue *PQ =
-    new HybridBURRPriorityQueue(*IS->MF, true, false, TII, TRI, TLI);
+      new HybridBURRPriorityQueue(*IS->MF, true, false, TII, TRI, TLI);
 
   ScheduleDAGRRList *SD = new ScheduleDAGRRList(*IS->MF, true, PQ, OptLevel);
   PQ->setScheduleDAG(SD);
@@ -3203,7 +3193,7 @@ llvm::createILPListDAGScheduler(SelectionDAGISel *IS,
   const TargetLowering *TLI = IS->TLI;
 
   ILPBURRPriorityQueue *PQ =
-    new ILPBURRPriorityQueue(*IS->MF, true, false, TII, TRI, TLI);
+      new ILPBURRPriorityQueue(*IS->MF, true, false, TII, TRI, TLI);
   ScheduleDAGRRList *SD = new ScheduleDAGRRList(*IS->MF, true, PQ, OptLevel);
   PQ->setScheduleDAG(SD);
   return SD;
diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
index 0579c1664d5c9ab..afbd33ebd7a7229 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.cpp
@@ -42,9 +42,9 @@ STATISTIC(LoadsClustered, "Number of loads clustered together");
 // without a target itinerary. The choice of number here has more to do with
 // balancing scheduler heuristics than with the actual machine latency.
 static cl::opt<int> HighLatencyCycles(
-  "sched-high-latency-cycles", cl::Hidden, cl::init(10),
-  cl::desc("Roughly estimate the number of cycles that 'long latency'"
-           "instructions take for targets with no itinerary"));
+    "sched-high-latency-cycles", cl::Hidden, cl::init(10),
+    cl::desc("Roughly estimate the number of cycles that 'long latency'"
+             "instructions take for targets with no itinerary"));
 
 ScheduleDAGSDNodes::ScheduleDAGSDNodes(MachineFunction &mf)
     : ScheduleDAG(mf), InstrItins(mf.getSubtarget().getInstrItineraryData()) {}
@@ -77,9 +77,8 @@ SUnit *ScheduleDAGSDNodes::newSUnit(SDNode *N) {
   SUnits.back().OrigNode = &SUnits.back();
   SUnit *SU = &SUnits.back();
   const TargetLowering &TLI = DAG->getTargetLoweringInfo();
-  if (!N ||
-      (N->isMachineOpcode() &&
-       N->getMachineOpcode() == TargetOpcode::IMPLICIT_DEF))
+  if (!N || (N->isMachineOpcode() &&
+             N->getMachineOpcode() == TargetOpcode::IMPLICIT_DEF))
     SU->SchedulingPref = Sched::None;
   else
     SU->SchedulingPref = TLI.getSchedulingPreference(N);
@@ -165,15 +164,17 @@ static bool AddGlue(SDNode *N, SDValue Glue, bool AddGlue, SelectionDAG *DAG) {
   SDNode *GlueDestNode = Glue.getNode();
 
   // Don't add glue from a node to itself.
-  if (GlueDestNode == N) return false;
+  if (GlueDestNode == N)
+    return false;
 
   // Don't add a glue operand to something that already uses glue.
   if (GlueDestNode &&
-      N->getOperand(N->getNumOperands()-1).getValueType() == MVT::Glue) {
+      N->getOperand(N->getNumOperands() - 1).getValueType() == MVT::Glue) {
     return false;
   }
   // Don't add glue to something that already has a glue value.
-  if (N->getValueType(N->getNumValues() - 1) == MVT::Glue) return false;
+  if (N->getValueType(N->getNumValues() - 1) == MVT::Glue)
+    return false;
 
   SmallVector<EVT, 4> VTs(N->values());
   if (AddGlue)
@@ -203,8 +204,8 @@ static void RemoveUnusedGlue(SDNode *N, SelectionDAG *DAG) {
 void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
   SDValue Chain;
   unsigned NumOps = Node->getNumOperands();
-  if (Node->getOperand(NumOps-1).getValueType() == MVT::Other)
-    Chain = Node->getOperand(NumOps-1);
+  if (Node->getOperand(NumOps - 1).getValueType() == MVT::Other)
+    Chain = Node->getOperand(NumOps - 1);
   if (!Chain)
     return;
 
@@ -223,9 +224,9 @@ void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
 
   // Look for other loads of the same chain. Find loads that are loading from
   // the same base pointer and different offsets.
-  SmallPtrSet<SDNode*, 16> Visited;
+  SmallPtrSet<SDNode *, 16> Visited;
   SmallVector<int64_t, 4> Offsets;
-  DenseMap<long long, SDNode*> O2SMap;  // Map from offset to SDNode.
+  DenseMap<long long, SDNode *> O2SMap; // Map from offset to SDNode.
   bool Cluster = false;
   SDNode *Base = Node;
 
@@ -245,8 +246,7 @@ void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
       continue;
     int64_t Offset1, Offset2;
     if (!TII->areLoadsFromSameBasePtr(Base, User, Offset1, Offset2) ||
-        Offset1 == Offset2 ||
-        hasTiedInput(User)) {
+        Offset1 == Offset2 || hasTiedInput(User)) {
       // FIXME: Should be ok if they addresses are identical. But earlier
       // optimizations really should have eliminated one of the loads.
       continue;
@@ -269,7 +269,7 @@ void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
   llvm::sort(Offsets);
 
   // Check if the loads are close enough.
-  SmallVector<SDNode*, 4> Loads;
+  SmallVector<SDNode *, 4> Loads;
   unsigned NumLoads = 0;
   int64_t BaseOff = Offsets[0];
   SDNode *BaseLoad = O2SMap[BaseOff];
@@ -277,7 +277,8 @@ void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
   for (unsigned i = 1, e = Offsets.size(); i != e; ++i) {
     int64_t Offset = Offsets[i];
     SDNode *Load = O2SMap[Offset];
-    if (!TII->shouldScheduleLoadsNear(BaseLoad, Load, BaseOff, Offset,NumLoads))
+    if (!TII->shouldScheduleLoadsNear(BaseLoad, Load, BaseOff, Offset,
+                                      NumLoads))
       break; // Stop right here. Ignore loads that are further away.
     Loads.push_back(Load);
     ++NumLoads;
@@ -303,8 +304,7 @@ void ScheduleDAGSDNodes::ClusterNeighboringLoads(SDNode *Node) {
         InGlue = SDValue(Load, Load->getNumValues() - 1);
 
       ++LoadsClustered;
-    }
-    else if (!OutGlue && InGlue.getNode())
+    } else if (!OutGlue && InGlue.getNode())
       RemoveUnusedGlue(InGlue.getNode(), DAG);
   }
 }
@@ -343,12 +343,12 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
   SUnits.reserve(NumNodes * 2);
 
   // Add all nodes in depth first order.
-  SmallVector<SDNode*, 64> Worklist;
-  SmallPtrSet<SDNode*, 32> Visited;
+  SmallVector<SDNode *, 64> Worklist;
+  SmallPtrSet<SDNode *, 32> Visited;
   Worklist.push_back(DAG->getRoot().getNode());
   Visited.insert(DAG->getRoot().getNode());
 
-  SmallVector<SUnit*, 8> CallSUnits;
+  SmallVector<SUnit *, 8> CallSUnits;
   while (!Worklist.empty()) {
     SDNode *NI = Worklist.pop_back_val();
 
@@ -357,11 +357,12 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
       if (Visited.insert(Op.getNode()).second)
         Worklist.push_back(Op.getNode());
 
-    if (isPassiveNode(NI))  // Leaf node, e.g. a TargetImmediate.
+    if (isPassiveNode(NI)) // Leaf node, e.g. a TargetImmediate.
       continue;
 
     // If this node has already been processed, stop now.
-    if (NI->getNodeId() != -1) continue;
+    if (NI->getNodeId() != -1)
+      continue;
 
     SUnit *NodeSUnit = newSUnit(NI);
 
@@ -372,8 +373,8 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
     // Scan up to find glued preds.
     SDNode *N = NI;
     while (N->getNumOperands() &&
-           N->getOperand(N->getNumOperands()-1).getValueType() == MVT::Glue) {
-      N = N->getOperand(N->getNumOperands()-1).getNode();
+           N->getOperand(N->getNumOperands() - 1).getValueType() == MVT::Glue) {
+      N = N->getOperand(N->getNumOperands() - 1).getNode();
       assert(N->getNodeId() == -1 && "Node already inserted!");
       N->setNodeId(NodeSUnit->NodeNum);
       if (N->isMachineOpcode() && TII->get(N->getMachineOpcode()).isCall())
@@ -382,8 +383,8 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
 
     // Scan down to find any glued succs.
     N = NI;
-    while (N->getValueType(N->getNumValues()-1) == MVT::Glue) {
-      SDValue GlueVal(N, N->getNumValues()-1);
+    while (N->getValueType(N->getNumValues() - 1) == MVT::Glue) {
+      SDValue GlueVal(N, N->getNumValues() - 1);
 
       // There are either zero or one users of the Glue result.
       bool HasGlueUse = false;
@@ -397,7 +398,8 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
             NodeSUnit->isCall = true;
           break;
         }
-      if (!HasGlueUse) break;
+      if (!HasGlueUse)
+        break;
     }
 
     if (NodeSUnit->isCall)
@@ -431,7 +433,8 @@ void ScheduleDAGSDNodes::BuildSchedUnits() {
       if (SUNode->getOpcode() != ISD::CopyToReg)
         continue;
       SDNode *SrcN = SUNode->getOperand(2).getNode();
-      if (isPassiveNode(SrcN)) continue;   // Not scheduled.
+      if (isPassiveNode(SrcN))
+        continue; // Not scheduled.
       SUnit *SrcSU = &SUnits[SrcN->getNodeId()];
       SrcSU->isCallOp = true;
     }
@@ -468,7 +471,7 @@ void ScheduleDAGSDNodes::AddSchedEdges() {
         SU.hasPhysRegClobbers = true;
         unsigned NumUsed = InstrEmitter::CountResults(N);
         while (NumUsed != 0 && !N->hasAnyUseOfValue(NumUsed - 1))
-          --NumUsed;    // Skip over unused values at the end.
+          --NumUsed; // Skip over unused values at the end.
         if (NumUsed > TII->get(N->getMachineOpcode()).getNumDefs())
           SU.hasPhysRegDefs = true;
       }
@@ -476,7 +479,8 @@ void ScheduleDAGSDNodes::AddSchedEdges() {
       for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
         SDNode *OpN = N->getOperand(i).getNode();
         unsigned DefIdx = N->getOperand(i).getResNo();
-        if (isPassiveNode(OpN)) continue;   // Not scheduled.
+        if (isPassiveNode(OpN))
+          continue; // Not scheduled.
         SUnit *OpSU = &SUnits[OpN->getNodeId()];
         assert(OpSU && "Node has no SUnit!");
         if (OpSU == &SU)
@@ -504,11 +508,11 @@ void ScheduleDAGSDNodes::AddSchedEdges() {
         // If this is a ctrl dep, latency is 1.
         unsigned OpLatency = isChain ? 1 : OpSU->Latency;
         // Special-case TokenFactor chains as zero-latency.
-        if(isChain && OpN->getOpcode() == ISD::TokenFactor)
+        if (isChain && OpN->getOpcode() == ISD::TokenFactor)
           OpLatency = 0;
 
         SDep Dep = isChain ? SDep(OpSU, SDep::Barrier)
-          : SDep(OpSU, SDep::Data, PhysReg);
+                           : SDep(OpSU, SDep::Data, PhysReg);
         Dep.setLatency(OpLatency);
         if (!isChain && !UnitLatencies) {
           computeOperandLatency(OpN, N, i, Dep);
@@ -563,8 +567,7 @@ void ScheduleDAGSDNodes::RegDefIter::InitNodeNumDefs() {
     NodeNumDefs = 0;
     return;
   }
-  if (POpc == TargetOpcode::PATCHPOINT &&
-      Node->getValueType(0) == MVT::Other) {
+  if (POpc == TargetOpcode::PATCHPOINT && Node->getValueType(0) == MVT::Other) {
     // PATCHPOINT is defined to have one result, but it might really have none
     // if we're not using CallingConv::AnyReg. Don't mistake the chain for a
     // real definition.
@@ -588,8 +591,8 @@ ScheduleDAGSDNodes::RegDefIter::RegDefIter(const SUnit *SU,
 
 // Advance to the next valid value defined by the SUnit.
 void ScheduleDAGSDNodes::RegDefIter::Advance() {
-  for (;Node;) { // Visit all glued nodes.
-    for (;DefIdx < NodeNumDefs; ++DefIdx) {
+  for (; Node;) { // Visit all glued nodes.
+    for (; DefIdx < NodeNumDefs; ++DefIdx) {
       if (!Node->hasAnyUseOfValue(DefIdx))
         continue;
       ValueType = Node->getSimpleValueType(DefIdx);
@@ -647,7 +650,8 @@ void ScheduleDAGSDNodes::computeLatency(SUnit *SU) {
 }
 
 void ScheduleDAGSDNodes::computeOperandLatency(SDNode *Def, SDNode *Use,
-                                               unsigned OpIdx, SDep& dep) const{
+                                               unsigned OpIdx,
+                                               SDep &dep) const {
   // Check to see if the scheduler cares about latencies.
   if (forceUnitLatencies())
     return;
@@ -660,8 +664,7 @@ void ScheduleDAGSDNodes::computeOperandLatency(SDNode *Def, SDNode *Use,
     // Adjust the use operand index by num of defs.
     OpIdx += TII->get(Use->getMachineOpcode()).getNumDefs();
   int Latency = TII->getOperandLatency(InstrItins, Def, DefIdx, Use, OpIdx);
-  if (Latency > 1 && Use->getOpcode() == ISD::CopyToReg &&
-      !BB->succ_empty()) {
+  if (Latency > 1 && Use->getOpcode() == ISD::CopyToReg && !BB->succ_empty()) {
     unsigned Reg = cast<RegisterSDNode>(Use->getOperand(1))->getReg();
     if (Register::isVirtualRegister(Reg))
       // This copy is a liveout value. It is likely coalesced, so reduce the
@@ -734,7 +737,7 @@ void ScheduleDAGSDNodes::VerifyScheduledSequence(bool isBottomUp) {
 /// ProcessSDDbgValues - Process SDDbgValues associated with this node.
 static void
 ProcessSDDbgValues(SDNode *N, SelectionDAG *DAG, InstrEmitter &Emitter,
-                   SmallVectorImpl<std::pair<unsigned, MachineInstr*> > &Orders,
+                   SmallVectorImpl<std::pair<unsigned, MachineInstr *>> &Orders,
                    DenseMap<SDValue, Register> &VRBaseMap, unsigned Order) {
   if (!N->getHasDebugValue())
     return;
@@ -805,9 +808,9 @@ ProcessSourceNode(SDNode *N, SelectionDAG *DAG, InstrEmitter &Emitter,
   ProcessSDDbgValues(N, DAG, Emitter, Orders, VRBaseMap, Order);
 }
 
-void ScheduleDAGSDNodes::
-EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit*, Register> &VRBaseMap,
-                MachineBasicBlock::iterator InsertPos) {
+void ScheduleDAGSDNodes::EmitPhysRegCopy(
+    SUnit *SU, DenseMap<SUnit *, Register> &VRBaseMap,
+    MachineBasicBlock::iterator InsertPos) {
   for (const SDep &Pred : SU->Preds) {
     if (Pred.isCtrl())
       continue; // ignore chain preds
@@ -827,7 +830,7 @@ EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit*, Register> &VRBaseMap,
         }
       }
       BuildMI(*BB, InsertPos, DebugLoc(), TII->get(TargetOpcode::COPY), Reg)
-        .addReg(VRI->second);
+          .addReg(VRI->second);
     } else {
       // Copy from physical register.
       assert(Pred.getReg() && "Unknown physical register!");
@@ -846,12 +849,12 @@ EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit*, Register> &VRBaseMap,
 /// InsertPos and MachineBasicBlock that contains this insertion
 /// point. ScheduleDAGSDNodes holds a BB pointer for convenience, but this does
 /// not necessarily refer to returned BB. The emitter may split blocks.
-MachineBasicBlock *ScheduleDAGSDNodes::
-EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
+MachineBasicBlock *
+ScheduleDAGSDNodes::EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
   InstrEmitter Emitter(DAG->getTarget(), BB, InsertPos);
   DenseMap<SDValue, Register> VRBaseMap;
-  DenseMap<SUnit*, Register> CopyVRBaseMap;
-  SmallVector<std::pair<unsigned, MachineInstr*>, 32> Orders;
+  DenseMap<SUnit *, Register> CopyVRBaseMap;
+  SmallVector<std::pair<unsigned, MachineInstr *>, 32> Orders;
   SmallSet<Register, 8> Seen;
   bool HasDbg = DAG->hasDebugValues();
 
@@ -905,7 +908,7 @@ EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
     SDDbgInfo::DbgIterator PDI = DAG->ByvalParmDbgBegin();
     SDDbgInfo::DbgIterator PDE = DAG->ByvalParmDbgEnd();
     for (; PDI != PDE; ++PDI) {
-      MachineInstr *DbgMI= Emitter.EmitDbgValue(*PDI, VRBaseMap);
+      MachineInstr *DbgMI = Emitter.EmitDbgValue(*PDI, VRBaseMap);
       if (DbgMI) {
         BB->insert(InsertPos, DbgMI);
         // We re-emit the dbg_value closer to its use, too, after instructions
@@ -1005,7 +1008,7 @@ EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
     }
     // Add trailing DbgValue's before the terminator. FIXME: May want to add
     // some of them before one or more conditional branches?
-    SmallVector<MachineInstr*, 8> DbgMIs;
+    SmallVector<MachineInstr *, 8> DbgMIs;
     for (; DI != DE; ++DI) {
       if ((*DI)->isEmitted())
         continue;
@@ -1030,9 +1033,9 @@ EmitSchedule(MachineBasicBlock::iterator &InsertPos) {
         continue;
 
       // Insert all SDDbgLabel's whose order(s) are before "Order".
-      for (; DLI != DLE &&
-             (*DLI)->getOrder() >= LastOrder && (*DLI)->getOrder() < Order;
-             ++DLI) {
+      for (; DLI != DLE && (*DLI)->getOrder() >= LastOrder &&
+             (*DLI)->getOrder() < Order;
+           ++DLI) {
         MachineInstr *DbgMI = Emitter.EmitDbgLabel(*DLI);
         if (DbgMI) {
           if (!LastOrder)
diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
index 439ccfdc327587f..1a8562d0350c9e0 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
+++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGSDNodes.h
@@ -29,164 +29,173 @@ namespace llvm {
 class AAResults;
 class InstrItineraryData;
 
-  /// ScheduleDAGSDNodes - A ScheduleDAG for scheduling SDNode-based DAGs.
+/// ScheduleDAGSDNodes - A ScheduleDAG for scheduling SDNode-based DAGs.
+///
+/// Edges between SUnits are initially based on edges in the SelectionDAG,
+/// and additional edges can be added by the schedulers as heuristics.
+/// SDNodes such as Constants, Registers, and a few others that are not
+/// interesting to schedulers are not allocated SUnits.
+///
+/// SDNodes with MVT::Glue operands are grouped along with the flagged
+/// nodes into a single SUnit so that they are scheduled together.
+///
+/// SDNode-based scheduling graphs do not use SDep::Anti or SDep::Output
+/// edges.  Physical register dependence information is not carried in
+/// the DAG and must be handled explicitly by schedulers.
+///
+class ScheduleDAGSDNodes : public ScheduleDAG {
+public:
+  MachineBasicBlock *BB = nullptr;
+  SelectionDAG *DAG = nullptr; // DAG of the current basic block
+  const InstrItineraryData *InstrItins;
+
+  /// The schedule. Null SUnit*'s represent noop instructions.
+  std::vector<SUnit *> Sequence;
+
+  explicit ScheduleDAGSDNodes(MachineFunction &mf);
+
+  ~ScheduleDAGSDNodes() override = default;
+
+  /// Run - perform scheduling.
   ///
-  /// Edges between SUnits are initially based on edges in the SelectionDAG,
-  /// and additional edges can be added by the schedulers as heuristics.
-  /// SDNodes such as Constants, Registers, and a few others that are not
-  /// interesting to schedulers are not allocated SUnits.
-  ///
-  /// SDNodes with MVT::Glue operands are grouped along with the flagged
-  /// nodes into a single SUnit so that they are scheduled together.
+  void Run(SelectionDAG *dag, MachineBasicBlock *bb);
+
+  /// isPassiveNode - Return true if the node is a non-scheduled leaf.
   ///
-  /// SDNode-based scheduling graphs do not use SDep::Anti or SDep::Output
-  /// edges.  Physical register dependence information is not carried in
-  /// the DAG and must be handled explicitly by schedulers.
+  static bool isPassiveNode(SDNode *Node) {
+    if (isa<ConstantSDNode>(Node))
+      return true;
+    if (isa<ConstantFPSDNode>(Node))
+      return true;
+    if (isa<RegisterSDNode>(Node))
+      return true;
+    if (isa<RegisterMaskSDNode>(Node))
+      return true;
+    if (isa<GlobalAddressSDNode>(Node))
+      return true;
+    if (isa<BasicBlockSDNode>(Node))
+      return true;
+    if (isa<FrameIndexSDNode>(Node))
+      return true;
+    if (isa<ConstantPoolSDNode>(Node))
+      return true;
+    if (isa<TargetIndexSDNode>(Node))
+      return true;
+    if (isa<JumpTableSDNode>(Node))
+      return true;
+    if (isa<ExternalSymbolSDNode>(Node))
+      return true;
+    if (isa<MCSymbolSDNode>(Node))
+      return true;
+    if (isa<BlockAddressSDNode>(Node))
+      return true;
+    if (Node->getOpcode() == ISD::EntryToken || isa<MDNodeSDNode>(Node))
+      return true;
+    return false;
+  }
+
+  /// NewSUnit - Creates a new SUnit and return a ptr to it.
   ///
-  class ScheduleDAGSDNodes : public ScheduleDAG {
-  public:
-    MachineBasicBlock *BB = nullptr;
-    SelectionDAG *DAG = nullptr; // DAG of the current basic block
-    const InstrItineraryData *InstrItins;
-
-    /// The schedule. Null SUnit*'s represent noop instructions.
-    std::vector<SUnit*> Sequence;
-
-    explicit ScheduleDAGSDNodes(MachineFunction &mf);
-
-    ~ScheduleDAGSDNodes() override = default;
-
-    /// Run - perform scheduling.
-    ///
-    void Run(SelectionDAG *dag, MachineBasicBlock *bb);
-
-    /// isPassiveNode - Return true if the node is a non-scheduled leaf.
-    ///
-    static bool isPassiveNode(SDNode *Node) {
-      if (isa<ConstantSDNode>(Node))       return true;
-      if (isa<ConstantFPSDNode>(Node))     return true;
-      if (isa<RegisterSDNode>(Node))       return true;
-      if (isa<RegisterMaskSDNode>(Node))   return true;
-      if (isa<GlobalAddressSDNode>(Node))  return true;
-      if (isa<BasicBlockSDNode>(Node))     return true;
-      if (isa<FrameIndexSDNode>(Node))     return true;
-      if (isa<ConstantPoolSDNode>(Node))   return true;
-      if (isa<TargetIndexSDNode>(Node))    return true;
-      if (isa<JumpTableSDNode>(Node))      return true;
-      if (isa<ExternalSymbolSDNode>(Node)) return true;
-      if (isa<MCSymbolSDNode>(Node))       return true;
-      if (isa<BlockAddressSDNode>(Node))   return true;
-      if (Node->getOpcode() == ISD::EntryToken ||
-          isa<MDNodeSDNode>(Node)) return true;
-      return false;
-    }
+  SUnit *newSUnit(SDNode *N);
 
-    /// NewSUnit - Creates a new SUnit and return a ptr to it.
-    ///
-    SUnit *newSUnit(SDNode *N);
+  /// Clone - Creates a clone of the specified SUnit. It does not copy the
+  /// predecessors / successors info nor the temporary scheduling states.
+  ///
+  SUnit *Clone(SUnit *Old);
 
-    /// Clone - Creates a clone of the specified SUnit. It does not copy the
-    /// predecessors / successors info nor the temporary scheduling states.
-    ///
-    SUnit *Clone(SUnit *Old);
+  /// BuildSchedGraph - Build the SUnit graph from the selection dag that we
+  /// are input.  This SUnit graph is similar to the SelectionDAG, but
+  /// excludes nodes that aren't interesting to scheduling, and represents
+  /// flagged together nodes with a single SUnit.
+  void BuildSchedGraph(AAResults *AA);
 
-    /// BuildSchedGraph - Build the SUnit graph from the selection dag that we
-    /// are input.  This SUnit graph is similar to the SelectionDAG, but
-    /// excludes nodes that aren't interesting to scheduling, and represents
-    /// flagged together nodes with a single SUnit.
-    void BuildSchedGraph(AAResults *AA);
+  /// InitNumRegDefsLeft - Determine the # of regs defined by this node.
+  ///
+  void InitNumRegDefsLeft(SUnit *SU);
 
-    /// InitNumRegDefsLeft - Determine the # of regs defined by this node.
-    ///
-    void InitNumRegDefsLeft(SUnit *SU);
+  /// computeLatency - Compute node latency.
+  ///
+  virtual void computeLatency(SUnit *SU);
 
-    /// computeLatency - Compute node latency.
-    ///
-    virtual void computeLatency(SUnit *SU);
+  virtual void computeOperandLatency(SDNode *Def, SDNode *Use, unsigned OpIdx,
+                                     SDep &dep) const;
 
-    virtual void computeOperandLatency(SDNode *Def, SDNode *Use,
-                                       unsigned OpIdx, SDep& dep) const;
+  /// Schedule - Order nodes according to selected style, filling
+  /// in the Sequence member.
+  ///
+  virtual void Schedule() = 0;
 
-    /// Schedule - Order nodes according to selected style, filling
-    /// in the Sequence member.
-    ///
-    virtual void Schedule() = 0;
+  /// VerifyScheduledSequence - Verify that all SUnits are scheduled and
+  /// consistent with the Sequence of scheduled instructions.
+  void VerifyScheduledSequence(bool isBottomUp);
 
-    /// VerifyScheduledSequence - Verify that all SUnits are scheduled and
-    /// consistent with the Sequence of scheduled instructions.
-    void VerifyScheduledSequence(bool isBottomUp);
+  /// EmitSchedule - Insert MachineInstrs into the MachineBasicBlock
+  /// according to the order specified in Sequence.
+  ///
+  virtual MachineBasicBlock *
+  EmitSchedule(MachineBasicBlock::iterator &InsertPos);
 
-    /// EmitSchedule - Insert MachineInstrs into the MachineBasicBlock
-    /// according to the order specified in Sequence.
-    ///
-    virtual MachineBasicBlock*
-    EmitSchedule(MachineBasicBlock::iterator &InsertPos);
+  void dumpNode(const SUnit &SU) const override;
+  void dump() const override;
+  void dumpSchedule() const;
 
-    void dumpNode(const SUnit &SU) const override;
-    void dump() const override;
-    void dumpSchedule() const;
+  std::string getGraphNodeLabel(const SUnit *SU) const override;
 
-    std::string getGraphNodeLabel(const SUnit *SU) const override;
+  std::string getDAGName() const override;
 
-    std::string getDAGName() const override;
+  virtual void getCustomGraphFeatures(GraphWriter<ScheduleDAG *> &GW) const;
 
-    virtual void getCustomGraphFeatures(GraphWriter<ScheduleDAG*> &GW) const;
+  /// RegDefIter - In place iteration over the values defined by an
+  /// SUnit. This does not need copies of the iterator or any other STLisms.
+  /// The iterator creates itself, rather than being provided by the SchedDAG.
+  class RegDefIter {
+    const ScheduleDAGSDNodes *SchedDAG;
+    const SDNode *Node;
+    unsigned DefIdx = 0;
+    unsigned NodeNumDefs = 0;
+    MVT ValueType;
 
-    /// RegDefIter - In place iteration over the values defined by an
-    /// SUnit. This does not need copies of the iterator or any other STLisms.
-    /// The iterator creates itself, rather than being provided by the SchedDAG.
-    class RegDefIter {
-      const ScheduleDAGSDNodes *SchedDAG;
-      const SDNode *Node;
-      unsigned DefIdx = 0;
-      unsigned NodeNumDefs = 0;
-      MVT ValueType;
+  public:
+    RegDefIter(const SUnit *SU, const ScheduleDAGSDNodes *SD);
 
-    public:
-      RegDefIter(const SUnit *SU, const ScheduleDAGSDNodes *SD);
+    bool IsValid() const { return Node != nullptr; }
 
-      bool IsValid() const { return Node != nullptr; }
+    MVT GetValue() const {
+      assert(IsValid() && "bad iterator");
+      return ValueType;
+    }
 
-      MVT GetValue() const {
-        assert(IsValid() && "bad iterator");
-        return ValueType;
-      }
+    const SDNode *GetNode() const { return Node; }
 
-      const SDNode *GetNode() const {
-        return Node;
-      }
+    unsigned GetIdx() const { return DefIdx - 1; }
 
-      unsigned GetIdx() const {
-        return DefIdx-1;
-      }
+    void Advance();
 
-      void Advance();
+  private:
+    void InitNodeNumDefs();
+  };
 
-    private:
-      void InitNodeNumDefs();
-    };
+protected:
+  /// ForceUnitLatencies - Return true if all scheduling edges should be given
+  /// a latency value of one.  The default is to return false; schedulers may
+  /// override this as needed.
+  virtual bool forceUnitLatencies() const { return false; }
+
+private:
+  /// ClusterNeighboringLoads - Cluster loads from "near" addresses into
+  /// combined SUnits.
+  void ClusterNeighboringLoads(SDNode *Node);
+  /// ClusterNodes - Cluster certain nodes which should be scheduled together.
+  ///
+  void ClusterNodes();
 
-  protected:
-    /// ForceUnitLatencies - Return true if all scheduling edges should be given
-    /// a latency value of one.  The default is to return false; schedulers may
-    /// override this as needed.
-    virtual bool forceUnitLatencies() const { return false; }
+  /// BuildSchedUnits, AddSchedEdges - Helper functions for BuildSchedGraph.
+  void BuildSchedUnits();
+  void AddSchedEdges();
 
-  private:
-    /// ClusterNeighboringLoads - Cluster loads from "near" addresses into
-    /// combined SUnits.
-    void ClusterNeighboringLoads(SDNode *Node);
-    /// ClusterNodes - Cluster certain nodes which should be scheduled together.
-    ///
-    void ClusterNodes();
-
-    /// BuildSchedUnits, AddSchedEdges - Helper functions for BuildSchedGraph.
-    void BuildSchedUnits();
-    void AddSchedEdges();
-
-    void EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit*, Register> &VRBaseMap,
-                         MachineBasicBlock::iterator InsertPos);
-  };
+  void EmitPhysRegCopy(SUnit *SU, DenseMap<SUnit *, Register> &VRBaseMap,
+                       MachineBasicBlock::iterator InsertPos);
+};
 
 } // end namespace llvm
 
diff --git a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGVLIW.cpp b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGVLIW.cpp
index 1ba1fd65b8c903b..f7632a33e699d90 100644
--- a/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGVLIW.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/ScheduleDAGVLIW.cpp
@@ -32,12 +32,11 @@ using namespace llvm;
 
 #define DEBUG_TYPE "pre-RA-sched"
 
-STATISTIC(NumNoops , "Number of noops inserted");
+STATISTIC(NumNoops, "Number of noops inserted");
 STATISTIC(NumStalls, "Number of pipeline stalls");
 
-static RegisterScheduler
-  VLIWScheduler("vliw-td", "VLIW scheduler",
-                createVLIWDAGScheduler);
+static RegisterScheduler VLIWScheduler("vliw-td", "VLIW scheduler",
+                                       createVLIWDAGScheduler);
 
 namespace {
 //===----------------------------------------------------------------------===//
@@ -54,7 +53,7 @@ class ScheduleDAGVLIW : public ScheduleDAGSDNodes {
   /// been issued, but their results are not ready yet (due to the latency of
   /// the operation).  Once the operands become available, the instruction is
   /// added to the AvailableQueue.
-  std::vector<SUnit*> PendingQueue;
+  std::vector<SUnit *> PendingQueue;
 
   /// HazardRec - The hazard recognizer to use.
   ScheduleHazardRecognizer *HazardRec;
@@ -83,7 +82,7 @@ class ScheduleDAGVLIW : public ScheduleDAGSDNodes {
   void scheduleNodeTopDown(SUnit *SU, unsigned CurCycle);
   void listScheduleTopDown();
 };
-}  // end anonymous namespace
+} // end anonymous namespace
 
 /// Schedule - Schedule the DAG using list scheduling.
 void ScheduleDAGVLIW::Schedule() {
@@ -175,7 +174,7 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
 
   // While AvailableQueue is not empty, grab the node with the highest
   // priority. If it is not ready put it back.  Schedule the node.
-  std::vector<SUnit*> NotReady;
+  std::vector<SUnit *> NotReady;
   Sequence.reserve(SUnits.size());
   while (!AvailableQueue->empty() || !PendingQueue.empty()) {
     // Check to see if any of the pending instructions are ready to issue.  If
@@ -186,9 +185,9 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
         PendingQueue[i]->isAvailable = true;
         PendingQueue[i] = PendingQueue.back();
         PendingQueue.pop_back();
-        --i; --e;
-      }
-      else {
+        --i;
+        --e;
+      } else {
         assert(PendingQueue[i]->getDepth() > CurCycle && "Negative latency?");
       }
     }
@@ -209,7 +208,7 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
       SUnit *CurSUnit = AvailableQueue->pop();
 
       ScheduleHazardRecognizer::HazardType HT =
-        HazardRec->getHazardType(CurSUnit, 0/*no stalls*/);
+          HazardRec->getHazardType(CurSUnit, 0 /*no stalls*/);
       if (HT == ScheduleHazardRecognizer::NoHazard) {
         FoundSUnit = CurSUnit;
         break;
@@ -234,7 +233,7 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
 
       // If this is a pseudo-op node, we don't want to increment the current
       // cycle.
-      if (FoundSUnit->Latency)  // Don't increment CurCycle for pseudo-ops!
+      if (FoundSUnit->Latency) // Don't increment CurCycle for pseudo-ops!
         ++CurCycle;
     } else if (!HasNoopHazards) {
       // Otherwise, we have a pipeline stall, but no other problem, just advance
@@ -249,7 +248,7 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
       // processors without pipeline interlocks and other cases.
       LLVM_DEBUG(dbgs() << "*** Emitting noop\n");
       HazardRec->EmitNoop();
-      Sequence.push_back(nullptr);   // NULL here means noop
+      Sequence.push_back(nullptr); // NULL here means noop
       ++NumNoops;
       ++CurCycle;
     }
@@ -265,7 +264,7 @@ void ScheduleDAGVLIW::listScheduleTopDown() {
 //===----------------------------------------------------------------------===//
 
 /// createVLIWDAGScheduler - This creates a top-down list scheduler.
-ScheduleDAGSDNodes *
-llvm::createVLIWDAGScheduler(SelectionDAGISel *IS, CodeGenOpt::Level) {
+ScheduleDAGSDNodes *llvm::createVLIWDAGScheduler(SelectionDAGISel *IS,
+                                                 CodeGenOpt::Level) {
   return new ScheduleDAGVLIW(*IS->MF, IS->AA, new ResourcePriorityQueue(IS));
 }
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
index b2ba747ce209867..30ecf1a8d44406f 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAG.cpp
@@ -89,8 +89,8 @@ static SDVTList makeVTList(const EVT *VTs, unsigned NumVTs) {
 }
 
 // Default null implementations of the callbacks.
-void SelectionDAG::DAGUpdateListener::NodeDeleted(SDNode*, SDNode*) {}
-void SelectionDAG::DAGUpdateListener::NodeUpdated(SDNode*) {}
+void SelectionDAG::DAGUpdateListener::NodeDeleted(SDNode *, SDNode *) {}
+void SelectionDAG::DAGUpdateListener::NodeUpdated(SDNode *) {}
 void SelectionDAG::DAGUpdateListener::NodeInserted(SDNode *) {}
 
 void SelectionDAG::DAGNodeDeletedListener::anchor() {}
@@ -98,13 +98,14 @@ void SelectionDAG::DAGNodeInsertedListener::anchor() {}
 
 #define DEBUG_TYPE "selectiondag"
 
-static cl::opt<bool> EnableMemCpyDAGOpt("enable-memcpy-dag-opt",
-       cl::Hidden, cl::init(true),
-       cl::desc("Gang up loads and stores generated by inlining of memcpy"));
+static cl::opt<bool> EnableMemCpyDAGOpt(
+    "enable-memcpy-dag-opt", cl::Hidden, cl::init(true),
+    cl::desc("Gang up loads and stores generated by inlining of memcpy"));
 
-static cl::opt<int> MaxLdStGlue("ldstmemcpy-glue-max",
-       cl::desc("Number limit for gluing ld/st of memcpy."),
-       cl::Hidden, cl::init(0));
+static cl::opt<int>
+    MaxLdStGlue("ldstmemcpy-glue-max",
+                cl::desc("Number limit for gluing ld/st of memcpy."),
+                cl::Hidden, cl::init(0));
 
 static void NewSDValueDbgMsg(SDValue V, StringRef Msg, SelectionDAG *G) {
   LLVM_DEBUG(dbgs() << Msg; V.getNode()->dump(G););
@@ -118,20 +119,18 @@ static void NewSDValueDbgMsg(SDValue V, StringRef Msg, SelectionDAG *G) {
 /// it returns true for things that are clearly not equal, like -0.0 and 0.0.
 /// As such, this method can be used to do an exact bit-for-bit comparison of
 /// two floating point values.
-bool ConstantFPSDNode::isExactlyValue(const APFloat& V) const {
+bool ConstantFPSDNode::isExactlyValue(const APFloat &V) const {
   return getValueAPF().bitwiseIsEqual(V);
 }
 
-bool ConstantFPSDNode::isValueValidForType(EVT VT,
-                                           const APFloat& Val) {
+bool ConstantFPSDNode::isValueValidForType(EVT VT, const APFloat &Val) {
   assert(VT.isFloatingPoint() && "Can only convert between FP types");
 
   // convert modifies in place, so make a copy.
   APFloat Val2 = APFloat(Val);
   bool losesInfo;
-  (void) Val2.convert(SelectionDAG::EVTToAPFloatSemantics(VT),
-                      APFloat::rmNearestTiesToEven,
-                      &losesInfo);
+  (void)Val2.convert(SelectionDAG::EVTToAPFloatSemantics(VT),
+                     APFloat::rmNearestTiesToEven, &losesInfo);
   return !losesInfo;
 }
 
@@ -179,7 +178,8 @@ bool ISD::isConstantSplatVectorAllOnes(const SDNode *N, bool BuildVectorOnly) {
     return isConstantSplatVector(N, SplatVal) && SplatVal.isAllOnes();
   }
 
-  if (N->getOpcode() != ISD::BUILD_VECTOR) return false;
+  if (N->getOpcode() != ISD::BUILD_VECTOR)
+    return false;
 
   unsigned i = 0, e = N->getNumOperands();
 
@@ -188,7 +188,8 @@ bool ISD::isConstantSplatVectorAllOnes(const SDNode *N, bool BuildVectorOnly) {
     ++i;
 
   // Do not accept an all-undef vector.
-  if (i == e) return false;
+  if (i == e)
+    return false;
 
   // Do not accept build_vectors that aren't all constants or which have non-~0
   // elements. We have to be a bit careful here, as the type of the constant
@@ -228,7 +229,8 @@ bool ISD::isConstantSplatVectorAllZeros(const SDNode *N, bool BuildVectorOnly) {
     return isConstantSplatVector(N, SplatVal) && SplatVal.isZero();
   }
 
-  if (N->getOpcode() != ISD::BUILD_VECTOR) return false;
+  if (N->getOpcode() != ISD::BUILD_VECTOR)
+    return false;
 
   bool IsAllUndef = true;
   for (const SDValue &Op : N->op_values()) {
@@ -566,20 +568,20 @@ ISD::CondCode ISD::getSetCCSwappedOperands(ISD::CondCode Operation) {
   // operation.
   unsigned OldL = (Operation >> 2) & 1;
   unsigned OldG = (Operation >> 1) & 1;
-  return ISD::CondCode((Operation & ~6) |  // Keep the N, U, E bits
-                       (OldL << 1) |       // New G bit
-                       (OldG << 2));       // New L bit.
+  return ISD::CondCode((Operation & ~6) | // Keep the N, U, E bits
+                       (OldL << 1) |      // New G bit
+                       (OldG << 2));      // New L bit.
 }
 
 static ISD::CondCode getSetCCInverseImpl(ISD::CondCode Op, bool isIntegerLike) {
   unsigned Operation = Op;
   if (isIntegerLike)
-    Operation ^= 7;   // Flip L, G, E bits, but not U.
+    Operation ^= 7; // Flip L, G, E bits, but not U.
   else
-    Operation ^= 15;  // Flip all of the condition bits.
+    Operation ^= 15; // Flip all of the condition bits.
 
   if (Operation > ISD::SETTRUE2)
-    Operation &= ~8;  // Don't let N and U bits get set.
+    Operation &= ~8; // Don't let N and U bits get set.
 
   return ISD::CondCode(Operation);
 }
@@ -598,17 +600,21 @@ ISD::CondCode ISD::GlobalISel::getSetCCInverse(ISD::CondCode Op,
 /// does not depend on the sign of the input (setne and seteq).
 static int isSignedOp(ISD::CondCode Opcode) {
   switch (Opcode) {
-  default: llvm_unreachable("Illegal integer setcc operation!");
+  default:
+    llvm_unreachable("Illegal integer setcc operation!");
   case ISD::SETEQ:
-  case ISD::SETNE: return 0;
+  case ISD::SETNE:
+    return 0;
   case ISD::SETLT:
   case ISD::SETLE:
   case ISD::SETGT:
-  case ISD::SETGE: return 1;
+  case ISD::SETGE:
+    return 1;
   case ISD::SETULT:
   case ISD::SETULE:
   case ISD::SETUGT:
-  case ISD::SETUGE: return 2;
+  case ISD::SETUGE:
+    return 2;
   }
 }
 
@@ -619,15 +625,15 @@ ISD::CondCode ISD::getSetCCOrOperation(ISD::CondCode Op1, ISD::CondCode Op2,
     // Cannot fold a signed integer setcc with an unsigned integer setcc.
     return ISD::SETCC_INVALID;
 
-  unsigned Op = Op1 | Op2;  // Combine all of the condition bits.
+  unsigned Op = Op1 | Op2; // Combine all of the condition bits.
 
   // If the N and U bits get set, then the resultant comparison DOES suddenly
   // care about orderedness, and it is true when ordered.
   if (Op > ISD::SETTRUE2)
-    Op &= ~16;     // Clear the U bit if the N bit is set.
+    Op &= ~16; // Clear the U bit if the N bit is set.
 
   // Canonicalize illegal integer setcc's.
-  if (IsInteger && Op == ISD::SETUNE)  // e.g. SETUGT | SETULT
+  if (IsInteger && Op == ISD::SETUNE) // e.g. SETUGT | SETULT
     Op = ISD::SETNE;
 
   return ISD::CondCode(Op);
@@ -646,12 +652,21 @@ ISD::CondCode ISD::getSetCCAndOperation(ISD::CondCode Op1, ISD::CondCode Op2,
   // Canonicalize illegal integer setcc's.
   if (IsInteger) {
     switch (Result) {
-    default: break;
-    case ISD::SETUO : Result = ISD::SETFALSE; break;  // SETUGT & SETULT
-    case ISD::SETOEQ:                                 // SETEQ  & SETU[LG]E
-    case ISD::SETUEQ: Result = ISD::SETEQ   ; break;  // SETUGE & SETULE
-    case ISD::SETOLT: Result = ISD::SETULT  ; break;  // SETULT & SETNE
-    case ISD::SETOGT: Result = ISD::SETUGT  ; break;  // SETUGT & SETNE
+    default:
+      break;
+    case ISD::SETUO:
+      Result = ISD::SETFALSE;
+      break;          // SETUGT & SETULT
+    case ISD::SETOEQ: // SETEQ  & SETU[LG]E
+    case ISD::SETUEQ:
+      Result = ISD::SETEQ;
+      break; // SETUGE & SETULE
+    case ISD::SETOLT:
+      Result = ISD::SETULT;
+      break; // SETULT & SETNE
+    case ISD::SETOGT:
+      Result = ISD::SETUGT;
+      break; // SETUGT & SETNE
     }
   }
 
@@ -663,7 +678,7 @@ ISD::CondCode ISD::getSetCCAndOperation(ISD::CondCode Op1, ISD::CondCode Op2,
 //===----------------------------------------------------------------------===//
 
 /// AddNodeIDOpcode - Add the node opcode to the NodeID data.
-static void AddNodeIDOpcode(FoldingSetNodeID &ID, unsigned OpC)  {
+static void AddNodeIDOpcode(FoldingSetNodeID &ID, unsigned OpC) {
   ID.AddInteger(OpC);
 }
 
@@ -674,8 +689,7 @@ static void AddNodeIDValueTypes(FoldingSetNodeID &ID, SDVTList VTList) {
 }
 
 /// AddNodeIDOperands - Various routines for adding operands to the NodeID data.
-static void AddNodeIDOperands(FoldingSetNodeID &ID,
-                              ArrayRef<SDValue> Ops) {
+static void AddNodeIDOperands(FoldingSetNodeID &ID, ArrayRef<SDValue> Ops) {
   for (const auto &Op : Ops) {
     ID.AddPointer(Op.getNode());
     ID.AddInteger(Op.getResNo());
@@ -683,16 +697,15 @@ static void AddNodeIDOperands(FoldingSetNodeID &ID,
 }
 
 /// AddNodeIDOperands - Various routines for adding operands to the NodeID data.
-static void AddNodeIDOperands(FoldingSetNodeID &ID,
-                              ArrayRef<SDUse> Ops) {
+static void AddNodeIDOperands(FoldingSetNodeID &ID, ArrayRef<SDUse> Ops) {
   for (const auto &Op : Ops) {
     ID.AddPointer(Op.getNode());
     ID.AddInteger(Op.getResNo());
   }
 }
 
-static void AddNodeIDNode(FoldingSetNodeID &ID, unsigned OpC,
-                          SDVTList VTList, ArrayRef<SDValue> OpList) {
+static void AddNodeIDNode(FoldingSetNodeID &ID, unsigned OpC, SDVTList VTList,
+                          ArrayRef<SDValue> OpList) {
   AddNodeIDOpcode(ID, OpC);
   AddNodeIDValueTypes(ID, VTList);
   AddNodeIDOperands(ID, OpList);
@@ -705,7 +718,8 @@ static void AddNodeIDCustom(FoldingSetNodeID &ID, const SDNode *N) {
   case ISD::ExternalSymbol:
   case ISD::MCSymbol:
     llvm_unreachable("Should only be used on nodes with operands");
-  default: break;  // Normal nodes don't need extra info.
+  default:
+    break; // Normal nodes don't need extra info.
   case ISD::TargetConstant:
   case ISD::Constant: {
     const ConstantSDNode *C = cast<ConstantSDNode>(N);
@@ -898,8 +912,8 @@ static void AddNodeIDCustom(FoldingSetNodeID &ID, const SDNode *N) {
   }
   case ISD::VECTOR_SHUFFLE: {
     const ShuffleVectorSDNode *SVN = cast<ShuffleVectorSDNode>(N);
-    for (unsigned i = 0, e = N->getValueType(0).getVectorNumElements();
-         i != e; ++i)
+    for (unsigned i = 0, e = N->getValueType(0).getVectorNumElements(); i != e;
+         ++i)
       ID.AddInteger(SVN->getMaskElt(i));
     break;
   }
@@ -954,10 +968,11 @@ static bool doNotCSE(SDNode *N) {
     return true; // Never CSE anything that produces a flag.
 
   switch (N->getOpcode()) {
-  default: break;
+  default:
+    break;
   case ISD::HANDLENODE:
   case ISD::EH_LABEL:
-    return true;   // Never CSE these nodes.
+    return true; // Never CSE these nodes.
   }
 
   // Check that remaining values produced are not flags.
@@ -975,7 +990,7 @@ void SelectionDAG::RemoveDeadNodes() {
   // to the root node, preventing it from being deleted.
   HandleSDNode Dummy(getRoot());
 
-  SmallVector<SDNode*, 128> DeadNodes;
+  SmallVector<SDNode *, 128> DeadNodes;
 
   // Add all obviously-dead nodes to the DeadNodes worklist.
   for (SDNode &Node : allnodes())
@@ -1010,7 +1025,7 @@ void SelectionDAG::RemoveDeadNodes(SmallVectorImpl<SDNode *> &DeadNodes) {
 
     // Next, brutally remove the operand list.  This is safe to do, as there are
     // no cycles in the graph.
-    for (SDNode::op_iterator I = N->op_begin(), E = N->op_end(); I != E; ) {
+    for (SDNode::op_iterator I = N->op_begin(), E = N->op_end(); I != E;) {
       SDUse &Use = *I++;
       SDNode *Operand = Use.getNode();
       Use.set(SDValue());
@@ -1024,8 +1039,8 @@ void SelectionDAG::RemoveDeadNodes(SmallVectorImpl<SDNode *> &DeadNodes) {
   }
 }
 
-void SelectionDAG::RemoveDeadNode(SDNode *N){
-  SmallVector<SDNode*, 16> DeadNodes(1, N);
+void SelectionDAG::RemoveDeadNode(SDNode *N) {
+  SmallVector<SDNode *, 16> DeadNodes(1, N);
 
   // Create a dummy node that adds a reference to the root node, preventing
   // it from being deleted.  (This matters if the root is an operand of the
@@ -1070,7 +1085,7 @@ void SDDbgInfo::erase(const SDNode *Node) {
   DbgValMapType::iterator I = DbgValMap.find(Node);
   if (I == DbgValMap.end())
     return;
-  for (auto &Val: I->second)
+  for (auto &Val : I->second)
     Val->setIsInvalidated();
   DbgValMap.erase(I);
 }
@@ -1157,7 +1172,8 @@ void SelectionDAG::InsertNode(SDNode *N) {
 bool SelectionDAG::RemoveNodeFromCSEMaps(SDNode *N) {
   bool Erased = false;
   switch (N->getOpcode()) {
-  case ISD::HANDLENODE: return false;  // noop.
+  case ISD::HANDLENODE:
+    return false; // noop.
   case ISD::CONDCODE:
     assert(CondCodeNodes[cast<CondCodeSDNode>(N)->get()] &&
            "Cond code doesn't exist!");
@@ -1199,7 +1215,7 @@ bool SelectionDAG::RemoveNodeFromCSEMaps(SDNode *N) {
   // Verify that the node was actually in one of the CSE maps, unless it has a
   // flag result (which cannot be CSE'd) or is one of the special cases that are
   // not subject to CSE.
-  if (!Erased && N->getValueType(N->getNumValues()-1) != MVT::Glue &&
+  if (!Erased && N->getValueType(N->getNumValues() - 1) != MVT::Glue &&
       !N->isMachineOpcode() && !doNotCSE(N)) {
     N->dump(this);
     dbgs() << "\n";
@@ -1213,8 +1229,7 @@ bool SelectionDAG::RemoveNodeFromCSEMaps(SDNode *N) {
 /// maps and modified in place. Add it back to the CSE maps, unless an identical
 /// node already exists, in which case transfer all its users to the existing
 /// node. This transfer can potentially trigger recursive merging.
-void
-SelectionDAG::AddModifiedNodeToCSEMaps(SDNode *N) {
+void SelectionDAG::AddModifiedNodeToCSEMaps(SDNode *N) {
   // For node types that aren't CSE'd, just act as if no identical node
   // already exists.
   if (!doNotCSE(N)) {
@@ -1247,7 +1262,7 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N, SDValue Op,
   if (doNotCSE(N))
     return nullptr;
 
-  SDValue Ops[] = { Op };
+  SDValue Ops[] = {Op};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops);
   AddNodeIDCustom(ID, N);
@@ -1261,13 +1276,12 @@ SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N, SDValue Op,
 /// were replaced with those specified.  If this node is never memoized,
 /// return null, otherwise return a pointer to the slot it would take.  If a
 /// node already exists with these operands, the slot will be non-null.
-SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N,
-                                           SDValue Op1, SDValue Op2,
+SDNode *SelectionDAG::FindModifiedNodeSlot(SDNode *N, SDValue Op1, SDValue Op2,
                                            void *&InsertPos) {
   if (doNotCSE(N))
     return nullptr;
 
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, N->getOpcode(), N->getVTList(), Ops);
   AddNodeIDCustom(ID, N);
@@ -1304,8 +1318,8 @@ Align SelectionDAG::getEVTAlign(EVT VT) const {
 
 // EntryNode could meaningfully have debug info if we can find it...
 SelectionDAG::SelectionDAG(const TargetMachine &tm, CodeGenOpt::Level OL)
-    : TM(tm), OptLevel(OL),
-      EntryNode(ISD::EntryToken, 0, DebugLoc(), getVTList(MVT::Other, MVT::Glue)),
+    : TM(tm), OptLevel(OL), EntryNode(ISD::EntryToken, 0, DebugLoc(),
+                                      getVTList(MVT::Other, MVT::Glue)),
       Root(getEntryNode()) {
   InsertNode(&EntryNode);
   DbgInfo = new SDDbgInfo();
@@ -1339,7 +1353,7 @@ SelectionDAG::~SelectionDAG() {
 
 bool SelectionDAG::shouldOptForSize() const {
   return MF->getFunction().hasOptSize() ||
-      llvm::shouldOptimizeForSize(FLI->MBB->getBasicBlock(), PSI, BFI);
+         llvm::shouldOptimizeForSize(FLI->MBB->getBasicBlock(), PSI, BFI);
 }
 
 void SelectionDAG::allnodes_clear() {
@@ -1357,7 +1371,8 @@ SDNode *SelectionDAG::FindNodeOrInsertPos(const FoldingSetNodeID &ID,
   SDNode *N = CSEMap.FindNodeOrInsertPos(ID, InsertPos);
   if (N) {
     switch (N->getOpcode()) {
-    default: break;
+    default:
+      break;
     case ISD::Constant:
     case ISD::ConstantFP:
       llvm_unreachable("Querying for Constant and ConstantFP nodes requires "
@@ -1404,9 +1419,9 @@ void SelectionDAG::clear() {
   MCSymbols.clear();
   SDEI.clear();
   std::fill(CondCodeNodes.begin(), CondCodeNodes.end(),
-            static_cast<CondCodeSDNode*>(nullptr));
+            static_cast<CondCodeSDNode *>(nullptr));
   std::fill(ValueTypeNodes.begin(), ValueTypeNodes.end(),
-            static_cast<SDNode*>(nullptr));
+            static_cast<SDNode *>(nullptr));
 
   EntryNode.UseList = nullptr;
   InsertNode(&EntryNode);
@@ -1436,25 +1451,22 @@ SelectionDAG::getStrictFPExtendOrRound(SDValue Op, SDValue Chain,
 }
 
 SDValue SelectionDAG::getAnyExtOrTrunc(SDValue Op, const SDLoc &DL, EVT VT) {
-  return VT.bitsGT(Op.getValueType()) ?
-    getNode(ISD::ANY_EXTEND, DL, VT, Op) :
-    getNode(ISD::TRUNCATE, DL, VT, Op);
+  return VT.bitsGT(Op.getValueType()) ? getNode(ISD::ANY_EXTEND, DL, VT, Op)
+                                      : getNode(ISD::TRUNCATE, DL, VT, Op);
 }
 
 SDValue SelectionDAG::getSExtOrTrunc(SDValue Op, const SDLoc &DL, EVT VT) {
-  return VT.bitsGT(Op.getValueType()) ?
-    getNode(ISD::SIGN_EXTEND, DL, VT, Op) :
-    getNode(ISD::TRUNCATE, DL, VT, Op);
+  return VT.bitsGT(Op.getValueType()) ? getNode(ISD::SIGN_EXTEND, DL, VT, Op)
+                                      : getNode(ISD::TRUNCATE, DL, VT, Op);
 }
 
 SDValue SelectionDAG::getZExtOrTrunc(SDValue Op, const SDLoc &DL, EVT VT) {
-  return VT.bitsGT(Op.getValueType()) ?
-    getNode(ISD::ZERO_EXTEND, DL, VT, Op) :
-    getNode(ISD::TRUNCATE, DL, VT, Op);
+  return VT.bitsGT(Op.getValueType()) ? getNode(ISD::ZERO_EXTEND, DL, VT, Op)
+                                      : getNode(ISD::TRUNCATE, DL, VT, Op);
 }
 
 SDValue SelectionDAG::getBitcastedAnyExtOrTrunc(SDValue Op, const SDLoc &DL,
-                                                 EVT VT) {
+                                                EVT VT) {
   assert(!VT.isVector());
   auto Type = Op.getValueType();
   SDValue DestOp;
@@ -1469,7 +1481,7 @@ SDValue SelectionDAG::getBitcastedAnyExtOrTrunc(SDValue Op, const SDLoc &DL,
 }
 
 SDValue SelectionDAG::getBitcastedSExtOrTrunc(SDValue Op, const SDLoc &DL,
-                                               EVT VT) {
+                                              EVT VT) {
   assert(!VT.isVector());
   auto Type = Op.getValueType();
   SDValue DestOp;
@@ -1484,7 +1496,7 @@ SDValue SelectionDAG::getBitcastedSExtOrTrunc(SDValue Op, const SDLoc &DL,
 }
 
 SDValue SelectionDAG::getBitcastedZExtOrTrunc(SDValue Op, const SDLoc &DL,
-                                               EVT VT) {
+                                              EVT VT) {
   assert(!VT.isVector());
   auto Type = Op.getValueType();
   SDValue DestOp;
@@ -1631,8 +1643,7 @@ SDValue SelectionDAG::getConstant(const ConstantInt &Val, const SDLoc &DL,
     unsigned ViaEltSizeInBits = ViaEltVT.getSizeInBits();
 
     // For scalable vectors, try to use a SPLAT_VECTOR_PARTS node.
-    if (VT.isScalableVector() ||
-        TLI->isOperationLegal(ISD::SPLAT_VECTOR, VT)) {
+    if (VT.isScalableVector() || TLI->isOperationLegal(ISD::SPLAT_VECTOR, VT)) {
       assert(EltVT.getSizeInBits() % ViaEltSizeInBits == 0 &&
              "Can only handle an even split!");
       unsigned Parts = EltVT.getSizeInBits() / ViaEltSizeInBits;
@@ -1808,7 +1819,7 @@ SDValue SelectionDAG::getGlobalAddress(const GlobalValue *GV, const SDLoc &DL,
   auto *N = newSDNode<GlobalAddressSDNode>(
       Opc, DL.getIROrder(), DL.getDebugLoc(), GV, VT, Offset, TargetFlags);
   CSEMap.InsertNode(N, IP);
-    InsertNode(N);
+  InsertNode(N);
   return SDValue(N, 0);
 }
 
@@ -1922,14 +1933,15 @@ SDValue SelectionDAG::getBasicBlock(MachineBasicBlock *MBB) {
 }
 
 SDValue SelectionDAG::getValueType(EVT VT) {
-  if (VT.isSimple() && (unsigned)VT.getSimpleVT().SimpleTy >=
-      ValueTypeNodes.size())
-    ValueTypeNodes.resize(VT.getSimpleVT().SimpleTy+1);
+  if (VT.isSimple() &&
+      (unsigned)VT.getSimpleVT().SimpleTy >= ValueTypeNodes.size())
+    ValueTypeNodes.resize(VT.getSimpleVT().SimpleTy + 1);
 
-  SDNode *&N = VT.isExtended() ?
-    ExtendedValueTypeNodes[VT] : ValueTypeNodes[VT.getSimpleVT().SimpleTy];
+  SDNode *&N = VT.isExtended() ? ExtendedValueTypeNodes[VT]
+                               : ValueTypeNodes[VT.getSimpleVT().SimpleTy];
 
-  if (N) return SDValue(N, 0);
+  if (N)
+    return SDValue(N, 0);
   N = newSDNode<VTSDNode>(VT);
   InsertNode(N);
   return SDValue(N, 0);
@@ -1937,7 +1949,8 @@ SDValue SelectionDAG::getValueType(EVT VT) {
 
 SDValue SelectionDAG::getExternalSymbol(const char *Sym, EVT VT) {
   SDNode *&N = ExternalSymbols[Sym];
-  if (N) return SDValue(N, 0);
+  if (N)
+    return SDValue(N, 0);
   N = newSDNode<ExternalSymbolSDNode>(false, Sym, 0, VT);
   InsertNode(N);
   return SDValue(N, 0);
@@ -1956,7 +1969,8 @@ SDValue SelectionDAG::getTargetExternalSymbol(const char *Sym, EVT VT,
                                               unsigned TargetFlags) {
   SDNode *&N =
       TargetExternalSymbols[std::pair<std::string, unsigned>(Sym, TargetFlags)];
-  if (N) return SDValue(N, 0);
+  if (N)
+    return SDValue(N, 0);
   N = newSDNode<ExternalSymbolSDNode>(true, Sym, TargetFlags, VT);
   InsertNode(N);
   return SDValue(N, 0);
@@ -1964,7 +1978,7 @@ SDValue SelectionDAG::getTargetExternalSymbol(const char *Sym, EVT VT,
 
 SDValue SelectionDAG::getCondCode(ISD::CondCode Cond) {
   if ((unsigned)Cond >= CondCodeNodes.size())
-    CondCodeNodes.resize(Cond+1);
+    CondCodeNodes.resize(Cond + 1);
 
   if (!CondCodeNodes[Cond]) {
     auto *N = newSDNode<CondCodeSDNode>(Cond);
@@ -1997,8 +2011,7 @@ SDValue SelectionDAG::getVScale(const SDLoc &DL, EVT VT, APInt MulImm,
 SDValue SelectionDAG::getElementCount(const SDLoc &DL, EVT VT, ElementCount EC,
                                       bool ConstantFold) {
   if (EC.isScalable())
-    return getVScale(DL, VT,
-                     APInt(VT.getSizeInBits(), EC.getKnownMinValue()));
+    return getVScale(DL, VT, APInt(VT.getSizeInBits(), EC.getKnownMinValue()));
 
   return getConstant(EC.getKnownMinValue(), DL, VT);
 }
@@ -2043,9 +2056,9 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, const SDLoc &dl, SDValue N1,
   // Validate that all indices in Mask are within the range of the elements
   // input to the shuffle.
   int NElts = Mask.size();
-  assert(llvm::all_of(Mask,
-                      [&](int M) { return M < (NElts * 2) && M >= -1; }) &&
-         "Index out of range");
+  assert(
+      llvm::all_of(Mask, [&](int M) { return M < (NElts * 2) && M >= -1; }) &&
+      "Index out of range");
 
   // Copy the mask so we can do any needed cleanup.
   SmallVector<int, 8> MaskVec(Mask);
@@ -2054,7 +2067,8 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, const SDLoc &dl, SDValue N1,
   if (N1 == N2) {
     N2 = getUNDEF(VT);
     for (int i = 0; i != NElts; ++i)
-      if (MaskVec[i] >= NElts) MaskVec[i] -= NElts;
+      if (MaskVec[i] >= NElts)
+        MaskVec[i] -= NElts;
   }
 
   // Canonicalize shuffle undef, v -> v, undef.  Commute the shuffle mask.
@@ -2122,8 +2136,10 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, const SDLoc &dl, SDValue N1,
   // If Identity shuffle return that node.
   bool Identity = true, AllSame = true;
   for (int i = 0; i != NElts; ++i) {
-    if (MaskVec[i] >= 0 && MaskVec[i] != i) Identity = false;
-    if (MaskVec[i] != MaskVec[0]) AllSame = false;
+    if (MaskVec[i] >= 0 && MaskVec[i] != i)
+      Identity = false;
+    if (MaskVec[i] != MaskVec[0])
+      AllSame = false;
   }
   if (Identity && NElts)
     return N1;
@@ -2173,12 +2189,12 @@ SDValue SelectionDAG::getVectorShuffle(EVT VT, const SDLoc &dl, SDValue N1,
   }
 
   FoldingSetNodeID ID;
-  SDValue Ops[2] = { N1, N2 };
+  SDValue Ops[2] = {N1, N2};
   AddNodeIDNode(ID, ISD::VECTOR_SHUFFLE, getVTList(VT), Ops);
   for (int i = 0; i != NElts; ++i)
     ID.AddInteger(MaskVec[i]);
 
-  void* IP = nullptr;
+  void *IP = nullptr;
   if (SDNode *E = FindNodeOrInsertPos(ID, dl, IP))
     return SDValue(E, 0);
 
@@ -2246,7 +2262,7 @@ SDValue SelectionDAG::getEHLabel(const SDLoc &dl, SDValue Root,
 SDValue SelectionDAG::getLabelNode(unsigned Opcode, const SDLoc &dl,
                                    SDValue Root, MCSymbol *Label) {
   FoldingSetNodeID ID;
-  SDValue Ops[] = { Root };
+  SDValue Ops[] = {Root};
   AddNodeIDNode(ID, Opcode, getVTList(MVT::Other), Ops);
   ID.AddPointer(Label);
   void *IP = nullptr;
@@ -2349,7 +2365,8 @@ SDValue SelectionDAG::getFreeze(SDValue V) {
 SDValue SelectionDAG::getShiftAmountOperand(EVT LHSTy, SDValue Op) {
   EVT OpTy = Op.getValueType();
   EVT ShTy = TLI->getShiftAmountTy(LHSTy, getDataLayout());
-  if (OpTy == ShTy || OpTy.isVector()) return Op;
+  if (OpTy == ShTy || OpTy.isVector())
+    return Op;
 
   return getZExtOrTrunc(Op, SDLoc(Op), ShTy);
 }
@@ -2379,7 +2396,7 @@ SDValue SelectionDAG::expandVAArg(SDNode *Node) {
   // Increment the pointer, VAList, to the next vaarg
   Tmp1 = getNode(ISD::ADD, dl, VAList.getValueType(), VAList,
                  getConstant(getDataLayout().getTypeAllocSize(
-                                               VT.getTypeForEVT(*getContext())),
+                                 VT.getTypeForEVT(*getContext())),
                              dl, VAList.getValueType()));
   // Store the incremented VAList to the legalized pointer
   Tmp1 =
@@ -2483,11 +2500,14 @@ SDValue SelectionDAG::FoldSetCC(EVT VT, SDValue N1, SDValue N2,
 
   // These setcc operations always fold.
   switch (Cond) {
-  default: break;
+  default:
+    break;
   case ISD::SETFALSE:
-  case ISD::SETFALSE2: return getBoolConstant(false, dl, VT, OpVT);
+  case ISD::SETFALSE2:
+    return getBoolConstant(false, dl, VT, OpVT);
   case ISD::SETTRUE:
-  case ISD::SETTRUE2: return getBoolConstant(true, dl, VT, OpVT);
+  case ISD::SETTRUE2:
+    return getBoolConstant(true, dl, VT, OpVT);
 
   case ISD::SETOEQ:
   case ISD::SETOGT:
@@ -2539,58 +2559,69 @@ SDValue SelectionDAG::FoldSetCC(EVT VT, SDValue N1, SDValue N2,
   if (N1CFP && N2CFP) {
     APFloat::cmpResult R = N1CFP->getValueAPF().compare(N2CFP->getValueAPF());
     switch (Cond) {
-    default: break;
-    case ISD::SETEQ:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETOEQ: return getBoolConstant(R==APFloat::cmpEqual, dl, VT,
-                                             OpVT);
-    case ISD::SETNE:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETONE: return getBoolConstant(R==APFloat::cmpGreaterThan ||
-                                             R==APFloat::cmpLessThan, dl, VT,
-                                             OpVT);
-    case ISD::SETLT:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETOLT: return getBoolConstant(R==APFloat::cmpLessThan, dl, VT,
-                                             OpVT);
-    case ISD::SETGT:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETOGT: return getBoolConstant(R==APFloat::cmpGreaterThan, dl,
-                                             VT, OpVT);
-    case ISD::SETLE:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETOLE: return getBoolConstant(R==APFloat::cmpLessThan ||
-                                             R==APFloat::cmpEqual, dl, VT,
-                                             OpVT);
-    case ISD::SETGE:  if (R==APFloat::cmpUnordered)
-                        return GetUndefBooleanConstant();
-                      [[fallthrough]];
-    case ISD::SETOGE: return getBoolConstant(R==APFloat::cmpGreaterThan ||
-                                         R==APFloat::cmpEqual, dl, VT, OpVT);
-    case ISD::SETO:   return getBoolConstant(R!=APFloat::cmpUnordered, dl, VT,
-                                             OpVT);
-    case ISD::SETUO:  return getBoolConstant(R==APFloat::cmpUnordered, dl, VT,
-                                             OpVT);
-    case ISD::SETUEQ: return getBoolConstant(R==APFloat::cmpUnordered ||
-                                             R==APFloat::cmpEqual, dl, VT,
-                                             OpVT);
-    case ISD::SETUNE: return getBoolConstant(R!=APFloat::cmpEqual, dl, VT,
-                                             OpVT);
-    case ISD::SETULT: return getBoolConstant(R==APFloat::cmpUnordered ||
-                                             R==APFloat::cmpLessThan, dl, VT,
-                                             OpVT);
-    case ISD::SETUGT: return getBoolConstant(R==APFloat::cmpGreaterThan ||
-                                             R==APFloat::cmpUnordered, dl, VT,
-                                             OpVT);
-    case ISD::SETULE: return getBoolConstant(R!=APFloat::cmpGreaterThan, dl,
-                                             VT, OpVT);
-    case ISD::SETUGE: return getBoolConstant(R!=APFloat::cmpLessThan, dl, VT,
-                                             OpVT);
+    default:
+      break;
+    case ISD::SETEQ:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETOEQ:
+      return getBoolConstant(R == APFloat::cmpEqual, dl, VT, OpVT);
+    case ISD::SETNE:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETONE:
+      return getBoolConstant(R == APFloat::cmpGreaterThan ||
+                                 R == APFloat::cmpLessThan,
+                             dl, VT, OpVT);
+    case ISD::SETLT:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETOLT:
+      return getBoolConstant(R == APFloat::cmpLessThan, dl, VT, OpVT);
+    case ISD::SETGT:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETOGT:
+      return getBoolConstant(R == APFloat::cmpGreaterThan, dl, VT, OpVT);
+    case ISD::SETLE:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETOLE:
+      return getBoolConstant(
+          R == APFloat::cmpLessThan || R == APFloat::cmpEqual, dl, VT, OpVT);
+    case ISD::SETGE:
+      if (R == APFloat::cmpUnordered)
+        return GetUndefBooleanConstant();
+      [[fallthrough]];
+    case ISD::SETOGE:
+      return getBoolConstant(
+          R == APFloat::cmpGreaterThan || R == APFloat::cmpEqual, dl, VT, OpVT);
+    case ISD::SETO:
+      return getBoolConstant(R != APFloat::cmpUnordered, dl, VT, OpVT);
+    case ISD::SETUO:
+      return getBoolConstant(R == APFloat::cmpUnordered, dl, VT, OpVT);
+    case ISD::SETUEQ:
+      return getBoolConstant(
+          R == APFloat::cmpUnordered || R == APFloat::cmpEqual, dl, VT, OpVT);
+    case ISD::SETUNE:
+      return getBoolConstant(R != APFloat::cmpEqual, dl, VT, OpVT);
+    case ISD::SETULT:
+      return getBoolConstant(R == APFloat::cmpUnordered ||
+                                 R == APFloat::cmpLessThan,
+                             dl, VT, OpVT);
+    case ISD::SETUGT:
+      return getBoolConstant(R == APFloat::cmpGreaterThan ||
+                                 R == APFloat::cmpUnordered,
+                             dl, VT, OpVT);
+    case ISD::SETULE:
+      return getBoolConstant(R != APFloat::cmpGreaterThan, dl, VT, OpVT);
+    case ISD::SETUGE:
+      return getBoolConstant(R != APFloat::cmpLessThan, dl, VT, OpVT);
     }
   } else if (N1CFP && OpVT.isSimple() && !N2.isUndef()) {
     // Ensure that the constant occurs on the RHS.
@@ -2731,7 +2762,7 @@ bool SelectionDAG::isSplatValue(SDValue V, const APInt &DemandedElts,
       return TLI->isSplatValueForTargetNode(V, DemandedElts, UndefElts, *this,
                                             Depth);
     break;
-}
+  }
 
   // We don't support other cases than those above for scalable vectors at
   // the moment.
@@ -2877,8 +2908,8 @@ bool SelectionDAG::isSplatValue(SDValue V, bool AllowUndefs) const {
   // Since the number of lanes in a scalable vector is unknown at compile time,
   // we track one bit which is implicitly broadcast to all lanes.  This means
   // that all lanes in a scalable vector are considered demanded.
-  APInt DemandedElts
-    = APInt::getAllOnes(VT.isScalableVector() ? 1 : VT.getVectorNumElements());
+  APInt DemandedElts =
+      APInt::getAllOnes(VT.isScalableVector() ? 1 : VT.getVectorNumElements());
   return isSplatValue(V, DemandedElts, UndefElts) &&
          (AllowUndefs || !UndefElts);
 }
@@ -2891,11 +2922,11 @@ SDValue SelectionDAG::getSplatSourceVector(SDValue V, int &SplatIdx) {
   switch (Opcode) {
   default: {
     APInt UndefElts;
-    // Since the number of lanes in a scalable vector is unknown at compile time,
-    // we track one bit which is implicitly broadcast to all lanes.  This means
-    // that all lanes in a scalable vector are considered demanded.
-    APInt DemandedElts
-      = APInt::getAllOnes(VT.isScalableVector() ? 1 : VT.getVectorNumElements());
+    // Since the number of lanes in a scalable vector is unknown at compile
+    // time, we track one bit which is implicitly broadcast to all lanes.  This
+    // means that all lanes in a scalable vector are considered demanded.
+    APInt DemandedElts = APInt::getAllOnes(
+        VT.isScalableVector() ? 1 : VT.getVectorNumElements());
 
     if (isSplatValue(V, DemandedElts, UndefElts)) {
       if (VT.isScalableVector()) {
@@ -3049,7 +3080,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
                                          unsigned Depth) const {
   unsigned BitWidth = Op.getScalarValueSizeInBits();
 
-  KnownBits Known(BitWidth);   // Don't know anything.
+  KnownBits Known(BitWidth); // Don't know anything.
 
   if (auto *C = dyn_cast<ConstantSDNode>(Op)) {
     // We know all of the bits for a constant!
@@ -3061,7 +3092,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
   }
 
   if (Depth >= MaxRecursionDepth)
-    return Known;  // Limit search depth.
+    return Known; // Limit search depth.
 
   KnownBits Known2;
   unsigned NumElts = DemandedElts.getBitWidth();
@@ -3070,7 +3101,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
          "Unexpected vector size");
 
   if (!DemandedElts)
-    return Known;  // No demanded elts, better to assume we don't know anything.
+    return Known; // No demanded elts, better to assume we don't know anything.
 
   unsigned Opcode = Op.getOpcode();
   switch (Opcode) {
@@ -3098,7 +3129,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
   case ISD::BUILD_VECTOR:
     assert(!Op.getValueType().isScalableVector());
     // Collect the known bits that are shared by every demanded vector element.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     for (unsigned i = 0, e = Op.getNumOperands(); i != e; ++i) {
       if (!DemandedElts[i])
         continue;
@@ -3133,7 +3165,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
       break;
 
     // Known bits are the values that are shared by every demanded element.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     if (!!DemandedLHS) {
       SDValue LHS = Op.getOperand(0);
       Known2 = computeKnownBits(LHS, DemandedLHS, Depth + 1);
@@ -3159,7 +3192,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     if (Op.getValueType().isScalableVector())
       break;
     // Split DemandedElts and test each of the demanded subvectors.
-    Known.Zero.setAllBits(); Known.One.setAllBits();
+    Known.Zero.setAllBits();
+    Known.One.setAllBits();
     EVT SubVectorVT = Op.getOperand(0).getValueType();
     unsigned NumSubVectorElts = SubVectorVT.getVectorNumElements();
     unsigned NumSubVectors = Op.getNumOperands();
@@ -3207,7 +3241,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     // Offset the demanded elts by the subvector index.
     SDValue Src = Op.getOperand(0);
     // Bail until we can represent demanded elements for scalable vectors.
-    if (Op.getValueType().isScalableVector() || Src.getValueType().isScalableVector())
+    if (Op.getValueType().isScalableVector() ||
+        Src.getValueType().isScalableVector())
       break;
     uint64_t Idx = Op.getConstantOperandVal(1);
     unsigned NumSrcElts = Src.getValueType().getVectorNumElements();
@@ -3265,8 +3300,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
           SubDemandedElts.setBit(i * SubScale);
 
       for (unsigned i = 0; i != SubScale; ++i) {
-        Known2 = computeKnownBits(N0, SubDemandedElts.shl(i),
-                         Depth + 1);
+        Known2 = computeKnownBits(N0, SubDemandedElts.shl(i), Depth + 1);
         unsigned Shifts = IsLE ? i : SubScale - 1 - i;
         Known.insertBits(Known2, SubBitWidth * Shifts);
       }
@@ -3284,7 +3318,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
           APIntOps::ScaleBitMask(DemandedElts, NumElts / SubScale);
       Known2 = computeKnownBits(N0, SubDemandedElts, Depth + 1);
 
-      Known.Zero.setAllBits(); Known.One.setAllBits();
+      Known.Zero.setAllBits();
+      Known.One.setAllBits();
       for (unsigned i = 0; i != NumElts; ++i)
         if (DemandedElts[i]) {
           unsigned Shifts = IsLE ? i : NumElts - 1 - i;
@@ -3329,8 +3364,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     // with itself is non-negative. Only do this if we didn't already computed
     // the opposite value for the sign bit.
     if (Op->getFlags().hasNoSignedWrap() &&
-        Op.getOperand(0) == Op.getOperand(1) &&
-        !Known.isNegative())
+        Op.getOperand(0) == Op.getOperand(1) && !Known.isNegative())
       Known.makeNonNegative();
     break;
   }
@@ -3380,21 +3414,21 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
   }
   case ISD::SELECT:
   case ISD::VSELECT:
-    Known = computeKnownBits(Op.getOperand(2), DemandedElts, Depth+1);
+    Known = computeKnownBits(Op.getOperand(2), DemandedElts, Depth + 1);
     // If we don't know any bits, early out.
     if (Known.isUnknown())
       break;
-    Known2 = computeKnownBits(Op.getOperand(1), DemandedElts, Depth+1);
+    Known2 = computeKnownBits(Op.getOperand(1), DemandedElts, Depth + 1);
 
     // Only known if known in both the LHS and RHS.
     Known = Known.intersectWith(Known2);
     break;
   case ISD::SELECT_CC:
-    Known = computeKnownBits(Op.getOperand(3), DemandedElts, Depth+1);
+    Known = computeKnownBits(Op.getOperand(3), DemandedElts, Depth + 1);
     // If we don't know any bits, early out.
     if (Known.isUnknown())
       break;
-    Known2 = computeKnownBits(Op.getOperand(2), DemandedElts, Depth+1);
+    Known2 = computeKnownBits(Op.getOperand(2), DemandedElts, Depth + 1);
 
     // Only known if known in both the LHS and RHS.
     Known = Known.intersectWith(Known2);
@@ -3451,7 +3485,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     break;
   case ISD::FSHL:
   case ISD::FSHR:
-    if (ConstantSDNode *C = isConstOrConstSplat(Op.getOperand(2), DemandedElts)) {
+    if (ConstantSDNode *C =
+            isConstOrConstSplat(Op.getOperand(2), DemandedElts)) {
       unsigned Amt = C->getAPIntValue().urem(BitWidth);
 
       // For fshl, 0-shift returns the 1st arg.
@@ -3682,7 +3717,7 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     APInt InMask = APInt::getLowBitsSet(BitWidth, VT.getSizeInBits());
     Known = computeKnownBits(Op.getOperand(0), DemandedElts, Depth + 1);
     Known.Zero |= (~InMask);
-    Known.One  &= (~Known.Zero);
+    Known.One &= (~Known.Zero);
     break;
   }
   case ISD::AssertAlign: {
@@ -3732,8 +3767,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
 
     Known = computeKnownBits(Op.getOperand(0), DemandedElts, Depth + 1);
     Known2 = computeKnownBits(Op.getOperand(1), DemandedElts, Depth + 1);
-    Known = KnownBits::computeForAddSub(/* Add */ false, /* NSW */ false,
-                                        Known, Known2);
+    Known = KnownBits::computeForAddSub(/* Add */ false, /* NSW */ false, Known,
+                                        Known2);
     break;
   }
   case ISD::UADDO:
@@ -3798,12 +3833,13 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
     break;
   }
   case ISD::EXTRACT_ELEMENT: {
-    Known = computeKnownBits(Op.getOperand(0), Depth+1);
+    Known = computeKnownBits(Op.getOperand(0), Depth + 1);
     const unsigned Index = Op.getConstantOperandVal(1);
     const unsigned EltBitWidth = Op.getValueSizeInBits();
 
     // Remove low part of known bits mask
-    Known.Zero = Known.Zero.getHiBits(Known.getBitWidth() - Index * EltBitWidth);
+    Known.Zero =
+        Known.Zero.getHiBits(Known.getBitWidth() - Index * EltBitWidth);
     Known.One = Known.One.getHiBits(Known.getBitWidth() - Index * EltBitWidth);
 
     // Remove high part of known bit mask
@@ -4023,7 +4059,8 @@ KnownBits SelectionDAG::computeKnownBits(SDValue Op, const APInt &DemandedElts,
 }
 
 /// Convert ConstantRange OverflowResult into SelectionDAG::OverflowKind.
-static SelectionDAG::OverflowKind mapOverflowResult(ConstantRange::OverflowResult OR) {
+static SelectionDAG::OverflowKind
+mapOverflowResult(ConstantRange::OverflowResult OR) {
   switch (OR) {
   case ConstantRange::OverflowResult::MayOverflow:
     return SelectionDAG::OFK_Sometime;
@@ -4259,20 +4296,21 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
   }
 
   if (Depth >= MaxRecursionDepth)
-    return 1;  // Limit search depth.
+    return 1; // Limit search depth.
 
   if (!DemandedElts)
-    return 1;  // No demanded elts, better to assume we don't know anything.
+    return 1; // No demanded elts, better to assume we don't know anything.
 
   unsigned Opcode = Op.getOpcode();
   switch (Opcode) {
-  default: break;
+  default:
+    break;
   case ISD::AssertSext:
     Tmp = cast<VTSDNode>(Op.getOperand(1))->getVT().getSizeInBits();
-    return VTBits-Tmp+1;
+    return VTBits - Tmp + 1;
   case ISD::AssertZext:
     Tmp = cast<VTSDNode>(Op.getOperand(1))->getVT().getSizeInBits();
-    return VTBits-Tmp;
+    return VTBits - Tmp;
   case ISD::MERGE_VALUES:
     return ComputeNumSignBits(Op.getOperand(Op.getResNo()), DemandedElts,
                               Depth + 1);
@@ -4387,12 +4425,12 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     return VTBits - Tmp + 1;
   case ISD::SIGN_EXTEND:
     Tmp = VTBits - Op.getOperand(0).getScalarValueSizeInBits();
-    return ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth+1) + Tmp;
+    return ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1) + Tmp;
   case ISD::SIGN_EXTEND_INREG:
     // Max of the input and what this extends.
     Tmp = cast<VTSDNode>(Op.getOperand(1))->getVT().getScalarSizeInBits();
-    Tmp = VTBits-Tmp+1;
-    Tmp2 = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth+1);
+    Tmp = VTBits - Tmp + 1;
+    Tmp2 = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
     return std::max(Tmp, Tmp2);
   case ISD::SIGN_EXTEND_VECTOR_INREG: {
     if (VT.isScalableVector())
@@ -4401,7 +4439,7 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     EVT SrcVT = Src.getValueType();
     APInt DemandedSrcElts = DemandedElts.zext(SrcVT.getVectorNumElements());
     Tmp = VTBits - SrcVT.getScalarSizeInBits();
-    return ComputeNumSignBits(Src, DemandedSrcElts, Depth+1) + Tmp;
+    return ComputeNumSignBits(Src, DemandedSrcElts, Depth + 1) + Tmp;
   }
   case ISD::SRA:
     Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
@@ -4421,11 +4459,11 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     break;
   case ISD::AND:
   case ISD::OR:
-  case ISD::XOR:    // NOT is handled here.
+  case ISD::XOR: // NOT is handled here.
     // Logical binary ops preserve the number of sign bits at the worst.
-    Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth+1);
+    Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
     if (Tmp != 1) {
-      Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth+1);
+      Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
       FirstAnswer = std::min(Tmp, Tmp2);
       // We computed what we know about the sign bits as our first
       // answer. Now proceed to the generic code that uses
@@ -4435,14 +4473,16 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
 
   case ISD::SELECT:
   case ISD::VSELECT:
-    Tmp = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth+1);
-    if (Tmp == 1) return 1;  // Early out.
-    Tmp2 = ComputeNumSignBits(Op.getOperand(2), DemandedElts, Depth+1);
+    Tmp = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
+    if (Tmp == 1)
+      return 1; // Early out.
+    Tmp2 = ComputeNumSignBits(Op.getOperand(2), DemandedElts, Depth + 1);
     return std::min(Tmp, Tmp2);
   case ISD::SELECT_CC:
-    Tmp = ComputeNumSignBits(Op.getOperand(2), DemandedElts, Depth+1);
-    if (Tmp == 1) return 1;  // Early out.
-    Tmp2 = ComputeNumSignBits(Op.getOperand(3), DemandedElts, Depth+1);
+    Tmp = ComputeNumSignBits(Op.getOperand(2), DemandedElts, Depth + 1);
+    if (Tmp == 1)
+      return 1; // Early out.
+    Tmp2 = ComputeNumSignBits(Op.getOperand(3), DemandedElts, Depth + 1);
     return std::min(Tmp, Tmp2);
 
   case ISD::SMIN:
@@ -4468,7 +4508,7 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     // Fallback - just get the minimum number of sign bits of the operands.
     Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
     if (Tmp == 1)
-      return 1;  // Early out.
+      return 1; // Early out.
     Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
     return std::min(Tmp, Tmp2);
   }
@@ -4476,7 +4516,7 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
   case ISD::UMAX:
     Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
     if (Tmp == 1)
-      return 1;  // Early out.
+      return 1; // Early out.
     Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
     return std::min(Tmp, Tmp2);
   case ISD::SADDO:
@@ -4528,7 +4568,8 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
 
       // If we aren't rotating out all of the known-in sign bits, return the
       // number that are left.  This handles rotl(sext(x), 1) for example.
-      if (Tmp > (RotAmt + 1)) return (Tmp - RotAmt);
+      if (Tmp > (RotAmt + 1))
+        return (Tmp - RotAmt);
     }
     break;
   case ISD::ADD:
@@ -4536,7 +4577,8 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     // Add can have at most one carry bit.  Thus we know that the output
     // is, at worst, one more bit than the inputs.
     Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
-    if (Tmp == 1) return 1; // Early out.
+    if (Tmp == 1)
+      return 1; // Early out.
 
     // Special case decrementing a value (ADD X, -1):
     if (ConstantSDNode *CRHS =
@@ -4557,11 +4599,13 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
       }
 
     Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
-    if (Tmp2 == 1) return 1; // Early out.
+    if (Tmp2 == 1)
+      return 1; // Early out.
     return std::min(Tmp, Tmp2) - 1;
   case ISD::SUB:
     Tmp2 = ComputeNumSignBits(Op.getOperand(1), DemandedElts, Depth + 1);
-    if (Tmp2 == 1) return 1; // Early out.
+    if (Tmp2 == 1)
+      return 1; // Early out.
 
     // Handle NEG.
     if (ConstantSDNode *CLHS =
@@ -4585,7 +4629,8 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     // Sub can have at most one carry bit.  Thus we know that the output
     // is, at worst, one more bit than the inputs.
     Tmp = ComputeNumSignBits(Op.getOperand(0), DemandedElts, Depth + 1);
-    if (Tmp == 1) return 1; // Early out.
+    if (Tmp == 1)
+      return 1; // Early out.
     return std::min(Tmp, Tmp2) - 1;
   case ISD::MUL: {
     // The output of the Mul can be at most twice the valid bits in the inputs.
@@ -4616,7 +4661,7 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
   case ISD::EXTRACT_ELEMENT: {
     if (VT.isScalableVector())
       break;
-    const int KnownSign = ComputeNumSignBits(Op.getOperand(0), Depth+1);
+    const int KnownSign = ComputeNumSignBits(Op.getOperand(0), Depth + 1);
     const int BitWidth = Op.getValueSizeInBits();
     const int Items = Op.getOperand(0).getValueSizeInBits() / BitWidth;
 
@@ -4808,7 +4853,8 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
     if (LoadSDNode *LD = dyn_cast<LoadSDNode>(Op)) {
       unsigned ExtType = LD->getExtensionType();
       switch (ExtType) {
-      default: break;
+      default:
+        break;
       case ISD::SEXTLOAD: // e.g. i16->i32 = '17' bits known.
         Tmp = LD->getMemoryVT().getScalarSizeInBits();
         return VTBits - Tmp + 1;
@@ -4851,15 +4897,13 @@ unsigned SelectionDAG::ComputeNumSignBits(SDValue Op, const APInt &DemandedElts,
   }
 
   // Allow the target to implement this method for its nodes.
-  if (Opcode >= ISD::BUILTIN_OP_END ||
-      Opcode == ISD::INTRINSIC_WO_CHAIN ||
-      Opcode == ISD::INTRINSIC_W_CHAIN ||
-      Opcode == ISD::INTRINSIC_VOID) {
+  if (Opcode >= ISD::BUILTIN_OP_END || Opcode == ISD::INTRINSIC_WO_CHAIN ||
+      Opcode == ISD::INTRINSIC_W_CHAIN || Opcode == ISD::INTRINSIC_VOID) {
     // TODO: This can probably be removed once target code is audited.  This
     // is here purely to reduce patch size and review complexity.
     if (!VT.isScalableVector()) {
       unsigned NumBits =
-        TLI->ComputeNumSignBitsForTargetNode(Op, DemandedElts, *this, Depth);
+          TLI->ComputeNumSignBitsForTargetNode(Op, DemandedElts, *this, Depth);
       if (NumBits > 1)
         FirstAnswer = std::max(FirstAnswer, NumBits);
     }
@@ -5030,7 +5074,7 @@ bool SelectionDAG::canCreateUndefOrPoison(SDValue Op, const APInt &DemandedElts,
     return ConsiderFlags && (Op->getFlags().hasNoSignedWrap() ||
                              Op->getFlags().hasNoUnsignedWrap());
 
-  case ISD::INSERT_VECTOR_ELT:{
+  case ISD::INSERT_VECTOR_ELT: {
     // Ensure that the element index is in bounds.
     EVT VecVT = Op.getOperand(0).getValueType();
     KnownBits KnownIdx = computeKnownBits(Op.getOperand(2), Depth + 1);
@@ -5071,7 +5115,8 @@ bool SelectionDAG::isBaseWithConstantOffset(SDValue Op) const {
   return true;
 }
 
-bool SelectionDAG::isKnownNeverNaN(SDValue Op, bool SNaN, unsigned Depth) const {
+bool SelectionDAG::isKnownNeverNaN(SDValue Op, bool SNaN,
+                                   unsigned Depth) const {
   // If we're told that NaNs won't happen, assume they won't.
   if (getTarget().Options.NoNaNsFPMath || Op->getFlags().hasNoNaNs())
     return true;
@@ -5179,10 +5224,8 @@ bool SelectionDAG::isKnownNeverNaN(SDValue Op, bool SNaN, unsigned Depth) const
     return true;
   }
   default:
-    if (Opcode >= ISD::BUILTIN_OP_END ||
-        Opcode == ISD::INTRINSIC_WO_CHAIN ||
-        Opcode == ISD::INTRINSIC_W_CHAIN ||
-        Opcode == ISD::INTRINSIC_VOID) {
+    if (Opcode >= ISD::BUILTIN_OP_END || Opcode == ISD::INTRINSIC_WO_CHAIN ||
+        Opcode == ISD::INTRINSIC_W_CHAIN || Opcode == ISD::INTRINSIC_VOID) {
       return TLI->isKnownNeverNaNForTargetNode(Op, *this, SNaN, Depth);
     }
 
@@ -5191,8 +5234,7 @@ bool SelectionDAG::isKnownNeverNaN(SDValue Op, bool SNaN, unsigned Depth) const
 }
 
 bool SelectionDAG::isKnownNeverZeroFloat(SDValue Op) const {
-  assert(Op.getValueType().isFloatingPoint() &&
-         "Floating point type expected");
+  assert(Op.getValueType().isFloatingPoint() && "Floating point type expected");
 
   // If the value is a constant, we can obviously see if it is a zero or not.
   if (const ConstantFPSDNode *C = dyn_cast<ConstantFPSDNode>(Op))
@@ -5332,12 +5374,14 @@ bool SelectionDAG::isKnownNeverZero(SDValue Op, unsigned Depth) const {
 
 bool SelectionDAG::isEqualTo(SDValue A, SDValue B) const {
   // Check the obvious case.
-  if (A == B) return true;
+  if (A == B)
+    return true;
 
   // For negative and positive zero.
   if (const ConstantFPSDNode *CA = dyn_cast<ConstantFPSDNode>(A))
     if (const ConstantFPSDNode *CB = dyn_cast<ConstantFPSDNode>(B))
-      if (CA->isZero() && CB->isZero()) return true;
+      if (CA->isZero() && CB->isZero())
+        return true;
 
   // Otherwise they may not be equal.
   return false;
@@ -5414,8 +5458,7 @@ static SDValue FoldSTEP_VECTOR(const SDLoc &DL, EVT VT, SDValue Step,
   return SDValue();
 }
 
-static SDValue FoldBUILD_VECTOR(const SDLoc &DL, EVT VT,
-                                ArrayRef<SDValue> Ops,
+static SDValue FoldBUILD_VECTOR(const SDLoc &DL, EVT VT, ArrayRef<SDValue> Ops,
                                 SelectionDAG &DAG) {
   int NumOps = Ops.size();
   assert(NumOps != 0 && "Can't build an empty vector!");
@@ -5451,8 +5494,7 @@ static SDValue FoldBUILD_VECTOR(const SDLoc &DL, EVT VT,
 /// Try to simplify vector concatenation to an input value, undef, or build
 /// vector.
 static SDValue foldCONCAT_VECTORS(const SDLoc &DL, EVT VT,
-                                  ArrayRef<SDValue> Ops,
-                                  SelectionDAG &DAG) {
+                                  ArrayRef<SDValue> Ops, SelectionDAG &DAG) {
   assert(!Ops.empty() && "Can't concatenate an empty list of vectors!");
   assert(llvm::all_of(Ops,
                       [Ops](SDValue Op) {
@@ -5623,7 +5665,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::TokenFactor:
   case ISD::MERGE_VALUES:
   case ISD::CONCAT_VECTORS:
-    return N1;         // Factor, merge or concat of one node?  No need.
+    return N1; // Factor, merge or concat of one node?  No need.
   case ISD::BUILD_VECTOR: {
     // Attempt to simplify BUILD_VECTOR.
     SDValue Ops[] = {N1};
@@ -5631,11 +5673,13 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
       return V;
     break;
   }
-  case ISD::FP_ROUND: llvm_unreachable("Invalid method to make FP_ROUND node");
+  case ISD::FP_ROUND:
+    llvm_unreachable("Invalid method to make FP_ROUND node");
   case ISD::FP_EXTEND:
     assert(VT.isFloatingPoint() && N1.getValueType().isFloatingPoint() &&
            "Invalid FP cast!");
-    if (N1.getValueType() == VT) return N1;  // noop conversion.
+    if (N1.getValueType() == VT)
+      return N1; // noop conversion.
     assert((!VT.isVector() || VT.getVectorElementCount() ==
                                   N1.getValueType().getVectorElementCount()) &&
            "Vector element count mismatch!");
@@ -5660,7 +5704,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     assert(VT.isVector() == N1.getValueType().isVector() &&
            "SIGN_EXTEND result type type should be vector iff the operand "
            "type is vector!");
-    if (N1.getValueType() == VT) return N1;   // noop extension
+    if (N1.getValueType() == VT)
+      return N1; // noop extension
     assert((!VT.isVector() || VT.getVectorElementCount() ==
                                   N1.getValueType().getVectorElementCount()) &&
            "Vector element count mismatch!");
@@ -5677,7 +5722,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     assert(VT.isVector() == N1.getValueType().isVector() &&
            "ZERO_EXTEND result type type should be vector iff the operand "
            "type is vector!");
-    if (N1.getValueType() == VT) return N1;   // noop extension
+    if (N1.getValueType() == VT)
+      return N1; // noop extension
     assert((!VT.isVector() || VT.getVectorElementCount() ==
                                   N1.getValueType().getVectorElementCount()) &&
            "Vector element count mismatch!");
@@ -5710,7 +5756,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     assert(VT.isVector() == N1.getValueType().isVector() &&
            "ANY_EXTEND result type type should be vector iff the operand "
            "type is vector!");
-    if (N1.getValueType() == VT) return N1;   // noop extension
+    if (N1.getValueType() == VT)
+      return N1; // noop extension
     assert((!VT.isVector() || VT.getVectorElementCount() ==
                                   N1.getValueType().getVectorElementCount()) &&
            "Vector element count mismatch!");
@@ -5738,7 +5785,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     assert(VT.isVector() == N1.getValueType().isVector() &&
            "TRUNCATE result type type should be vector iff the operand "
            "type is vector!");
-    if (N1.getValueType() == VT) return N1;   // noop truncate
+    if (N1.getValueType() == VT)
+      return N1; // noop truncate
     assert((!VT.isVector() || VT.getVectorElementCount() ==
                                   N1.getValueType().getVectorElementCount()) &&
            "Vector element count mismatch!");
@@ -5794,7 +5842,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::BITCAST:
     assert(VT.getSizeInBits() == N1.getValueSizeInBits() &&
            "Cannot BITCAST between types of different sizes!");
-    if (VT == N1.getValueType()) return N1;   // noop conversion.
+    if (VT == N1.getValueType())
+      return N1;                  // noop conversion.
     if (OpOpcode == ISD::BITCAST) // bitconv(bitconv(x)) -> bitconv(x)
       return getNode(ISD::BITCAST, DL, VT, N1.getOperand(0));
     if (OpOpcode == ISD::UNDEF)
@@ -5886,27 +5935,48 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
 static std::optional<APInt> FoldValue(unsigned Opcode, const APInt &C1,
                                       const APInt &C2) {
   switch (Opcode) {
-  case ISD::ADD:  return C1 + C2;
-  case ISD::SUB:  return C1 - C2;
-  case ISD::MUL:  return C1 * C2;
-  case ISD::AND:  return C1 & C2;
-  case ISD::OR:   return C1 | C2;
-  case ISD::XOR:  return C1 ^ C2;
-  case ISD::SHL:  return C1 << C2;
-  case ISD::SRL:  return C1.lshr(C2);
-  case ISD::SRA:  return C1.ashr(C2);
-  case ISD::ROTL: return C1.rotl(C2);
-  case ISD::ROTR: return C1.rotr(C2);
-  case ISD::SMIN: return C1.sle(C2) ? C1 : C2;
-  case ISD::SMAX: return C1.sge(C2) ? C1 : C2;
-  case ISD::UMIN: return C1.ule(C2) ? C1 : C2;
-  case ISD::UMAX: return C1.uge(C2) ? C1 : C2;
-  case ISD::SADDSAT: return C1.sadd_sat(C2);
-  case ISD::UADDSAT: return C1.uadd_sat(C2);
-  case ISD::SSUBSAT: return C1.ssub_sat(C2);
-  case ISD::USUBSAT: return C1.usub_sat(C2);
-  case ISD::SSHLSAT: return C1.sshl_sat(C2);
-  case ISD::USHLSAT: return C1.ushl_sat(C2);
+  case ISD::ADD:
+    return C1 + C2;
+  case ISD::SUB:
+    return C1 - C2;
+  case ISD::MUL:
+    return C1 * C2;
+  case ISD::AND:
+    return C1 & C2;
+  case ISD::OR:
+    return C1 | C2;
+  case ISD::XOR:
+    return C1 ^ C2;
+  case ISD::SHL:
+    return C1 << C2;
+  case ISD::SRL:
+    return C1.lshr(C2);
+  case ISD::SRA:
+    return C1.ashr(C2);
+  case ISD::ROTL:
+    return C1.rotl(C2);
+  case ISD::ROTR:
+    return C1.rotr(C2);
+  case ISD::SMIN:
+    return C1.sle(C2) ? C1 : C2;
+  case ISD::SMAX:
+    return C1.sge(C2) ? C1 : C2;
+  case ISD::UMIN:
+    return C1.ule(C2) ? C1 : C2;
+  case ISD::UMAX:
+    return C1.uge(C2) ? C1 : C2;
+  case ISD::SADDSAT:
+    return C1.sadd_sat(C2);
+  case ISD::UADDSAT:
+    return C1.uadd_sat(C2);
+  case ISD::SSUBSAT:
+    return C1.ssub_sat(C2);
+  case ISD::USUBSAT:
+    return C1.usub_sat(C2);
+  case ISD::SSHLSAT:
+    return C1.sshl_sat(C2);
+  case ISD::USHLSAT:
+    return C1.ushl_sat(C2);
   case ISD::UDIV:
     if (!C2.getBoolValue())
       break;
@@ -5995,9 +6065,13 @@ SDValue SelectionDAG::FoldSymbolOffset(unsigned Opcode, EVT VT,
     return SDValue();
   int64_t Offset = C2->getSExtValue();
   switch (Opcode) {
-  case ISD::ADD: break;
-  case ISD::SUB: Offset = -uint64_t(Offset); break;
-  default: return SDValue();
+  case ISD::ADD:
+    break;
+  case ISD::SUB:
+    Offset = -uint64_t(Offset);
+    break;
+  default:
+    return SDValue();
   }
   return getGlobalAddress(GA->getGlobal(), SDLoc(C2), VT,
                           GA->getOffset() + uint64_t(Offset));
@@ -6017,9 +6091,9 @@ bool SelectionDAG::isUndef(unsigned Opcode, ArrayRef<SDValue> Ops) {
       return true;
 
     return ISD::isBuildVectorOfConstantSDNodes(Divisor.getNode()) &&
-           llvm::any_of(Divisor->op_values(),
-                        [](SDValue V) { return V.isUndef() ||
-                                        isNullConstant(V); });
+           llvm::any_of(Divisor->op_values(), [](SDValue V) {
+             return V.isUndef() || isNullConstant(V);
+           });
     // TODO: Handle signed overflow.
   }
   // TODO: Handle oversized shifts.
@@ -6443,16 +6517,17 @@ SDValue SelectionDAG::foldConstantFPMath(unsigned Opcode, const SDLoc &DL,
       return getConstantFP(minimum(C1, C2), DL, VT);
     case ISD::FMAXIMUM:
       return getConstantFP(maximum(C1, C2), DL, VT);
-    default: break;
+    default:
+      break;
     }
   }
   if (N1CFP && Opcode == ISD::FP_ROUND) {
-    APFloat C1 = N1CFP->getValueAPF();    // make copy
+    APFloat C1 = N1CFP->getValueAPF(); // make copy
     bool Unused;
     // This can return overflow, underflow, or inexact; we don't care.
     // FIXME need to be more flexible about rounding mode.
-    (void) C1.convert(EVTToAPFloatSemantics(VT), APFloat::rmNearestTiesToEven,
-                      &Unused);
+    (void)C1.convert(EVTToAPFloatSemantics(VT), APFloat::rmNearestTiesToEven,
+                     &Unused);
     return getConstantFP(C1, DL, VT);
   }
 
@@ -6538,8 +6613,7 @@ void SelectionDAG::canonicalizeCommutativeBinop(unsigned Opcode, SDValue &N1,
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               SDValue N1, SDValue N2, const SDNodeFlags Flags) {
   assert(N1.getOpcode() != ISD::DELETED_NODE &&
-         N2.getOpcode() != ISD::DELETED_NODE &&
-         "Operand is DELETED_NODE!");
+         N2.getOpcode() != ISD::DELETED_NODE && "Operand is DELETED_NODE!");
 
   canonicalizeCommutativeBinop(Opcode, N1, N2);
 
@@ -6552,14 +6626,18 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
       isConstOrConstSplat(N2, /*AllowUndefs*/ false, /*AllowTruncation*/ true);
 
   switch (Opcode) {
-  default: break;
+  default:
+    break;
   case ISD::TokenFactor:
     assert(VT == MVT::Other && N1.getValueType() == MVT::Other &&
            N2.getValueType() == MVT::Other && "Invalid token factor!");
     // Fold trivial token factors.
-    if (N1.getOpcode() == ISD::EntryToken) return N2;
-    if (N2.getOpcode() == ISD::EntryToken) return N1;
-    if (N1 == N2) return N1;
+    if (N1.getOpcode() == ISD::EntryToken)
+      return N2;
+    if (N2.getOpcode() == ISD::EntryToken)
+      return N1;
+    if (N1 == N2)
+      return N1;
     break;
   case ISD::BUILD_VECTOR: {
     // Attempt to simplify BUILD_VECTOR.
@@ -6576,8 +6654,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   }
   case ISD::AND:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     // (X & 0) -> 0.  This commonly occurs when legalizing i64 values, so it's
     // worth handling here.
     if (N2CV && N2CV->isZero())
@@ -6590,8 +6668,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::ADD:
   case ISD::SUB:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     // (X ^|+- 0) -> X.  This commonly occurs when legalizing i64 values, so
     // it's worth handling here.
     if (N2CV && N2CV->isZero())
@@ -6602,8 +6680,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     break;
   case ISD::MUL:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     if (VT.isVector() && VT.getVectorElementType() == MVT::i1)
       return getNode(ISD::AND, DL, VT, N1, N2);
     if (N2C && (N1.getOpcode() == ISD::VSCALE) && Flags.hasNoSignedWrap()) {
@@ -6623,8 +6701,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::UADDSAT:
   case ISD::USUBSAT:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     if (VT.isVector() && VT.getVectorElementType() == MVT::i1) {
       // fold (add_sat x, y) -> (or x, y) for bool types.
       if (Opcode == ISD::SADDSAT || Opcode == ISD::UADDSAT)
@@ -6637,22 +6715,22 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::ABDS:
   case ISD::ABDU:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     break;
   case ISD::SMIN:
   case ISD::UMAX:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     if (VT.isVector() && VT.getVectorElementType() == MVT::i1)
       return getNode(ISD::OR, DL, VT, N1, N2);
     break;
   case ISD::SMAX:
   case ISD::UMIN:
     assert(VT.isInteger() && "This operator does not apply to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     if (VT.isVector() && VT.getVectorElementType() == MVT::i1)
       return getNode(ISD::AND, DL, VT, N1, N2);
     break;
@@ -6662,16 +6740,14 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
   case ISD::FDIV:
   case ISD::FREM:
     assert(VT.isFloatingPoint() && "This operator only applies to FP types!");
-    assert(N1.getValueType() == N2.getValueType() &&
-           N1.getValueType() == VT && "Binary operator types must match!");
+    assert(N1.getValueType() == N2.getValueType() && N1.getValueType() == VT &&
+           "Binary operator types must match!");
     if (SDValue V = simplifyFPBinop(Opcode, N1, N2, Flags))
       return V;
     break;
-  case ISD::FCOPYSIGN:   // N1 and result must match.  N1/N2 need not match.
-    assert(N1.getValueType() == VT &&
-           N1.getValueType().isFloatingPoint() &&
-           N2.getValueType().isFloatingPoint() &&
-           "Invalid FCOPYSIGN!");
+  case ISD::FCOPYSIGN: // N1 and result must match.  N1/N2 need not match.
+    assert(N1.getValueType() == VT && N1.getValueType().isFloatingPoint() &&
+           N2.getValueType().isFloatingPoint() && "Invalid FCOPYSIGN!");
     break;
   case ISD::SHL:
     if (N2C && (N1.getOpcode() == ISD::VSCALE) && Flags.hasNoSignedWrap()) {
@@ -6710,12 +6786,12 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
       return N1;
     break;
   case ISD::FP_ROUND:
-    assert(VT.isFloatingPoint() &&
-           N1.getValueType().isFloatingPoint() &&
-           VT.bitsLE(N1.getValueType()) &&
-           N2C && (N2C->getZExtValue() == 0 || N2C->getZExtValue() == 1) &&
+    assert(VT.isFloatingPoint() && N1.getValueType().isFloatingPoint() &&
+           VT.bitsLE(N1.getValueType()) && N2C &&
+           (N2C->getZExtValue() == 0 || N2C->getZExtValue() == 1) &&
            "Invalid FP_ROUND!");
-    if (N1.getValueType() == VT) return N1;  // noop conversion.
+    if (N1.getValueType() == VT)
+      return N1; // noop conversion.
     break;
   case ISD::AssertSext:
   case ISD::AssertZext: {
@@ -6727,7 +6803,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
            "AssertSExt/AssertZExt type should be the vector element type "
            "rather than the vector type!");
     assert(EVT.bitsLE(VT.getScalarType()) && "Not extending!");
-    if (VT.getScalarType() == EVT) return N1; // noop assertion.
+    if (VT.getScalarType() == EVT)
+      return N1; // noop assertion.
     break;
   }
   case ISD::SIGN_EXTEND_INREG: {
@@ -6742,7 +6819,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
             EVT.getVectorElementCount() == VT.getVectorElementCount()) &&
            "Vector element counts must match in SIGN_EXTEND_INREG");
     assert(EVT.bitsLE(VT) && "Not extending!");
-    if (EVT == VT) return N1;  // Not actually extending
+    if (EVT == VT)
+      return N1; // Not actually extending
 
     auto SignExtendInReg = [&](APInt Val, llvm::EVT ConstantVT) {
       unsigned FromBits = EVT.getScalarSizeInBits();
@@ -6774,10 +6852,9 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
 
     if (N1.getOpcode() == ISD::SPLAT_VECTOR &&
         isa<ConstantSDNode>(N1.getOperand(0)))
-      return getNode(
-          ISD::SPLAT_VECTOR, DL, VT,
-          SignExtendInReg(N1.getConstantOperandAPInt(0),
-                          N1.getOperand(0).getValueType()));
+      return getNode(ISD::SPLAT_VECTOR, DL, VT,
+                     SignExtendInReg(N1.getConstantOperandAPInt(0),
+                                     N1.getOperand(0).getValueType()));
     break;
   }
   case ISD::FP_TO_SINT_SAT:
@@ -6818,8 +6895,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     // elements.
     if (N2C && N1.getOperand(0).getValueType().isFixedLengthVector() &&
         N1.getOpcode() == ISD::CONCAT_VECTORS && N1.getNumOperands() > 0) {
-      unsigned Factor =
-        N1.getOperand(0).getValueType().getVectorNumElements();
+      unsigned Factor = N1.getOperand(0).getValueType().getVectorNumElements();
       return getNode(ISD::EXTRACT_VECTOR_ELT, DL, VT,
                      N1.getOperand(N2C->getZExtValue() / Factor),
                      getVectorIdxConstant(N2C->getZExtValue() % Factor, DL));
@@ -6859,7 +6935,8 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
           if (VT == N1.getOperand(1).getValueType())
             return N1.getOperand(1);
           if (VT.isFloatingPoint()) {
-            assert(VT.getSizeInBits() > N1.getOperand(1).getValueType().getSizeInBits());
+            assert(VT.getSizeInBits() >
+                   N1.getOperand(1).getValueType().getSizeInBits());
             return getFPExtendOrRound(N1.getOperand(1), DL, VT);
           }
           return getSExtOrTrunc(N1.getOperand(1), DL, VT);
@@ -6887,8 +6964,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     assert(N2C && (unsigned)N2C->getZExtValue() < 2 && "Bad EXTRACT_ELEMENT!");
     assert(!N1.getValueType().isVector() && !VT.isVector() &&
            (N1.getValueType().isInteger() == VT.isInteger()) &&
-           N1.getValueType() != VT &&
-           "Wrong types for EXTRACT_ELEMENT!");
+           N1.getValueType() != VT && "Wrong types for EXTRACT_ELEMENT!");
 
     // EXTRACT_ELEMENT of BUILD_PAIR is often formed while legalize is expanding
     // 64-bit integers into 32-bit parts.  Instead of building the extract of
@@ -6960,7 +7036,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     } else {
       switch (Opcode) {
       case ISD::SUB:
-        return getUNDEF(VT);     // fold op(undef, arg2) -> undef
+        return getUNDEF(VT); // fold op(undef, arg2) -> undef
       case ISD::SIGN_EXTEND_INREG:
       case ISD::UDIV:
       case ISD::SDIV:
@@ -6968,7 +7044,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
       case ISD::SREM:
       case ISD::SSUBSAT:
       case ISD::USUBSAT:
-        return getConstant(0, DL, VT);    // fold op(undef, arg2) -> 0
+        return getConstant(0, DL, VT); // fold op(undef, arg2) -> 0
       }
     }
   }
@@ -6988,12 +7064,12 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     case ISD::SDIV:
     case ISD::UREM:
     case ISD::SREM:
-      return getUNDEF(VT);       // fold op(arg1, undef) -> undef
+      return getUNDEF(VT); // fold op(arg1, undef) -> undef
     case ISD::MUL:
     case ISD::AND:
     case ISD::SSUBSAT:
     case ISD::USUBSAT:
-      return getConstant(0, DL, VT);  // fold op(arg1, undef) -> 0
+      return getConstant(0, DL, VT); // fold op(arg1, undef) -> 0
     case ISD::OR:
     case ISD::SADDSAT:
     case ISD::UADDSAT:
@@ -7042,8 +7118,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               const SDNodeFlags Flags) {
   assert(N1.getOpcode() != ISD::DELETED_NODE &&
          N2.getOpcode() != ISD::DELETED_NODE &&
-         N3.getOpcode() != ISD::DELETED_NODE &&
-         "Operand is DELETED_NODE!");
+         N3.getOpcode() != ISD::DELETED_NODE && "Operand is DELETED_NODE!");
   // Perform various simplifications.
   switch (Opcode) {
   case ISD::FMA: {
@@ -7054,7 +7129,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
     ConstantFPSDNode *N2CFP = dyn_cast<ConstantFPSDNode>(N2);
     ConstantFPSDNode *N3CFP = dyn_cast<ConstantFPSDNode>(N3);
     if (N1CFP && N2CFP && N3CFP) {
-      APFloat  V1 = N1CFP->getValueAPF();
+      APFloat V1 = N1CFP->getValueAPF();
       const APFloat &V2 = N2CFP->getValueAPF();
       const APFloat &V3 = N3CFP->getValueAPF();
       V1.fusedMultiplyAdd(V2, V3, APFloat::rmNearestTiesToEven);
@@ -7209,14 +7284,14 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               SDValue N1, SDValue N2, SDValue N3, SDValue N4) {
-  SDValue Ops[] = { N1, N2, N3, N4 };
+  SDValue Ops[] = {N1, N2, N3, N4};
   return getNode(Opcode, DL, VT, Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               SDValue N1, SDValue N2, SDValue N3, SDValue N4,
                               SDValue N5) {
-  SDValue Ops[] = { N1, N2, N3, N4, N5 };
+  SDValue Ops[] = {N1, N2, N3, N4, N5};
   return getNode(Opcode, DL, VT, Ops);
 }
 
@@ -7252,7 +7327,8 @@ static SDValue getMemsetValue(SDValue Value, EVT VT, SelectionDAG &DAG,
     assert(C->getAPIntValue().getBitWidth() == 8);
     APInt Val = APInt::getSplat(NumBits, C->getAPIntValue());
     if (VT.isInteger()) {
-      bool IsOpaque = VT.getSizeInBits() > 64 ||
+      bool IsOpaque =
+          VT.getSizeInBits() > 64 ||
           !DAG.getTargetLoweringInfo().isLegalStoreImmediate(C->getSExtValue());
       return DAG.getConstant(Val, dl, VT, false, IsOpaque);
     }
@@ -7297,10 +7373,10 @@ static SDValue getMemsetStringVal(EVT VT, const SDLoc &dl, SelectionDAG &DAG,
     if (VT.isVector()) {
       unsigned NumElts = VT.getVectorNumElements();
       MVT EltVT = (VT.getVectorElementType() == MVT::f32) ? MVT::i32 : MVT::i64;
-      return DAG.getNode(ISD::BITCAST, dl, VT,
-                         DAG.getConstant(0, dl,
-                                         EVT::getVectorVT(*DAG.getContext(),
-                                                          EltVT, NumElts)));
+      return DAG.getNode(
+          ISD::BITCAST, dl, VT,
+          DAG.getConstant(0, dl,
+                          EVT::getVectorVT(*DAG.getContext(), EltVT, NumElts)));
     }
     llvm_unreachable("Expected type!");
   }
@@ -7313,10 +7389,10 @@ static SDValue getMemsetStringVal(EVT VT, const SDLoc &dl, SelectionDAG &DAG,
   APInt Val(NumVTBits, 0);
   if (DAG.getDataLayout().isLittleEndian()) {
     for (unsigned i = 0; i != NumBytes; ++i)
-      Val |= (uint64_t)(unsigned char)Slice[i] << i*8;
+      Val |= (uint64_t)(unsigned char)Slice[i] << i * 8;
   } else {
     for (unsigned i = 0; i != NumBytes; ++i)
-      Val |= (uint64_t)(unsigned char)Slice[i] << (NumVTBytes-i-1)*8;
+      Val |= (uint64_t)(unsigned char)Slice[i] << (NumVTBytes - i - 1) * 8;
   }
 
   // If the "cost" of materializing the integer immediate is less than the cost
@@ -7379,10 +7455,10 @@ static bool shouldLowerMemFuncForSize(const MachineFunction &MF,
   return DAG.shouldOptForSize();
 }
 
-static void chainLoadsAndStoresForMemcpy(SelectionDAG &DAG, const SDLoc &dl,
-                          SmallVector<SDValue, 32> &OutChains, unsigned From,
-                          unsigned To, SmallVector<SDValue, 16> &OutLoadChains,
-                          SmallVector<SDValue, 16> &OutStoreChains) {
+static void chainLoadsAndStoresForMemcpy(
+    SelectionDAG &DAG, const SDLoc &dl, SmallVector<SDValue, 32> &OutChains,
+    unsigned From, unsigned To, SmallVector<SDValue, 16> &OutLoadChains,
+    SmallVector<SDValue, 16> &OutStoreChains) {
   assert(OutLoadChains.size() && "Missing loads in memcpy inlining");
   assert(OutStoreChains.size() && "Missing stores in memcpy inlining");
   SmallVector<SDValue, 16> GluedLoadChains;
@@ -7392,14 +7468,14 @@ static void chainLoadsAndStoresForMemcpy(SelectionDAG &DAG, const SDLoc &dl,
   }
 
   // Chain for all loads.
-  SDValue LoadToken = DAG.getNode(ISD::TokenFactor, dl, MVT::Other,
-                                  GluedLoadChains);
+  SDValue LoadToken =
+      DAG.getNode(ISD::TokenFactor, dl, MVT::Other, GluedLoadChains);
 
   for (unsigned i = From; i < To; ++i) {
     StoreSDNode *ST = dyn_cast<StoreSDNode>(OutStoreChains[i]);
-    SDValue NewStore = DAG.getTruncStore(LoadToken, dl, ST->getValue(),
-                                  ST->getBasePtr(), ST->getMemoryVT(),
-                                  ST->getMemOperand());
+    SDValue NewStore =
+        DAG.getTruncStore(LoadToken, dl, ST->getValue(), ST->getBasePtr(),
+                          ST->getMemoryVT(), ST->getMemOperand());
     OutChains.push_back(NewStore);
   }
 }
@@ -7494,7 +7570,7 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     if (VTSize > Size) {
       // Issuing an unaligned load / store pair  that overlaps with the previous
       // pair. Adjust the offset accordingly.
-      assert(i == NumMemOps-1 && i != 0);
+      assert(i == NumMemOps - 1 && i != 0);
       SrcOff -= VTSize - Size;
       DstOff -= VTSize - Size;
     }
@@ -7536,7 +7612,7 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
       assert(NVT.bitsGE(VT));
 
       bool isDereferenceable =
-        SrcPtrInfo.getWithOffset(SrcOff).isDereferenceable(VTSize, C, DL);
+          SrcPtrInfo.getWithOffset(SrcOff).isDereferenceable(VTSize, C, DL);
       MachineMemOperand::Flags SrcMMOFlags = MMOFlags;
       if (isDereferenceable)
         SrcMMOFlags |= MachineMemOperand::MODereferenceable;
@@ -7561,8 +7637,8 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     Size -= VTSize;
   }
 
-  unsigned GluedLdStLimit = MaxLdStGlue == 0 ?
-                                TLI.getMaxGluedStoresPerMemcpy() : MaxLdStGlue;
+  unsigned GluedLdStLimit =
+      MaxLdStGlue == 0 ? TLI.getMaxGluedStoresPerMemcpy() : MaxLdStGlue;
   unsigned NumLdStInMemcpy = OutStoreChains.size();
 
   if (NumLdStInMemcpy) {
@@ -7578,17 +7654,16 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     } else {
       // Ld/St less than/equal limit set by target.
       if (NumLdStInMemcpy <= GluedLdStLimit) {
-          chainLoadsAndStoresForMemcpy(DAG, dl, OutChains, 0,
-                                        NumLdStInMemcpy, OutLoadChains,
-                                        OutStoreChains);
+        chainLoadsAndStoresForMemcpy(DAG, dl, OutChains, 0, NumLdStInMemcpy,
+                                     OutLoadChains, OutStoreChains);
       } else {
-        unsigned NumberLdChain =  NumLdStInMemcpy / GluedLdStLimit;
+        unsigned NumberLdChain = NumLdStInMemcpy / GluedLdStLimit;
         unsigned RemainingLdStInMemcpy = NumLdStInMemcpy % GluedLdStLimit;
         unsigned GlueIter = 0;
 
         for (unsigned cnt = 0; cnt < NumberLdChain; ++cnt) {
           unsigned IndexFrom = NumLdStInMemcpy - GlueIter - GluedLdStLimit;
-          unsigned IndexTo   = NumLdStInMemcpy - GlueIter;
+          unsigned IndexTo = NumLdStInMemcpy - GlueIter;
 
           chainLoadsAndStoresForMemcpy(DAG, dl, OutChains, IndexFrom, IndexTo,
                                        OutLoadChains, OutStoreChains);
@@ -7598,8 +7673,8 @@ static SDValue getMemcpyLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
         // Residual ld/st.
         if (RemainingLdStInMemcpy) {
           chainLoadsAndStoresForMemcpy(DAG, dl, OutChains, 0,
-                                        RemainingLdStInMemcpy, OutLoadChains,
-                                        OutStoreChains);
+                                       RemainingLdStInMemcpy, OutLoadChains,
+                                       OutStoreChains);
         }
       }
     }
@@ -7682,7 +7757,7 @@ static SDValue getMemmoveLoadsAndStores(SelectionDAG &DAG, const SDLoc &dl,
     SDValue Value;
 
     bool isDereferenceable =
-      SrcPtrInfo.getWithOffset(SrcOff).isDereferenceable(VTSize, C, DL);
+        SrcPtrInfo.getWithOffset(SrcOff).isDereferenceable(VTSize, C, DL);
     MachineMemOperand::Flags SrcMMOFlags = MMOFlags;
     if (isDereferenceable)
       SrcMMOFlags |= MachineMemOperand::MODereferenceable;
@@ -7804,7 +7879,7 @@ static SDValue getMemsetStores(SelectionDAG &DAG, const SDLoc &dl,
     if (VTSize > Size) {
       // Issuing an unaligned load / store pair  that overlaps with the previous
       // pair. Adjust the offset accordingly.
-      assert(i == NumMemOps-1 && i != 0);
+      assert(i == NumMemOps - 1 && i != 0);
       DstOff -= VTSize - Size;
     }
 
@@ -7911,11 +7986,14 @@ SDValue SelectionDAG::getMemcpy(SDValue Chain, const SDLoc &dl, SDValue Dst,
   TargetLowering::ArgListTy Args;
   TargetLowering::ArgListEntry Entry;
   Entry.Ty = PointerType::getUnqual(*getContext());
-  Entry.Node = Dst; Args.push_back(Entry);
-  Entry.Node = Src; Args.push_back(Entry);
+  Entry.Node = Dst;
+  Args.push_back(Entry);
+  Entry.Node = Src;
+  Args.push_back(Entry);
 
   Entry.Ty = getDataLayout().getIntPtrType(*getContext());
-  Entry.Node = Size; Args.push_back(Entry);
+  Entry.Node = Size;
+  Args.push_back(Entry);
   // FIXME: pass in SDLoc
   TargetLowering::CallLoweringInfo CLI(*this);
   CLI.setDebugLoc(dl)
@@ -7928,7 +8006,7 @@ SDValue SelectionDAG::getMemcpy(SDValue Chain, const SDLoc &dl, SDValue Dst,
       .setDiscardResult()
       .setTailCall(isTailCall);
 
-  std::pair<SDValue,SDValue> CallResult = TLI->LowerCallTo(CLI);
+  std::pair<SDValue, SDValue> CallResult = TLI->LowerCallTo(CLI);
   return CallResult.second;
 }
 
@@ -8013,11 +8091,14 @@ SDValue SelectionDAG::getMemmove(SDValue Chain, const SDLoc &dl, SDValue Dst,
   TargetLowering::ArgListTy Args;
   TargetLowering::ArgListEntry Entry;
   Entry.Ty = PointerType::getUnqual(*getContext());
-  Entry.Node = Dst; Args.push_back(Entry);
-  Entry.Node = Src; Args.push_back(Entry);
+  Entry.Node = Dst;
+  Args.push_back(Entry);
+  Entry.Node = Src;
+  Args.push_back(Entry);
 
   Entry.Ty = getDataLayout().getIntPtrType(*getContext());
-  Entry.Node = Size; Args.push_back(Entry);
+  Entry.Node = Size;
+  Args.push_back(Entry);
   // FIXME:  pass in SDLoc
   TargetLowering::CallLoweringInfo CLI(*this);
   CLI.setDebugLoc(dl)
@@ -8030,7 +8111,7 @@ SDValue SelectionDAG::getMemmove(SDValue Chain, const SDLoc &dl, SDValue Dst,
       .setDiscardResult()
       .setTailCall(isTailCall);
 
-  std::pair<SDValue,SDValue> CallResult = TLI->LowerCallTo(CLI);
+  std::pair<SDValue, SDValue> CallResult = TLI->LowerCallTo(CLI);
   return CallResult.second;
 }
 
@@ -8098,8 +8179,9 @@ SDValue SelectionDAG::getMemset(SDValue Chain, const SDLoc &dl, SDValue Dst,
   // Then check to see if we should lower the memset with target-specific
   // code. If the target chooses to do this, this is the next best.
   if (TSI) {
-    SDValue Result = TSI->EmitTargetCodeForMemset(
-        *this, dl, Chain, Dst, Src, Size, Alignment, isVol, AlwaysInline, DstPtrInfo);
+    SDValue Result = TSI->EmitTargetCodeForMemset(*this, dl, Chain, Dst, Src,
+                                                  Size, Alignment, isVol,
+                                                  AlwaysInline, DstPtrInfo);
     if (Result.getNode())
       return Result;
   }
@@ -8120,7 +8202,7 @@ SDValue SelectionDAG::getMemset(SDValue Chain, const SDLoc &dl, SDValue Dst,
 
   // Emit a library call.
   auto &Ctx = *getContext();
-  const auto& DL = getDataLayout();
+  const auto &DL = getDataLayout();
 
   TargetLowering::CallLoweringInfo CLI(*this);
   // FIXME: pass in SDLoc
@@ -8210,7 +8292,7 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
   AddNodeIDNode(ID, Opcode, VTList, Ops);
   ID.AddInteger(MMO->getPointerInfo().getAddrSpace());
   ID.AddInteger(MMO->getFlags());
-  void* IP = nullptr;
+  void *IP = nullptr;
   if (SDNode *E = FindNodeOrInsertPos(ID, dl, IP)) {
     cast<AtomicSDNode>(E)->refineAlignment(MMO);
     return SDValue(E, 0);
@@ -8240,31 +8322,23 @@ SDValue SelectionDAG::getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl,
 SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                                 SDValue Chain, SDValue Ptr, SDValue Val,
                                 MachineMemOperand *MMO) {
-  assert((Opcode == ISD::ATOMIC_LOAD_ADD ||
-          Opcode == ISD::ATOMIC_LOAD_SUB ||
-          Opcode == ISD::ATOMIC_LOAD_AND ||
-          Opcode == ISD::ATOMIC_LOAD_CLR ||
-          Opcode == ISD::ATOMIC_LOAD_OR ||
-          Opcode == ISD::ATOMIC_LOAD_XOR ||
-          Opcode == ISD::ATOMIC_LOAD_NAND ||
-          Opcode == ISD::ATOMIC_LOAD_MIN ||
-          Opcode == ISD::ATOMIC_LOAD_MAX ||
-          Opcode == ISD::ATOMIC_LOAD_UMIN ||
-          Opcode == ISD::ATOMIC_LOAD_UMAX ||
-          Opcode == ISD::ATOMIC_LOAD_FADD ||
-          Opcode == ISD::ATOMIC_LOAD_FSUB ||
-          Opcode == ISD::ATOMIC_LOAD_FMAX ||
+  assert((Opcode == ISD::ATOMIC_LOAD_ADD || Opcode == ISD::ATOMIC_LOAD_SUB ||
+          Opcode == ISD::ATOMIC_LOAD_AND || Opcode == ISD::ATOMIC_LOAD_CLR ||
+          Opcode == ISD::ATOMIC_LOAD_OR || Opcode == ISD::ATOMIC_LOAD_XOR ||
+          Opcode == ISD::ATOMIC_LOAD_NAND || Opcode == ISD::ATOMIC_LOAD_MIN ||
+          Opcode == ISD::ATOMIC_LOAD_MAX || Opcode == ISD::ATOMIC_LOAD_UMIN ||
+          Opcode == ISD::ATOMIC_LOAD_UMAX || Opcode == ISD::ATOMIC_LOAD_FADD ||
+          Opcode == ISD::ATOMIC_LOAD_FSUB || Opcode == ISD::ATOMIC_LOAD_FMAX ||
           Opcode == ISD::ATOMIC_LOAD_FMIN ||
           Opcode == ISD::ATOMIC_LOAD_UINC_WRAP ||
-          Opcode == ISD::ATOMIC_LOAD_UDEC_WRAP ||
-          Opcode == ISD::ATOMIC_SWAP ||
+          Opcode == ISD::ATOMIC_LOAD_UDEC_WRAP || Opcode == ISD::ATOMIC_SWAP ||
           Opcode == ISD::ATOMIC_STORE) &&
          "Invalid Atomic Op");
 
   EVT VT = Val.getValueType();
 
-  SDVTList VTs = Opcode == ISD::ATOMIC_STORE ? getVTList(MVT::Other) :
-                                               getVTList(VT, MVT::Other);
+  SDVTList VTs = Opcode == ISD::ATOMIC_STORE ? getVTList(MVT::Other)
+                                             : getVTList(VT, MVT::Other);
   SDValue Ops[] = {Chain, Ptr, Val};
   return getAtomic(Opcode, dl, MemVT, VTs, Ops, MMO);
 }
@@ -8311,8 +8385,7 @@ SDValue SelectionDAG::getMemIntrinsicNode(unsigned Opcode, const SDLoc &dl,
                                           SDVTList VTList,
                                           ArrayRef<SDValue> Ops, EVT MemVT,
                                           MachineMemOperand *MMO) {
-  assert((Opcode == ISD::INTRINSIC_VOID ||
-          Opcode == ISD::INTRINSIC_W_CHAIN ||
+  assert((Opcode == ISD::INTRINSIC_VOID || Opcode == ISD::INTRINSIC_W_CHAIN ||
           Opcode == ISD::PREFETCH ||
           (Opcode <= (unsigned)std::numeric_limits<int>::max() &&
            (int)Opcode >= ISD::FIRST_TARGET_MEMORY_OPCODE)) &&
@@ -8320,7 +8393,7 @@ SDValue SelectionDAG::getMemIntrinsicNode(unsigned Opcode, const SDLoc &dl,
 
   // Memoize the node unless it returns a flag.
   MemIntrinsicSDNode *N;
-  if (VTList.VTs[VTList.NumVTs-1] != MVT::Glue) {
+  if (VTList.VTs[VTList.NumVTs - 1] != MVT::Glue) {
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops);
     ID.AddInteger(getSyntheticNodeSubclassData<MemIntrinsicSDNode>(
@@ -8338,7 +8411,7 @@ SDValue SelectionDAG::getMemIntrinsicNode(unsigned Opcode, const SDLoc &dl,
                                       VTList, MemVT, MMO);
     createOperands(N, Ops);
 
-  CSEMap.InsertNode(N, IP);
+    CSEMap.InsertNode(N, IP);
   } else {
     N = newSDNode<MemIntrinsicSDNode>(Opcode, dl.getIROrder(), dl.getDebugLoc(),
                                       VTList, MemVT, MMO);
@@ -8417,8 +8490,7 @@ static MachinePointerInfo InferPointerInfo(const MachinePointerInfo &Info,
                                              FI->getIndex(), Offset);
 
   // If this is (FI+Offset1)+Offset2, we can model it.
-  if (Ptr.getOpcode() != ISD::ADD ||
-      !isa<ConstantSDNode>(Ptr.getOperand(1)) ||
+  if (Ptr.getOpcode() != ISD::ADD || !isa<ConstantSDNode>(Ptr.getOperand(1)) ||
       !isa<FrameIndexSDNode>(Ptr.getOperand(0)))
     return Info;
 
@@ -8450,8 +8522,7 @@ SDValue SelectionDAG::getLoad(ISD::MemIndexedMode AM, ISD::LoadExtType ExtType,
                               Align Alignment,
                               MachineMemOperand::Flags MMOFlags,
                               const AAMDNodes &AAInfo, const MDNode *Ranges) {
-  assert(Chain.getValueType() == MVT::Other &&
-        "Invalid chain type");
+  assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
 
   MMOFlags |= MachineMemOperand::MOLoad;
   assert((MMOFlags & MachineMemOperand::MOStore) == 0);
@@ -8491,9 +8562,9 @@ SDValue SelectionDAG::getLoad(ISD::MemIndexedMode AM, ISD::LoadExtType ExtType,
   bool Indexed = AM != ISD::UNINDEXED;
   assert((Indexed || Offset.isUndef()) && "Unindexed load with an offset!");
 
-  SDVTList VTs = Indexed ?
-    getVTList(VT, Ptr.getValueType(), MVT::Other) : getVTList(VT, MVT::Other);
-  SDValue Ops[] = { Chain, Ptr, Offset };
+  SDVTList VTs = Indexed ? getVTList(VT, Ptr.getValueType(), MVT::Other)
+                         : getVTList(VT, MVT::Other);
+  SDValue Ops[] = {Chain, Ptr, Offset};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, ISD::LOAD, VTs, Ops);
   ID.AddInteger(MemVT.getRawBits());
@@ -8549,8 +8620,8 @@ SDValue SelectionDAG::getExtLoad(ISD::LoadExtType ExtType, const SDLoc &dl,
                                  EVT VT, SDValue Chain, SDValue Ptr, EVT MemVT,
                                  MachineMemOperand *MMO) {
   SDValue Undef = getUNDEF(Ptr.getValueType());
-  return getLoad(ISD::UNINDEXED, ExtType, VT, dl, Chain, Ptr, Undef,
-                 MemVT, MMO);
+  return getLoad(ISD::UNINDEXED, ExtType, VT, dl, Chain, Ptr, Undef, MemVT,
+                 MMO);
 }
 
 SDValue SelectionDAG::getIndexedLoad(SDValue OrigLoad, const SDLoc &dl,
@@ -8590,12 +8661,11 @@ SDValue SelectionDAG::getStore(SDValue Chain, const SDLoc &dl, SDValue Val,
 
 SDValue SelectionDAG::getStore(SDValue Chain, const SDLoc &dl, SDValue Val,
                                SDValue Ptr, MachineMemOperand *MMO) {
-  assert(Chain.getValueType() == MVT::Other &&
-        "Invalid chain type");
+  assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
   EVT VT = Val.getValueType();
   SDVTList VTs = getVTList(MVT::Other);
   SDValue Undef = getUNDEF(Ptr.getValueType());
-  SDValue Ops[] = { Chain, Val, Ptr, Undef };
+  SDValue Ops[] = {Chain, Val, Ptr, Undef};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, ISD::STORE, VTs, Ops);
   ID.AddInteger(VT.getRawBits());
@@ -8624,8 +8694,7 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, const SDLoc &dl, SDValue Val,
                                     EVT SVT, Align Alignment,
                                     MachineMemOperand::Flags MMOFlags,
                                     const AAMDNodes &AAInfo) {
-  assert(Chain.getValueType() == MVT::Other &&
-        "Invalid chain type");
+  assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
 
   MMOFlags |= MachineMemOperand::MOStore;
   assert((MMOFlags & MachineMemOperand::MOLoad) == 0);
@@ -8645,15 +8714,13 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, const SDLoc &dl, SDValue Val,
                                     MachineMemOperand *MMO) {
   EVT VT = Val.getValueType();
 
-  assert(Chain.getValueType() == MVT::Other &&
-        "Invalid chain type");
+  assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
   if (VT == SVT)
     return getStore(Chain, dl, Val, Ptr, MMO);
 
   assert(SVT.getScalarType().bitsLT(VT.getScalarType()) &&
          "Should only be a truncating store, not extending!");
-  assert(VT.isInteger() == SVT.isInteger() &&
-         "Can't do FP-INT conversion!");
+  assert(VT.isInteger() == SVT.isInteger() && "Can't do FP-INT conversion!");
   assert(VT.isVector() == SVT.isVector() &&
          "Cannot use trunc store to convert to or from a vector!");
   assert((!VT.isVector() ||
@@ -8662,7 +8729,7 @@ SDValue SelectionDAG::getTruncStore(SDValue Chain, const SDLoc &dl, SDValue Val,
 
   SDVTList VTs = getVTList(MVT::Other);
   SDValue Undef = getUNDEF(Ptr.getValueType());
-  SDValue Ops[] = { Chain, Val, Ptr, Undef };
+  SDValue Ops[] = {Chain, Val, Ptr, Undef};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, ISD::STORE, VTs, Ops);
   ID.AddInteger(SVT.getRawBits());
@@ -8692,7 +8759,7 @@ SDValue SelectionDAG::getIndexedStore(SDValue OrigStore, const SDLoc &dl,
   StoreSDNode *ST = cast<StoreSDNode>(OrigStore);
   assert(ST->getOffset().isUndef() && "Store is already a indexed store!");
   SDVTList VTs = getVTList(Base.getValueType(), MVT::Other);
-  SDValue Ops[] = { ST->getChain(), ST->getValue(), Base, Offset };
+  SDValue Ops[] = {ST->getChain(), ST->getValue(), Base, Offset};
   FoldingSetNodeID ID;
   AddNodeIDNode(ID, ISD::STORE, VTs, Ops);
   ID.AddInteger(ST->getMemoryVT().getRawBits());
@@ -9360,8 +9427,7 @@ SDValue SelectionDAG::getMaskedStore(SDValue Chain, const SDLoc &dl,
                                      MachineMemOperand *MMO,
                                      ISD::MemIndexedMode AM, bool IsTruncating,
                                      bool IsCompressing) {
-  assert(Chain.getValueType() == MVT::Other &&
-        "Invalid chain type");
+  assert(Chain.getValueType() == MVT::Other && "Invalid chain type");
   bool Indexed = AM != ISD::UNINDEXED;
   assert((Indexed || Offset.isUndef()) &&
          "Unindexed masked store with an offset!");
@@ -9611,8 +9677,8 @@ SDValue SelectionDAG::simplifyFPBinop(unsigned Opcode, SDValue X, SDValue Y,
   // operation is poison. That result can be relaxed to undef.
   ConstantFPSDNode *XC = isConstOrConstSplatFP(X, /* AllowUndefs */ true);
   ConstantFPSDNode *YC = isConstOrConstSplatFP(Y, /* AllowUndefs */ true);
-  bool HasNan = (XC && XC->getValueAPF().isNaN()) ||
-                (YC && YC->getValueAPF().isNaN());
+  bool HasNan =
+      (XC && XC->getValueAPF().isNaN()) || (YC && YC->getValueAPF().isNaN());
   bool HasInf = (XC && XC->getValueAPF().isInfinity()) ||
                 (YC && YC->getValueAPF().isInfinity());
 
@@ -9651,18 +9717,23 @@ SDValue SelectionDAG::simplifyFPBinop(unsigned Opcode, SDValue X, SDValue Y,
 
 SDValue SelectionDAG::getVAArg(EVT VT, const SDLoc &dl, SDValue Chain,
                                SDValue Ptr, SDValue SV, unsigned Align) {
-  SDValue Ops[] = { Chain, Ptr, SV, getTargetConstant(Align, dl, MVT::i32) };
+  SDValue Ops[] = {Chain, Ptr, SV, getTargetConstant(Align, dl, MVT::i32)};
   return getNode(ISD::VAARG, dl, getVTList(VT, MVT::Other), Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               ArrayRef<SDUse> Ops) {
   switch (Ops.size()) {
-  case 0: return getNode(Opcode, DL, VT);
-  case 1: return getNode(Opcode, DL, VT, static_cast<const SDValue>(Ops[0]));
-  case 2: return getNode(Opcode, DL, VT, Ops[0], Ops[1]);
-  case 3: return getNode(Opcode, DL, VT, Ops[0], Ops[1], Ops[2]);
-  default: break;
+  case 0:
+    return getNode(Opcode, DL, VT);
+  case 1:
+    return getNode(Opcode, DL, VT, static_cast<const SDValue>(Ops[0]));
+  case 2:
+    return getNode(Opcode, DL, VT, Ops[0], Ops[1]);
+  case 3:
+    return getNode(Opcode, DL, VT, Ops[0], Ops[1], Ops[2]);
+  default:
+    break;
   }
 
   // Copy from an SDUse array into an SDValue array for use with
@@ -9683,21 +9754,26 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, EVT VT,
                               ArrayRef<SDValue> Ops, const SDNodeFlags Flags) {
   unsigned NumOps = Ops.size();
   switch (NumOps) {
-  case 0: return getNode(Opcode, DL, VT);
-  case 1: return getNode(Opcode, DL, VT, Ops[0], Flags);
-  case 2: return getNode(Opcode, DL, VT, Ops[0], Ops[1], Flags);
-  case 3: return getNode(Opcode, DL, VT, Ops[0], Ops[1], Ops[2], Flags);
-  default: break;
+  case 0:
+    return getNode(Opcode, DL, VT);
+  case 1:
+    return getNode(Opcode, DL, VT, Ops[0], Flags);
+  case 2:
+    return getNode(Opcode, DL, VT, Ops[0], Ops[1], Flags);
+  case 3:
+    return getNode(Opcode, DL, VT, Ops[0], Ops[1], Ops[2], Flags);
+  default:
+    break;
   }
 
 #ifndef NDEBUG
   for (const auto &Op : Ops)
-    assert(Op.getOpcode() != ISD::DELETED_NODE &&
-           "Operand is DELETED_NODE!");
+    assert(Op.getOpcode() != ISD::DELETED_NODE && "Operand is DELETED_NODE!");
 #endif
 
   switch (Opcode) {
-  default: break;
+  default:
+    break;
   case ISD::BUILD_VECTOR:
     // Attempt to simplify BUILD_VECTOR.
     if (SDValue V = FoldBUILD_VECTOR(DL, VT, Ops, *this))
@@ -9811,8 +9887,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
 
 #ifndef NDEBUG
   for (const auto &Op : Ops)
-    assert(Op.getOpcode() != ISD::DELETED_NODE &&
-           "Operand is DELETED_NODE!");
+    assert(Op.getOpcode() != ISD::DELETED_NODE && "Operand is DELETED_NODE!");
 #endif
 
   switch (Opcode) {
@@ -9920,7 +9995,7 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
 
   // Memoize the node unless it returns a flag.
   SDNode *N;
-  if (VTList.VTs[VTList.NumVTs-1] != MVT::Glue) {
+  if (VTList.VTs[VTList.NumVTs - 1] != MVT::Glue) {
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opcode, VTList, Ops);
     void *IP = nullptr;
@@ -9949,32 +10024,32 @@ SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL,
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
                               SDValue N1) {
-  SDValue Ops[] = { N1 };
+  SDValue Ops[] = {N1};
   return getNode(Opcode, DL, VTList, Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
                               SDValue N1, SDValue N2) {
-  SDValue Ops[] = { N1, N2 };
+  SDValue Ops[] = {N1, N2};
   return getNode(Opcode, DL, VTList, Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
                               SDValue N1, SDValue N2, SDValue N3) {
-  SDValue Ops[] = { N1, N2, N3 };
+  SDValue Ops[] = {N1, N2, N3};
   return getNode(Opcode, DL, VTList, Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
                               SDValue N1, SDValue N2, SDValue N3, SDValue N4) {
-  SDValue Ops[] = { N1, N2, N3, N4 };
+  SDValue Ops[] = {N1, N2, N3, N4};
   return getNode(Opcode, DL, VTList, Ops);
 }
 
 SDValue SelectionDAG::getNode(unsigned Opcode, const SDLoc &DL, SDVTList VTList,
                               SDValue N1, SDValue N2, SDValue N3, SDValue N4,
                               SDValue N5) {
-  SDValue Ops[] = { N1, N2, N3, N4, N5 };
+  SDValue Ops[] = {N1, N2, N3, N4, N5};
   return getNode(Opcode, DL, VTList, Ops);
 }
 
@@ -10061,7 +10136,6 @@ SDVTList SelectionDAG::getVTList(ArrayRef<EVT> VTs) {
   return Result->getSDVTList();
 }
 
-
 /// UpdateNodeOperands - *Mutate* the specified node in-place to have the
 /// specified operands.  If the resultant node already exists in the DAG,
 /// this does not modify the specified node, instead it returns the node that
@@ -10072,7 +10146,8 @@ SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op) {
   assert(N->getNumOperands() == 1 && "Update with wrong number of operands");
 
   // Check to see if there is no change.
-  if (Op == N->getOperand(0)) return N;
+  if (Op == N->getOperand(0))
+    return N;
 
   // See if the modified node already exists.
   void *InsertPos = nullptr;
@@ -10089,7 +10164,8 @@ SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op) {
 
   updateDivergence(N);
   // If this gets put into a CSE map, add it.
-  if (InsertPos) CSEMap.InsertNode(N, InsertPos);
+  if (InsertPos)
+    CSEMap.InsertNode(N, InsertPos);
   return N;
 }
 
@@ -10098,7 +10174,7 @@ SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2) {
 
   // Check to see if there is no change.
   if (Op1 == N->getOperand(0) && Op2 == N->getOperand(1))
-    return N;   // No operands changed, just return the input node.
+    return N; // No operands changed, just return the input node.
 
   // See if the modified node already exists.
   void *InsertPos = nullptr;
@@ -10118,32 +10194,31 @@ SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2) {
 
   updateDivergence(N);
   // If this gets put into a CSE map, add it.
-  if (InsertPos) CSEMap.InsertNode(N, InsertPos);
+  if (InsertPos)
+    CSEMap.InsertNode(N, InsertPos);
   return N;
 }
 
-SDNode *SelectionDAG::
-UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2, SDValue Op3) {
-  SDValue Ops[] = { Op1, Op2, Op3 };
+SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2,
+                                         SDValue Op3) {
+  SDValue Ops[] = {Op1, Op2, Op3};
   return UpdateNodeOperands(N, Ops);
 }
 
-SDNode *SelectionDAG::
-UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2,
-                   SDValue Op3, SDValue Op4) {
-  SDValue Ops[] = { Op1, Op2, Op3, Op4 };
+SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2,
+                                         SDValue Op3, SDValue Op4) {
+  SDValue Ops[] = {Op1, Op2, Op3, Op4};
   return UpdateNodeOperands(N, Ops);
 }
 
-SDNode *SelectionDAG::
-UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2,
-                   SDValue Op3, SDValue Op4, SDValue Op5) {
-  SDValue Ops[] = { Op1, Op2, Op3, Op4, Op5 };
+SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, SDValue Op1, SDValue Op2,
+                                         SDValue Op3, SDValue Op4,
+                                         SDValue Op5) {
+  SDValue Ops[] = {Op1, Op2, Op3, Op4, Op5};
   return UpdateNodeOperands(N, Ops);
 }
 
-SDNode *SelectionDAG::
-UpdateNodeOperands(SDNode *N, ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::UpdateNodeOperands(SDNode *N, ArrayRef<SDValue> Ops) {
   unsigned NumOps = Ops.size();
   assert(N->getNumOperands() == NumOps &&
          "Update with wrong number of operands");
@@ -10169,7 +10244,8 @@ UpdateNodeOperands(SDNode *N, ArrayRef<SDValue> Ops) {
 
   updateDivergence(N);
   // If this gets put into a CSE map, add it.
-  if (InsertPos) CSEMap.InsertNode(N, InsertPos);
+  if (InsertPos)
+    CSEMap.InsertNode(N, InsertPos);
   return N;
 }
 
@@ -10178,7 +10254,7 @@ UpdateNodeOperands(SDNode *N, ArrayRef<SDValue> Ops) {
 void SDNode::DropOperands() {
   // Unlike the code in MorphNodeTo that does this, we don't need to
   // watch for dead nodes here.
-  for (op_iterator I = op_begin(), E = op_end(); I != E; ) {
+  for (op_iterator I = op_begin(), E = op_end(); I != E;) {
     SDUse &Use = *I++;
     Use.set(SDValue());
   }
@@ -10208,70 +10284,65 @@ void SelectionDAG::setNodeMemRefs(MachineSDNode *N,
 /// SelectNodeTo - These are wrappers around MorphNodeTo that accept a
 /// machine opcode.
 ///
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT) {
   SDVTList VTs = getVTList(VT);
   return SelectNodeTo(N, MachineOpc, VTs, std::nullopt);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT, SDValue Op1) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT,
+                                   SDValue Op1) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1 };
+  SDValue Ops[] = {Op1};
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT, SDValue Op1,
-                                   SDValue Op2) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT,
+                                   SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT, SDValue Op1,
-                                   SDValue Op2, SDValue Op3) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT,
+                                   SDValue Op1, SDValue Op2, SDValue Op3) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1, Op2, Op3 };
+  SDValue Ops[] = {Op1, Op2, Op3};
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT, ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT,
+                                   ArrayRef<SDValue> Ops) {
   SDVTList VTs = getVTList(VT);
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT1, EVT VT2, ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT1,
+                                   EVT VT2, ArrayRef<SDValue> Ops) {
   SDVTList VTs = getVTList(VT1, VT2);
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT1, EVT VT2) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT1,
+                                   EVT VT2) {
   SDVTList VTs = getVTList(VT1, VT2);
   return SelectNodeTo(N, MachineOpc, VTs, std::nullopt);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT1, EVT VT2, EVT VT3,
-                                   ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT1,
+                                   EVT VT2, EVT VT3, ArrayRef<SDValue> Ops) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   EVT VT1, EVT VT2,
-                                   SDValue Op1, SDValue Op2) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, EVT VT1,
+                                   EVT VT2, SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT1, VT2);
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   return SelectNodeTo(N, MachineOpc, VTs, Ops);
 }
 
-SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc,
-                                   SDVTList VTs,ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::SelectNodeTo(SDNode *N, unsigned MachineOpc, SDVTList VTs,
+                                   ArrayRef<SDValue> Ops) {
   SDNode *New = MorphNodeTo(N, ~MachineOpc, VTs, Ops);
   // Reset the NodeID to -1.
   New->setNodeId(-1);
@@ -10315,11 +10386,11 @@ SDNode *SelectionDAG::UpdateSDLocOnMergeSDNode(SDNode *N, const SDLoc &OLoc) {
 /// As a consequence it isn't appropriate to use from within the DAG combiner or
 /// the legalizer which maintain worklists that would need to be updated when
 /// deleting things.
-SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
-                                  SDVTList VTs, ArrayRef<SDValue> Ops) {
+SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc, SDVTList VTs,
+                                  ArrayRef<SDValue> Ops) {
   // If an identical node already exists, use it.
   void *IP = nullptr;
-  if (VTs.VTs[VTs.NumVTs-1] != MVT::Glue) {
+  if (VTs.VTs[VTs.NumVTs - 1] != MVT::Glue) {
     FoldingSetNodeID ID;
     AddNodeIDNode(ID, Opc, VTs, Ops);
     if (SDNode *ON = FindNodeOrInsertPos(ID, SDLoc(N), IP))
@@ -10336,8 +10407,8 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
 
   // Clear the operands list, updating used nodes to remove this from their
   // use list.  Keep track of any operands that become dead as a result.
-  SmallPtrSet<SDNode*, 16> DeadNodeSet;
-  for (SDNode::op_iterator I = N->op_begin(), E = N->op_end(); I != E; ) {
+  SmallPtrSet<SDNode *, 16> DeadNodeSet;
+  for (SDNode::op_iterator I = N->op_begin(), E = N->op_end(); I != E;) {
     SDUse &Use = *I++;
     SDNode *Used = Use.getNode();
     Use.set(SDValue());
@@ -10364,20 +10435,24 @@ SDNode *SelectionDAG::MorphNodeTo(SDNode *N, unsigned Opc,
   }
 
   if (IP)
-    CSEMap.InsertNode(N, IP);   // Memoize the new node.
+    CSEMap.InsertNode(N, IP); // Memoize the new node.
   return N;
 }
 
-SDNode* SelectionDAG::mutateStrictFPToFP(SDNode *Node) {
+SDNode *SelectionDAG::mutateStrictFPToFP(SDNode *Node) {
   unsigned OrigOpc = Node->getOpcode();
   unsigned NewOpc;
   switch (OrigOpc) {
   default:
     llvm_unreachable("mutateStrictFPToFP called with unexpected opcode!");
 #define DAG_INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC, DAGN)               \
-  case ISD::STRICT_##DAGN: NewOpc = ISD::DAGN; break;
+  case ISD::STRICT_##DAGN:                                                     \
+    NewOpc = ISD::DAGN;                                                        \
+    break;
 #define CMP_INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC, DAGN)               \
-  case ISD::STRICT_##DAGN: NewOpc = ISD::SETCC; break;
+  case ISD::STRICT_##DAGN:                                                     \
+    NewOpc = ISD::SETCC;                                                       \
+    break;
 #include "llvm/IR/ConstrainedOps.def"
   }
 
@@ -10425,14 +10500,14 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
 MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT, SDValue Op1) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1 };
+  SDValue Ops[] = {Op1};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
 MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT, SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10440,7 +10515,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT, SDValue Op1, SDValue Op2,
                                             SDValue Op3) {
   SDVTList VTs = getVTList(VT);
-  SDValue Ops[] = { Op1, Op2, Op3 };
+  SDValue Ops[] = {Op1, Op2, Op3};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10454,7 +10529,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT1, EVT VT2, SDValue Op1,
                                             SDValue Op2) {
   SDVTList VTs = getVTList(VT1, VT2);
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10462,7 +10537,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT1, EVT VT2, SDValue Op1,
                                             SDValue Op2, SDValue Op3) {
   SDVTList VTs = getVTList(VT1, VT2);
-  SDValue Ops[] = { Op1, Op2, Op3 };
+  SDValue Ops[] = {Op1, Op2, Op3};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10477,7 +10552,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             EVT VT1, EVT VT2, EVT VT3,
                                             SDValue Op1, SDValue Op2) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
-  SDValue Ops[] = { Op1, Op2 };
+  SDValue Ops[] = {Op1, Op2};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10486,7 +10561,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
                                             SDValue Op1, SDValue Op2,
                                             SDValue Op3) {
   SDVTList VTs = getVTList(VT1, VT2, VT3);
-  SDValue Ops[] = { Op1, Op2, Op3 };
+  SDValue Ops[] = {Op1, Op2, Op3};
   return getMachineNode(Opcode, dl, VTs, Ops);
 }
 
@@ -10507,7 +10582,7 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &dl,
 MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &DL,
                                             SDVTList VTs,
                                             ArrayRef<SDValue> Ops) {
-  bool DoCSE = VTs.VTs[VTs.NumVTs-1] != MVT::Glue;
+  bool DoCSE = VTs.VTs[VTs.NumVTs - 1] != MVT::Glue;
   MachineSDNode *N;
   void *IP = nullptr;
 
@@ -10537,8 +10612,8 @@ MachineSDNode *SelectionDAG::getMachineNode(unsigned Opcode, const SDLoc &DL,
 SDValue SelectionDAG::getTargetExtractSubreg(int SRIdx, const SDLoc &DL, EVT VT,
                                              SDValue Operand) {
   SDValue SRIdxVal = getTargetConstant(SRIdx, DL, MVT::i32);
-  SDNode *Subreg = getMachineNode(TargetOpcode::EXTRACT_SUBREG, DL,
-                                  VT, Operand, SRIdxVal);
+  SDNode *Subreg =
+      getMachineNode(TargetOpcode::EXTRACT_SUBREG, DL, VT, Operand, SRIdxVal);
   return SDValue(Subreg, 0);
 }
 
@@ -10547,8 +10622,8 @@ SDValue SelectionDAG::getTargetExtractSubreg(int SRIdx, const SDLoc &DL, EVT VT,
 SDValue SelectionDAG::getTargetInsertSubreg(int SRIdx, const SDLoc &DL, EVT VT,
                                             SDValue Operand, SDValue Subreg) {
   SDValue SRIdxVal = getTargetConstant(SRIdx, DL, MVT::i32);
-  SDNode *Result = getMachineNode(TargetOpcode::INSERT_SUBREG, DL,
-                                  VT, Operand, Subreg, SRIdxVal);
+  SDNode *Result = getMachineNode(TargetOpcode::INSERT_SUBREG, DL, VT, Operand,
+                                  Subreg, SRIdxVal);
   return SDValue(Result, 0);
 }
 
@@ -10812,8 +10887,8 @@ void SelectionDAG::salvageDebugInfo(SDNode &N) {
 }
 
 /// Creates a SDDbgLabel node.
-SDDbgLabel *SelectionDAG::getDbgLabel(DILabel *Label,
-                                      const DebugLoc &DL, unsigned O) {
+SDDbgLabel *SelectionDAG::getDbgLabel(DILabel *Label, const DebugLoc &DL,
+                                      unsigned O) {
   assert(cast<DILabel>(Label)->isValidLocationForIntrinsic(DL) &&
          "Expected inlined-at fields to agree");
   return new (DbgInfo->getAlloc()) SDDbgLabel(Label, DL, O);
@@ -10836,10 +10911,9 @@ class RAUWUpdateListener : public SelectionDAG::DAGUpdateListener {
   }
 
 public:
-  RAUWUpdateListener(SelectionDAG &d,
-                     SDNode::use_iterator &ui,
+  RAUWUpdateListener(SelectionDAG &d, SDNode::use_iterator &ui,
                      SDNode::use_iterator &ue)
-    : SelectionDAG::DAGUpdateListener(d), UI(ui), UE(ue) {}
+      : SelectionDAG::DAGUpdateListener(d), UI(ui), UE(ue) {}
 };
 
 } // end anonymous namespace
@@ -10961,7 +11035,7 @@ void SelectionDAG::ReplaceAllUsesWith(SDNode *From, SDNode *To) {
 /// This version can replace From with any result values.  To must match the
 /// number and types of values returned by From.
 void SelectionDAG::ReplaceAllUsesWith(SDNode *From, const SDValue *To) {
-  if (From->getNumValues() == 1)  // Handle the simple case efficiently.
+  if (From->getNumValues() == 1) // Handle the simple case efficiently.
     return ReplaceAllUsesWith(SDValue(From, 0), To[0]);
 
   for (unsigned i = 0, e = From->getNumValues(); i != e; ++i) {
@@ -11010,9 +11084,10 @@ void SelectionDAG::ReplaceAllUsesWith(SDNode *From, const SDValue *To) {
 /// ReplaceAllUsesOfValueWith - Replace any uses of From with To, leaving
 /// uses of other values produced by From.getNode() alone.  The Deleted
 /// vector is handled the same way as for ReplaceAllUsesWith.
-void SelectionDAG::ReplaceAllUsesOfValueWith(SDValue From, SDValue To){
+void SelectionDAG::ReplaceAllUsesOfValueWith(SDValue From, SDValue To) {
   // Handle the really simple, really trivial case efficiently.
-  if (From == To) return;
+  if (From == To)
+    return;
 
   // Handle the simple, trivial, case efficiently.
   if (From.getNode()->getNumValues() == 1) {
@@ -11169,8 +11244,7 @@ void SelectionDAG::VerifyDAGDivergence() {
 /// may appear in both the From and To list.  The Deleted vector is
 /// handled the same way as for ReplaceAllUsesWith.
 void SelectionDAG::ReplaceAllUsesOfValuesWith(const SDValue *From,
-                                              const SDValue *To,
-                                              unsigned Num){
+                                              const SDValue *To, unsigned Num) {
   // Handle the simple, trivial case efficiently.
   if (Num == 1)
     return ReplaceAllUsesOfValueWith(*From, *To);
@@ -11186,10 +11260,11 @@ void SelectionDAG::ReplaceAllUsesOfValuesWith(const SDValue *From,
     unsigned FromResNo = From[i].getResNo();
     SDNode *FromNode = From[i].getNode();
     for (SDNode::use_iterator UI = FromNode->use_begin(),
-         E = FromNode->use_end(); UI != E; ++UI) {
+                              E = FromNode->use_end();
+         UI != E; ++UI) {
       SDUse &Use = UI.getUse();
       if (Use.getResNo() == FromResNo) {
-        UseMemo Memo = { *UI, i, &Use };
+        UseMemo Memo = {*UI, i, &Use};
         Uses.push_back(Memo);
       }
     }
@@ -11200,7 +11275,7 @@ void SelectionDAG::ReplaceAllUsesOfValuesWith(const SDValue *From,
   RAUOVWUpdateListener Listener(*this, Uses);
 
   for (unsigned UseIndex = 0, UseIndexEnd = Uses.size();
-       UseIndex != UseIndexEnd; ) {
+       UseIndex != UseIndexEnd;) {
     // We know that this user uses some value of From.  If it is the right
     // value, update it.
     SDNode *User = Uses[UseIndex].User;
@@ -11295,7 +11370,8 @@ unsigned SelectionDAG::AssignTopologicalOrder() {
       allnodes_iterator I(N);
       SDNode *S = &*++I;
       dbgs() << "Overran sorted position:\n";
-      S->dumprFull(this); dbgs() << "\n";
+      S->dumprFull(this);
+      dbgs() << "\n";
       dbgs() << "Checking if this is due to cycles\n";
       checkForCycles(this, true);
 #endif
@@ -11303,15 +11379,14 @@ unsigned SelectionDAG::AssignTopologicalOrder() {
     }
   }
 
-  assert(SortedPos == AllNodes.end() &&
-         "Topological sort incomplete!");
+  assert(SortedPos == AllNodes.end() && "Topological sort incomplete!");
   assert(AllNodes.front().getOpcode() == ISD::EntryToken &&
          "First node in topological sort is not the entry token!");
   assert(AllNodes.front().getNodeId() == 0 &&
          "First node in topological sort has non-zero id!");
   assert(AllNodes.front().getNumOperands() == 0 &&
          "First node in topological sort has operands!");
-  assert(AllNodes.back().getNodeId() == (int)DAGSize-1 &&
+  assert(AllNodes.back().getNodeId() == (int)DAGSize - 1 &&
          "Last node in topologic sort has unexpected id!");
   assert(AllNodes.back().use_empty() &&
          "Last node in topologic sort has users!");
@@ -11368,10 +11443,11 @@ SDValue SelectionDAG::getSymbolFunctionGlobalAddress(SDValue Op,
   auto *Function = Module->getFunction(Symbol);
 
   if (OutFunction != nullptr)
-      *OutFunction = Function;
+    *OutFunction = Function;
 
   if (Function != nullptr) {
-    auto PtrTy = TLI->getPointerTy(getDataLayout(), Function->getAddressSpace());
+    auto PtrTy =
+        TLI->getPointerTy(getDataLayout(), Function->getAddressSpace());
     return getGlobalAddress(Function, SDLoc(Op), PtrTy);
   }
 
@@ -11457,11 +11533,9 @@ bool llvm::isNeutralConstant(unsigned Opcode, SDNodeFlags Flags, SDValue V,
       // Neutral element for fminnum is NaN, Inf or FLT_MAX, depending on FMF.
       EVT VT = V.getValueType();
       const fltSemantics &Semantics = SelectionDAG::EVTToAPFloatSemantics(VT);
-      APFloat NeutralAF = !Flags.hasNoNaNs()
-                              ? APFloat::getQNaN(Semantics)
-                              : !Flags.hasNoInfs()
-                                    ? APFloat::getInf(Semantics)
-                                    : APFloat::getLargest(Semantics);
+      APFloat NeutralAF = !Flags.hasNoNaNs()   ? APFloat::getQNaN(Semantics)
+                          : !Flags.hasNoInfs() ? APFloat::getInf(Semantics)
+                                               : APFloat::getLargest(Semantics);
       if (Opcode == ISD::FMAXNUM)
         NeutralAF.changeSign();
 
@@ -11602,9 +11676,7 @@ bool llvm::isAllOnesOrAllOnesSplat(SDValue N, bool AllowUndefs) {
   return C && C->isAllOnes() && C->getValueSizeInBits(0) == BitWidth;
 }
 
-HandleSDNode::~HandleSDNode() {
-  DropOperands();
-}
+HandleSDNode::~HandleSDNode() { DropOperands(); }
 
 GlobalAddressSDNode::GlobalAddressSDNode(unsigned Opc, unsigned Order,
                                          const DebugLoc &DL,
@@ -11638,21 +11710,19 @@ MemSDNode::MemSDNode(unsigned Opc, unsigned Order, const DebugLoc &dl,
 
 /// Profile - Gather unique data for the node.
 ///
-void SDNode::Profile(FoldingSetNodeID &ID) const {
-  AddNodeIDNode(ID, this);
-}
+void SDNode::Profile(FoldingSetNodeID &ID) const { AddNodeIDNode(ID, this); }
 
 namespace {
 
-  struct EVTArray {
-    std::vector<EVT> VTs;
+struct EVTArray {
+  std::vector<EVT> VTs;
 
-    EVTArray() {
-      VTs.reserve(MVT::VALUETYPE_SIZE);
-      for (unsigned i = 0; i < MVT::VALUETYPE_SIZE; ++i)
-        VTs.push_back(MVT((MVT::SimpleValueType)i));
-    }
-  };
+  EVTArray() {
+    VTs.reserve(MVT::VALUETYPE_SIZE);
+    for (unsigned i = 0; i < MVT::VALUETYPE_SIZE; ++i)
+      VTs.push_back(MVT((MVT::SimpleValueType)i));
+  }
+};
 
 } // end anonymous namespace
 
@@ -11750,11 +11820,13 @@ bool SDNode::isOperandOf(const SDNode *N) const {
 /// constraint is necessary to allow transformations like splitting loads.
 bool SDValue::reachesChainWithoutSideEffects(SDValue Dest,
                                              unsigned Depth) const {
-  if (*this == Dest) return true;
+  if (*this == Dest)
+    return true;
 
   // Don't search too deeply, we just want to be able to see through
   // TokenFactor's etc.
-  if (Depth == 0) return false;
+  if (Depth == 0)
+    return false;
 
   // If this is a token factor, all inputs to the TF happen in parallel.
   if (getOpcode() == ISD::TokenFactor) {
@@ -11781,7 +11853,7 @@ bool SDValue::reachesChainWithoutSideEffects(SDValue Dest,
   // Loads don't have side effects, look through them.
   if (LoadSDNode *Ld = dyn_cast<LoadSDNode>(*this)) {
     if (Ld->isUnordered())
-      return Ld->getChain().reachesChainWithoutSideEffects(Dest, Depth-1);
+      return Ld->getChain().reachesChainWithoutSideEffects(Dest, Depth - 1);
   }
   return false;
 }
@@ -11962,7 +12034,7 @@ SDValue SelectionDAG::UnrollVectorOp(SDNode *N, unsigned ResNE) {
   SmallVector<SDValue, 4> Operands(N->getNumOperands());
 
   unsigned i;
-  for (i= 0; i != NE; ++i) {
+  for (i = 0; i != NE; ++i) {
     for (unsigned j = 0, e = N->getNumOperands(); j != e; ++j) {
       SDValue Operand = N->getOperand(j);
       EVT OperandVT = Operand.getValueType();
@@ -11979,8 +12051,8 @@ SDValue SelectionDAG::UnrollVectorOp(SDNode *N, unsigned ResNE) {
 
     switch (N->getOpcode()) {
     default: {
-      Scalars.push_back(getNode(N->getOpcode(), dl, EltVT, Operands,
-                                N->getFlags()));
+      Scalars.push_back(
+          getNode(N->getOpcode(), dl, EltVT, Operands, N->getFlags()));
       break;
     }
     case ISD::VSELECT:
@@ -11991,15 +12063,14 @@ SDValue SelectionDAG::UnrollVectorOp(SDNode *N, unsigned ResNE) {
     case ISD::SRL:
     case ISD::ROTL:
     case ISD::ROTR:
-      Scalars.push_back(getNode(N->getOpcode(), dl, EltVT, Operands[0],
-                               getShiftAmountOperand(Operands[0].getValueType(),
-                                                     Operands[1])));
+      Scalars.push_back(getNode(
+          N->getOpcode(), dl, EltVT, Operands[0],
+          getShiftAmountOperand(Operands[0].getValueType(), Operands[1])));
       break;
     case ISD::SIGN_EXTEND_INREG: {
       EVT ExtVT = cast<VTSDNode>(Operands[1])->getVT().getVectorElementType();
-      Scalars.push_back(getNode(N->getOpcode(), dl, EltVT,
-                                Operands[0],
-                                getValueType(ExtVT)));
+      Scalars.push_back(
+          getNode(N->getOpcode(), dl, EltVT, Operands[0], getValueType(ExtVT)));
     }
     }
   }
@@ -12011,8 +12082,8 @@ SDValue SelectionDAG::UnrollVectorOp(SDNode *N, unsigned ResNE) {
   return getBuildVector(VecVT, dl, Scalars);
 }
 
-std::pair<SDValue, SDValue> SelectionDAG::UnrollVectorOverflowOp(
-    SDNode *N, unsigned ResNE) {
+std::pair<SDValue, SDValue>
+SelectionDAG::UnrollVectorOverflowOp(SDNode *N, unsigned ResNE) {
   unsigned Opcode = N->getOpcode();
   assert((Opcode == ISD::UADDO || Opcode == ISD::SADDO ||
           Opcode == ISD::USUBO || Opcode == ISD::SSUBO ||
@@ -12043,10 +12114,9 @@ std::pair<SDValue, SDValue> SelectionDAG::UnrollVectorOverflowOp(
   SmallVector<SDValue, 8> OvScalars;
   for (unsigned i = 0; i < NE; ++i) {
     SDValue Res = getNode(Opcode, dl, VTs, LHSScalars[i], RHSScalars[i]);
-    SDValue Ov =
-        getSelect(dl, OvEltVT, Res.getValue(1),
-                  getBoolConstant(true, dl, OvEltVT, ResVT),
-                  getConstant(0, dl, OvEltVT));
+    SDValue Ov = getSelect(dl, OvEltVT, Res.getValue(1),
+                           getBoolConstant(true, dl, OvEltVT, ResVT),
+                           getConstant(0, dl, OvEltVT));
 
     ResScalars.push_back(Res);
     OvScalars.push_back(Ov);
@@ -12184,9 +12254,10 @@ SelectionDAG::GetDependentSplitDestVTs(const EVT &VT, const EVT &EnvVT,
 
 /// SplitVector - Split the vector with EXTRACT_SUBVECTOR and return the
 /// low/high part.
-std::pair<SDValue, SDValue>
-SelectionDAG::SplitVector(const SDValue &N, const SDLoc &DL, const EVT &LoVT,
-                          const EVT &HiVT) {
+std::pair<SDValue, SDValue> SelectionDAG::SplitVector(const SDValue &N,
+                                                      const SDLoc &DL,
+                                                      const EVT &LoVT,
+                                                      const EVT &HiVT) {
   assert(LoVT.isScalableVector() == HiVT.isScalableVector() &&
          LoVT.isScalableVector() == N.getValueType().isScalableVector() &&
          "Splitting vector with an invalid mixture of fixed and scalable "
@@ -12505,8 +12576,7 @@ void BuildVectorSDNode::recastRawBits(bool IsLittleEndian,
   unsigned SrcEltSizeInBits = SrcBitElements[0].getBitWidth();
   assert(((NumSrcOps * SrcEltSizeInBits) % DstEltSizeInBits) == 0 &&
          "Invalid bitcast scale");
-  assert(NumSrcOps == SrcUndefElements.size() &&
-         "Vector size mismatch");
+  assert(NumSrcOps == SrcUndefElements.size() && "Vector size mismatch");
 
   unsigned NumDstOps = (NumSrcOps * SrcEltSizeInBits) / DstEltSizeInBits;
   DstUndefElements.clear();
@@ -12616,8 +12686,7 @@ SDNode *SelectionDAG::isConstantIntBuildVectorOrConstantInt(SDValue N) const {
   // Treat a GlobalAddress supporting constant offset folding as a
   // constant integer.
   if (GlobalAddressSDNode *GA = dyn_cast<GlobalAddressSDNode>(N))
-    if (GA->getOpcode() == ISD::GlobalAddress &&
-        TLI->isOffsetFoldingLegal(GA))
+    if (GA->getOpcode() == ISD::GlobalAddress && TLI->isOffsetFoldingLegal(GA))
       return GA;
   if ((N.getOpcode() == ISD::SPLAT_VECTOR) &&
       isa<ConstantSDNode>(N.getOperand(0)))
@@ -12652,7 +12721,8 @@ void SelectionDAG::createOperands(SDNode *Node, ArrayRef<SDValue> Vals) {
   for (unsigned I = 0; I != Vals.size(); ++I) {
     Ops[I].setUser(Node);
     Ops[I].setInitial(Vals[I]);
-    if (Ops[I].Val.getValueType() != MVT::Other) // Skip Chain. It does not carry divergence.
+    if (Ops[I].Val.getValueType() !=
+        MVT::Other) // Skip Chain. It does not carry divergence.
       IsDivergent |= Ops[I].getNode()->isDivergent();
   }
   Node->NumOperands = Vals.size();
@@ -12704,9 +12774,9 @@ SDValue SelectionDAG::getNeutralElement(unsigned Opcode, const SDLoc &DL,
   case ISD::FMAXNUM: {
     // Neutral element for fminnum is NaN, Inf or FLT_MAX, depending on FMF.
     const fltSemantics &Semantics = EVTToAPFloatSemantics(VT);
-    APFloat NeutralAF = !Flags.hasNoNaNs() ? APFloat::getQNaN(Semantics) :
-                        !Flags.hasNoInfs() ? APFloat::getInf(Semantics) :
-                        APFloat::getLargest(Semantics);
+    APFloat NeutralAF = !Flags.hasNoNaNs()   ? APFloat::getQNaN(Semantics)
+                        : !Flags.hasNoInfs() ? APFloat::getInf(Semantics)
+                                             : APFloat::getLargest(Semantics);
     if (Opcode == ISD::FMAXNUM)
       NeutralAF.changeSign();
 
@@ -12723,7 +12793,6 @@ SDValue SelectionDAG::getNeutralElement(unsigned Opcode, const SDLoc &DL,
 
     return getConstantFP(NeutralAF, DL, VT);
   }
-
   }
 }
 
@@ -12855,8 +12924,8 @@ void SelectionDAG::copyExtraInfo(SDNode *From, SDNode *To) {
 
 #ifndef NDEBUG
 static void checkForCyclesHelper(const SDNode *N,
-                                 SmallPtrSetImpl<const SDNode*> &Visited,
-                                 SmallPtrSetImpl<const SDNode*> &Checked,
+                                 SmallPtrSetImpl<const SDNode *> &Visited,
+                                 SmallPtrSetImpl<const SDNode *> &Checked,
                                  const llvm::SelectionDAG *DAG) {
   // If this node has already been checked, don't check it again.
   if (Checked.count(N))
@@ -12867,7 +12936,8 @@ static void checkForCyclesHelper(const SDNode *N,
   if (!Visited.insert(N).second) {
     errs() << "Detected cycle in SelectionDAG\n";
     dbgs() << "Offending node:\n";
-    N->dumprFull(DAG); dbgs() << "\n";
+    N->dumprFull(DAG);
+    dbgs() << "\n";
     abort();
   }
 
@@ -12879,21 +12949,20 @@ static void checkForCyclesHelper(const SDNode *N,
 }
 #endif
 
-void llvm::checkForCycles(const llvm::SDNode *N,
-                          const llvm::SelectionDAG *DAG,
+void llvm::checkForCycles(const llvm::SDNode *N, const llvm::SelectionDAG *DAG,
                           bool force) {
 #ifndef NDEBUG
   bool check = force;
 #ifdef EXPENSIVE_CHECKS
   check = true;
-#endif  // EXPENSIVE_CHECKS
+#endif // EXPENSIVE_CHECKS
   if (check) {
     assert(N && "Checking nonexistent SDNode");
-    SmallPtrSet<const SDNode*, 32> visited;
-    SmallPtrSet<const SDNode*, 32> checked;
+    SmallPtrSet<const SDNode *, 32> visited;
+    SmallPtrSet<const SDNode *, 32> checked;
     checkForCyclesHelper(N, visited, checked, DAG);
   }
-#endif  // !NDEBUG
+#endif // !NDEBUG
 }
 
 void llvm::checkForCycles(const llvm::SelectionDAG *DAG, bool force) {
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGAddressAnalysis.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGAddressAnalysis.cpp
index 39a1e09e83c5940..0d50c9a9ded7220 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGAddressAnalysis.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGAddressAnalysis.cpp
@@ -130,8 +130,9 @@ bool BaseIndexOffset::computeAliasing(const SDNode *Op0,
       MachineFrameInfo &MFI = DAG.getMachineFunction().getFrameInfo();
       // If the bases are the same frame index but we couldn't find a
       // constant offset (the indices are different), be conservative.
-      if (A->getIndex() != B->getIndex() && (!MFI.isFixedObjectIndex(A->getIndex()) ||
-                     !MFI.isFixedObjectIndex(B->getIndex()))) {
+      if (A->getIndex() != B->getIndex() &&
+          (!MFI.isFixedObjectIndex(A->getIndex()) ||
+           !MFI.isFixedObjectIndex(B->getIndex()))) {
         IsAlias = false;
         return true;
       }
@@ -244,7 +245,8 @@ static BaseIndexOffset matchLSNode(const LSBaseSDNode *N,
             Offset -= Off;
           else
             Offset += Off;
-          Base = DAG.getTargetLoweringInfo().unwrapAddress(LSBase->getBasePtr());
+          Base =
+              DAG.getTargetLoweringInfo().unwrapAddress(LSBase->getBasePtr());
           continue;
         }
       break;
@@ -308,11 +310,9 @@ BaseIndexOffset BaseIndexOffset::match(const SDNode *N,
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 
-LLVM_DUMP_METHOD void BaseIndexOffset::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void BaseIndexOffset::dump() const { print(dbgs()); }
 
-void BaseIndexOffset::print(raw_ostream& OS) const {
+void BaseIndexOffset::print(raw_ostream &OS) const {
   OS << "BaseIndexOffset base=[";
   Base->print(OS);
   OS << "] index=[";
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
index 5a227ba398e1c11..e749aeb2c23455d 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
@@ -186,17 +186,18 @@ getCopyFromParts(SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts,
       // Assemble the power of 2 part.
       unsigned RoundParts = llvm::bit_floor(NumParts);
       unsigned RoundBits = PartBits * RoundParts;
-      EVT RoundVT = RoundBits == ValueBits ?
-        ValueVT : EVT::getIntegerVT(*DAG.getContext(), RoundBits);
+      EVT RoundVT = RoundBits == ValueBits
+                        ? ValueVT
+                        : EVT::getIntegerVT(*DAG.getContext(), RoundBits);
       SDValue Lo, Hi;
 
-      EVT HalfVT = EVT::getIntegerVT(*DAG.getContext(), RoundBits/2);
+      EVT HalfVT = EVT::getIntegerVT(*DAG.getContext(), RoundBits / 2);
 
       if (RoundParts > 2) {
-        Lo = getCopyFromParts(DAG, DL, Parts, RoundParts / 2,
+        Lo =
+            getCopyFromParts(DAG, DL, Parts, RoundParts / 2, PartVT, HalfVT, V);
+        Hi = getCopyFromParts(DAG, DL, Parts + RoundParts / 2, RoundParts / 2,
                               PartVT, HalfVT, V);
-        Hi = getCopyFromParts(DAG, DL, Parts + RoundParts / 2,
-                              RoundParts / 2, PartVT, HalfVT, V);
       } else {
         Lo = DAG.getNode(ISD::BITCAST, DL, HalfVT, Parts[0]);
         Hi = DAG.getNode(ISD::BITCAST, DL, HalfVT, Parts[1]);
@@ -258,7 +259,7 @@ getCopyFromParts(SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts,
       ValueVT.bitsLT(PartEVT)) {
     // For an FP value in an integer part, we need to truncate to the right
     // width first.
-    PartEVT = EVT::getIntegerVT(*DAG.getContext(),  ValueVT.getSizeInBits());
+    PartEVT = EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits());
     Val = DAG.getNode(ISD::TRUNCATE, DL, PartEVT, Val);
   }
 
@@ -273,8 +274,8 @@ getCopyFromParts(SelectionDAG &DAG, const SDLoc &DL, const SDValue *Parts,
       // indicate whether the truncated bits will always be
       // zero or sign-extension.
       if (AssertOp)
-        Val = DAG.getNode(*AssertOp, DL, PartEVT, Val,
-                          DAG.getValueType(ValueVT));
+        Val =
+            DAG.getNode(*AssertOp, DL, PartEVT, Val, DAG.getValueType(ValueVT));
       return DAG.getNode(ISD::TRUNCATE, DL, ValueVT, Val);
     }
     return DAG.getNode(ISD::ANY_EXTEND, DL, ValueVT, Val);
@@ -352,7 +353,7 @@ static SDValue getCopyFromPartsVector(SelectionDAG &DAG, const SDLoc &DL,
     NumParts = NumRegs; // Silence a compiler warning.
     assert(RegisterVT == PartVT && "Part type doesn't match vector breakdown!");
     assert(RegisterVT.getSizeInBits() ==
-           Parts[0].getSimpleValueType().getSizeInBits() &&
+               Parts[0].getSimpleValueType().getSizeInBits() &&
            "Part type sizes don't match!");
 
     // Assemble the parts into intermediate operands.
@@ -361,8 +362,8 @@ static SDValue getCopyFromPartsVector(SelectionDAG &DAG, const SDLoc &DL,
       // If the register was not expanded, truncate or copy the value,
       // as appropriate.
       for (unsigned i = 0; i != NumParts; ++i)
-        Ops[i] = getCopyFromParts(DAG, DL, &Parts[i], 1,
-                                  PartVT, IntermediateVT, V, CallConv);
+        Ops[i] = getCopyFromParts(DAG, DL, &Parts[i], 1, PartVT, IntermediateVT,
+                                  V, CallConv);
     } else if (NumParts > 0) {
       // If the intermediate type was expanded, build the intermediate
       // operands from the parts.
@@ -370,8 +371,8 @@ static SDValue getCopyFromPartsVector(SelectionDAG &DAG, const SDLoc &DL,
              "Must expand into a divisible number of parts!");
       unsigned Factor = NumParts / NumIntermediates;
       for (unsigned i = 0; i != NumIntermediates; ++i)
-        Ops[i] = getCopyFromParts(DAG, DL, &Parts[i * Factor], Factor,
-                                  PartVT, IntermediateVT, V, CallConv);
+        Ops[i] = getCopyFromParts(DAG, DL, &Parts[i * Factor], Factor, PartVT,
+                                  IntermediateVT, V, CallConv);
     }
 
     // Build a vector with BUILD_VECTOR or CONCAT_VECTORS from the
@@ -435,21 +436,21 @@ static SDValue getCopyFromPartsVector(SelectionDAG &DAG, const SDLoc &DL,
     return DAG.getNode(ISD::BITCAST, DL, ValueVT, Val);
 
   if (ValueVT.getVectorNumElements() != 1) {
-     // Certain ABIs require that vectors are passed as integers. For vectors
-     // are the same size, this is an obvious bitcast.
-     if (ValueVT.getSizeInBits() == PartEVT.getSizeInBits()) {
-       return DAG.getNode(ISD::BITCAST, DL, ValueVT, Val);
-     } else if (ValueVT.bitsLT(PartEVT)) {
-       const uint64_t ValueSize = ValueVT.getFixedSizeInBits();
-       EVT IntermediateType = EVT::getIntegerVT(*DAG.getContext(), ValueSize);
-       // Drop the extra bits.
-       Val = DAG.getNode(ISD::TRUNCATE, DL, IntermediateType, Val);
-       return DAG.getBitcast(ValueVT, Val);
-     }
-
-     diagnosePossiblyInvalidConstraint(
-         *DAG.getContext(), V, "non-trivial scalar-to-vector conversion");
-     return DAG.getUNDEF(ValueVT);
+    // Certain ABIs require that vectors are passed as integers. For vectors
+    // of the same size, this is an obvious bitcast.
+    if (ValueVT.getSizeInBits() == PartEVT.getSizeInBits()) {
+      return DAG.getNode(ISD::BITCAST, DL, ValueVT, Val);
+    } else if (ValueVT.bitsLT(PartEVT)) {
+      const uint64_t ValueSize = ValueVT.getFixedSizeInBits();
+      EVT IntermediateType = EVT::getIntegerVT(*DAG.getContext(), ValueSize);
+      // Drop the extra bits.
+      Val = DAG.getNode(ISD::TRUNCATE, DL, IntermediateType, Val);
+      return DAG.getBitcast(ValueVT, Val);
+    }
+
+    diagnosePossiblyInvalidConstraint(
+        *DAG.getContext(), V, "non-trivial scalar-to-vector conversion");
+    return DAG.getUNDEF(ValueVT);
   }
 
   // Handle cases such as i8 -> <1 x i1>
@@ -526,12 +527,11 @@ getCopyToParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
       if (ValueVT.isFloatingPoint()) {
         // FP values need to be bitcast, then extended if they are being put
         // into a larger container.
-        ValueVT = EVT::getIntegerVT(*DAG.getContext(),  ValueVT.getSizeInBits());
+        ValueVT = EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits());
         Val = DAG.getNode(ISD::BITCAST, DL, ValueVT, Val);
       }
       assert((PartVT.isInteger() || PartVT == MVT::x86mmx) &&
-             ValueVT.isInteger() &&
-             "Unknown mismatch!");
+             ValueVT.isInteger() && "Unknown mismatch!");
       ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
       Val = DAG.getNode(ExtendKind, DL, ValueVT, Val);
       if (PartVT == MVT::x86mmx)
@@ -544,8 +544,7 @@ getCopyToParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
   } else if (NumParts * PartBits < ValueVT.getSizeInBits()) {
     // If the parts cover less bits than value has, truncate the value.
     assert((PartVT.isInteger() || PartVT == MVT::x86mmx) &&
-           ValueVT.isInteger() &&
-           "Unknown mismatch!");
+           ValueVT.isInteger() && "Unknown mismatch!");
     ValueVT = EVT::getIntegerVT(*DAG.getContext(), NumParts * PartBits);
     Val = DAG.getNode(ISD::TRUNCATE, DL, ValueVT, Val);
     if (PartVT == MVT::x86mmx)
@@ -576,8 +575,9 @@ getCopyToParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
     unsigned RoundParts = llvm::bit_floor(NumParts);
     unsigned RoundBits = RoundParts * PartBits;
     unsigned OddParts = NumParts - RoundParts;
-    SDValue OddVal = DAG.getNode(ISD::SRL, DL, ValueVT, Val,
-      DAG.getShiftAmountConstant(RoundBits, ValueVT, DL));
+    SDValue OddVal =
+        DAG.getNode(ISD::SRL, DL, ValueVT, Val,
+                    DAG.getShiftAmountConstant(RoundBits, ValueVT, DL));
 
     getCopyToParts(DAG, DL, OddVal, Parts + RoundParts, OddParts, PartVT, V,
                    CallConv);
@@ -593,22 +593,21 @@ getCopyToParts(SelectionDAG &DAG, const SDLoc &DL, SDValue Val, SDValue *Parts,
 
   // The number of parts is a power of 2.  Repeatedly bisect the value using
   // EXTRACT_ELEMENT.
-  Parts[0] = DAG.getNode(ISD::BITCAST, DL,
-                         EVT::getIntegerVT(*DAG.getContext(),
-                                           ValueVT.getSizeInBits()),
-                         Val);
+  Parts[0] = DAG.getNode(
+      ISD::BITCAST, DL,
+      EVT::getIntegerVT(*DAG.getContext(), ValueVT.getSizeInBits()), Val);
 
   for (unsigned StepSize = NumParts; StepSize > 1; StepSize /= 2) {
     for (unsigned i = 0; i < NumParts; i += StepSize) {
       unsigned ThisBits = StepSize * PartBits / 2;
       EVT ThisVT = EVT::getIntegerVT(*DAG.getContext(), ThisBits);
       SDValue &Part0 = Parts[i];
-      SDValue &Part1 = Parts[i+StepSize/2];
+      SDValue &Part1 = Parts[i + StepSize / 2];
 
-      Part1 = DAG.getNode(ISD::EXTRACT_ELEMENT, DL,
-                          ThisVT, Part0, DAG.getIntPtrConstant(1, DL));
-      Part0 = DAG.getNode(ISD::EXTRACT_ELEMENT, DL,
-                          ThisVT, Part0, DAG.getIntPtrConstant(0, DL));
+      Part1 = DAG.getNode(ISD::EXTRACT_ELEMENT, DL, ThisVT, Part0,
+                          DAG.getIntPtrConstant(1, DL));
+      Part0 = DAG.getNode(ISD::EXTRACT_ELEMENT, DL, ThisVT, Part0,
+                          DAG.getIntPtrConstant(0, DL));
 
       if (ThisBits == PartBits && ThisVT != PartVT) {
         Part0 = DAG.getNode(ISD::BITCAST, DL, PartVT, Part0);
@@ -872,9 +871,9 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG,
     for (unsigned i = 0; i != NumRegs; ++i) {
       SDValue P;
       if (!Glue) {
-        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT);
+        P = DAG.getCopyFromReg(Chain, dl, Regs[Part + i], RegisterVT);
       } else {
-        P = DAG.getCopyFromReg(Chain, dl, Regs[Part+i], RegisterVT, *Glue);
+        P = DAG.getCopyFromReg(Chain, dl, Regs[Part + i], RegisterVT, *Glue);
         *Glue = P.getValue(2);
       }
 
@@ -888,7 +887,7 @@ SDValue RegsForValue::getCopyFromRegs(SelectionDAG &DAG,
         continue;
 
       const FunctionLoweringInfo::LiveOutInfo *LOI =
-        FuncInfo.GetLiveOutRegInfo(Regs[Part+i]);
+          FuncInfo.GetLiveOutRegInfo(Regs[Part + i]);
       if (!LOI)
         continue;
 
@@ -984,7 +983,7 @@ void RegsForValue::getCopyToRegs(SDValue Val, SelectionDAG &DAG,
     // c3     = TokenFactor c1, c2
     // ...
     //        = op c3, ..., f2
-    Chain = Chains[NumRegs-1];
+    Chain = Chains[NumRegs - 1];
   else
     Chain = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Chains);
 }
@@ -1032,8 +1031,8 @@ void RegsForValue::AddInlineAsmOperands(InlineAsm::Kind Code, bool HasMatching,
 
   for (unsigned Value = 0, Reg = 0, e = ValueVTs.size(); Value != e; ++Value) {
     MVT RegisterVT = RegVTs[Value];
-    unsigned NumRegs = TLI.getNumRegisters(*DAG.getContext(), ValueVTs[Value],
-                                           RegisterVT);
+    unsigned NumRegs =
+        TLI.getNumRegisters(*DAG.getContext(), ValueVTs[Value], RegisterVT);
     for (unsigned i = 0; i != NumRegs; ++i) {
       assert(Reg < Regs.size() && "Mismatch in # registers expected");
       unsigned TheReg = Regs[Reg++];
@@ -1101,7 +1100,7 @@ SDValue SelectionDAGBuilder::updateRoot(SmallVectorImpl<SDValue> &Pending) {
     for (; i != e; ++i) {
       assert(Pending[i].getNode()->getNumOperands() > 1);
       if (Pending[i].getNode()->getOperand(0) == Root)
-        break;  // Don't add the root if we already indirectly depend on it.
+        break; // Don't add the root if we already indirectly depend on it.
     }
 
     if (i == e)
@@ -1126,11 +1125,9 @@ SDValue SelectionDAGBuilder::getRoot() {
   // Chain up all pending constrained intrinsics together with all
   // pending loads, by simply appending them to PendingLoads and
   // then calling getMemoryRoot().
-  PendingLoads.reserve(PendingLoads.size() +
-                       PendingConstrainedFP.size() +
+  PendingLoads.reserve(PendingLoads.size() + PendingConstrainedFP.size() +
                        PendingConstrainedFPStrict.size());
-  PendingLoads.append(PendingConstrainedFP.begin(),
-                      PendingConstrainedFP.end());
+  PendingLoads.append(PendingConstrainedFP.begin(), PendingConstrainedFP.end());
   PendingLoads.append(PendingConstrainedFPStrict.begin(),
                       PendingConstrainedFPStrict.end());
   PendingConstrainedFP.clear();
@@ -1220,10 +1217,13 @@ void SelectionDAGBuilder::visit(unsigned Opcode, const User &I) {
   // Note: this doesn't use InstVisitor, because it has to work with
   // ConstantExpr's in addition to instructions.
   switch (Opcode) {
-  default: llvm_unreachable("Unknown instruction type encountered!");
+  default:
+    llvm_unreachable("Unknown instruction type encountered!");
     // Build the switch statement using the Instruction.def file.
-#define HANDLE_INST(NUM, OPCODE, CLASS) \
-    case Instruction::OPCODE: visit##OPCODE((const CLASS&)I); break;
+#define HANDLE_INST(NUM, OPCODE, CLASS)                                        \
+  case Instruction::OPCODE:                                                    \
+    visit##OPCODE((const CLASS &)I);                                           \
+    break;
 #include "llvm/IR/Instruction.def"
   }
 }
@@ -1588,8 +1588,8 @@ SDValue SelectionDAGBuilder::getCopyFromRegs(const Value *V, Type *Ty) {
                      DAG.getDataLayout(), InReg, Ty,
                      std::nullopt); // This is not an ABI copy.
     SDValue Chain = DAG.getEntryNode();
-    Result = RFV.getCopyFromRegs(DAG, FuncInfo, getCurSDLoc(), Chain, nullptr,
-                                 V);
+    Result =
+        RFV.getCopyFromRegs(DAG, FuncInfo, getCurSDLoc(), Chain, nullptr, V);
     resolveDanglingDebugInfo(V, Result);
   }
 
@@ -1602,7 +1602,8 @@ SDValue SelectionDAGBuilder::getValue(const Value *V) {
   // to do this first, so that we don't create a CopyFromReg if we already
   // have a regular SDValue.
   SDValue &N = NodeMap[V];
-  if (N.getNode()) return N;
+  if (N.getNode())
+    return N;
 
   // If there's a virtual register allocated and initialized for this
   // value, use it.
@@ -1680,7 +1681,8 @@ SDValue SelectionDAGBuilder::getValueImpl(const Value *V) {
       for (const Use &U : C->operands()) {
         SDNode *Val = getValue(U).getNode();
         // If the operand is an empty aggregate, there are no values.
-        if (!Val) continue;
+        if (!Val)
+          continue;
         // Add each leaf value from the operand to the Constants list
         // to form a flattened list of all the values.
         for (unsigned i = 0, e = Val->getNumValues(); i != e; ++i)
@@ -1691,7 +1693,7 @@ SDValue SelectionDAGBuilder::getValueImpl(const Value *V) {
     }
 
     if (const ConstantDataSequential *CDS =
-          dyn_cast<ConstantDataSequential>(C)) {
+            dyn_cast<ConstantDataSequential>(C)) {
       SmallVector<SDValue, 4> Ops;
       for (unsigned i = 0, e = CDS->getNumElements(); i != e; ++i) {
         SDNode *Val = getValue(CDS->getElementAsConstant(i)).getNode();
@@ -1770,8 +1772,8 @@ SDValue SelectionDAGBuilder::getValueImpl(const Value *V) {
   // If this is a static alloca, generate it as the frameindex instead of
   // computation.
   if (const AllocaInst *AI = dyn_cast<AllocaInst>(V)) {
-    DenseMap<const AllocaInst*, int>::iterator SI =
-      FuncInfo.StaticAllocaMap.find(AI);
+    DenseMap<const AllocaInst *, int>::iterator SI =
+        FuncInfo.StaticAllocaMap.find(AI);
     if (SI != FuncInfo.StaticAllocaMap.end())
       return DAG.getFrameIndex(
           SI->second, TLI.getValueType(DAG.getDataLayout(), AI->getType()));
@@ -1926,7 +1928,7 @@ static void findUnwindDestinations(
     SmallVectorImpl<std::pair<MachineBasicBlock *, BranchProbability>>
         &UnwindDests) {
   EHPersonality Personality =
-    classifyEHPersonality(FuncInfo.Fn->getPersonalityFn());
+      classifyEHPersonality(FuncInfo.Fn->getPersonalityFn());
   bool IsMSVCCXX = Personality == EHPersonality::MSVC_CXX;
   bool IsCoreCLR = Personality == EHPersonality::CoreCLR;
   bool IsWasmCXX = Personality == EHPersonality::Wasm_CXX;
@@ -2061,8 +2063,7 @@ void SelectionDAGBuilder::visitRet(const ReturnInst &I) {
           commonAlignment(BaseAlign, Offsets[i]));
     }
 
-    Chain = DAG.getNode(ISD::TokenFactor, getCurSDLoc(),
-                        MVT::Other, Chains);
+    Chain = DAG.getNode(ISD::TokenFactor, getCurSDLoc(), MVT::Other, Chains);
   } else if (I.getNumOperands() != 0) {
     SmallVector<EVT, 4> ValueVTs;
     ComputeValueVTs(TLI, DL, I.getOperand(0)->getType(), ValueVTs);
@@ -2154,7 +2155,7 @@ void SelectionDAGBuilder::visitRet(const ReturnInst &I) {
 
   bool isVarArg = DAG.getMachineFunction().getFunction().isVarArg();
   CallingConv::ID CallConv =
-    DAG.getMachineFunction().getFunction().getCallingConv();
+      DAG.getMachineFunction().getFunction().getCallingConv();
   Chain = DAG.getTargetLoweringInfo().LowerReturn(
       Chain, CallConv, isVarArg, Outs, OutVals, getCurSDLoc(), DAG);
 
@@ -2187,17 +2188,19 @@ void SelectionDAGBuilder::CopyToExportRegsIfNeeded(const Value *V) {
 /// CopyTo/FromReg.
 void SelectionDAGBuilder::ExportFromCurrentBlock(const Value *V) {
   // No need to export constants.
-  if (!isa<Instruction>(V) && !isa<Argument>(V)) return;
+  if (!isa<Instruction>(V) && !isa<Argument>(V))
+    return;
 
   // Already exported?
-  if (FuncInfo.isExportedInst(V)) return;
+  if (FuncInfo.isExportedInst(V))
+    return;
 
   Register Reg = FuncInfo.InitializeRegForValue(V);
   CopyValueToVirtualRegister(V, Reg);
 }
 
-bool SelectionDAGBuilder::isExportableFromCurrentBlock(const Value *V,
-                                                     const BasicBlock *FromBB) {
+bool SelectionDAGBuilder::isExportableFromCurrentBlock(
+    const Value *V, const BasicBlock *FromBB) {
   // The operands of the setcc have to be in this block.  We don't know
   // how to export them from some other block.
   if (const Instruction *VI = dyn_cast<Instruction>(V)) {
@@ -2260,15 +2263,10 @@ static bool InBlock(const Value *V, const BasicBlock *BB) {
 /// EmitBranchForMergedCondition - Helper method for FindMergedConditions.
 /// This function emits a branch and is used at the leaves of an OR or an
 /// AND operator tree.
-void
-SelectionDAGBuilder::EmitBranchForMergedCondition(const Value *Cond,
-                                                  MachineBasicBlock *TBB,
-                                                  MachineBasicBlock *FBB,
-                                                  MachineBasicBlock *CurBB,
-                                                  MachineBasicBlock *SwitchBB,
-                                                  BranchProbability TProb,
-                                                  BranchProbability FProb,
-                                                  bool InvertCond) {
+void SelectionDAGBuilder::EmitBranchForMergedCondition(
+    const Value *Cond, MachineBasicBlock *TBB, MachineBasicBlock *FBB,
+    MachineBasicBlock *CurBB, MachineBasicBlock *SwitchBB,
+    BranchProbability TProb, BranchProbability FProb, bool InvertCond) {
   const BasicBlock *BB = CurBB->getBasicBlock();
 
   // If the leaf of the tree is a comparison, merge the condition into
@@ -2303,20 +2301,16 @@ SelectionDAGBuilder::EmitBranchForMergedCondition(const Value *Cond,
 
   // Create a CaseBlock record representing this branch.
   ISD::CondCode Opc = InvertCond ? ISD::SETNE : ISD::SETEQ;
-  CaseBlock CB(Opc, Cond, ConstantInt::getTrue(*DAG.getContext()),
-               nullptr, TBB, FBB, CurBB, getCurSDLoc(), TProb, FProb);
+  CaseBlock CB(Opc, Cond, ConstantInt::getTrue(*DAG.getContext()), nullptr, TBB,
+               FBB, CurBB, getCurSDLoc(), TProb, FProb);
   SL->SwitchCases.push_back(CB);
 }
 
-void SelectionDAGBuilder::FindMergedConditions(const Value *Cond,
-                                               MachineBasicBlock *TBB,
-                                               MachineBasicBlock *FBB,
-                                               MachineBasicBlock *CurBB,
-                                               MachineBasicBlock *SwitchBB,
-                                               Instruction::BinaryOps Opc,
-                                               BranchProbability TProb,
-                                               BranchProbability FProb,
-                                               bool InvertCond) {
+void SelectionDAGBuilder::FindMergedConditions(
+    const Value *Cond, MachineBasicBlock *TBB, MachineBasicBlock *FBB,
+    MachineBasicBlock *CurBB, MachineBasicBlock *SwitchBB,
+    Instruction::BinaryOps Opc, BranchProbability TProb,
+    BranchProbability FProb, bool InvertCond) {
   // Skip over not part of the tree and remember to invert op and operands at
   // next level.
   Value *NotCond;
@@ -2355,8 +2349,8 @@ void SelectionDAGBuilder::FindMergedConditions(const Value *Cond,
   if (!BOpIsInOrAndTree || BOp->getParent() != CurBB->getBasicBlock() ||
       !InBlock(BOpOp0, CurBB->getBasicBlock()) ||
       !InBlock(BOpOp1, CurBB->getBasicBlock())) {
-    EmitBranchForMergedCondition(Cond, TBB, FBB, CurBB, SwitchBB,
-                                 TProb, FProb, InvertCond);
+    EmitBranchForMergedCondition(Cond, TBB, FBB, CurBB, SwitchBB, TProb, FProb,
+                                 InvertCond);
     return;
   }
 
@@ -2438,9 +2432,10 @@ void SelectionDAGBuilder::FindMergedConditions(const Value *Cond,
 /// If the set of cases should be emitted as a series of branches, return true.
 /// If we should emit this as a bunch of and/or'd together conditions, return
 /// false.
-bool
-SelectionDAGBuilder::ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases) {
-  if (Cases.size() != 2) return true;
+bool SelectionDAGBuilder::ShouldEmitAsBranches(
+    const std::vector<CaseBlock> &Cases) {
+  if (Cases.size() != 2)
+    return true;
 
   // If this is two comparisons of the same values or'd or and'd together, they
   // will get folded into a single comparison, so don't emit two blocks.
@@ -2453,8 +2448,7 @@ SelectionDAGBuilder::ShouldEmitAsBranches(const std::vector<CaseBlock> &Cases) {
 
   // Handle: (X != null) | (Y != null) --> (X|Y) != 0
   // Handle: (X == null) & (Y == null) --> (X|Y) == 0
-  if (Cases[0].CmpRHS == Cases[1].CmpRHS &&
-      Cases[0].CC == Cases[1].CC &&
+  if (Cases[0].CmpRHS == Cases[1].CmpRHS && Cases[0].CC == Cases[1].CC &&
       isa<Constant>(Cases[0].CmpRHS) &&
       cast<Constant>(Cases[0].CmpRHS)->isNullValue()) {
     if (Cases[0].CC == ISD::SETEQ && Cases[0].TrueBB == Cases[1].ThisBB)
@@ -2611,8 +2605,8 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB,
   } else {
     assert(CB.CC == ISD::SETLE && "Can handle only LE ranges now");
 
-    const APInt& Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
-    const APInt& High = cast<ConstantInt>(CB.CmpRHS)->getValue();
+    const APInt &Low = cast<ConstantInt>(CB.CmpLHS)->getValue();
+    const APInt &High = cast<ConstantInt>(CB.CmpRHS)->getValue();
 
     SDValue CmpOp = getValue(CB.CmpMHS);
     EVT VT = CmpOp.getValueType();
@@ -2621,10 +2615,10 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB,
       Cond = DAG.getSetCC(dl, MVT::i1, CmpOp, DAG.getConstant(High, dl, VT),
                           ISD::SETLE);
     } else {
-      SDValue SUB = DAG.getNode(ISD::SUB, dl,
-                                VT, CmpOp, DAG.getConstant(Low, dl, VT));
-      Cond = DAG.getSetCC(dl, MVT::i1, SUB,
-                          DAG.getConstant(High-Low, dl, VT), ISD::SETULE);
+      SDValue SUB =
+          DAG.getNode(ISD::SUB, dl, VT, CmpOp, DAG.getConstant(Low, dl, VT));
+      Cond = DAG.getSetCC(dl, MVT::i1, SUB, DAG.getConstant(High - Low, dl, VT),
+                          ISD::SETULE);
     }
   }
 
@@ -2644,9 +2638,8 @@ void SelectionDAGBuilder::visitSwitchCase(CaseBlock &CB,
     Cond = DAG.getNode(ISD::XOR, dl, Cond.getValueType(), Cond, True);
   }
 
-  SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
-                               MVT::Other, getControlRoot(), Cond,
-                               DAG.getBasicBlock(CB.TrueBB));
+  SDValue BrCond = DAG.getNode(ISD::BRCOND, dl, MVT::Other, getControlRoot(),
+                               Cond, DAG.getBasicBlock(CB.TrueBB));
 
   setValue(CurInst, BrCond);
 
@@ -2664,12 +2657,11 @@ void SelectionDAGBuilder::visitJumpTable(SwitchCG::JumpTable &JT) {
   // Emit the code for the jump table
   assert(JT.Reg != -1U && "Should lower JT Header first!");
   EVT PTy = DAG.getTargetLoweringInfo().getPointerTy(DAG.getDataLayout());
-  SDValue Index = DAG.getCopyFromReg(getControlRoot(), getCurSDLoc(),
-                                     JT.Reg, PTy);
+  SDValue Index =
+      DAG.getCopyFromReg(getControlRoot(), getCurSDLoc(), JT.Reg, PTy);
   SDValue Table = DAG.getJumpTable(JT.JTI, PTy);
-  SDValue BrJumpTable = DAG.getNode(ISD::BR_JT, getCurSDLoc(),
-                                    MVT::Other, Index.getValue(1),
-                                    Table, Index);
+  SDValue BrJumpTable = DAG.getNode(ISD::BR_JT, getCurSDLoc(), MVT::Other,
+                                    Index.getValue(1), Table, Index);
   DAG.setRoot(BrJumpTable);
 }
 
@@ -2696,8 +2688,8 @@ void SelectionDAGBuilder::visitJumpTableHeader(SwitchCG::JumpTable &JT,
 
   unsigned JumpTableReg =
       FuncInfo.CreateReg(TLI.getPointerTy(DAG.getDataLayout()));
-  SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), dl,
-                                    JumpTableReg, SwitchOp);
+  SDValue CopyTo =
+      DAG.getCopyToReg(getControlRoot(), dl, JumpTableReg, SwitchOp);
   JT.Reg = JumpTableReg;
 
   if (!JTH.FallthroughUnreachable) {
@@ -2705,12 +2697,12 @@ void SelectionDAGBuilder::visitJumpTableHeader(SwitchCG::JumpTable &JT,
     // for the switch statement if the value being switched on exceeds the
     // largest case in the switch.
     SDValue CMP = DAG.getSetCC(
-        dl, TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
-                                   Sub.getValueType()),
+        dl,
+        TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+                               Sub.getValueType()),
         Sub, DAG.getConstant(JTH.Last - JTH.First, dl, VT), ISD::SETUGT);
 
-    SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
-                                 MVT::Other, CopyTo, CMP,
+    SDValue BrCond = DAG.getNode(ISD::BRCOND, dl, MVT::Other, CopyTo, CMP,
                                  DAG.getBasicBlock(JT.Default));
 
     // Avoid emitting unnecessary branches to the next block.
@@ -2828,18 +2820,18 @@ void SelectionDAGBuilder::visitSPDescriptorParent(StackProtectorDescriptor &SPD,
   }
 
   // Perform the comparison via a getsetcc.
-  SDValue Cmp = DAG.getSetCC(dl, TLI.getSetCCResultType(DAG.getDataLayout(),
-                                                        *DAG.getContext(),
-                                                        Guard.getValueType()),
+  SDValue Cmp = DAG.getSetCC(dl,
+                             TLI.getSetCCResultType(DAG.getDataLayout(),
+                                                    *DAG.getContext(),
+                                                    Guard.getValueType()),
                              Guard, GuardVal, ISD::SETNE);
 
   // If the guard/stackslot do not equal, branch to failure MBB.
-  SDValue BrCond = DAG.getNode(ISD::BRCOND, dl,
-                               MVT::Other, GuardVal.getOperand(0),
-                               Cmp, DAG.getBasicBlock(SPD.getFailureMBB()));
+  SDValue BrCond =
+      DAG.getNode(ISD::BRCOND, dl, MVT::Other, GuardVal.getOperand(0), Cmp,
+                  DAG.getBasicBlock(SPD.getFailureMBB()));
   // Otherwise branch to success MBB.
-  SDValue Br = DAG.getNode(ISD::BR, dl,
-                           MVT::Other, BrCond,
+  SDValue Br = DAG.getNode(ISD::BR, dl, MVT::Other, BrCond,
                            DAG.getBasicBlock(SPD.getSuccessMBB()));
 
   DAG.setRoot(Br);
@@ -2853,8 +2845,8 @@ void SelectionDAGBuilder::visitSPDescriptorParent(StackProtectorDescriptor &SPD,
 /// For a high level explanation of how this fits into the stack protector
 /// generation see the comment on the declaration of class
 /// StackProtectorDescriptor.
-void
-SelectionDAGBuilder::visitSPDescriptorFailure(StackProtectorDescriptor &SPD) {
+void SelectionDAGBuilder::visitSPDescriptorFailure(
+    StackProtectorDescriptor &SPD) {
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
   TargetLowering::MakeLibCallOptions CallOptions;
   CallOptions.setDiscardResult(true);
@@ -2912,7 +2904,7 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B,
   B.Reg = FuncInfo.CreateReg(B.RegVT);
   SDValue CopyTo = DAG.getCopyToReg(getControlRoot(), dl, B.Reg, Sub);
 
-  MachineBasicBlock* MBB = B.Cases[0].ThisBB;
+  MachineBasicBlock *MBB = B.Cases[0].ThisBB;
 
   if (!B.FallthroughUnreachable)
     addSuccessorWithProb(SwitchBB, B.Default, B.DefaultProb);
@@ -2922,7 +2914,8 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B,
   SDValue Root = CopyTo;
   if (!B.FallthroughUnreachable) {
     // Conditional branch to the default block.
-    SDValue RangeCmp = DAG.getSetCC(dl,
+    SDValue RangeCmp = DAG.getSetCC(
+        dl,
         TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
                                RangeSub.getValueType()),
         RangeSub, DAG.getConstant(B.Range, dl, RangeSub.getValueType()),
@@ -2941,10 +2934,9 @@ void SelectionDAGBuilder::visitBitTestHeader(BitTestBlock &B,
 
 /// visitBitTestCase - this function produces one "bit test"
 void SelectionDAGBuilder::visitBitTestCase(BitTestBlock &BB,
-                                           MachineBasicBlock* NextMBB,
+                                           MachineBasicBlock *NextMBB,
                                            BranchProbability BranchProbToNext,
-                                           unsigned Reg,
-                                           BitTestCase &B,
+                                           unsigned Reg, BitTestCase &B,
                                            MachineBasicBlock *SwitchBB) {
   SDLoc dl = getCurSDLoc();
   MVT VT = BB.RegVT;
@@ -2966,12 +2958,12 @@ void SelectionDAGBuilder::visitBitTestCase(BitTestBlock &BB,
         ShiftOp, DAG.getConstant(llvm::countr_one(B.Mask), dl, VT), ISD::SETNE);
   } else {
     // Make desired shift
-    SDValue SwitchVal = DAG.getNode(ISD::SHL, dl, VT,
-                                    DAG.getConstant(1, dl, VT), ShiftOp);
+    SDValue SwitchVal =
+        DAG.getNode(ISD::SHL, dl, VT, DAG.getConstant(1, dl, VT), ShiftOp);
 
     // Emit bit tests and jumps
-    SDValue AndOp = DAG.getNode(ISD::AND, dl,
-                                VT, SwitchVal, DAG.getConstant(B.Mask, dl, VT));
+    SDValue AndOp = DAG.getNode(ISD::AND, dl, VT, SwitchVal,
+                                DAG.getConstant(B.Mask, dl, VT));
     Cmp = DAG.getSetCC(
         dl, TLI.getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT),
         AndOp, DAG.getConstant(0, dl, VT), ISD::SETNE);
@@ -2986,14 +2978,13 @@ void SelectionDAGBuilder::visitBitTestCase(BitTestBlock &BB,
   // and hence we need to normalize them to let the sum of them become one.
   SwitchBB->normalizeSuccProbs();
 
-  SDValue BrAnd = DAG.getNode(ISD::BRCOND, dl,
-                              MVT::Other, getControlRoot(),
+  SDValue BrAnd = DAG.getNode(ISD::BRCOND, dl, MVT::Other, getControlRoot(),
                               Cmp, DAG.getBasicBlock(B.TargetBB));
 
   // Avoid emitting unnecessary branches to the next block.
   if (NextMBB != NextBlock(SwitchBB))
-    BrAnd = DAG.getNode(ISD::BR, dl, MVT::Other, BrAnd,
-                        DAG.getBasicBlock(NextMBB));
+    BrAnd =
+        DAG.getNode(ISD::BR, dl, MVT::Other, BrAnd, DAG.getBasicBlock(NextMBB));
 
   DAG.setRoot(BrAnd);
 }
@@ -3031,9 +3022,9 @@ void SelectionDAGBuilder::visitInvoke(const InvokeInst &I) {
     case Intrinsic::seh_try_end:
     case Intrinsic::seh_scope_end:
       if (EHPadMBB)
-          // a block referenced by EH table
-          // so dtor-funclet not removed by opts
-          EHPadMBB->setMachineBlockAddressTaken();
+        // a block referenced by EH table
+        // so dtor-funclet not removed by opts
+        EHPadMBB->setMachineBlockAddressTaken();
       break;
     case Intrinsic::experimental_patchpoint_void:
     case Intrinsic::experimental_patchpoint_i64:
@@ -3128,8 +3119,7 @@ void SelectionDAGBuilder::visitCallBr(const CallBrInst &I) {
   CallBrMBB->normalizeSuccProbs();
 
   // Drop into default successor.
-  DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(),
-                          MVT::Other, getControlRoot(),
+  DAG.setRoot(DAG.getNode(ISD::BR, getCurSDLoc(), MVT::Other, getControlRoot(),
                           DAG.getBasicBlock(Return)));
 }
 
@@ -3138,8 +3128,7 @@ void SelectionDAGBuilder::visitResume(const ResumeInst &RI) {
 }
 
 void SelectionDAGBuilder::visitLandingPad(const LandingPadInst &LP) {
-  assert(FuncInfo.MBB->isEHPad() &&
-         "Call to landingpad not in landing pad!");
+  assert(FuncInfo.MBB->isEHPad() && "Call to landingpad not in landing pad!");
 
   // If there aren't registers to copy the values into (e.g., during SjLj
   // exceptions), then don't bother to create these DAG nodes.
@@ -3180,8 +3169,8 @@ void SelectionDAGBuilder::visitLandingPad(const LandingPadInst &LP) {
       dl, ValueVTs[1]);
 
   // Merge into one.
-  SDValue Res = DAG.getNode(ISD::MERGE_VALUES, dl,
-                            DAG.getVTList(ValueVTs), Ops);
+  SDValue Res =
+      DAG.getNode(ISD::MERGE_VALUES, dl, DAG.getVTList(ValueVTs), Ops);
   setValue(&LP, Res);
 }
 
@@ -3202,21 +3191,20 @@ void SelectionDAGBuilder::visitIndirectBr(const IndirectBrInst &I) {
   MachineBasicBlock *IndirectBrMBB = FuncInfo.MBB;
 
   // Update machine-CFG edges with unique successors.
-  SmallSet<BasicBlock*, 32> Done;
+  SmallSet<BasicBlock *, 32> Done;
   for (unsigned i = 0, e = I.getNumSuccessors(); i != e; ++i) {
     BasicBlock *BB = I.getSuccessor(i);
     bool Inserted = Done.insert(BB).second;
     if (!Inserted)
-        continue;
+      continue;
 
     MachineBasicBlock *Succ = FuncInfo.MBBMap[BB];
     addSuccessorWithProb(IndirectBrMBB, Succ);
   }
   IndirectBrMBB->normalizeSuccProbs();
 
-  DAG.setRoot(DAG.getNode(ISD::BRIND, getCurSDLoc(),
-                          MVT::Other, getControlRoot(),
-                          getValue(I.getAddress())));
+  DAG.setRoot(DAG.getNode(ISD::BRIND, getCurSDLoc(), MVT::Other,
+                          getControlRoot(), getValue(I.getAddress())));
 }
 
 void SelectionDAGBuilder::visitUnreachable(const UnreachableInst &I) {
@@ -3228,7 +3216,7 @@ void SelectionDAGBuilder::visitUnreachable(const UnreachableInst &I) {
     const BasicBlock &BB = *I.getParent();
     if (&I != &BB.front()) {
       BasicBlock::const_iterator PredI =
-        std::prev(BasicBlock::const_iterator(&I));
+          std::prev(BasicBlock::const_iterator(&I));
       if (const CallInst *Call = dyn_cast<CallInst>(&*PredI)) {
         if (Call->doesNotReturn())
           return;
@@ -3245,8 +3233,8 @@ void SelectionDAGBuilder::visitUnary(const User &I, unsigned Opcode) {
     Flags.copyFMF(*FPOp);
 
   SDValue Op = getValue(I.getOperand(0));
-  SDValue UnNodeValue = DAG.getNode(Opcode, getCurSDLoc(), Op.getValueType(),
-                                    Op, Flags);
+  SDValue UnNodeValue =
+      DAG.getNode(Opcode, getCurSDLoc(), Op.getValueType(), Op, Flags);
   setValue(&I, UnNodeValue);
 }
 
@@ -3263,8 +3251,8 @@ void SelectionDAGBuilder::visitBinary(const User &I, unsigned Opcode) {
 
   SDValue Op1 = getValue(I.getOperand(0));
   SDValue Op2 = getValue(I.getOperand(1));
-  SDValue BinNodeValue = DAG.getNode(Opcode, getCurSDLoc(), Op1.getValueType(),
-                                     Op1, Op2, Flags);
+  SDValue BinNodeValue =
+      DAG.getNode(Opcode, getCurSDLoc(), Op1.getValueType(), Op1, Op2, Flags);
   setValue(&I, BinNodeValue);
 }
 
@@ -3302,8 +3290,8 @@ void SelectionDAGBuilder::visitShift(const User &I, unsigned Opcode) {
   Flags.setExact(exact);
   Flags.setNoSignedWrap(nsw);
   Flags.setNoUnsignedWrap(nuw);
-  SDValue Res = DAG.getNode(Opcode, getCurSDLoc(), Op1.getValueType(), Op1, Op2,
-                            Flags);
+  SDValue Res =
+      DAG.getNode(Opcode, getCurSDLoc(), Op1.getValueType(), Op1, Op2, Flags);
   setValue(&I, Res);
 }
 
@@ -3371,9 +3359,8 @@ void SelectionDAGBuilder::visitFCmp(const User &I) {
 // Check if the condition of the select has one use or two users that are both
 // selects with the same condition.
 static bool hasOnlySelectUsers(const Value *Cond) {
-  return llvm::all_of(Cond->users(), [](const Value *V) {
-    return isa<SelectInst>(V);
-  });
+  return llvm::all_of(Cond->users(),
+                      [](const Value *V) { return isa<SelectInst>(V); });
 }
 
 void SelectionDAGBuilder::visitSelect(const User &I) {
@@ -3381,12 +3368,13 @@ void SelectionDAGBuilder::visitSelect(const User &I) {
   ComputeValueVTs(DAG.getTargetLoweringInfo(), DAG.getDataLayout(), I.getType(),
                   ValueVTs);
   unsigned NumValues = ValueVTs.size();
-  if (NumValues == 0) return;
+  if (NumValues == 0)
+    return;
 
   SmallVector<SDValue, 4> Values(NumValues);
-  SDValue Cond     = getValue(I.getOperand(0));
-  SDValue LHSVal   = getValue(I.getOperand(1));
-  SDValue RHSVal   = getValue(I.getOperand(2));
+  SDValue Cond = getValue(I.getOperand(0));
+  SDValue LHSVal = getValue(I.getOperand(1));
+  SDValue RHSVal = getValue(I.getOperand(2));
   SmallVector<SDValue, 1> BaseOps(1, Cond);
   ISD::NodeType OpCode =
       Cond.getValueType().isVector() ? ISD::VSELECT : ISD::SELECT;
@@ -3415,25 +3403,37 @@ void SelectionDAGBuilder::visitSelect(const User &I) {
     // If the vselect is legal, assume we want to leave this as a vector setcc +
     // vselect. Otherwise, if this is going to be scalarized, we want to see if
     // min/max is legal on the scalar type.
-    bool UseScalarMinMax = VT.isVector() &&
-      !TLI.isOperationLegalOrCustom(ISD::VSELECT, VT);
+    bool UseScalarMinMax =
+        VT.isVector() && !TLI.isOperationLegalOrCustom(ISD::VSELECT, VT);
 
     // ValueTracking's select pattern matching does not account for -0.0,
     // so we can't lower to FMINIMUM/FMAXIMUM because those nodes specify that
     // -0.0 is less than +0.0.
     Value *LHS, *RHS;
-    auto SPR = matchSelectPattern(const_cast<User*>(&I), LHS, RHS);
+    auto SPR = matchSelectPattern(const_cast<User *>(&I), LHS, RHS);
     ISD::NodeType Opc = ISD::DELETED_NODE;
     switch (SPR.Flavor) {
-    case SPF_UMAX:    Opc = ISD::UMAX; break;
-    case SPF_UMIN:    Opc = ISD::UMIN; break;
-    case SPF_SMAX:    Opc = ISD::SMAX; break;
-    case SPF_SMIN:    Opc = ISD::SMIN; break;
+    case SPF_UMAX:
+      Opc = ISD::UMAX;
+      break;
+    case SPF_UMIN:
+      Opc = ISD::UMIN;
+      break;
+    case SPF_SMAX:
+      Opc = ISD::SMAX;
+      break;
+    case SPF_SMIN:
+      Opc = ISD::SMIN;
+      break;
     case SPF_FMINNUM:
       switch (SPR.NaNBehavior) {
-      case SPNB_NA: llvm_unreachable("No NaN behavior for FP op?");
-      case SPNB_RETURNS_NAN: break;
-      case SPNB_RETURNS_OTHER: Opc = ISD::FMINNUM; break;
+      case SPNB_NA:
+        llvm_unreachable("No NaN behavior for FP op?");
+      case SPNB_RETURNS_NAN:
+        break;
+      case SPNB_RETURNS_OTHER:
+        Opc = ISD::FMINNUM;
+        break;
       case SPNB_RETURNS_ANY:
         if (TLI.isOperationLegalOrCustom(ISD::FMINNUM, VT) ||
             (UseScalarMinMax &&
@@ -3444,9 +3444,13 @@ void SelectionDAGBuilder::visitSelect(const User &I) {
       break;
     case SPF_FMAXNUM:
       switch (SPR.NaNBehavior) {
-      case SPNB_NA: llvm_unreachable("No NaN behavior for FP op?");
-      case SPNB_RETURNS_NAN: break;
-      case SPNB_RETURNS_OTHER: Opc = ISD::FMAXNUM; break;
+      case SPNB_NA:
+        llvm_unreachable("No NaN behavior for FP op?");
+      case SPNB_RETURNS_NAN:
+        break;
+      case SPNB_RETURNS_OTHER:
+        Opc = ISD::FMAXNUM;
+        break;
       case SPNB_RETURNS_ANY:
         if (TLI.isOperationLegalOrCustom(ISD::FMAXNUM, VT) ||
             (UseScalarMinMax &&
@@ -3462,7 +3466,8 @@ void SelectionDAGBuilder::visitSelect(const User &I) {
       IsUnaryAbs = true;
       Opc = ISD::ABS;
       break;
-    default: break;
+    default:
+      break;
     }
 
     if (!IsUnaryAbs && Opc != ISD::DELETED_NODE &&
@@ -3622,17 +3627,16 @@ void SelectionDAGBuilder::visitBitCast(const User &I) {
   // BitCast assures us that source and destination are the same size so this is
   // either a BITCAST or a no-op.
   if (DestVT != N.getValueType())
-    setValue(&I, DAG.getNode(ISD::BITCAST, dl,
-                             DestVT, N)); // convert types.
+    setValue(&I, DAG.getNode(ISD::BITCAST, dl, DestVT, N)); // convert types.
   // Check if the original LLVM IR Operand was a ConstantInt, because getValue()
   // might fold any kind of constant expression to an integer constant and that
   // is not what we are looking for. Only recognize a bitcast of a genuine
   // constant integer as an opaque constant.
-  else if(ConstantInt *C = dyn_cast<ConstantInt>(I.getOperand(0)))
+  else if (ConstantInt *C = dyn_cast<ConstantInt>(I.getOperand(0)))
     setValue(&I, DAG.getConstant(C->getValue(), dl, DestVT, /*isTarget=*/false,
-                                 /*isOpaque*/true));
+                                 /*isOpaque*/ true));
   else
-    setValue(&I, N);            // noop cast.
+    setValue(&I, N); // noop cast.
 }
 
 void SelectionDAGBuilder::visitAddrSpaceCast(const User &I) {
@@ -3792,7 +3796,7 @@ void SelectionDAGBuilder::visitShuffleVector(const User &I) {
   if (SrcNumElts > MaskNumElts) {
     // Analyze the access pattern of the vector to see if we can extract
     // two subvectors and do the shuffle.
-    int StartIdx[2] = { -1, -1 };  // StartIdx to extract from
+    int StartIdx[2] = {-1, -1}; // StartIdx to extract from
     bool CanExtract = true;
     for (int Idx : Mask) {
       unsigned Input = 0;
@@ -3850,7 +3854,7 @@ void SelectionDAGBuilder::visitShuffleVector(const User &I) {
   // replacing the shuffle with extract and build vector.
   // to insert and build vector.
   EVT EltVT = VT.getVectorElementType();
-  SmallVector<SDValue,8> Ops;
+  SmallVector<SDValue, 8> Ops;
   for (int Idx : Mask) {
     SDValue Res;
 
@@ -3858,7 +3862,8 @@ void SelectionDAGBuilder::visitShuffleVector(const User &I) {
       Res = DAG.getUNDEF(EltVT);
     } else {
       SDValue &Src = Idx < (int)SrcNumElts ? Src1 : Src2;
-      if (Idx >= (int)SrcNumElts) Idx -= SrcNumElts;
+      if (Idx >= (int)SrcNumElts)
+        Idx -= SrcNumElts;
 
       Res = DAG.getNode(ISD::EXTRACT_VECTOR_ELT, DL, EltVT, Src,
                         DAG.getVectorIdxConstant(Idx, DL));
@@ -3901,19 +3906,20 @@ void SelectionDAGBuilder::visitInsertValue(const InsertValueInst &I) {
   unsigned i = 0;
   // Copy the beginning value(s) from the original aggregate.
   for (; i != LinearIndex; ++i)
-    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                SDValue(Agg.getNode(), Agg.getResNo() + i);
+    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i])
+                          : SDValue(Agg.getNode(), Agg.getResNo() + i);
   // Copy values from the inserted value(s).
   if (NumValValues) {
     SDValue Val = getValue(Op1);
     for (; i != LinearIndex + NumValValues; ++i)
-      Values[i] = FromUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                  SDValue(Val.getNode(), Val.getResNo() + i - LinearIndex);
+      Values[i] =
+          FromUndef ? DAG.getUNDEF(AggValueVTs[i])
+                    : SDValue(Val.getNode(), Val.getResNo() + i - LinearIndex);
   }
   // Copy remaining value(s) from the original aggregate.
   for (; i != NumAggValues; ++i)
-    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i]) :
-                SDValue(Agg.getNode(), Agg.getResNo() + i);
+    Values[i] = IntoUndef ? DAG.getUNDEF(AggValueVTs[i])
+                          : SDValue(Agg.getNode(), Agg.getResNo() + i);
 
   setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurSDLoc(),
                            DAG.getVTList(AggValueVTs), Values));
@@ -3946,9 +3952,9 @@ void SelectionDAGBuilder::visitExtractValue(const ExtractValueInst &I) {
   // Copy out the selected value(s).
   for (unsigned i = LinearIndex; i != LinearIndex + NumValValues; ++i)
     Values[i - LinearIndex] =
-      OutOfUndef ?
-        DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i)) :
-        SDValue(Agg.getNode(), Agg.getResNo() + i);
+        OutOfUndef
+            ? DAG.getUNDEF(Agg.getNode()->getValueType(Agg.getResNo() + i))
+            : SDValue(Agg.getNode(), Agg.getResNo() + i);
 
   setValue(&I, DAG.getNode(ISD::MERGE_VALUES, getCurSDLoc(),
                            DAG.getVTList(ValValueVTs), Values));
@@ -4043,8 +4049,8 @@ void SelectionDAGBuilder::visitGetElementPtr(const User &I) {
       SDValue IdxN = getValue(Idx);
 
       if (!IdxN.getValueType().isVector() && IsVectorGEP) {
-        EVT VT = EVT::getVectorVT(*Context, IdxN.getValueType(),
-                                  VectorElementCount);
+        EVT VT =
+            EVT::getVectorVT(*Context, IdxN.getValueType(), VectorElementCount);
         IdxN = DAG.getSplat(VT, dl, IdxN);
       }
 
@@ -4066,20 +4072,17 @@ void SelectionDAGBuilder::visitGetElementPtr(const User &I) {
         if (ElementMul != 1) {
           if (ElementMul.isPowerOf2()) {
             unsigned Amt = ElementMul.logBase2();
-            IdxN = DAG.getNode(ISD::SHL, dl,
-                               N.getValueType(), IdxN,
+            IdxN = DAG.getNode(ISD::SHL, dl, N.getValueType(), IdxN,
                                DAG.getConstant(Amt, dl, IdxN.getValueType()));
           } else {
             SDValue Scale = DAG.getConstant(ElementMul.getZExtValue(), dl,
                                             IdxN.getValueType());
-            IdxN = DAG.getNode(ISD::MUL, dl,
-                               N.getValueType(), IdxN, Scale);
+            IdxN = DAG.getNode(ISD::MUL, dl, N.getValueType(), IdxN, Scale);
           }
         }
       }
 
-      N = DAG.getNode(ISD::ADD, dl,
-                      N.getValueType(), N, IdxN);
+      N = DAG.getNode(ISD::ADD, dl, N.getValueType(), N, IdxN);
     }
   }
 
@@ -4100,7 +4103,7 @@ void SelectionDAGBuilder::visitAlloca(const AllocaInst &I) {
   // If this is a fixed sized alloca in the entry block of the function,
   // allocate it statically on the stack.
   if (FuncInfo.StaticAllocaMap.count(&I))
-    return;   // getValue will auto-populate this.
+    return; // getValue will auto-populate this.
 
   SDLoc dl = getCurSDLoc();
   Type *Ty = I.getAllocatedType();
@@ -4276,8 +4279,8 @@ void SelectionDAGBuilder::visitLoad(const LoadInst &I) {
       PendingLoads.push_back(Chain);
   }
 
-  setValue(&I, DAG.getNode(ISD::MERGE_VALUES, dl,
-                           DAG.getVTList(ValueVTs), Values));
+  setValue(&I,
+           DAG.getNode(ISD::MERGE_VALUES, dl, DAG.getVTList(ValueVTs), Values));
 }
 
 void SelectionDAGBuilder::visitStoreToSwiftError(const StoreInst &I) {
@@ -4307,8 +4310,7 @@ void SelectionDAGBuilder::visitLoadFromSwiftError(const LoadInst &I) {
   assert(DAG.getTargetLoweringInfo().supportSwiftError() &&
          "call visitLoadFromSwiftError when backend supports swifterror");
 
-  assert(!I.isVolatile() &&
-         !I.hasMetadata(LLVMContext::MD_nontemporal) &&
+  assert(!I.isVolatile() && !I.hasMetadata(LLVMContext::MD_nontemporal) &&
          !I.hasMetadata(LLVMContext::MD_invariant_load) &&
          "Support volatile, non temporal, invariant for load_from_swift_error");
 
@@ -4432,7 +4434,7 @@ void SelectionDAGBuilder::visitMaskedStore(const CallInst &I,
     Alignment = std::nullopt;
   };
 
-  Value  *PtrOperand, *MaskOperand, *Src0Operand;
+  Value *PtrOperand, *MaskOperand, *Src0Operand;
   MaybeAlign Alignment;
   if (IsCompressing)
     getCompressingStoreOps(PtrOperand, MaskOperand, Src0Operand, Alignment);
@@ -4477,7 +4479,7 @@ static bool getUniformBase(const Value *Ptr, SDValue &Base, SDValue &Index,
                            ISD::MemIndexType &IndexType, SDValue &Scale,
                            SelectionDAGBuilder *SDB, const BasicBlock *CurBB,
                            uint64_t ElemSize) {
-  SelectionDAG& DAG = SDB->DAG;
+  SelectionDAG &DAG = SDB->DAG;
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
   const DataLayout &DL = DAG.getDataLayout();
 
@@ -4561,7 +4563,8 @@ void SelectionDAGBuilder::visitMaskedScatter(const CallInst &I) {
     Base = DAG.getConstant(0, sdl, TLI.getPointerTy(DAG.getDataLayout()));
     Index = getValue(Ptr);
     IndexType = ISD::SIGNED_SCALED;
-    Scale = DAG.getTargetConstant(1, sdl, TLI.getPointerTy(DAG.getDataLayout()));
+    Scale =
+        DAG.getTargetConstant(1, sdl, TLI.getPointerTy(DAG.getDataLayout()));
   }
 
   EVT IdxVT = Index.getValueType();
@@ -4571,7 +4574,7 @@ void SelectionDAGBuilder::visitMaskedScatter(const CallInst &I) {
     Index = DAG.getNode(ISD::SIGN_EXTEND, sdl, NewIdxVT, Index);
   }
 
-  SDValue Ops[] = { getMemoryRoot(), Src0, Mask, Base, Index, Scale };
+  SDValue Ops[] = {getMemoryRoot(), Src0, Mask, Base, Index, Scale};
   SDValue Scatter = DAG.getMaskedScatter(DAG.getVTList(MVT::Other), VT, sdl,
                                          Ops, MMO, IndexType, false);
   DAG.setRoot(Scatter);
@@ -4598,7 +4601,7 @@ void SelectionDAGBuilder::visitMaskedLoad(const CallInst &I, bool IsExpanding) {
     Src0 = I.getArgOperand(2);
   };
 
-  Value  *PtrOperand, *MaskOperand, *Src0Operand;
+  Value *PtrOperand, *MaskOperand, *Src0Operand;
   MaybeAlign Alignment;
   if (IsExpanding)
     getExpandingLoadOps(PtrOperand, MaskOperand, Src0Operand, Alignment);
@@ -4669,7 +4672,8 @@ void SelectionDAGBuilder::visitMaskedGather(const CallInst &I) {
     Base = DAG.getConstant(0, sdl, TLI.getPointerTy(DAG.getDataLayout()));
     Index = getValue(Ptr);
     IndexType = ISD::SIGNED_SCALED;
-    Scale = DAG.getTargetConstant(1, sdl, TLI.getPointerTy(DAG.getDataLayout()));
+    Scale =
+        DAG.getTargetConstant(1, sdl, TLI.getPointerTy(DAG.getDataLayout()));
   }
 
   EVT IdxVT = Index.getValueType();
@@ -4679,7 +4683,7 @@ void SelectionDAGBuilder::visitMaskedGather(const CallInst &I) {
     Index = DAG.getNode(ISD::SIGN_EXTEND, sdl, NewIdxVT, Index);
   }
 
-  SDValue Ops[] = { Root, Src0, Mask, Base, Index, Scale };
+  SDValue Ops[] = {Root, Src0, Mask, Base, Index, Scale};
   SDValue Gather = DAG.getMaskedGather(DAG.getVTList(VT, MVT::Other), VT, sdl,
                                        Ops, MMO, IndexType, ISD::NON_EXTLOAD);
 
@@ -4707,11 +4711,10 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
       DAG.getEVTAlign(MemVT), AAMDNodes(), nullptr, SSID, SuccessOrdering,
       FailureOrdering);
 
-  SDValue L = DAG.getAtomicCmpSwap(ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS,
-                                   dl, MemVT, VTs, InChain,
-                                   getValue(I.getPointerOperand()),
-                                   getValue(I.getCompareOperand()),
-                                   getValue(I.getNewValOperand()), MMO);
+  SDValue L = DAG.getAtomicCmpSwap(
+      ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
+      getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
+      getValue(I.getNewValOperand()), MMO);
 
   SDValue OutChain = L.getValue(2);
 
@@ -4723,22 +4726,53 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
   SDLoc dl = getCurSDLoc();
   ISD::NodeType NT;
   switch (I.getOperation()) {
-  default: llvm_unreachable("Unknown atomicrmw operation");
-  case AtomicRMWInst::Xchg: NT = ISD::ATOMIC_SWAP; break;
-  case AtomicRMWInst::Add:  NT = ISD::ATOMIC_LOAD_ADD; break;
-  case AtomicRMWInst::Sub:  NT = ISD::ATOMIC_LOAD_SUB; break;
-  case AtomicRMWInst::And:  NT = ISD::ATOMIC_LOAD_AND; break;
-  case AtomicRMWInst::Nand: NT = ISD::ATOMIC_LOAD_NAND; break;
-  case AtomicRMWInst::Or:   NT = ISD::ATOMIC_LOAD_OR; break;
-  case AtomicRMWInst::Xor:  NT = ISD::ATOMIC_LOAD_XOR; break;
-  case AtomicRMWInst::Max:  NT = ISD::ATOMIC_LOAD_MAX; break;
-  case AtomicRMWInst::Min:  NT = ISD::ATOMIC_LOAD_MIN; break;
-  case AtomicRMWInst::UMax: NT = ISD::ATOMIC_LOAD_UMAX; break;
-  case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
-  case AtomicRMWInst::FAdd: NT = ISD::ATOMIC_LOAD_FADD; break;
-  case AtomicRMWInst::FSub: NT = ISD::ATOMIC_LOAD_FSUB; break;
-  case AtomicRMWInst::FMax: NT = ISD::ATOMIC_LOAD_FMAX; break;
-  case AtomicRMWInst::FMin: NT = ISD::ATOMIC_LOAD_FMIN; break;
+  default:
+    llvm_unreachable("Unknown atomicrmw operation");
+  case AtomicRMWInst::Xchg:
+    NT = ISD::ATOMIC_SWAP;
+    break;
+  case AtomicRMWInst::Add:
+    NT = ISD::ATOMIC_LOAD_ADD;
+    break;
+  case AtomicRMWInst::Sub:
+    NT = ISD::ATOMIC_LOAD_SUB;
+    break;
+  case AtomicRMWInst::And:
+    NT = ISD::ATOMIC_LOAD_AND;
+    break;
+  case AtomicRMWInst::Nand:
+    NT = ISD::ATOMIC_LOAD_NAND;
+    break;
+  case AtomicRMWInst::Or:
+    NT = ISD::ATOMIC_LOAD_OR;
+    break;
+  case AtomicRMWInst::Xor:
+    NT = ISD::ATOMIC_LOAD_XOR;
+    break;
+  case AtomicRMWInst::Max:
+    NT = ISD::ATOMIC_LOAD_MAX;
+    break;
+  case AtomicRMWInst::Min:
+    NT = ISD::ATOMIC_LOAD_MIN;
+    break;
+  case AtomicRMWInst::UMax:
+    NT = ISD::ATOMIC_LOAD_UMAX;
+    break;
+  case AtomicRMWInst::UMin:
+    NT = ISD::ATOMIC_LOAD_UMIN;
+    break;
+  case AtomicRMWInst::FAdd:
+    NT = ISD::ATOMIC_LOAD_FADD;
+    break;
+  case AtomicRMWInst::FSub:
+    NT = ISD::ATOMIC_LOAD_FSUB;
+    break;
+  case AtomicRMWInst::FMax:
+    NT = ISD::ATOMIC_LOAD_FMAX;
+    break;
+  case AtomicRMWInst::FMin:
+    NT = ISD::ATOMIC_LOAD_FMIN;
+    break;
   case AtomicRMWInst::UIncWrap:
     NT = ISD::ATOMIC_LOAD_UINC_WRAP;
     break;
@@ -4761,9 +4795,8 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
       DAG.getEVTAlign(MemVT), AAMDNodes(), nullptr, SSID, Ordering);
 
   SDValue L =
-    DAG.getAtomic(NT, dl, MemVT, InChain,
-                  getValue(I.getPointerOperand()), getValue(I.getValOperand()),
-                  MMO);
+      DAG.getAtomic(NT, dl, MemVT, InChain, getValue(I.getPointerOperand()),
+                    getValue(I.getValOperand()), MMO);
 
   SDValue OutChain = L.getValue(1);
 
@@ -4826,8 +4859,8 @@ void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
     return;
   }
 
-  SDValue L = DAG.getAtomic(ISD::ATOMIC_LOAD, dl, MemVT, MemVT, InChain,
-                            Ptr, MMO);
+  SDValue L =
+      DAG.getAtomic(ISD::ATOMIC_LOAD, dl, MemVT, MemVT, InChain, Ptr, MMO);
 
   SDValue OutChain = L.getValue(1);
   if (MemVT != VT)
@@ -4893,7 +4926,7 @@ void SelectionDAGBuilder::visitTargetIntrinsic(const CallInst &I,
 
   // Build the operand list.
   SmallVector<SDValue, 8> Ops;
-  if (HasChain) {  // If this intrinsic has side-effects, chainify it.
+  if (HasChain) { // If this intrinsic has side-effects, chainify it.
     if (OnlyLoad) {
       // We don't need to serialize loads against other loads.
       Ops.push_back(DAG.getRoot());
@@ -4905,9 +4938,8 @@ void SelectionDAGBuilder::visitTargetIntrinsic(const CallInst &I,
   // Info is set by getTgtMemIntrinsic
   TargetLowering::IntrinsicInfo Info;
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
-  bool IsTgtIntrinsic = TLI.getTgtMemIntrinsic(Info, I,
-                                               DAG.getMachineFunction(),
-                                               Intrinsic);
+  bool IsTgtIntrinsic =
+      TLI.getTgtMemIntrinsic(Info, I, DAG.getMachineFunction(), Intrinsic);
 
   // Add the intrinsic ID as an integer operand if it's not a target intrinsic.
   if (!IsTgtIntrinsic || Info.opc == ISD::INTRINSIC_VOID ||
@@ -4975,7 +5007,7 @@ void SelectionDAGBuilder::visitTargetIntrinsic(const CallInst &I,
   }
 
   if (HasChain) {
-    SDValue Chain = Result.getValue(Result.getNode()->getNumValues()-1);
+    SDValue Chain = Result.getValue(Result.getNode()->getNumValues() - 1);
     if (OnlyLoad)
       PendingLoads.push_back(Chain);
     else
@@ -5132,8 +5164,8 @@ static SDValue getLimitedPrecisionExp2(SDValue t0, const SDLoc &dl,
 /// limited-precision mode.
 static SDValue expandExp(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
                          const TargetLowering &TLI, SDNodeFlags Flags) {
-  if (Op.getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+  if (Op.getValueType() == MVT::f32 && LimitFloatPrecision > 0 &&
+      LimitFloatPrecision <= 18) {
 
     // Put the exponent in the right bit position for later addition to the
     // final result:
@@ -5156,8 +5188,8 @@ static SDValue expandLog(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
                          const TargetLowering &TLI, SDNodeFlags Flags) {
   // TODO: What fast-math-flags should be set on the floating-point nodes?
 
-  if (Op.getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+  if (Op.getValueType() == MVT::f32 && LimitFloatPrecision > 0 &&
+      LimitFloatPrecision <= 18) {
     SDValue Op1 = DAG.getNode(ISD::BITCAST, dl, MVT::i32, Op);
 
     // Scale the exponent by log(2).
@@ -5255,8 +5287,8 @@ static SDValue expandLog2(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
                           const TargetLowering &TLI, SDNodeFlags Flags) {
   // TODO: What fast-math-flags should be set on the floating-point nodes?
 
-  if (Op.getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+  if (Op.getValueType() == MVT::f32 && LimitFloatPrecision > 0 &&
+      LimitFloatPrecision <= 18) {
     SDValue Op1 = DAG.getNode(ISD::BITCAST, dl, MVT::i32, Op);
 
     // Get the exponent.
@@ -5352,8 +5384,8 @@ static SDValue expandLog10(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
                            const TargetLowering &TLI, SDNodeFlags Flags) {
   // TODO: What fast-math-flags should be set on the floating-point nodes?
 
-  if (Op.getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18) {
+  if (Op.getValueType() == MVT::f32 && LimitFloatPrecision > 0 &&
+      LimitFloatPrecision <= 18) {
     SDValue Op1 = DAG.getNode(ISD::BITCAST, dl, MVT::i32, Op);
 
     // Scale the exponent by log10(2) [0.30102999f].
@@ -5440,8 +5472,8 @@ static SDValue expandLog10(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
 /// limited-precision mode.
 static SDValue expandExp2(const SDLoc &dl, SDValue Op, SelectionDAG &DAG,
                           const TargetLowering &TLI, SDNodeFlags Flags) {
-  if (Op.getValueType() == MVT::f32 &&
-      LimitFloatPrecision > 0 && LimitFloatPrecision <= 18)
+  if (Op.getValueType() == MVT::f32 && LimitFloatPrecision > 0 &&
+      LimitFloatPrecision <= 18)
     return getLimitedPrecisionExp2(Op, dl, DAG);
 
   // No special expansion.
@@ -5530,9 +5562,9 @@ static SDValue ExpandPowI(const SDLoc &DL, SDValue LHS, SDValue RHS,
   return DAG.getNode(ISD::FPOWI, DL, LHS.getValueType(), LHS, RHS);
 }
 
-static SDValue expandDivFix(unsigned Opcode, const SDLoc &DL,
-                            SDValue LHS, SDValue RHS, SDValue Scale,
-                            SelectionDAG &DAG, const TargetLowering &TLI) {
+static SDValue expandDivFix(unsigned Opcode, const SDLoc &DL, SDValue LHS,
+                            SDValue RHS, SDValue Scale, SelectionDAG &DAG,
+                            const TargetLowering &TLI) {
   EVT VT = LHS.getValueType();
   bool Signed = Opcode == ISD::SDIVFIX || Opcode == ISD::SDIVFIXSAT;
   bool Saturating = Opcode == ISD::SDIVFIXSAT || Opcode == ISD::UDIVFIXSAT;
@@ -5559,8 +5591,8 @@ static SDValue expandDivFix(unsigned Opcode, const SDLoc &DL,
   if ((ScaleInt > 0 || (Saturating && Signed)) &&
       (TLI.isTypeLegal(VT) ||
        (VT.isVector() && TLI.isTypeLegal(VT.getVectorElementType())))) {
-    TargetLowering::LegalizeAction Action = TLI.getFixedPointOperationAction(
-        Opcode, VT, ScaleInt);
+    TargetLowering::LegalizeAction Action =
+        TLI.getFixedPointOperationAction(Opcode, VT, ScaleInt);
     if (Action != TargetLowering::Legal && Action != TargetLowering::Custom) {
       EVT PromVT;
       if (VT.isScalarInteger())
@@ -5684,8 +5716,8 @@ bool SelectionDAGBuilder::EmitFuncArgumentDbgValue(
     // we should only emit as ArgDbgValue if the Variable is an argument to the
     // current function, and the dbg.value intrinsic is found in the entry
     // block.
-    bool VariableIsFunctionInputArg = Variable->isParameter() &&
-        !DL->getInlinedAt();
+    bool VariableIsFunctionInputArg =
+        Variable->isParameter() && !DL->getInlinedAt();
     bool IsInPrologue = SDNodeOrder == LowestSDNodeOrder;
     if (!IsInPrologue && !VariableIsFunctionInputArg)
       return false;
@@ -5759,54 +5791,55 @@ bool SelectionDAGBuilder::EmitFuncArgumentDbgValue(
     SDValue LCandidate = peekThroughBitcasts(N);
     if (LoadSDNode *LNode = dyn_cast<LoadSDNode>(LCandidate.getNode()))
       if (FrameIndexSDNode *FINode =
-          dyn_cast<FrameIndexSDNode>(LNode->getBasePtr().getNode()))
+              dyn_cast<FrameIndexSDNode>(LNode->getBasePtr().getNode()))
         Op = MachineOperand::CreateFI(FINode->getIndex());
   }
 
   if (!Op) {
     // Create a DBG_VALUE for each decomposed value in ArgRegs to cover Reg
-    auto splitMultiRegDbgValue = [&](ArrayRef<std::pair<unsigned, TypeSize>>
-                                         SplitRegs) {
-      unsigned Offset = 0;
-      for (const auto &RegAndSize : SplitRegs) {
-        // If the expression is already a fragment, the current register
-        // offset+size might extend beyond the fragment. In this case, only
-        // the register bits that are inside the fragment are relevant.
-        int RegFragmentSizeInBits = RegAndSize.second;
-        if (auto ExprFragmentInfo = Expr->getFragmentInfo()) {
-          uint64_t ExprFragmentSizeInBits = ExprFragmentInfo->SizeInBits;
-          // The register is entirely outside the expression fragment,
-          // so is irrelevant for debug info.
-          if (Offset >= ExprFragmentSizeInBits)
-            break;
-          // The register is partially outside the expression fragment, only
-          // the low bits within the fragment are relevant for debug info.
-          if (Offset + RegFragmentSizeInBits > ExprFragmentSizeInBits) {
-            RegFragmentSizeInBits = ExprFragmentSizeInBits - Offset;
-          }
-        }
+    auto splitMultiRegDbgValue =
+        [&](ArrayRef<std::pair<unsigned, TypeSize>> SplitRegs) {
+          unsigned Offset = 0;
+          for (const auto &RegAndSize : SplitRegs) {
+            // If the expression is already a fragment, the current register
+            // offset+size might extend beyond the fragment. In this case, only
+            // the register bits that are inside the fragment are relevant.
+            int RegFragmentSizeInBits = RegAndSize.second;
+            if (auto ExprFragmentInfo = Expr->getFragmentInfo()) {
+              uint64_t ExprFragmentSizeInBits = ExprFragmentInfo->SizeInBits;
+              // The register is entirely outside the expression fragment,
+              // so is irrelevant for debug info.
+              if (Offset >= ExprFragmentSizeInBits)
+                break;
+              // The register is partially outside the expression fragment, only
+              // the low bits within the fragment are relevant for debug info.
+              if (Offset + RegFragmentSizeInBits > ExprFragmentSizeInBits) {
+                RegFragmentSizeInBits = ExprFragmentSizeInBits - Offset;
+              }
+            }
 
-        auto FragmentExpr = DIExpression::createFragmentExpression(
-            Expr, Offset, RegFragmentSizeInBits);
-        Offset += RegAndSize.second;
-        // If a valid fragment expression cannot be created, the variable's
-        // correct value cannot be determined and so it is set as Undef.
-        if (!FragmentExpr) {
-          SDDbgValue *SDV = DAG.getConstantDbgValue(
-              Variable, Expr, UndefValue::get(V->getType()), DL, SDNodeOrder);
-          DAG.AddDbgValue(SDV, false);
-          continue;
-        }
-        MachineInstr *NewMI =
-            MakeVRegDbgValue(RegAndSize.first, *FragmentExpr,
-                             Kind != FuncArgumentDbgValueKind::Value);
-        FuncInfo.ArgDbgValues.push_back(NewMI);
-      }
-    };
+            auto FragmentExpr = DIExpression::createFragmentExpression(
+                Expr, Offset, RegFragmentSizeInBits);
+            Offset += RegAndSize.second;
+            // If a valid fragment expression cannot be created, the variable's
+            // correct value cannot be determined and so it is set as Undef.
+            if (!FragmentExpr) {
+              SDDbgValue *SDV = DAG.getConstantDbgValue(
+                  Variable, Expr, UndefValue::get(V->getType()), DL,
+                  SDNodeOrder);
+              DAG.AddDbgValue(SDV, false);
+              continue;
+            }
+            MachineInstr *NewMI =
+                MakeVRegDbgValue(RegAndSize.first, *FragmentExpr,
+                                 Kind != FuncArgumentDbgValueKind::Value);
+            FuncInfo.ArgDbgValues.push_back(NewMI);
+          }
+        };
 
     // Check if ValueMap has reg number.
-    DenseMap<const Value *, Register>::const_iterator
-      VMI = FuncInfo.ValueMap.find(V);
+    DenseMap<const Value *, Register>::const_iterator VMI =
+        FuncInfo.ValueMap.find(V);
     if (VMI != FuncInfo.ValueMap.end()) {
       const auto &TLI = DAG.getTargetLoweringInfo();
       RegsForValue RFV(V->getContext(), TLI, DAG.getDataLayout(), VMI->second,
@@ -5892,7 +5925,7 @@ static unsigned FixedPointIntrinsicToOpcode(unsigned Intrinsic) {
 }
 
 void SelectionDAGBuilder::lowerCallToExternalSymbol(const CallInst &I,
-                                           const char *FunctionName) {
+                                                    const char *FunctionName) {
   assert(FunctionName && "FunctionName must not be nullptr");
   SDValue Callee = DAG.getExternalSymbol(
       FunctionName,
@@ -5974,9 +6007,15 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     setValue(&I, DAG.getVScale(sdl, VT, APInt(VT.getSizeInBits(), 1)));
     return;
   }
-  case Intrinsic::vastart:  visitVAStart(I); return;
-  case Intrinsic::vaend:    visitVAEnd(I); return;
-  case Intrinsic::vacopy:   visitVACopy(I); return;
+  case Intrinsic::vastart:
+    visitVAStart(I);
+    return;
+  case Intrinsic::vaend:
+    visitVAEnd(I);
+    return;
+  case Intrinsic::vacopy:
+    visitVACopy(I);
+    return;
   case Intrinsic::returnaddress:
     setValue(&I, DAG.getNode(ISD::RETURNADDR, sdl,
                              TLI.getValueType(DAG.getDataLayout(), I.getType()),
@@ -6004,8 +6043,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     SDValue RegName =
         DAG.getMDNode(cast<MDNode>(cast<MetadataAsValue>(Reg)->getMetadata()));
     EVT VT = TLI.getValueType(DAG.getDataLayout(), I.getType());
-    Res = DAG.getNode(ISD::READ_REGISTER, sdl,
-      DAG.getVTList(VT, MVT::Other), Chain, RegName);
+    Res = DAG.getNode(ISD::READ_REGISTER, sdl, DAG.getVTList(VT, MVT::Other),
+                      Chain, RegName);
     setValue(&I, Res);
     DAG.setRoot(Res.getValue(1));
     return;
@@ -6312,9 +6351,7 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::eh_return_i32:
   case Intrinsic::eh_return_i64:
     DAG.getMachineFunction().setCallsEHReturn(true);
-    DAG.setRoot(DAG.getNode(ISD::EH_RETURN, sdl,
-                            MVT::Other,
-                            getControlRoot(),
+    DAG.setRoot(DAG.getNode(ISD::EH_RETURN, sdl, MVT::Other, getControlRoot(),
                             getValue(I.getArgOperand(0)),
                             getValue(I.getArgOperand(1))));
     return;
@@ -6338,7 +6375,7 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     // Get and store the index of the function context.
     MachineFrameInfo &MFI = DAG.getMachineFunction().getFrameInfo();
     AllocaInst *FnCtx =
-      cast<AllocaInst>(I.getArgOperand(0)->stripPointerCasts());
+        cast<AllocaInst>(I.getArgOperand(0)->stripPointerCasts());
     int FI = FuncInfo.StaticAllocaMap[FnCtx];
     MFI.setFunctionContextIndex(FI);
     return;
@@ -6354,12 +6391,12 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     return;
   }
   case Intrinsic::eh_sjlj_longjmp:
-    DAG.setRoot(DAG.getNode(ISD::EH_SJLJ_LONGJMP, sdl, MVT::Other,
-                            getRoot(), getValue(I.getArgOperand(0))));
+    DAG.setRoot(DAG.getNode(ISD::EH_SJLJ_LONGJMP, sdl, MVT::Other, getRoot(),
+                            getValue(I.getArgOperand(0))));
     return;
   case Intrinsic::eh_sjlj_setup_dispatch:
-    DAG.setRoot(DAG.getNode(ISD::EH_SJLJ_SETUP_DISPATCH, sdl, MVT::Other,
-                            getRoot()));
+    DAG.setRoot(
+        DAG.getNode(ISD::EH_SJLJ_SETUP_DISPATCH, sdl, MVT::Other, getRoot()));
     return;
   case Intrinsic::masked_gather:
     visitMaskedGather(I);
@@ -6420,20 +6457,47 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::canonicalize: {
     unsigned Opcode;
     switch (Intrinsic) {
-    default: llvm_unreachable("Impossible intrinsic");  // Can't reach here.
-    case Intrinsic::sqrt:      Opcode = ISD::FSQRT;      break;
-    case Intrinsic::fabs:      Opcode = ISD::FABS;       break;
-    case Intrinsic::sin:       Opcode = ISD::FSIN;       break;
-    case Intrinsic::cos:       Opcode = ISD::FCOS;       break;
-    case Intrinsic::exp10:     Opcode = ISD::FEXP10;     break;
-    case Intrinsic::floor:     Opcode = ISD::FFLOOR;     break;
-    case Intrinsic::ceil:      Opcode = ISD::FCEIL;      break;
-    case Intrinsic::trunc:     Opcode = ISD::FTRUNC;     break;
-    case Intrinsic::rint:      Opcode = ISD::FRINT;      break;
-    case Intrinsic::nearbyint: Opcode = ISD::FNEARBYINT; break;
-    case Intrinsic::round:     Opcode = ISD::FROUND;     break;
-    case Intrinsic::roundeven: Opcode = ISD::FROUNDEVEN; break;
-    case Intrinsic::canonicalize: Opcode = ISD::FCANONICALIZE; break;
+    default:
+      llvm_unreachable("Impossible intrinsic"); // Can't reach here.
+    case Intrinsic::sqrt:
+      Opcode = ISD::FSQRT;
+      break;
+    case Intrinsic::fabs:
+      Opcode = ISD::FABS;
+      break;
+    case Intrinsic::sin:
+      Opcode = ISD::FSIN;
+      break;
+    case Intrinsic::cos:
+      Opcode = ISD::FCOS;
+      break;
+    case Intrinsic::exp10:
+      Opcode = ISD::FEXP10;
+      break;
+    case Intrinsic::floor:
+      Opcode = ISD::FFLOOR;
+      break;
+    case Intrinsic::ceil:
+      Opcode = ISD::FCEIL;
+      break;
+    case Intrinsic::trunc:
+      Opcode = ISD::FTRUNC;
+      break;
+    case Intrinsic::rint:
+      Opcode = ISD::FRINT;
+      break;
+    case Intrinsic::nearbyint:
+      Opcode = ISD::FNEARBYINT;
+      break;
+    case Intrinsic::round:
+      Opcode = ISD::FROUND;
+      break;
+    case Intrinsic::roundeven:
+      Opcode = ISD::FROUNDEVEN;
+      break;
+    case Intrinsic::canonicalize:
+      Opcode = ISD::FCANONICALIZE;
+      break;
     }
 
     setValue(&I, DAG.getNode(Opcode, sdl,
@@ -6447,16 +6511,24 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::llrint: {
     unsigned Opcode;
     switch (Intrinsic) {
-    default: llvm_unreachable("Impossible intrinsic");  // Can't reach here.
-    case Intrinsic::lround:  Opcode = ISD::LROUND;  break;
-    case Intrinsic::llround: Opcode = ISD::LLROUND; break;
-    case Intrinsic::lrint:   Opcode = ISD::LRINT;   break;
-    case Intrinsic::llrint:  Opcode = ISD::LLRINT;  break;
+    default:
+      llvm_unreachable("Impossible intrinsic"); // Can't reach here.
+    case Intrinsic::lround:
+      Opcode = ISD::LROUND;
+      break;
+    case Intrinsic::llround:
+      Opcode = ISD::LLROUND;
+      break;
+    case Intrinsic::lrint:
+      Opcode = ISD::LRINT;
+      break;
+    case Intrinsic::llrint:
+      Opcode = ISD::LLRINT;
+      break;
     }
 
     EVT RetVT = TLI.getValueType(DAG.getDataLayout(), I.getType());
-    setValue(&I, DAG.getNode(Opcode, sdl, RetVT,
-                             getValue(I.getArgOperand(0))));
+    setValue(&I, DAG.getNode(Opcode, sdl, RetVT, getValue(I.getArgOperand(0))));
     return;
   }
   case Intrinsic::minnum:
@@ -6569,11 +6641,11 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     return;
   }
   case Intrinsic::convert_to_fp16:
-    setValue(&I, DAG.getNode(ISD::BITCAST, sdl, MVT::i16,
-                             DAG.getNode(ISD::FP_ROUND, sdl, MVT::f16,
-                                         getValue(I.getArgOperand(0)),
-                                         DAG.getTargetConstant(0, sdl,
-                                                               MVT::i32))));
+    setValue(&I,
+             DAG.getNode(ISD::BITCAST, sdl, MVT::i16,
+                         DAG.getNode(ISD::FP_ROUND, sdl, MVT::f16,
+                                     getValue(I.getArgOperand(0)),
+                                     DAG.getTargetConstant(0, sdl, MVT::i32))));
     return;
   case Intrinsic::convert_from_fp16:
     setValue(&I, DAG.getNode(ISD::FP_EXTEND, sdl,
@@ -6820,8 +6892,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     SDValue Op1 = getValue(I.getArgOperand(0));
     SDValue Op2 = getValue(I.getArgOperand(1));
     SDValue Op3 = getValue(I.getArgOperand(2));
-    setValue(&I, expandDivFix(FixedPointIntrinsicToOpcode(Intrinsic), sdl,
-                              Op1, Op2, Op3, DAG, TLI));
+    setValue(&I, expandDivFix(FixedPointIntrinsicToOpcode(Intrinsic), sdl, Op1,
+                              Op2, Op3, DAG, TLI));
     return;
   }
   case Intrinsic::smax: {
@@ -6864,7 +6936,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   }
   case Intrinsic::stackrestore:
     Res = getValue(I.getArgOperand(0));
-    DAG.setRoot(DAG.getNode(ISD::STACKRESTORE, sdl, MVT::Other, getRoot(), Res));
+    DAG.setRoot(
+        DAG.getNode(ISD::STACKRESTORE, sdl, MVT::Other, getRoot(), Res));
     return;
   case Intrinsic::get_dynamic_area_offset: {
     SDValue Op = getRoot();
@@ -6910,7 +6983,7 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     if (TLI.useLoadStackGuardNode())
       Src = getLoadStackGuard(DAG, sdl, Chain);
     else
-      Src = getValue(I.getArgOperand(0));   // The guard's value.
+      Src = getValue(I.getArgOperand(0)); // The guard's value.
 
     AllocaInst *Slot = cast<AllocaInst>(I.getArgOperand(1));
 
@@ -6999,7 +7072,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::gcwrite:
     llvm_unreachable("GC failed to lower gcread/gcwrite intrinsics!");
   case Intrinsic::get_rounding:
-    Res = DAG.getNode(ISD::GET_ROUNDING, sdl, {MVT::i32, MVT::Other}, getRoot());
+    Res =
+        DAG.getNode(ISD::GET_ROUNDING, sdl, {MVT::i32, MVT::Other}, getRoot());
     setValue(&I, Res);
     DAG.setRoot(Res.getValue(1));
     return;
@@ -7029,7 +7103,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
                 cast<ConstantInt>(I.getArgOperand(0))->getZExtValue(), sdl,
                 MVT::i32)));
         break;
-      default: llvm_unreachable("unknown trap intrinsic");
+      default:
+        llvm_unreachable("unknown trap intrinsic");
       }
       return;
     }
@@ -7061,13 +7136,26 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::smul_with_overflow: {
     ISD::NodeType Op;
     switch (Intrinsic) {
-    default: llvm_unreachable("Impossible intrinsic");  // Can't reach here.
-    case Intrinsic::uadd_with_overflow: Op = ISD::UADDO; break;
-    case Intrinsic::sadd_with_overflow: Op = ISD::SADDO; break;
-    case Intrinsic::usub_with_overflow: Op = ISD::USUBO; break;
-    case Intrinsic::ssub_with_overflow: Op = ISD::SSUBO; break;
-    case Intrinsic::umul_with_overflow: Op = ISD::UMULO; break;
-    case Intrinsic::smul_with_overflow: Op = ISD::SMULO; break;
+    default:
+      llvm_unreachable("Impossible intrinsic"); // Can't reach here.
+    case Intrinsic::uadd_with_overflow:
+      Op = ISD::UADDO;
+      break;
+    case Intrinsic::sadd_with_overflow:
+      Op = ISD::SADDO;
+      break;
+    case Intrinsic::usub_with_overflow:
+      Op = ISD::USUBO;
+      break;
+    case Intrinsic::ssub_with_overflow:
+      Op = ISD::SSUBO;
+      break;
+    case Intrinsic::umul_with_overflow:
+      Op = ISD::UMULO;
+      break;
+    case Intrinsic::smul_with_overflow:
+      Op = ISD::SMULO;
+      break;
     }
     SDValue Op1 = getValue(I.getArgOperand(0));
     SDValue Op2 = getValue(I.getArgOperand(1));
@@ -7075,8 +7163,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     EVT ResultVT = Op1.getValueType();
     EVT OverflowVT = MVT::i1;
     if (ResultVT.isVector())
-      OverflowVT = EVT::getVectorVT(
-          *Context, OverflowVT, ResultVT.getVectorElementCount());
+      OverflowVT = EVT::getVectorVT(*Context, OverflowVT,
+                                    ResultVT.getVectorElementCount());
 
     SDVTList VTs = DAG.getVTList(ResultVT, OverflowVT);
     setValue(&I, DAG.getNode(Op, sdl, VTs, Op1, Op2));
@@ -7085,7 +7173,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
   case Intrinsic::prefetch: {
     SDValue Ops[5];
     unsigned rw = cast<ConstantInt>(I.getArgOperand(1))->getZExtValue();
-    auto Flags = rw == 0 ? MachineMemOperand::MOLoad :MachineMemOperand::MOStore;
+    auto Flags =
+        rw == 0 ? MachineMemOperand::MOLoad : MachineMemOperand::MOStore;
     Ops[0] = DAG.getRoot();
     Ops[1] = getValue(I.getArgOperand(0));
     Ops[2] = getValue(I.getArgOperand(1));
@@ -7238,8 +7327,7 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     // Create a MCSymbol for the label to avoid any target lowering
     // that would make this PC relative.
     SDValue OffsetSym = DAG.getMCSymbol(FrameAllocSym, PtrVT);
-    SDValue OffsetVal =
-        DAG.getNode(ISD::LOCAL_RECOVER, sdl, PtrVT, OffsetSym);
+    SDValue OffsetVal = DAG.getNode(ISD::LOCAL_RECOVER, sdl, PtrVT, OffsetSym);
 
     // Add the offset to the FP.
     SDValue Add = DAG.getMemBasePlusOffset(FPVal, OffsetVal, sdl);
@@ -7449,10 +7537,10 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
     SDValue VectorIndex = DAG.getSplat(VecTy, sdl, Index);
     SDValue VectorTripCount = DAG.getSplat(VecTy, sdl, TripCount);
     SDValue VectorStep = DAG.getStepVector(sdl, VecTy);
-    SDValue VectorInduction = DAG.getNode(
-        ISD::UADDSAT, sdl, VecTy, VectorIndex, VectorStep);
-    SDValue SetCC = DAG.getSetCC(sdl, CCVT, VectorInduction,
-                                 VectorTripCount, ISD::CondCode::SETULT);
+    SDValue VectorInduction =
+        DAG.getNode(ISD::UADDSAT, sdl, VecTy, VectorIndex, VectorStep);
+    SDValue SetCC = DAG.getSetCC(sdl, CCVT, VectorInduction, VectorTripCount,
+                                 ISD::CondCode::SETULT);
     setValue(&I, SetCC);
     return;
   }
@@ -7480,8 +7568,8 @@ void SelectionDAGBuilder::visitIntrinsicCall(const CallInst &I,
       CountVT = VT;
     }
 
-    SDValue MaxEVL = DAG.getElementCount(sdl, CountVT,
-                                         ElementCount::get(VF, IsScalable));
+    SDValue MaxEVL =
+        DAG.getElementCount(sdl, CountVT, ElementCount::get(VF, IsScalable));
 
     SDValue UMin = DAG.getNode(ISD::UMIN, sdl, CountVT, Count, MaxEVL);
     // Clip to the result type if needed.
@@ -7602,7 +7690,8 @@ void SelectionDAGBuilder::visitConstrainedFPIntrinsic(
 
   unsigned Opcode;
   switch (FPI.getIntrinsicID()) {
-  default: llvm_unreachable("Impossible intrinsic");  // Can't reach here.
+  default:
+    llvm_unreachable("Impossible intrinsic"); // Can't reach here.
 #define DAG_INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC, DAGN)               \
   case Intrinsic::INTRINSIC:                                                   \
     Opcode = ISD::STRICT_##DAGN;                                               \
@@ -7629,7 +7718,8 @@ void SelectionDAGBuilder::visitConstrainedFPIntrinsic(
   // A few strict DAG nodes carry additional operands that are not
   // set up by the default code above.
   switch (Opcode) {
-  default: break;
+  default:
+    break;
   case ISD::STRICT_FP_ROUND:
     Opers.push_back(
         DAG.getTargetConstant(0, sdl, TLI.getPointerTy(DAG.getDataLayout())));
@@ -7725,15 +7815,15 @@ void SelectionDAGBuilder::visitVPGather(
   if (!Alignment)
     Alignment = DAG.getEVTAlign(VT.getScalarType());
   unsigned AS =
-    PtrOperand->getType()->getScalarType()->getPointerAddressSpace();
+      PtrOperand->getType()->getScalarType()->getPointerAddressSpace();
   MachineMemOperand *MMO = DAG.getMachineFunction().getMachineMemOperand(
-     MachinePointerInfo(AS), MachineMemOperand::MOLoad,
-     MemoryLocation::UnknownSize, *Alignment, AAInfo, Ranges);
+      MachinePointerInfo(AS), MachineMemOperand::MOLoad,
+      MemoryLocation::UnknownSize, *Alignment, AAInfo, Ranges);
   SDValue Base, Index, Scale;
   ISD::MemIndexType IndexType;
-  bool UniformBase = getUniformBase(PtrOperand, Base, Index, IndexType, Scale,
-                                    this, VPIntrin.getParent(),
-                                    VT.getScalarStoreSize());
+  bool UniformBase =
+      getUniformBase(PtrOperand, Base, Index, IndexType, Scale, this,
+                     VPIntrin.getParent(), VT.getScalarStoreSize());
   if (!UniformBase) {
     Base = DAG.getConstant(0, DL, TLI.getPointerTy(DAG.getDataLayout()));
     Index = getValue(PtrOperand);
@@ -7794,15 +7884,14 @@ void SelectionDAGBuilder::visitVPScatter(
       MemoryLocation::UnknownSize, *Alignment, AAInfo);
   SDValue Base, Index, Scale;
   ISD::MemIndexType IndexType;
-  bool UniformBase = getUniformBase(PtrOperand, Base, Index, IndexType, Scale,
-                                    this, VPIntrin.getParent(),
-                                    VT.getScalarStoreSize());
+  bool UniformBase =
+      getUniformBase(PtrOperand, Base, Index, IndexType, Scale, this,
+                     VPIntrin.getParent(), VT.getScalarStoreSize());
   if (!UniformBase) {
     Base = DAG.getConstant(0, DL, TLI.getPointerTy(DAG.getDataLayout()));
     Index = getValue(PtrOperand);
     IndexType = ISD::SIGNED_SCALED;
-    Scale =
-      DAG.getTargetConstant(1, DL, TLI.getPointerTy(DAG.getDataLayout()));
+    Scale = DAG.getTargetConstant(1, DL, TLI.getPointerTy(DAG.getDataLayout()));
   }
   EVT IdxVT = Index.getValueType();
   EVT EltTy = IdxVT.getVectorElementType();
@@ -8119,8 +8208,7 @@ SelectionDAGBuilder::lowerInvokable(TargetLowering::CallLoweringInfo &CLI,
 }
 
 void SelectionDAGBuilder::LowerCallTo(const CallBase &CB, SDValue Callee,
-                                      bool isTailCall,
-                                      bool isMustTailCall,
+                                      bool isTailCall, bool isMustTailCall,
                                       const BasicBlock *EHPadBB) {
   auto &DL = DAG.getDataLayout();
   FunctionType *FTy = CB.getFunctionType();
@@ -8137,7 +8225,8 @@ void SelectionDAGBuilder::LowerCallTo(const CallBase &CB, SDValue Callee,
     // attribute.
     auto *Caller = CB.getParent()->getParent();
     if (Caller->getFnAttribute("disable-tail-calls").getValueAsString() ==
-        "true" && !isMustTailCall)
+            "true" &&
+        !isMustTailCall)
       isTailCall = false;
 
     // We can't tail call inside a function with a swifterror argument. Lowering
@@ -8157,7 +8246,8 @@ void SelectionDAGBuilder::LowerCallTo(const CallBase &CB, SDValue Callee,
       continue;
 
     SDValue ArgNode = getValue(V);
-    Entry.Node = ArgNode; Entry.Ty = V->getType();
+    Entry.Node = ArgNode;
+    Entry.Ty = V->getType();
 
     Entry.setAttributes(&CB, I - CB.arg_begin());
 
@@ -8399,10 +8489,9 @@ bool SelectionDAGBuilder::visitMemChrCall(const CallInst &I) {
   const Value *Length = I.getArgOperand(2);
 
   const SelectionDAGTargetInfo &TSI = DAG.getSelectionDAGInfo();
-  std::pair<SDValue, SDValue> Res =
-    TSI.EmitTargetCodeForMemchr(DAG, getCurSDLoc(), DAG.getRoot(),
-                                getValue(Src), getValue(Char), getValue(Length),
-                                MachinePointerInfo(Src));
+  std::pair<SDValue, SDValue> Res = TSI.EmitTargetCodeForMemchr(
+      DAG, getCurSDLoc(), DAG.getRoot(), getValue(Src), getValue(Char),
+      getValue(Length), MachinePointerInfo(Src));
   if (Res.first.getNode()) {
     setValue(&I, Res.first);
     PendingLoads.push_back(Res.second);
@@ -8433,11 +8522,10 @@ bool SelectionDAGBuilder::visitMemPCpyCall(const CallInst &I) {
   // because the return pointer needs to be adjusted by the size of
   // the copied memory.
   SDValue Root = getMemoryRoot();
-  SDValue MC = DAG.getMemcpy(Root, sdl, Dst, Src, Size, Alignment, false, false,
-                             /*isTailCall=*/false,
-                             MachinePointerInfo(I.getArgOperand(0)),
-                             MachinePointerInfo(I.getArgOperand(1)),
-                             I.getAAMetadata());
+  SDValue MC = DAG.getMemcpy(
+      Root, sdl, Dst, Src, Size, Alignment, false, false,
+      /*isTailCall=*/false, MachinePointerInfo(I.getArgOperand(0)),
+      MachinePointerInfo(I.getArgOperand(1)), I.getAAMetadata());
   assert(MC.getNode() != nullptr &&
          "** memcpy should not be lowered as TailCall in mempcpy context **");
   DAG.setRoot(MC);
@@ -8446,8 +8534,8 @@ bool SelectionDAGBuilder::visitMemPCpyCall(const CallInst &I) {
   Size = DAG.getSExtOrTrunc(Size, sdl, Dst.getValueType());
 
   // Adjust return pointer to point just past the last dst byte.
-  SDValue DstPlusSize = DAG.getNode(ISD::ADD, sdl, Dst.getValueType(),
-                                    Dst, Size);
+  SDValue DstPlusSize =
+      DAG.getNode(ISD::ADD, sdl, Dst.getValueType(), Dst, Size);
   setValue(&I, DstPlusSize);
   return true;
 }
@@ -8461,11 +8549,9 @@ bool SelectionDAGBuilder::visitStrCpyCall(const CallInst &I, bool isStpcpy) {
   const Value *Arg0 = I.getArgOperand(0), *Arg1 = I.getArgOperand(1);
 
   const SelectionDAGTargetInfo &TSI = DAG.getSelectionDAGInfo();
-  std::pair<SDValue, SDValue> Res =
-    TSI.EmitTargetCodeForStrcpy(DAG, getCurSDLoc(), getRoot(),
-                                getValue(Arg0), getValue(Arg1),
-                                MachinePointerInfo(Arg0),
-                                MachinePointerInfo(Arg1), isStpcpy);
+  std::pair<SDValue, SDValue> Res = TSI.EmitTargetCodeForStrcpy(
+      DAG, getCurSDLoc(), getRoot(), getValue(Arg0), getValue(Arg1),
+      MachinePointerInfo(Arg0), MachinePointerInfo(Arg1), isStpcpy);
   if (Res.first.getNode()) {
     setValue(&I, Res.first);
     DAG.setRoot(Res.second);
@@ -8484,11 +8570,9 @@ bool SelectionDAGBuilder::visitStrCmpCall(const CallInst &I) {
   const Value *Arg0 = I.getArgOperand(0), *Arg1 = I.getArgOperand(1);
 
   const SelectionDAGTargetInfo &TSI = DAG.getSelectionDAGInfo();
-  std::pair<SDValue, SDValue> Res =
-    TSI.EmitTargetCodeForStrcmp(DAG, getCurSDLoc(), DAG.getRoot(),
-                                getValue(Arg0), getValue(Arg1),
-                                MachinePointerInfo(Arg0),
-                                MachinePointerInfo(Arg1));
+  std::pair<SDValue, SDValue> Res = TSI.EmitTargetCodeForStrcmp(
+      DAG, getCurSDLoc(), DAG.getRoot(), getValue(Arg0), getValue(Arg1),
+      MachinePointerInfo(Arg0), MachinePointerInfo(Arg1));
   if (Res.first.getNode()) {
     processIntegerCallValue(I, Res.first, true);
     PendingLoads.push_back(Res.second);
@@ -8508,8 +8592,8 @@ bool SelectionDAGBuilder::visitStrLenCall(const CallInst &I) {
 
   const SelectionDAGTargetInfo &TSI = DAG.getSelectionDAGInfo();
   std::pair<SDValue, SDValue> Res =
-    TSI.EmitTargetCodeForStrlen(DAG, getCurSDLoc(), DAG.getRoot(),
-                                getValue(Arg0), MachinePointerInfo(Arg0));
+      TSI.EmitTargetCodeForStrlen(DAG, getCurSDLoc(), DAG.getRoot(),
+                                  getValue(Arg0), MachinePointerInfo(Arg0));
   if (Res.first.getNode()) {
     processIntegerCallValue(I, Res.first, false);
     PendingLoads.push_back(Res.second);
@@ -8528,10 +8612,9 @@ bool SelectionDAGBuilder::visitStrNLenCall(const CallInst &I) {
   const Value *Arg0 = I.getArgOperand(0), *Arg1 = I.getArgOperand(1);
 
   const SelectionDAGTargetInfo &TSI = DAG.getSelectionDAGInfo();
-  std::pair<SDValue, SDValue> Res =
-    TSI.EmitTargetCodeForStrnlen(DAG, getCurSDLoc(), DAG.getRoot(),
-                                 getValue(Arg0), getValue(Arg1),
-                                 MachinePointerInfo(Arg0));
+  std::pair<SDValue, SDValue> Res = TSI.EmitTargetCodeForStrnlen(
+      DAG, getCurSDLoc(), DAG.getRoot(), getValue(Arg0), getValue(Arg1),
+      MachinePointerInfo(Arg0));
   if (Res.first.getNode()) {
     processIntegerCallValue(I, Res.first, false);
     PendingLoads.push_back(Res.second);
@@ -8613,7 +8696,8 @@ void SelectionDAGBuilder::visitCall(const CallInst &I) {
         F->hasName() && LibInfo->getLibFunc(*F, Func) &&
         LibInfo->hasOptimizedCodeGen(Func)) {
       switch (Func) {
-      default: break;
+      default:
+        break;
       case LibFunc_bcmp:
         if (visitMemCmpBCmpCall(I))
           return;
@@ -8802,8 +8886,7 @@ class SDISelAsmOperandInfo : public TargetLowering::AsmOperandInfo {
   RegsForValue AssignedRegs;
 
   explicit SDISelAsmOperandInfo(const TargetLowering::AsmOperandInfo &info)
-    : TargetLowering::AsmOperandInfo(info), CallOperand(nullptr, 0) {
-  }
+      : TargetLowering::AsmOperandInfo(info), CallOperand(nullptr, 0) {}
 
   /// Whether or not this operand accesses memory
   bool hasMemory(const TargetLowering &TLI) const {
@@ -8819,7 +8902,6 @@ class SDISelAsmOperandInfo : public TargetLowering::AsmOperandInfo {
   }
 };
 
-
 } // end anonymous namespace
 
 /// Make sure that the output operand \p OpInfo and its corresponding input
@@ -9116,8 +9198,8 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
     // Compute the constraint code and ConstraintType to use.
     TLI.ComputeConstraintToUse(T, SDValue());
 
-    if (T.ConstraintType == TargetLowering::C_Immediate &&
-        OpInfo.CallOperand && !isa<ConstantSDNode>(OpInfo.CallOperand))
+    if (T.ConstraintType == TargetLowering::C_Immediate && OpInfo.CallOperand &&
+        !isa<ConstantSDNode>(OpInfo.CallOperand))
       // We've delayed emitting a diagnostic like the "n" constraint because
       // inlining could cause an integer showing up.
       return emitInlineAsmError(Call, "constraint '" + Twine(T.ConstraintCode) +
@@ -9218,14 +9300,14 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
       // It is now an indirect operand.
       OpInfo.isIndirect = true;
     }
-
   }
 
   // AsmNodeOperands - The operands for the ISD::INLINEASM node.
   std::vector<SDValue> AsmNodeOperands;
-  AsmNodeOperands.push_back(SDValue());  // reserve space for input chain
+  AsmNodeOperands.push_back(SDValue()); // reserve space for input chain
   AsmNodeOperands.push_back(DAG.getTargetExternalSymbol(
-      IA->getAsmString().c_str(), TLI.getProgramPointerTy(DAG.getDataLayout())));
+      IA->getAsmString().c_str(),
+      TLI.getProgramPointerTy(DAG.getDataLayout())));
 
   // If we have a !srcloc metadata node associated with it, we want to attach
   // this to the ultimately generated inline asm machineinstr.  To do this, we
@@ -9289,8 +9371,8 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
         // Add information to the INLINEASM node to know about this output.
         unsigned OpFlags = InlineAsm::getFlagWord(InlineAsm::Kind::Mem, 1);
         OpFlags = InlineAsm::getFlagWordForMem(OpFlags, ConstraintID);
-        AsmNodeOperands.push_back(DAG.getTargetConstant(OpFlags, getCurSDLoc(),
-                                                        MVT::i32));
+        AsmNodeOperands.push_back(
+            DAG.getTargetConstant(OpFlags, getCurSDLoc(), MVT::i32));
         AsmNodeOperands.push_back(OpInfo.CallOperand);
       } else {
         // Otherwise, this outputs to a register (directly for C_Register /
@@ -9325,7 +9407,7 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
         auto CurOp = findMatchingInlineAsmOperand(OpInfo.getMatchedOperand(),
                                                   AsmNodeOperands);
         unsigned OpFlag =
-          cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
+            cast<ConstantSDNode>(AsmNodeOperands[CurOp])->getZExtValue();
         if (InlineAsm::isRegDefKind(OpFlag) ||
             InlineAsm::isRegDefEarlyClobberKind(OpFlag)) {
           // Add (OpFlag&0xffff)>>3 registers to MatchedRegs.
@@ -9341,7 +9423,7 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
           MachineFunction &MF = DAG.getMachineFunction();
           MachineRegisterInfo &MRI = MF.getRegInfo();
           const TargetRegisterInfo &TRI = *MF.getSubtarget().getRegisterInfo();
-          auto *R = cast<RegisterSDNode>(AsmNodeOperands[CurOp+1]);
+          auto *R = cast<RegisterSDNode>(AsmNodeOperands[CurOp + 1]);
           Register TiedReg = R->getReg();
           MVT RegVT = R->getSimpleValueType(0);
           const TargetRegisterClass *RC =
@@ -9369,24 +9451,23 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
         // Add information to the INLINEASM node to know about this input.
         // See InlineAsm.h isUseOperandTiedToDef.
         OpFlag = InlineAsm::convertMemFlagWordToMatchingFlagWord(OpFlag);
-        OpFlag = InlineAsm::getFlagWordForMatchingOp(OpFlag,
-                                                    OpInfo.getMatchedOperand());
+        OpFlag = InlineAsm::getFlagWordForMatchingOp(
+            OpFlag, OpInfo.getMatchedOperand());
         AsmNodeOperands.push_back(DAG.getTargetConstant(
             OpFlag, getCurSDLoc(), TLI.getPointerTy(DAG.getDataLayout())));
-        AsmNodeOperands.push_back(AsmNodeOperands[CurOp+1]);
+        AsmNodeOperands.push_back(AsmNodeOperands[CurOp + 1]);
         break;
       }
 
       // Treat indirect 'X' constraint as memory.
-      if (OpInfo.ConstraintType == TargetLowering::C_Other &&
-          OpInfo.isIndirect)
+      if (OpInfo.ConstraintType == TargetLowering::C_Other && OpInfo.isIndirect)
         OpInfo.ConstraintType = TargetLowering::C_Memory;
 
       if (OpInfo.ConstraintType == TargetLowering::C_Immediate ||
           OpInfo.ConstraintType == TargetLowering::C_Other) {
         std::vector<SDValue> Ops;
         TLI.LowerAsmOperandForConstraint(InOperandVal, OpInfo.ConstraintCode,
-                                          Ops, DAG);
+                                         Ops, DAG);
         if (Ops.empty()) {
           if (OpInfo.ConstraintType == TargetLowering::C_Immediate)
             if (isa<ConstantSDNode>(InOperandVal)) {
@@ -9426,9 +9507,8 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
         // Add information to the INLINEASM node to know about this input.
         unsigned ResOpType = InlineAsm::getFlagWord(InlineAsm::Kind::Mem, 1);
         ResOpType = InlineAsm::getFlagWordForMem(ResOpType, ConstraintID);
-        AsmNodeOperands.push_back(DAG.getTargetConstant(ResOpType,
-                                                        getCurSDLoc(),
-                                                        MVT::i32));
+        AsmNodeOperands.push_back(
+            DAG.getTargetConstant(ResOpType, getCurSDLoc(), MVT::i32));
         AsmNodeOperands.push_back(InOperandVal);
         break;
       }
@@ -9506,7 +9586,8 @@ void SelectionDAGBuilder::visitInlineAsm(const CallBase &Call,
 
   // Finish up input operands.  Set the input chain and add the flag last.
   AsmNodeOperands[InlineAsm::Op_InputChain] = Chain;
-  if (Glue.getNode()) AsmNodeOperands.push_back(Glue);
+  if (Glue.getNode())
+    AsmNodeOperands.push_back(Glue);
 
   unsigned ISDOpc = IsCallBr ? ISD::INLINEASM_BR : ISD::INLINEASM;
   Chain = DAG.getNode(ISDOpc, getCurSDLoc(),
@@ -9650,8 +9731,7 @@ void SelectionDAGBuilder::emitInlineAsmError(const CallBase &Call,
 }
 
 void SelectionDAGBuilder::visitVAStart(const CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VASTART, getCurSDLoc(),
-                          MVT::Other, getRoot(),
+  DAG.setRoot(DAG.getNode(ISD::VASTART, getCurSDLoc(), MVT::Other, getRoot(),
                           getValue(I.getArgOperand(0)),
                           DAG.getSrcValue(I.getArgOperand(0))));
 }
@@ -9672,15 +9752,13 @@ void SelectionDAGBuilder::visitVAArg(const VAArgInst &I) {
 }
 
 void SelectionDAGBuilder::visitVAEnd(const CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VAEND, getCurSDLoc(),
-                          MVT::Other, getRoot(),
+  DAG.setRoot(DAG.getNode(ISD::VAEND, getCurSDLoc(), MVT::Other, getRoot(),
                           getValue(I.getArgOperand(0)),
                           DAG.getSrcValue(I.getArgOperand(0))));
 }
 
 void SelectionDAGBuilder::visitVACopy(const CallInst &I) {
-  DAG.setRoot(DAG.getNode(ISD::VACOPY, getCurSDLoc(),
-                          MVT::Other, getRoot(),
+  DAG.setRoot(DAG.getNode(ISD::VACOPY, getCurSDLoc(), MVT::Other, getRoot(),
                           getValue(I.getArgOperand(0)),
                           getValue(I.getArgOperand(1)),
                           DAG.getSrcValue(I.getArgOperand(0)),
@@ -9740,8 +9818,7 @@ void SelectionDAGBuilder::populateCallLoweringInfo(
 
   // Populate the argument list.
   // Attributes for args start at offset 1, after the return attribute.
-  for (unsigned ArgI = ArgIdx, ArgE = ArgIdx + NumArgs;
-       ArgI != ArgE; ++ArgI) {
+  for (unsigned ArgI = ArgIdx, ArgE = ArgIdx + NumArgs; ArgI != ArgE; ++ArgI) {
     const Value *V = Call->getOperand(ArgI);
 
     assert(!V->getType()->isEmptyTy() && "Empty type passed to intrinsic.");
@@ -9879,13 +9956,13 @@ void SelectionDAGBuilder::visitPatchpoint(const CallBase &CB,
   SDValue Callee = getValue(CB.getArgOperand(PatchPointOpers::TargetPos));
 
   // Handle immediate and symbolic callees.
-  if (auto* ConstCallee = dyn_cast<ConstantSDNode>(Callee))
+  if (auto *ConstCallee = dyn_cast<ConstantSDNode>(Callee))
     Callee = DAG.getIntPtrConstant(ConstCallee->getZExtValue(), dl,
                                    /*isTarget=*/true);
-  else if (auto* SymbolicCallee = dyn_cast<GlobalAddressSDNode>(Callee))
-    Callee =  DAG.getTargetGlobalAddress(SymbolicCallee->getGlobal(),
-                                         SDLoc(SymbolicCallee),
-                                         SymbolicCallee->getValueType(0));
+  else if (auto *SymbolicCallee = dyn_cast<GlobalAddressSDNode>(Callee))
+    Callee = DAG.getTargetGlobalAddress(SymbolicCallee->getGlobal(),
+                                        SDLoc(SymbolicCallee),
+                                        SymbolicCallee->getValueType(0));
 
   // Get the real number of arguments participating in the call <numArgs>
   SDValue NArgVal = getValue(CB.getArgOperand(PatchPointOpers::NArgPos));
@@ -9937,11 +10014,10 @@ void SelectionDAGBuilder::visitPatchpoint(const CallBase &CB,
   // Add the <id> and <numBytes> constants.
   SDValue IDVal = getValue(CB.getArgOperand(PatchPointOpers::IDPos));
   Ops.push_back(DAG.getTargetConstant(
-                  cast<ConstantSDNode>(IDVal)->getZExtValue(), dl, MVT::i64));
+      cast<ConstantSDNode>(IDVal)->getZExtValue(), dl, MVT::i64));
   SDValue NBytesVal = getValue(CB.getArgOperand(PatchPointOpers::NBytesPos));
   Ops.push_back(DAG.getTargetConstant(
-                  cast<ConstantSDNode>(NBytesVal)->getZExtValue(), dl,
-                  MVT::i32));
+      cast<ConstantSDNode>(NBytesVal)->getZExtValue(), dl, MVT::i32));
 
   // Add the callee.
   Ops.push_back(Callee);
@@ -9963,7 +10039,7 @@ void SelectionDAGBuilder::visitPatchpoint(const CallBase &CB,
       Ops.push_back(getValue(CB.getArgOperand(i)));
 
   // Push the arguments from the call instruction.
-  SDNode::op_iterator e = HasGlue ? Call->op_end()-2 : Call->op_end()-1;
+  SDNode::op_iterator e = HasGlue ? Call->op_end() - 2 : Call->op_end() - 1;
   Ops.append(Call->op_begin() + 2, e);
 
   // Push live variables for the stack map.
@@ -10153,10 +10229,11 @@ TargetLowering::LowerCallTo(TargetLowering::CallLoweringInfo &CLI) const {
     MachineFunction &MF = CLI.DAG.getMachineFunction();
     DemoteStackIdx =
         MF.getFrameInfo().CreateStackObject(TySize, Alignment, false);
-    Type *StackSlotPtrType = PointerType::get(CLI.RetTy,
-                                              DL.getAllocaAddrSpace());
+    Type *StackSlotPtrType =
+        PointerType::get(CLI.RetTy, DL.getAllocaAddrSpace());
 
-    DemoteStackSlot = CLI.DAG.getFrameIndex(DemoteStackIdx, getFrameIndexTy(DL));
+    DemoteStackSlot =
+        CLI.DAG.getFrameIndex(DemoteStackIdx, getFrameIndexTy(DL));
     ArgListEntry Entry;
     Entry.Node = DemoteStackSlot;
     Entry.Ty = StackSlotPtrType;
@@ -10248,8 +10325,8 @@ TargetLowering::LowerCallTo(TargetLowering::CallLoweringInfo &CLI) const {
          ++Value) {
       EVT VT = ValueVTs[Value];
       Type *ArgTy = VT.getTypeForEVT(CLI.RetTy->getContext());
-      SDValue Op = SDValue(Args[i].Node.getNode(),
-                           Args[i].Node.getResNo() + Value);
+      SDValue Op =
+          SDValue(Args[i].Node.getNode(), Args[i].Node.getResNo() + Value);
       ISD::ArgFlagsTy Flags;
 
       // Certain targets (such as MIPS), may have a different ABI alignment
@@ -10345,8 +10422,8 @@ TargetLowering::LowerCallTo(TargetLowering::CallLoweringInfo &CLI) const {
       else if (Args[i].IsZExt)
         ExtendKind = ISD::ZERO_EXTEND;
 
-      // Conservatively only handle 'returned' on non-vectors that can be lowered,
-      // for now.
+      // Conservatively only handle 'returned' on non-vectors that can be
+      // lowered, for now.
       if (Args[i].IsReturned && !Op.getValueType().isVector() &&
           CanLowerReturn) {
         assert((CLI.RetTy == Args[i].Ty ||
@@ -10453,9 +10530,9 @@ TargetLowering::LowerCallTo(TargetLowering::CallLoweringInfo &CLI) const {
     MachineFunction &MF = CLI.DAG.getMachineFunction();
     Align HiddenSRetAlign = MF.getFrameInfo().getObjectAlign(DemoteStackIdx);
     for (unsigned i = 0; i < NumValues; ++i) {
-      SDValue Add = CLI.DAG.getNode(ISD::ADD, CLI.DL, PtrVT, DemoteStackSlot,
-                                    CLI.DAG.getConstant(Offsets[i], CLI.DL,
-                                                        PtrVT), Flags);
+      SDValue Add = CLI.DAG.getNode(
+          ISD::ADD, CLI.DL, PtrVT, DemoteStackSlot,
+          CLI.DAG.getConstant(Offsets[i], CLI.DL, PtrVT), Flags);
       SDValue L = CLI.DAG.getLoad(
           RetTys[i], CLI.DL, CLI.Chain, Add,
           MachinePointerInfo::getFixedStack(CLI.DAG.getMachineFunction(),
@@ -10522,7 +10599,7 @@ void TargetLowering::LowerOperationWrapper(SDNode *N,
   // If the original node has multiple results, then the return node should
   // have the same number of results.
   assert((N->getNumValues() == Res->getNumValues()) &&
-      "Lowering returned the wrong number of results!");
+         "Lowering returned the wrong number of results!");
 
   // Places new result values base on N result number.
   for (unsigned I = 0, E = N->getNumValues(); I != E; ++I)
@@ -10573,7 +10650,7 @@ static bool isOnlyUsedInEntryBlock(const Argument *A, bool FastISel) {
   const BasicBlock &Entry = A->getParent()->front();
   for (const User *U : A->users())
     if (cast<Instruction>(U)->getParent() != &Entry || isa<SwitchInst>(U))
-      return false;  // Use not in entry block.
+      return false; // Use not in entry block.
 
   return true;
 }
@@ -10585,10 +10662,9 @@ using ArgCopyElisionMapTy =
 /// Scan the entry block of the function in FuncInfo for arguments that look
 /// like copies into a local alloca. Record any copied arguments in
 /// ArgCopyElisionCandidates.
-static void
-findArgumentCopyElisionCandidates(const DataLayout &DL,
-                                  FunctionLoweringInfo *FuncInfo,
-                                  ArgCopyElisionMapTy &ArgCopyElisionCandidates) {
+static void findArgumentCopyElisionCandidates(
+    const DataLayout &DL, FunctionLoweringInfo *FuncInfo,
+    ArgCopyElisionMapTy &ArgCopyElisionCandidates) {
   // Record the state of every static alloca used in the entry block. Argument
   // allocas are all used in the entry block, so we need approximately as many
   // entries as we have arguments.
@@ -10796,13 +10872,12 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
       FinalType = Arg.getParamByValType();
     bool NeedsRegBlock = TLI->functionArgumentNeedsConsecutiveRegisters(
         FinalType, F.getCallingConv(), F.isVarArg(), DL);
-    for (unsigned Value = 0, NumValues = ValueVTs.size();
-         Value != NumValues; ++Value) {
+    for (unsigned Value = 0, NumValues = ValueVTs.size(); Value != NumValues;
+         ++Value) {
       EVT VT = ValueVTs[Value];
       Type *ArgTy = VT.getTypeForEVT(*DAG.getContext());
       ISD::ArgFlagsTy Flags;
 
-
       if (Arg.getType()->isPointerTy()) {
         Flags.setPointer();
         Flags.setPointerAddrSpace(
@@ -10966,8 +11041,8 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
     SDValue ArgValue = getCopyFromParts(DAG, dl, &InVals[0], 1, RegVT, VT,
                                         nullptr, F.getCallingConv(), AssertOp);
 
-    MachineFunction& MF = SDB->DAG.getMachineFunction();
-    MachineRegisterInfo& RegInfo = MF.getRegInfo();
+    MachineFunction &MF = SDB->DAG.getMachineFunction();
+    MachineRegisterInfo &RegInfo = MF.getRegInfo();
     Register SRetReg =
         RegInfo.createVirtualRegister(TLI->getRegClassFor(RegVT));
     FuncInfo->DemoteRegister = SRetReg;
@@ -11004,17 +11079,16 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
                              ArrayRef(&InVals[i], NumParts), ArgHasUses);
     }
 
-    // If this argument is unused then remember its value. It is used to generate
-    // debugging information.
+    // If this argument is unused then remember its value. It is used to
+    // generate debugging information.
     bool isSwiftErrorArg =
-        TLI->supportSwiftError() &&
-        Arg.hasAttribute(Attribute::SwiftError);
+        TLI->supportSwiftError() && Arg.hasAttribute(Attribute::SwiftError);
     if (!ArgHasUses && !isSwiftErrorArg) {
       SDB->setUnusedArgValue(&Arg, InVals[i]);
 
       // Also remember any frame index for use in FastISel.
       if (FrameIndexSDNode *FI =
-          dyn_cast<FrameIndexSDNode>(InVals[i].getNode()))
+              dyn_cast<FrameIndexSDNode>(InVals[i].getNode()))
         FuncInfo->setArgumentFrameIndex(&Arg, FI->getIndex());
     }
 
@@ -11049,7 +11123,7 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
 
     // Note down frame index.
     if (FrameIndexSDNode *FI =
-        dyn_cast<FrameIndexSDNode>(ArgValues[0].getNode()))
+            dyn_cast<FrameIndexSDNode>(ArgValues[0].getNode()))
       FuncInfo->setArgumentFrameIndex(&Arg, FI->getIndex());
 
     SDValue Res = DAG.getMergeValues(ArrayRef(ArgValues.data(), NumValues),
@@ -11066,9 +11140,9 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
       // significant bits will be in the second operand.
       unsigned LowAddressOp = DAG.getDataLayout().isBigEndian() ? 1 : 0;
       if (LoadSDNode *LNode =
-          dyn_cast<LoadSDNode>(Res.getOperand(LowAddressOp).getNode()))
+              dyn_cast<LoadSDNode>(Res.getOperand(LowAddressOp).getNode()))
         if (FrameIndexSDNode *FI =
-            dyn_cast<FrameIndexSDNode>(LNode->getBasePtr().getNode()))
+                dyn_cast<FrameIndexSDNode>(LNode->getBasePtr().getNode()))
           FuncInfo->setArgumentFrameIndex(&Arg, FI->getIndex());
     }
 
@@ -11132,8 +11206,8 @@ void SelectionDAGISel::LowerArguments(const Function &F) {
 /// directly add them, because expansion might result in multiple MBB's for one
 /// BB.  As such, the start of the BB might correspond to a different MBB than
 /// the end.
-void
-SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB) {
+void SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks(
+    const BasicBlock *LLVMBB) {
   const TargetLowering &TLI = DAG.getTargetLoweringInfo();
 
   SmallPtrSet<MachineBasicBlock *, 4> SuccsHandled;
@@ -11141,7 +11215,8 @@ SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB) {
   // Check PHI nodes in successors that expect a value to be available from this
   // block.
   for (const BasicBlock *SuccBB : successors(LLVMBB->getTerminator())) {
-    if (!isa<PHINode>(SuccBB->begin())) continue;
+    if (!isa<PHINode>(SuccBB->begin()))
+      continue;
     MachineBasicBlock *SuccMBB = FuncInfo.MBBMap[SuccBB];
 
     // If this terminator has multiple identical successors (common for
@@ -11181,7 +11256,7 @@ SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB) {
         Reg = RegOut;
       } else {
         DenseMap<const Value *, Register>::iterator I =
-          FuncInfo.ValueMap.find(PHIOp);
+            FuncInfo.ValueMap.find(PHIOp);
         if (I != FuncInfo.ValueMap.end())
           Reg = I->second;
         else {
@@ -11198,7 +11273,8 @@ SelectionDAGBuilder::HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB) {
       SmallVector<EVT, 4> ValueVTs;
       ComputeValueVTs(TLI, DAG.getDataLayout(), PN.getType(), ValueVTs);
       for (EVT VT : ValueVTs) {
-        const unsigned NumRegisters = TLI.getNumRegisters(*DAG.getContext(), VT);
+        const unsigned NumRegisters =
+            TLI.getNumRegisters(*DAG.getContext(), VT);
         for (unsigned i = 0; i != NumRegisters; ++i)
           FuncInfo.PHINodesToUpdate.push_back(
               std::make_pair(&*MBBI++, Reg + i));
@@ -11305,14 +11381,14 @@ void SelectionDAGBuilder::lowerWorkItem(SwitchWorkListItem W, Value *Cond,
     // as a tie-breaker as clusters are guaranteed to never overlap.
     llvm::sort(W.FirstCluster, W.LastCluster + 1,
                [](const CaseCluster &a, const CaseCluster &b) {
-      return a.Prob != b.Prob ?
-             a.Prob > b.Prob :
-             a.Low->getValue().slt(b.Low->getValue());
-    });
+                 return a.Prob != b.Prob
+                            ? a.Prob > b.Prob
+                            : a.Low->getValue().slt(b.Low->getValue());
+               });
 
     // Rearrange the case blocks so that the last one falls through if possible
     // without changing the order of probabilities.
-    for (CaseClusterIt I = W.LastCluster; I > W.FirstCluster; ) {
+    for (CaseClusterIt I = W.LastCluster; I > W.FirstCluster;) {
       --I;
       if (I->Prob > W.LastCluster->Prob)
         break;
@@ -11341,146 +11417,147 @@ void SelectionDAGBuilder::lowerWorkItem(SwitchWorkListItem W, Value *Cond,
     } else {
       Fallthrough = CurMF->CreateMachineBasicBlock(CurMBB->getBasicBlock());
       CurMF->insert(BBI, Fallthrough);
-      // Put Cond in a virtual register to make it available from the new blocks.
+      // Put Cond in a virtual register to make it available from the new
+      // blocks.
       ExportFromCurrentBlock(Cond);
     }
     UnhandledProbs -= I->Prob;
 
     switch (I->Kind) {
-      case CC_JumpTable: {
-        // FIXME: Optimize away range check based on pivot comparisons.
-        JumpTableHeader *JTH = &SL->JTCases[I->JTCasesIndex].first;
-        SwitchCG::JumpTable *JT = &SL->JTCases[I->JTCasesIndex].second;
-
-        // The jump block hasn't been inserted yet; insert it here.
-        MachineBasicBlock *JumpMBB = JT->MBB;
-        CurMF->insert(BBI, JumpMBB);
-
-        auto JumpProb = I->Prob;
-        auto FallthroughProb = UnhandledProbs;
-
-        // If the default statement is a target of the jump table, we evenly
-        // distribute the default probability to successors of CurMBB. Also
-        // update the probability on the edge from JumpMBB to Fallthrough.
-        for (MachineBasicBlock::succ_iterator SI = JumpMBB->succ_begin(),
-                                              SE = JumpMBB->succ_end();
-             SI != SE; ++SI) {
-          if (*SI == DefaultMBB) {
-            JumpProb += DefaultProb / 2;
-            FallthroughProb -= DefaultProb / 2;
-            JumpMBB->setSuccProbability(SI, DefaultProb / 2);
-            JumpMBB->normalizeSuccProbs();
-            break;
-          }
+    case CC_JumpTable: {
+      // FIXME: Optimize away range check based on pivot comparisons.
+      JumpTableHeader *JTH = &SL->JTCases[I->JTCasesIndex].first;
+      SwitchCG::JumpTable *JT = &SL->JTCases[I->JTCasesIndex].second;
+
+      // The jump block hasn't been inserted yet; insert it here.
+      MachineBasicBlock *JumpMBB = JT->MBB;
+      CurMF->insert(BBI, JumpMBB);
+
+      auto JumpProb = I->Prob;
+      auto FallthroughProb = UnhandledProbs;
+
+      // If the default statement is a target of the jump table, we evenly
+      // distribute the default probability to successors of CurMBB. Also
+      // update the probability on the edge from JumpMBB to Fallthrough.
+      for (MachineBasicBlock::succ_iterator SI = JumpMBB->succ_begin(),
+                                            SE = JumpMBB->succ_end();
+           SI != SE; ++SI) {
+        if (*SI == DefaultMBB) {
+          JumpProb += DefaultProb / 2;
+          FallthroughProb -= DefaultProb / 2;
+          JumpMBB->setSuccProbability(SI, DefaultProb / 2);
+          JumpMBB->normalizeSuccProbs();
+          break;
         }
+      }
 
-        // If the default clause is unreachable, propagate that knowledge into
-        // JTH->FallthroughUnreachable which will use it to suppress the range
-        // check.
-        //
-        // However, don't do this if we're doing branch target enforcement,
-        // because a table branch _without_ a range check can be a tempting JOP
-        // gadget - out-of-bounds inputs that are impossible in correct
-        // execution become possible again if an attacker can influence the
-        // control flow. So if an attacker doesn't already have a BTI bypass
-        // available, we don't want them to be able to get one out of this
-        // table branch.
-        if (FallthroughUnreachable) {
-          Function &CurFunc = CurMF->getFunction();
-          bool HasBranchTargetEnforcement = false;
-          if (CurFunc.hasFnAttribute("branch-target-enforcement")) {
-            HasBranchTargetEnforcement =
-                CurFunc.getFnAttribute("branch-target-enforcement")
-                    .getValueAsBool();
-          } else {
-            HasBranchTargetEnforcement =
-                CurMF->getMMI().getModule()->getModuleFlag(
-                    "branch-target-enforcement");
-          }
-          if (!HasBranchTargetEnforcement)
-            JTH->FallthroughUnreachable = true;
+      // If the default clause is unreachable, propagate that knowledge into
+      // JTH->FallthroughUnreachable which will use it to suppress the range
+      // check.
+      //
+      // However, don't do this if we're doing branch target enforcement,
+      // because a table branch _without_ a range check can be a tempting JOP
+      // gadget - out-of-bounds inputs that are impossible in correct
+      // execution become possible again if an attacker can influence the
+      // control flow. So if an attacker doesn't already have a BTI bypass
+      // available, we don't want them to be able to get one out of this
+      // table branch.
+      if (FallthroughUnreachable) {
+        Function &CurFunc = CurMF->getFunction();
+        bool HasBranchTargetEnforcement = false;
+        if (CurFunc.hasFnAttribute("branch-target-enforcement")) {
+          HasBranchTargetEnforcement =
+              CurFunc.getFnAttribute("branch-target-enforcement")
+                  .getValueAsBool();
+        } else {
+          HasBranchTargetEnforcement =
+              CurMF->getMMI().getModule()->getModuleFlag(
+                  "branch-target-enforcement");
         }
+        if (!HasBranchTargetEnforcement)
+          JTH->FallthroughUnreachable = true;
+      }
 
-        if (!JTH->FallthroughUnreachable)
-          addSuccessorWithProb(CurMBB, Fallthrough, FallthroughProb);
-        addSuccessorWithProb(CurMBB, JumpMBB, JumpProb);
-        CurMBB->normalizeSuccProbs();
+      if (!JTH->FallthroughUnreachable)
+        addSuccessorWithProb(CurMBB, Fallthrough, FallthroughProb);
+      addSuccessorWithProb(CurMBB, JumpMBB, JumpProb);
+      CurMBB->normalizeSuccProbs();
 
-        // The jump table header will be inserted in our current block, do the
-        // range check, and fall through to our fallthrough block.
-        JTH->HeaderBB = CurMBB;
-        JT->Default = Fallthrough; // FIXME: Move Default to JumpTableHeader.
+      // The jump table header will be inserted in our current block, do the
+      // range check, and fall through to our fallthrough block.
+      JTH->HeaderBB = CurMBB;
+      JT->Default = Fallthrough; // FIXME: Move Default to JumpTableHeader.
 
-        // If we're in the right place, emit the jump table header right now.
-        if (CurMBB == SwitchMBB) {
-          visitJumpTableHeader(*JT, *JTH, SwitchMBB);
-          JTH->Emitted = true;
-        }
-        break;
+      // If we're in the right place, emit the jump table header right now.
+      if (CurMBB == SwitchMBB) {
+        visitJumpTableHeader(*JT, *JTH, SwitchMBB);
+        JTH->Emitted = true;
+      }
+      break;
+    }
+    case CC_BitTests: {
+      // FIXME: Optimize away range check based on pivot comparisons.
+      BitTestBlock *BTB = &SL->BitTestCases[I->BTCasesIndex];
+
+      // The bit test blocks haven't been inserted yet; insert them here.
+      for (BitTestCase &BTC : BTB->Cases)
+        CurMF->insert(BBI, BTC.ThisBB);
+
+      // Fill in fields of the BitTestBlock.
+      BTB->Parent = CurMBB;
+      BTB->Default = Fallthrough;
+
+      BTB->DefaultProb = UnhandledProbs;
+      // If the cases in bit test don't form a contiguous range, we evenly
+      // distribute the probability on the edge to Fallthrough to two
+      // successors of CurMBB.
+      if (!BTB->ContiguousRange) {
+        BTB->Prob += DefaultProb / 2;
+        BTB->DefaultProb -= DefaultProb / 2;
       }
-      case CC_BitTests: {
-        // FIXME: Optimize away range check based on pivot comparisons.
-        BitTestBlock *BTB = &SL->BitTestCases[I->BTCasesIndex];
-
-        // The bit test blocks haven't been inserted yet; insert them here.
-        for (BitTestCase &BTC : BTB->Cases)
-          CurMF->insert(BBI, BTC.ThisBB);
-
-        // Fill in fields of the BitTestBlock.
-        BTB->Parent = CurMBB;
-        BTB->Default = Fallthrough;
-
-        BTB->DefaultProb = UnhandledProbs;
-        // If the cases in bit test don't form a contiguous range, we evenly
-        // distribute the probability on the edge to Fallthrough to two
-        // successors of CurMBB.
-        if (!BTB->ContiguousRange) {
-          BTB->Prob += DefaultProb / 2;
-          BTB->DefaultProb -= DefaultProb / 2;
-        }
 
-        if (FallthroughUnreachable)
-          BTB->FallthroughUnreachable = true;
+      if (FallthroughUnreachable)
+        BTB->FallthroughUnreachable = true;
 
-        // If we're in the right place, emit the bit test header right now.
-        if (CurMBB == SwitchMBB) {
-          visitBitTestHeader(*BTB, SwitchMBB);
-          BTB->Emitted = true;
-        }
-        break;
+      // If we're in the right place, emit the bit test header right now.
+      if (CurMBB == SwitchMBB) {
+        visitBitTestHeader(*BTB, SwitchMBB);
+        BTB->Emitted = true;
+      }
+      break;
+    }
+    case CC_Range: {
+      const Value *RHS, *LHS, *MHS;
+      ISD::CondCode CC;
+      if (I->Low == I->High) {
+        // Check Cond == I->Low.
+        CC = ISD::SETEQ;
+        LHS = Cond;
+        RHS = I->Low;
+        MHS = nullptr;
+      } else {
+        // Check I->Low <= Cond <= I->High.
+        CC = ISD::SETLE;
+        LHS = I->Low;
+        MHS = Cond;
+        RHS = I->High;
       }
-      case CC_Range: {
-        const Value *RHS, *LHS, *MHS;
-        ISD::CondCode CC;
-        if (I->Low == I->High) {
-          // Check Cond == I->Low.
-          CC = ISD::SETEQ;
-          LHS = Cond;
-          RHS=I->Low;
-          MHS = nullptr;
-        } else {
-          // Check I->Low <= Cond <= I->High.
-          CC = ISD::SETLE;
-          LHS = I->Low;
-          MHS = Cond;
-          RHS = I->High;
-        }
 
-        // If Fallthrough is unreachable, fold away the comparison.
-        if (FallthroughUnreachable)
-          CC = ISD::SETTRUE;
+      // If Fallthrough is unreachable, fold away the comparison.
+      if (FallthroughUnreachable)
+        CC = ISD::SETTRUE;
 
-        // The false probability is the sum of all unhandled cases.
-        CaseBlock CB(CC, LHS, RHS, MHS, I->MBB, Fallthrough, CurMBB,
-                     getCurSDLoc(), I->Prob, UnhandledProbs);
+      // The false probability is the sum of all unhandled cases.
+      CaseBlock CB(CC, LHS, RHS, MHS, I->MBB, Fallthrough, CurMBB,
+                   getCurSDLoc(), I->Prob, UnhandledProbs);
 
-        if (CurMBB == SwitchMBB)
-          visitSwitchCase(CB, SwitchMBB);
-        else
-          SL->SwitchCases.push_back(CB);
+      if (CurMBB == SwitchMBB)
+        visitSwitchCase(CB, SwitchMBB);
+      else
+        SL->SwitchCases.push_back(CB);
 
-        break;
-      }
+      break;
+    }
     }
     CurMBB = Fallthrough;
   }
@@ -11529,10 +11606,10 @@ void SelectionDAGBuilder::splitWorkItem(SwitchWorkList &WorkList,
   }
 
   while (true) {
-    // Our binary search tree differs from a typical BST in that ours can have up
-    // to three values in each leaf. The pivot selection above doesn't take that
-    // into account, which means the tree might require more nodes and be less
-    // efficient. We compensate for this here.
+    // Our binary search tree differs from a typical BST in that ours can have
+    // up to three values in each leaf. The pivot selection above doesn't take
+    // that into account, which means the tree might require more nodes and be
+    // less efficient. We compensate for this here.
 
     unsigned NumLeft = LastLeft - W.FirstCluster + 1;
     unsigned NumRight = W.LastCluster - FirstRight + 1;
@@ -11609,8 +11686,8 @@ void SelectionDAGBuilder::splitWorkItem(SwitchWorkList &WorkList,
   // single cluster, RHS.Low == Pivot, and we can branch to its destination
   // directly if RHS.High equals the current upper bound.
   MachineBasicBlock *RightMBB;
-  if (FirstRight == LastRight && FirstRight->Kind == CC_Range &&
-      W.LT && (FirstRight->High->getValue() + 1ULL) == W.LT->getValue()) {
+  if (FirstRight == LastRight && FirstRight->Kind == CC_Range && W.LT &&
+      (FirstRight->High->getValue() + 1ULL) == W.LT->getValue()) {
     RightMBB = FirstRight->MBB;
   } else {
     RightMBB = FuncInfo.MF->CreateMachineBasicBlock(W.MBB->getBasicBlock());
@@ -11880,7 +11957,8 @@ void SelectionDAGBuilder::visitFreeze(const FreezeInst &I) {
   ComputeValueVTs(DAG.getTargetLoweringInfo(), DAG.getDataLayout(), I.getType(),
                   ValueVTs);
   unsigned NumValues = ValueVTs.size();
-  if (NumValues == 0) return;
+  if (NumValues == 0)
+    return;
 
   SmallVector<SDValue, 4> Values(NumValues);
   SDValue Op = getValue(I.getOperand(0));
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
index 55a63ec7fd3f49f..50f293c56cd5df1 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGBuilder.h
@@ -98,11 +98,11 @@ class SelectionDAGBuilder {
   /// The current instruction being visited.
   const Instruction *CurInst = nullptr;
 
-  DenseMap<const Value*, SDValue> NodeMap;
+  DenseMap<const Value *, SDValue> NodeMap;
 
   /// Maps argument value for unused arguments. This is used
   /// to preserve debug information for incoming arguments.
-  DenseMap<const Value*, SDValue> UnusedArgNodeMap;
+  DenseMap<const Value *, SDValue> UnusedArgNodeMap;
 
   /// Helper type for DanglingDebugInfoMap.
   class DanglingDebugInfo {
@@ -173,7 +173,7 @@ class SelectionDAGBuilder {
 
   /// Keeps track of dbg_values for which we have not yet seen the referent.
   /// We defer handling these until we do see it.
-  MapVector<const Value*, DanglingDebugInfoVector> DanglingDebugInfoMap;
+  MapVector<const Value *, DanglingDebugInfoVector> DanglingDebugInfoMap;
 
   /// Cache the module flag for whether we should use debug-info assignment
   /// tracking.
@@ -297,8 +297,8 @@ class SelectionDAGBuilder {
   SelectionDAGBuilder(SelectionDAG &dag, FunctionLoweringInfo &funcinfo,
                       SwiftErrorValueTracking &swifterror, CodeGenOpt::Level ol)
       : SDNodeOrder(LowestSDNodeOrder), TM(dag.getTarget()), DAG(dag),
-        SL(std::make_unique<SDAGSwitchLowering>(this, funcinfo)), FuncInfo(funcinfo),
-        SwiftError(swifterror) {}
+        SL(std::make_unique<SDAGSwitchLowering>(this, funcinfo)),
+        FuncInfo(funcinfo), SwiftError(swifterror) {}
 
   void init(GCFunctionInfo *gfi, AAResults *AA, AssumptionCache *AC,
             const TargetLibraryInfo *li);
@@ -332,9 +332,7 @@ class SelectionDAGBuilder {
   /// It is necessary to do this before emitting a terminator instruction.
   SDValue getControlRoot();
 
-  SDLoc getCurSDLoc() const {
-    return SDLoc(CurInst, SDNodeOrder);
-  }
+  SDLoc getCurSDLoc() const { return SDLoc(CurInst, SDNodeOrder); }
 
   DebugLoc getCurDebugLoc() const {
     return CurInst ? CurInst->getDebugLoc() : DebugLoc();
@@ -409,8 +407,8 @@ class SelectionDAGBuilder {
                                     MachineBasicBlock *FBB,
                                     MachineBasicBlock *CurBB,
                                     MachineBasicBlock *SwitchBB,
-                                    BranchProbability TProb, BranchProbability FProb,
-                                    bool InvertCond);
+                                    BranchProbability TProb,
+                                    BranchProbability FProb, bool InvertCond);
   bool ShouldEmitAsBranches(const std::vector<SwitchCG::CaseBlock> &Cases);
   bool isExportableFromCurrentBlock(const Value *V, const BasicBlock *FromBB);
   void CopyToExportRegsIfNeeded(const Value *V);
@@ -550,11 +548,11 @@ class SelectionDAGBuilder {
 
   void visitBinary(const User &I, unsigned Opcode);
   void visitShift(const User &I, unsigned Opcode);
-  void visitAdd(const User &I)  { visitBinary(I, ISD::ADD); }
+  void visitAdd(const User &I) { visitBinary(I, ISD::ADD); }
   void visitFAdd(const User &I) { visitBinary(I, ISD::FADD); }
-  void visitSub(const User &I)  { visitBinary(I, ISD::SUB); }
+  void visitSub(const User &I) { visitBinary(I, ISD::SUB); }
   void visitFSub(const User &I) { visitBinary(I, ISD::FSUB); }
-  void visitMul(const User &I)  { visitBinary(I, ISD::MUL); }
+  void visitMul(const User &I) { visitBinary(I, ISD::MUL); }
   void visitFMul(const User &I) { visitBinary(I, ISD::FMUL); }
   void visitURem(const User &I) { visitBinary(I, ISD::UREM); }
   void visitSRem(const User &I) { visitBinary(I, ISD::SREM); }
@@ -562,10 +560,10 @@ class SelectionDAGBuilder {
   void visitUDiv(const User &I) { visitBinary(I, ISD::UDIV); }
   void visitSDiv(const User &I);
   void visitFDiv(const User &I) { visitBinary(I, ISD::FDIV); }
-  void visitAnd (const User &I) { visitBinary(I, ISD::AND); }
-  void visitOr  (const User &I) { visitBinary(I, ISD::OR); }
-  void visitXor (const User &I) { visitBinary(I, ISD::XOR); }
-  void visitShl (const User &I) { visitShift(I, ISD::SHL); }
+  void visitAnd(const User &I) { visitBinary(I, ISD::AND); }
+  void visitOr(const User &I) { visitBinary(I, ISD::OR); }
+  void visitXor(const User &I) { visitBinary(I, ISD::XOR); }
+  void visitShl(const User &I) { visitShift(I, ISD::SHL); }
   void visitLShr(const User &I) { visitShift(I, ISD::SRL); }
   void visitAShr(const User &I) { visitShift(I, ISD::SRA); }
   void visitICmp(const User &I);
@@ -670,8 +668,8 @@ class SelectionDAGBuilder {
     llvm_unreachable("UserOp2 should not exist at instruction selection time!");
   }
 
-  void processIntegerCallValue(const Instruction &I,
-                               SDValue Value, bool IsSigned);
+  void processIntegerCallValue(const Instruction &I, SDValue Value,
+                               bool IsSigned);
 
   void HandlePHINodesInSuccessorBlocks(const BasicBlock *LLVMBB);
 
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
index a92111ca23656eb..52ad1e9a7c71e87 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGDumper.cpp
@@ -50,10 +50,10 @@
 
 using namespace llvm;
 
-static cl::opt<bool>
-VerboseDAGDumping("dag-dump-verbose", cl::Hidden,
-                  cl::desc("Display more information when dumping selection "
-                           "DAG nodes."));
+static cl::opt<bool> VerboseDAGDumping(
+    "dag-dump-verbose", cl::Hidden,
+    cl::desc("Display more information when dumping selection "
+             "DAG nodes."));
 
 std::string SDNode::getOperationName(const SelectionDAG *G) const {
   switch (getOpcode()) {
@@ -70,81 +70,139 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
     if (G) {
       const TargetLowering &TLI = G->getTargetLoweringInfo();
       const char *Name = TLI.getTargetNodeName(getOpcode());
-      if (Name) return Name;
+      if (Name)
+        return Name;
       return "<<Unknown Target Node #" + utostr(getOpcode()) + ">>";
     }
     return "<<Unknown Node #" + utostr(getOpcode()) + ">>";
 
 #ifndef NDEBUG
-  case ISD::DELETED_NODE:               return "<<Deleted Node!>>";
+  case ISD::DELETED_NODE:
+    return "<<Deleted Node!>>";
 #endif
-  case ISD::PREFETCH:                   return "Prefetch";
-  case ISD::MEMBARRIER:                 return "MemBarrier";
-  case ISD::ATOMIC_FENCE:               return "AtomicFence";
-  case ISD::ATOMIC_CMP_SWAP:            return "AtomicCmpSwap";
-  case ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS: return "AtomicCmpSwapWithSuccess";
-  case ISD::ATOMIC_SWAP:                return "AtomicSwap";
-  case ISD::ATOMIC_LOAD_ADD:            return "AtomicLoadAdd";
-  case ISD::ATOMIC_LOAD_SUB:            return "AtomicLoadSub";
-  case ISD::ATOMIC_LOAD_AND:            return "AtomicLoadAnd";
-  case ISD::ATOMIC_LOAD_CLR:            return "AtomicLoadClr";
-  case ISD::ATOMIC_LOAD_OR:             return "AtomicLoadOr";
-  case ISD::ATOMIC_LOAD_XOR:            return "AtomicLoadXor";
-  case ISD::ATOMIC_LOAD_NAND:           return "AtomicLoadNand";
-  case ISD::ATOMIC_LOAD_MIN:            return "AtomicLoadMin";
-  case ISD::ATOMIC_LOAD_MAX:            return "AtomicLoadMax";
-  case ISD::ATOMIC_LOAD_UMIN:           return "AtomicLoadUMin";
-  case ISD::ATOMIC_LOAD_UMAX:           return "AtomicLoadUMax";
-  case ISD::ATOMIC_LOAD_FADD:           return "AtomicLoadFAdd";
+  case ISD::PREFETCH:
+    return "Prefetch";
+  case ISD::MEMBARRIER:
+    return "MemBarrier";
+  case ISD::ATOMIC_FENCE:
+    return "AtomicFence";
+  case ISD::ATOMIC_CMP_SWAP:
+    return "AtomicCmpSwap";
+  case ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS:
+    return "AtomicCmpSwapWithSuccess";
+  case ISD::ATOMIC_SWAP:
+    return "AtomicSwap";
+  case ISD::ATOMIC_LOAD_ADD:
+    return "AtomicLoadAdd";
+  case ISD::ATOMIC_LOAD_SUB:
+    return "AtomicLoadSub";
+  case ISD::ATOMIC_LOAD_AND:
+    return "AtomicLoadAnd";
+  case ISD::ATOMIC_LOAD_CLR:
+    return "AtomicLoadClr";
+  case ISD::ATOMIC_LOAD_OR:
+    return "AtomicLoadOr";
+  case ISD::ATOMIC_LOAD_XOR:
+    return "AtomicLoadXor";
+  case ISD::ATOMIC_LOAD_NAND:
+    return "AtomicLoadNand";
+  case ISD::ATOMIC_LOAD_MIN:
+    return "AtomicLoadMin";
+  case ISD::ATOMIC_LOAD_MAX:
+    return "AtomicLoadMax";
+  case ISD::ATOMIC_LOAD_UMIN:
+    return "AtomicLoadUMin";
+  case ISD::ATOMIC_LOAD_UMAX:
+    return "AtomicLoadUMax";
+  case ISD::ATOMIC_LOAD_FADD:
+    return "AtomicLoadFAdd";
   case ISD::ATOMIC_LOAD_UINC_WRAP:
     return "AtomicLoadUIncWrap";
   case ISD::ATOMIC_LOAD_UDEC_WRAP:
     return "AtomicLoadUDecWrap";
-  case ISD::ATOMIC_LOAD:                return "AtomicLoad";
-  case ISD::ATOMIC_STORE:               return "AtomicStore";
-  case ISD::PCMARKER:                   return "PCMarker";
-  case ISD::READCYCLECOUNTER:           return "ReadCycleCounter";
-  case ISD::SRCVALUE:                   return "SrcValue";
-  case ISD::MDNODE_SDNODE:              return "MDNode";
-  case ISD::EntryToken:                 return "EntryToken";
-  case ISD::TokenFactor:                return "TokenFactor";
-  case ISD::AssertSext:                 return "AssertSext";
-  case ISD::AssertZext:                 return "AssertZext";
-  case ISD::AssertAlign:                return "AssertAlign";
-
-  case ISD::BasicBlock:                 return "BasicBlock";
-  case ISD::VALUETYPE:                  return "ValueType";
-  case ISD::Register:                   return "Register";
-  case ISD::RegisterMask:               return "RegisterMask";
+  case ISD::ATOMIC_LOAD:
+    return "AtomicLoad";
+  case ISD::ATOMIC_STORE:
+    return "AtomicStore";
+  case ISD::PCMARKER:
+    return "PCMarker";
+  case ISD::READCYCLECOUNTER:
+    return "ReadCycleCounter";
+  case ISD::SRCVALUE:
+    return "SrcValue";
+  case ISD::MDNODE_SDNODE:
+    return "MDNode";
+  case ISD::EntryToken:
+    return "EntryToken";
+  case ISD::TokenFactor:
+    return "TokenFactor";
+  case ISD::AssertSext:
+    return "AssertSext";
+  case ISD::AssertZext:
+    return "AssertZext";
+  case ISD::AssertAlign:
+    return "AssertAlign";
+
+  case ISD::BasicBlock:
+    return "BasicBlock";
+  case ISD::VALUETYPE:
+    return "ValueType";
+  case ISD::Register:
+    return "Register";
+  case ISD::RegisterMask:
+    return "RegisterMask";
   case ISD::Constant:
     if (cast<ConstantSDNode>(this)->isOpaque())
       return "OpaqueConstant";
     return "Constant";
-  case ISD::ConstantFP:                 return "ConstantFP";
-  case ISD::GlobalAddress:              return "GlobalAddress";
-  case ISD::GlobalTLSAddress:           return "GlobalTLSAddress";
-  case ISD::FrameIndex:                 return "FrameIndex";
-  case ISD::JumpTable:                  return "JumpTable";
+  case ISD::ConstantFP:
+    return "ConstantFP";
+  case ISD::GlobalAddress:
+    return "GlobalAddress";
+  case ISD::GlobalTLSAddress:
+    return "GlobalTLSAddress";
+  case ISD::FrameIndex:
+    return "FrameIndex";
+  case ISD::JumpTable:
+    return "JumpTable";
   case ISD::JUMP_TABLE_DEBUG_INFO:
     return "JUMP_TABLE_DEBUG_INFO";
-  case ISD::GLOBAL_OFFSET_TABLE:        return "GLOBAL_OFFSET_TABLE";
-  case ISD::RETURNADDR:                 return "RETURNADDR";
-  case ISD::ADDROFRETURNADDR:           return "ADDROFRETURNADDR";
-  case ISD::FRAMEADDR:                  return "FRAMEADDR";
-  case ISD::SPONENTRY:                  return "SPONENTRY";
-  case ISD::LOCAL_RECOVER:              return "LOCAL_RECOVER";
-  case ISD::READ_REGISTER:              return "READ_REGISTER";
-  case ISD::WRITE_REGISTER:             return "WRITE_REGISTER";
-  case ISD::FRAME_TO_ARGS_OFFSET:       return "FRAME_TO_ARGS_OFFSET";
-  case ISD::EH_DWARF_CFA:               return "EH_DWARF_CFA";
-  case ISD::EH_RETURN:                  return "EH_RETURN";
-  case ISD::EH_SJLJ_SETJMP:             return "EH_SJLJ_SETJMP";
-  case ISD::EH_SJLJ_LONGJMP:            return "EH_SJLJ_LONGJMP";
-  case ISD::EH_SJLJ_SETUP_DISPATCH:     return "EH_SJLJ_SETUP_DISPATCH";
-  case ISD::ConstantPool:               return "ConstantPool";
-  case ISD::TargetIndex:                return "TargetIndex";
-  case ISD::ExternalSymbol:             return "ExternalSymbol";
-  case ISD::BlockAddress:               return "BlockAddress";
+  case ISD::GLOBAL_OFFSET_TABLE:
+    return "GLOBAL_OFFSET_TABLE";
+  case ISD::RETURNADDR:
+    return "RETURNADDR";
+  case ISD::ADDROFRETURNADDR:
+    return "ADDROFRETURNADDR";
+  case ISD::FRAMEADDR:
+    return "FRAMEADDR";
+  case ISD::SPONENTRY:
+    return "SPONENTRY";
+  case ISD::LOCAL_RECOVER:
+    return "LOCAL_RECOVER";
+  case ISD::READ_REGISTER:
+    return "READ_REGISTER";
+  case ISD::WRITE_REGISTER:
+    return "WRITE_REGISTER";
+  case ISD::FRAME_TO_ARGS_OFFSET:
+    return "FRAME_TO_ARGS_OFFSET";
+  case ISD::EH_DWARF_CFA:
+    return "EH_DWARF_CFA";
+  case ISD::EH_RETURN:
+    return "EH_RETURN";
+  case ISD::EH_SJLJ_SETJMP:
+    return "EH_SJLJ_SETJMP";
+  case ISD::EH_SJLJ_LONGJMP:
+    return "EH_SJLJ_LONGJMP";
+  case ISD::EH_SJLJ_SETUP_DISPATCH:
+    return "EH_SJLJ_SETUP_DISPATCH";
+  case ISD::ConstantPool:
+    return "ConstantPool";
+  case ISD::TargetIndex:
+    return "TargetIndex";
+  case ISD::ExternalSymbol:
+    return "ExternalSymbol";
+  case ISD::BlockAddress:
+    return "BlockAddress";
   case ISD::INTRINSIC_WO_CHAIN:
   case ISD::INTRINSIC_VOID:
   case ISD::INTRINSIC_W_CHAIN: {
@@ -159,356 +217,665 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
     llvm_unreachable("Invalid intrinsic ID");
   }
 
-  case ISD::BUILD_VECTOR:               return "BUILD_VECTOR";
+  case ISD::BUILD_VECTOR:
+    return "BUILD_VECTOR";
   case ISD::TargetConstant:
     if (cast<ConstantSDNode>(this)->isOpaque())
       return "OpaqueTargetConstant";
     return "TargetConstant";
-  case ISD::TargetConstantFP:           return "TargetConstantFP";
-  case ISD::TargetGlobalAddress:        return "TargetGlobalAddress";
-  case ISD::TargetGlobalTLSAddress:     return "TargetGlobalTLSAddress";
-  case ISD::TargetFrameIndex:           return "TargetFrameIndex";
-  case ISD::TargetJumpTable:            return "TargetJumpTable";
-  case ISD::TargetConstantPool:         return "TargetConstantPool";
-  case ISD::TargetExternalSymbol:       return "TargetExternalSymbol";
-  case ISD::MCSymbol:                   return "MCSymbol";
-  case ISD::TargetBlockAddress:         return "TargetBlockAddress";
-
-  case ISD::CopyToReg:                  return "CopyToReg";
-  case ISD::CopyFromReg:                return "CopyFromReg";
-  case ISD::UNDEF:                      return "undef";
-  case ISD::VSCALE:                     return "vscale";
-  case ISD::MERGE_VALUES:               return "merge_values";
-  case ISD::INLINEASM:                  return "inlineasm";
-  case ISD::INLINEASM_BR:               return "inlineasm_br";
-  case ISD::EH_LABEL:                   return "eh_label";
-  case ISD::ANNOTATION_LABEL:           return "annotation_label";
-  case ISD::HANDLENODE:                 return "handlenode";
+  case ISD::TargetConstantFP:
+    return "TargetConstantFP";
+  case ISD::TargetGlobalAddress:
+    return "TargetGlobalAddress";
+  case ISD::TargetGlobalTLSAddress:
+    return "TargetGlobalTLSAddress";
+  case ISD::TargetFrameIndex:
+    return "TargetFrameIndex";
+  case ISD::TargetJumpTable:
+    return "TargetJumpTable";
+  case ISD::TargetConstantPool:
+    return "TargetConstantPool";
+  case ISD::TargetExternalSymbol:
+    return "TargetExternalSymbol";
+  case ISD::MCSymbol:
+    return "MCSymbol";
+  case ISD::TargetBlockAddress:
+    return "TargetBlockAddress";
+
+  case ISD::CopyToReg:
+    return "CopyToReg";
+  case ISD::CopyFromReg:
+    return "CopyFromReg";
+  case ISD::UNDEF:
+    return "undef";
+  case ISD::VSCALE:
+    return "vscale";
+  case ISD::MERGE_VALUES:
+    return "merge_values";
+  case ISD::INLINEASM:
+    return "inlineasm";
+  case ISD::INLINEASM_BR:
+    return "inlineasm_br";
+  case ISD::EH_LABEL:
+    return "eh_label";
+  case ISD::ANNOTATION_LABEL:
+    return "annotation_label";
+  case ISD::HANDLENODE:
+    return "handlenode";
 
   // Unary operators
-  case ISD::FABS:                       return "fabs";
-  case ISD::FMINNUM:                    return "fminnum";
-  case ISD::STRICT_FMINNUM:             return "strict_fminnum";
-  case ISD::FMAXNUM:                    return "fmaxnum";
-  case ISD::STRICT_FMAXNUM:             return "strict_fmaxnum";
-  case ISD::FMINNUM_IEEE:               return "fminnum_ieee";
-  case ISD::FMAXNUM_IEEE:               return "fmaxnum_ieee";
-  case ISD::FMINIMUM:                   return "fminimum";
-  case ISD::STRICT_FMINIMUM:            return "strict_fminimum";
-  case ISD::FMAXIMUM:                   return "fmaximum";
-  case ISD::STRICT_FMAXIMUM:            return "strict_fmaximum";
-  case ISD::FNEG:                       return "fneg";
-  case ISD::FSQRT:                      return "fsqrt";
-  case ISD::STRICT_FSQRT:               return "strict_fsqrt";
-  case ISD::FCBRT:                      return "fcbrt";
-  case ISD::FSIN:                       return "fsin";
-  case ISD::STRICT_FSIN:                return "strict_fsin";
-  case ISD::FCOS:                       return "fcos";
-  case ISD::STRICT_FCOS:                return "strict_fcos";
-  case ISD::FSINCOS:                    return "fsincos";
-  case ISD::FTRUNC:                     return "ftrunc";
-  case ISD::STRICT_FTRUNC:              return "strict_ftrunc";
-  case ISD::FFLOOR:                     return "ffloor";
-  case ISD::STRICT_FFLOOR:              return "strict_ffloor";
-  case ISD::FCEIL:                      return "fceil";
-  case ISD::STRICT_FCEIL:               return "strict_fceil";
-  case ISD::FRINT:                      return "frint";
-  case ISD::STRICT_FRINT:               return "strict_frint";
-  case ISD::FNEARBYINT:                 return "fnearbyint";
-  case ISD::STRICT_FNEARBYINT:          return "strict_fnearbyint";
-  case ISD::FROUND:                     return "fround";
-  case ISD::STRICT_FROUND:              return "strict_fround";
-  case ISD::FROUNDEVEN:                 return "froundeven";
-  case ISD::STRICT_FROUNDEVEN:          return "strict_froundeven";
-  case ISD::FEXP:                       return "fexp";
-  case ISD::STRICT_FEXP:                return "strict_fexp";
-  case ISD::FEXP2:                      return "fexp2";
-  case ISD::STRICT_FEXP2:               return "strict_fexp2";
-  case ISD::FEXP10:                     return "fexp10";
-  case ISD::FLOG:                       return "flog";
-  case ISD::STRICT_FLOG:                return "strict_flog";
-  case ISD::FLOG2:                      return "flog2";
-  case ISD::STRICT_FLOG2:               return "strict_flog2";
-  case ISD::FLOG10:                     return "flog10";
-  case ISD::STRICT_FLOG10:              return "strict_flog10";
+  case ISD::FABS:
+    return "fabs";
+  case ISD::FMINNUM:
+    return "fminnum";
+  case ISD::STRICT_FMINNUM:
+    return "strict_fminnum";
+  case ISD::FMAXNUM:
+    return "fmaxnum";
+  case ISD::STRICT_FMAXNUM:
+    return "strict_fmaxnum";
+  case ISD::FMINNUM_IEEE:
+    return "fminnum_ieee";
+  case ISD::FMAXNUM_IEEE:
+    return "fmaxnum_ieee";
+  case ISD::FMINIMUM:
+    return "fminimum";
+  case ISD::STRICT_FMINIMUM:
+    return "strict_fminimum";
+  case ISD::FMAXIMUM:
+    return "fmaximum";
+  case ISD::STRICT_FMAXIMUM:
+    return "strict_fmaximum";
+  case ISD::FNEG:
+    return "fneg";
+  case ISD::FSQRT:
+    return "fsqrt";
+  case ISD::STRICT_FSQRT:
+    return "strict_fsqrt";
+  case ISD::FCBRT:
+    return "fcbrt";
+  case ISD::FSIN:
+    return "fsin";
+  case ISD::STRICT_FSIN:
+    return "strict_fsin";
+  case ISD::FCOS:
+    return "fcos";
+  case ISD::STRICT_FCOS:
+    return "strict_fcos";
+  case ISD::FSINCOS:
+    return "fsincos";
+  case ISD::FTRUNC:
+    return "ftrunc";
+  case ISD::STRICT_FTRUNC:
+    return "strict_ftrunc";
+  case ISD::FFLOOR:
+    return "ffloor";
+  case ISD::STRICT_FFLOOR:
+    return "strict_ffloor";
+  case ISD::FCEIL:
+    return "fceil";
+  case ISD::STRICT_FCEIL:
+    return "strict_fceil";
+  case ISD::FRINT:
+    return "frint";
+  case ISD::STRICT_FRINT:
+    return "strict_frint";
+  case ISD::FNEARBYINT:
+    return "fnearbyint";
+  case ISD::STRICT_FNEARBYINT:
+    return "strict_fnearbyint";
+  case ISD::FROUND:
+    return "fround";
+  case ISD::STRICT_FROUND:
+    return "strict_fround";
+  case ISD::FROUNDEVEN:
+    return "froundeven";
+  case ISD::STRICT_FROUNDEVEN:
+    return "strict_froundeven";
+  case ISD::FEXP:
+    return "fexp";
+  case ISD::STRICT_FEXP:
+    return "strict_fexp";
+  case ISD::FEXP2:
+    return "fexp2";
+  case ISD::STRICT_FEXP2:
+    return "strict_fexp2";
+  case ISD::FEXP10:
+    return "fexp10";
+  case ISD::FLOG:
+    return "flog";
+  case ISD::STRICT_FLOG:
+    return "strict_flog";
+  case ISD::FLOG2:
+    return "flog2";
+  case ISD::STRICT_FLOG2:
+    return "strict_flog2";
+  case ISD::FLOG10:
+    return "flog10";
+  case ISD::STRICT_FLOG10:
+    return "strict_flog10";
 
   // Binary operators
-  case ISD::ADD:                        return "add";
-  case ISD::SUB:                        return "sub";
-  case ISD::MUL:                        return "mul";
-  case ISD::MULHU:                      return "mulhu";
-  case ISD::MULHS:                      return "mulhs";
-  case ISD::AVGFLOORU:                  return "avgflooru";
-  case ISD::AVGFLOORS:                  return "avgfloors";
-  case ISD::AVGCEILU:                   return "avgceilu";
-  case ISD::AVGCEILS:                   return "avgceils";
-  case ISD::ABDS:                       return "abds";
-  case ISD::ABDU:                       return "abdu";
-  case ISD::SDIV:                       return "sdiv";
-  case ISD::UDIV:                       return "udiv";
-  case ISD::SREM:                       return "srem";
-  case ISD::UREM:                       return "urem";
-  case ISD::SMUL_LOHI:                  return "smul_lohi";
-  case ISD::UMUL_LOHI:                  return "umul_lohi";
-  case ISD::SDIVREM:                    return "sdivrem";
-  case ISD::UDIVREM:                    return "udivrem";
-  case ISD::AND:                        return "and";
-  case ISD::OR:                         return "or";
-  case ISD::XOR:                        return "xor";
-  case ISD::SHL:                        return "shl";
-  case ISD::SRA:                        return "sra";
-  case ISD::SRL:                        return "srl";
-  case ISD::ROTL:                       return "rotl";
-  case ISD::ROTR:                       return "rotr";
-  case ISD::FSHL:                       return "fshl";
-  case ISD::FSHR:                       return "fshr";
-  case ISD::FADD:                       return "fadd";
-  case ISD::STRICT_FADD:                return "strict_fadd";
-  case ISD::FSUB:                       return "fsub";
-  case ISD::STRICT_FSUB:                return "strict_fsub";
-  case ISD::FMUL:                       return "fmul";
-  case ISD::STRICT_FMUL:                return "strict_fmul";
-  case ISD::FDIV:                       return "fdiv";
-  case ISD::STRICT_FDIV:                return "strict_fdiv";
-  case ISD::FMA:                        return "fma";
-  case ISD::STRICT_FMA:                 return "strict_fma";
-  case ISD::FMAD:                       return "fmad";
-  case ISD::FREM:                       return "frem";
-  case ISD::STRICT_FREM:                return "strict_frem";
-  case ISD::FCOPYSIGN:                  return "fcopysign";
-  case ISD::FGETSIGN:                   return "fgetsign";
-  case ISD::FCANONICALIZE:              return "fcanonicalize";
-  case ISD::IS_FPCLASS:                 return "is_fpclass";
-  case ISD::FPOW:                       return "fpow";
-  case ISD::STRICT_FPOW:                return "strict_fpow";
-  case ISD::SMIN:                       return "smin";
-  case ISD::SMAX:                       return "smax";
-  case ISD::UMIN:                       return "umin";
-  case ISD::UMAX:                       return "umax";
-
-  case ISD::FLDEXP:                     return "fldexp";
-  case ISD::STRICT_FLDEXP:              return "strict_fldexp";
-  case ISD::FFREXP:                     return "ffrexp";
-  case ISD::FPOWI:                      return "fpowi";
-  case ISD::STRICT_FPOWI:               return "strict_fpowi";
-  case ISD::SETCC:                      return "setcc";
-  case ISD::SETCCCARRY:                 return "setcccarry";
-  case ISD::STRICT_FSETCC:              return "strict_fsetcc";
-  case ISD::STRICT_FSETCCS:             return "strict_fsetccs";
-  case ISD::SELECT:                     return "select";
-  case ISD::VSELECT:                    return "vselect";
-  case ISD::SELECT_CC:                  return "select_cc";
-  case ISD::INSERT_VECTOR_ELT:          return "insert_vector_elt";
-  case ISD::EXTRACT_VECTOR_ELT:         return "extract_vector_elt";
-  case ISD::CONCAT_VECTORS:             return "concat_vectors";
-  case ISD::INSERT_SUBVECTOR:           return "insert_subvector";
-  case ISD::EXTRACT_SUBVECTOR:          return "extract_subvector";
-  case ISD::VECTOR_DEINTERLEAVE:        return "vector_deinterleave";
-  case ISD::VECTOR_INTERLEAVE:          return "vector_interleave";
-  case ISD::SCALAR_TO_VECTOR:           return "scalar_to_vector";
-  case ISD::VECTOR_SHUFFLE:             return "vector_shuffle";
-  case ISD::VECTOR_SPLICE:              return "vector_splice";
-  case ISD::SPLAT_VECTOR:               return "splat_vector";
-  case ISD::SPLAT_VECTOR_PARTS:         return "splat_vector_parts";
-  case ISD::VECTOR_REVERSE:             return "vector_reverse";
-  case ISD::STEP_VECTOR:                return "step_vector";
-  case ISD::CARRY_FALSE:                return "carry_false";
-  case ISD::ADDC:                       return "addc";
-  case ISD::ADDE:                       return "adde";
-  case ISD::UADDO_CARRY:                return "uaddo_carry";
-  case ISD::SADDO_CARRY:                return "saddo_carry";
-  case ISD::SADDO:                      return "saddo";
-  case ISD::UADDO:                      return "uaddo";
-  case ISD::SSUBO:                      return "ssubo";
-  case ISD::USUBO:                      return "usubo";
-  case ISD::SMULO:                      return "smulo";
-  case ISD::UMULO:                      return "umulo";
-  case ISD::SUBC:                       return "subc";
-  case ISD::SUBE:                       return "sube";
-  case ISD::USUBO_CARRY:                return "usubo_carry";
-  case ISD::SSUBO_CARRY:                return "ssubo_carry";
-  case ISD::SHL_PARTS:                  return "shl_parts";
-  case ISD::SRA_PARTS:                  return "sra_parts";
-  case ISD::SRL_PARTS:                  return "srl_parts";
-
-  case ISD::SADDSAT:                    return "saddsat";
-  case ISD::UADDSAT:                    return "uaddsat";
-  case ISD::SSUBSAT:                    return "ssubsat";
-  case ISD::USUBSAT:                    return "usubsat";
-  case ISD::SSHLSAT:                    return "sshlsat";
-  case ISD::USHLSAT:                    return "ushlsat";
-
-  case ISD::SMULFIX:                    return "smulfix";
-  case ISD::SMULFIXSAT:                 return "smulfixsat";
-  case ISD::UMULFIX:                    return "umulfix";
-  case ISD::UMULFIXSAT:                 return "umulfixsat";
-
-  case ISD::SDIVFIX:                    return "sdivfix";
-  case ISD::SDIVFIXSAT:                 return "sdivfixsat";
-  case ISD::UDIVFIX:                    return "udivfix";
-  case ISD::UDIVFIXSAT:                 return "udivfixsat";
+  case ISD::ADD:
+    return "add";
+  case ISD::SUB:
+    return "sub";
+  case ISD::MUL:
+    return "mul";
+  case ISD::MULHU:
+    return "mulhu";
+  case ISD::MULHS:
+    return "mulhs";
+  case ISD::AVGFLOORU:
+    return "avgflooru";
+  case ISD::AVGFLOORS:
+    return "avgfloors";
+  case ISD::AVGCEILU:
+    return "avgceilu";
+  case ISD::AVGCEILS:
+    return "avgceils";
+  case ISD::ABDS:
+    return "abds";
+  case ISD::ABDU:
+    return "abdu";
+  case ISD::SDIV:
+    return "sdiv";
+  case ISD::UDIV:
+    return "udiv";
+  case ISD::SREM:
+    return "srem";
+  case ISD::UREM:
+    return "urem";
+  case ISD::SMUL_LOHI:
+    return "smul_lohi";
+  case ISD::UMUL_LOHI:
+    return "umul_lohi";
+  case ISD::SDIVREM:
+    return "sdivrem";
+  case ISD::UDIVREM:
+    return "udivrem";
+  case ISD::AND:
+    return "and";
+  case ISD::OR:
+    return "or";
+  case ISD::XOR:
+    return "xor";
+  case ISD::SHL:
+    return "shl";
+  case ISD::SRA:
+    return "sra";
+  case ISD::SRL:
+    return "srl";
+  case ISD::ROTL:
+    return "rotl";
+  case ISD::ROTR:
+    return "rotr";
+  case ISD::FSHL:
+    return "fshl";
+  case ISD::FSHR:
+    return "fshr";
+  case ISD::FADD:
+    return "fadd";
+  case ISD::STRICT_FADD:
+    return "strict_fadd";
+  case ISD::FSUB:
+    return "fsub";
+  case ISD::STRICT_FSUB:
+    return "strict_fsub";
+  case ISD::FMUL:
+    return "fmul";
+  case ISD::STRICT_FMUL:
+    return "strict_fmul";
+  case ISD::FDIV:
+    return "fdiv";
+  case ISD::STRICT_FDIV:
+    return "strict_fdiv";
+  case ISD::FMA:
+    return "fma";
+  case ISD::STRICT_FMA:
+    return "strict_fma";
+  case ISD::FMAD:
+    return "fmad";
+  case ISD::FREM:
+    return "frem";
+  case ISD::STRICT_FREM:
+    return "strict_frem";
+  case ISD::FCOPYSIGN:
+    return "fcopysign";
+  case ISD::FGETSIGN:
+    return "fgetsign";
+  case ISD::FCANONICALIZE:
+    return "fcanonicalize";
+  case ISD::IS_FPCLASS:
+    return "is_fpclass";
+  case ISD::FPOW:
+    return "fpow";
+  case ISD::STRICT_FPOW:
+    return "strict_fpow";
+  case ISD::SMIN:
+    return "smin";
+  case ISD::SMAX:
+    return "smax";
+  case ISD::UMIN:
+    return "umin";
+  case ISD::UMAX:
+    return "umax";
+
+  case ISD::FLDEXP:
+    return "fldexp";
+  case ISD::STRICT_FLDEXP:
+    return "strict_fldexp";
+  case ISD::FFREXP:
+    return "ffrexp";
+  case ISD::FPOWI:
+    return "fpowi";
+  case ISD::STRICT_FPOWI:
+    return "strict_fpowi";
+  case ISD::SETCC:
+    return "setcc";
+  case ISD::SETCCCARRY:
+    return "setcccarry";
+  case ISD::STRICT_FSETCC:
+    return "strict_fsetcc";
+  case ISD::STRICT_FSETCCS:
+    return "strict_fsetccs";
+  case ISD::SELECT:
+    return "select";
+  case ISD::VSELECT:
+    return "vselect";
+  case ISD::SELECT_CC:
+    return "select_cc";
+  case ISD::INSERT_VECTOR_ELT:
+    return "insert_vector_elt";
+  case ISD::EXTRACT_VECTOR_ELT:
+    return "extract_vector_elt";
+  case ISD::CONCAT_VECTORS:
+    return "concat_vectors";
+  case ISD::INSERT_SUBVECTOR:
+    return "insert_subvector";
+  case ISD::EXTRACT_SUBVECTOR:
+    return "extract_subvector";
+  case ISD::VECTOR_DEINTERLEAVE:
+    return "vector_deinterleave";
+  case ISD::VECTOR_INTERLEAVE:
+    return "vector_interleave";
+  case ISD::SCALAR_TO_VECTOR:
+    return "scalar_to_vector";
+  case ISD::VECTOR_SHUFFLE:
+    return "vector_shuffle";
+  case ISD::VECTOR_SPLICE:
+    return "vector_splice";
+  case ISD::SPLAT_VECTOR:
+    return "splat_vector";
+  case ISD::SPLAT_VECTOR_PARTS:
+    return "splat_vector_parts";
+  case ISD::VECTOR_REVERSE:
+    return "vector_reverse";
+  case ISD::STEP_VECTOR:
+    return "step_vector";
+  case ISD::CARRY_FALSE:
+    return "carry_false";
+  case ISD::ADDC:
+    return "addc";
+  case ISD::ADDE:
+    return "adde";
+  case ISD::UADDO_CARRY:
+    return "uaddo_carry";
+  case ISD::SADDO_CARRY:
+    return "saddo_carry";
+  case ISD::SADDO:
+    return "saddo";
+  case ISD::UADDO:
+    return "uaddo";
+  case ISD::SSUBO:
+    return "ssubo";
+  case ISD::USUBO:
+    return "usubo";
+  case ISD::SMULO:
+    return "smulo";
+  case ISD::UMULO:
+    return "umulo";
+  case ISD::SUBC:
+    return "subc";
+  case ISD::SUBE:
+    return "sube";
+  case ISD::USUBO_CARRY:
+    return "usubo_carry";
+  case ISD::SSUBO_CARRY:
+    return "ssubo_carry";
+  case ISD::SHL_PARTS:
+    return "shl_parts";
+  case ISD::SRA_PARTS:
+    return "sra_parts";
+  case ISD::SRL_PARTS:
+    return "srl_parts";
+
+  case ISD::SADDSAT:
+    return "saddsat";
+  case ISD::UADDSAT:
+    return "uaddsat";
+  case ISD::SSUBSAT:
+    return "ssubsat";
+  case ISD::USUBSAT:
+    return "usubsat";
+  case ISD::SSHLSAT:
+    return "sshlsat";
+  case ISD::USHLSAT:
+    return "ushlsat";
+
+  case ISD::SMULFIX:
+    return "smulfix";
+  case ISD::SMULFIXSAT:
+    return "smulfixsat";
+  case ISD::UMULFIX:
+    return "umulfix";
+  case ISD::UMULFIXSAT:
+    return "umulfixsat";
+
+  case ISD::SDIVFIX:
+    return "sdivfix";
+  case ISD::SDIVFIXSAT:
+    return "sdivfixsat";
+  case ISD::UDIVFIX:
+    return "udivfix";
+  case ISD::UDIVFIXSAT:
+    return "udivfixsat";
 
   // Conversion operators.
-  case ISD::SIGN_EXTEND:                return "sign_extend";
-  case ISD::ZERO_EXTEND:                return "zero_extend";
-  case ISD::ANY_EXTEND:                 return "any_extend";
-  case ISD::SIGN_EXTEND_INREG:          return "sign_extend_inreg";
-  case ISD::ANY_EXTEND_VECTOR_INREG:    return "any_extend_vector_inreg";
-  case ISD::SIGN_EXTEND_VECTOR_INREG:   return "sign_extend_vector_inreg";
-  case ISD::ZERO_EXTEND_VECTOR_INREG:   return "zero_extend_vector_inreg";
-  case ISD::TRUNCATE:                   return "truncate";
-  case ISD::FP_ROUND:                   return "fp_round";
-  case ISD::STRICT_FP_ROUND:            return "strict_fp_round";
-  case ISD::FP_EXTEND:                  return "fp_extend";
-  case ISD::STRICT_FP_EXTEND:           return "strict_fp_extend";
-
-  case ISD::SINT_TO_FP:                 return "sint_to_fp";
-  case ISD::STRICT_SINT_TO_FP:          return "strict_sint_to_fp";
-  case ISD::UINT_TO_FP:                 return "uint_to_fp";
-  case ISD::STRICT_UINT_TO_FP:          return "strict_uint_to_fp";
-  case ISD::FP_TO_SINT:                 return "fp_to_sint";
-  case ISD::STRICT_FP_TO_SINT:          return "strict_fp_to_sint";
-  case ISD::FP_TO_UINT:                 return "fp_to_uint";
-  case ISD::STRICT_FP_TO_UINT:          return "strict_fp_to_uint";
-  case ISD::FP_TO_SINT_SAT:             return "fp_to_sint_sat";
-  case ISD::FP_TO_UINT_SAT:             return "fp_to_uint_sat";
-  case ISD::BITCAST:                    return "bitcast";
-  case ISD::ADDRSPACECAST:              return "addrspacecast";
-  case ISD::FP16_TO_FP:                 return "fp16_to_fp";
-  case ISD::STRICT_FP16_TO_FP:          return "strict_fp16_to_fp";
-  case ISD::FP_TO_FP16:                 return "fp_to_fp16";
-  case ISD::STRICT_FP_TO_FP16:          return "strict_fp_to_fp16";
-  case ISD::BF16_TO_FP:                 return "bf16_to_fp";
-  case ISD::FP_TO_BF16:                 return "fp_to_bf16";
-  case ISD::LROUND:                     return "lround";
-  case ISD::STRICT_LROUND:              return "strict_lround";
-  case ISD::LLROUND:                    return "llround";
-  case ISD::STRICT_LLROUND:             return "strict_llround";
-  case ISD::LRINT:                      return "lrint";
-  case ISD::STRICT_LRINT:               return "strict_lrint";
-  case ISD::LLRINT:                     return "llrint";
-  case ISD::STRICT_LLRINT:              return "strict_llrint";
+  case ISD::SIGN_EXTEND:
+    return "sign_extend";
+  case ISD::ZERO_EXTEND:
+    return "zero_extend";
+  case ISD::ANY_EXTEND:
+    return "any_extend";
+  case ISD::SIGN_EXTEND_INREG:
+    return "sign_extend_inreg";
+  case ISD::ANY_EXTEND_VECTOR_INREG:
+    return "any_extend_vector_inreg";
+  case ISD::SIGN_EXTEND_VECTOR_INREG:
+    return "sign_extend_vector_inreg";
+  case ISD::ZERO_EXTEND_VECTOR_INREG:
+    return "zero_extend_vector_inreg";
+  case ISD::TRUNCATE:
+    return "truncate";
+  case ISD::FP_ROUND:
+    return "fp_round";
+  case ISD::STRICT_FP_ROUND:
+    return "strict_fp_round";
+  case ISD::FP_EXTEND:
+    return "fp_extend";
+  case ISD::STRICT_FP_EXTEND:
+    return "strict_fp_extend";
+
+  case ISD::SINT_TO_FP:
+    return "sint_to_fp";
+  case ISD::STRICT_SINT_TO_FP:
+    return "strict_sint_to_fp";
+  case ISD::UINT_TO_FP:
+    return "uint_to_fp";
+  case ISD::STRICT_UINT_TO_FP:
+    return "strict_uint_to_fp";
+  case ISD::FP_TO_SINT:
+    return "fp_to_sint";
+  case ISD::STRICT_FP_TO_SINT:
+    return "strict_fp_to_sint";
+  case ISD::FP_TO_UINT:
+    return "fp_to_uint";
+  case ISD::STRICT_FP_TO_UINT:
+    return "strict_fp_to_uint";
+  case ISD::FP_TO_SINT_SAT:
+    return "fp_to_sint_sat";
+  case ISD::FP_TO_UINT_SAT:
+    return "fp_to_uint_sat";
+  case ISD::BITCAST:
+    return "bitcast";
+  case ISD::ADDRSPACECAST:
+    return "addrspacecast";
+  case ISD::FP16_TO_FP:
+    return "fp16_to_fp";
+  case ISD::STRICT_FP16_TO_FP:
+    return "strict_fp16_to_fp";
+  case ISD::FP_TO_FP16:
+    return "fp_to_fp16";
+  case ISD::STRICT_FP_TO_FP16:
+    return "strict_fp_to_fp16";
+  case ISD::BF16_TO_FP:
+    return "bf16_to_fp";
+  case ISD::FP_TO_BF16:
+    return "fp_to_bf16";
+  case ISD::LROUND:
+    return "lround";
+  case ISD::STRICT_LROUND:
+    return "strict_lround";
+  case ISD::LLROUND:
+    return "llround";
+  case ISD::STRICT_LLROUND:
+    return "strict_llround";
+  case ISD::LRINT:
+    return "lrint";
+  case ISD::STRICT_LRINT:
+    return "strict_lrint";
+  case ISD::LLRINT:
+    return "llrint";
+  case ISD::STRICT_LLRINT:
+    return "strict_llrint";
 
     // Control flow instructions
-  case ISD::BR:                         return "br";
-  case ISD::BRIND:                      return "brind";
-  case ISD::BR_JT:                      return "br_jt";
-  case ISD::BRCOND:                     return "brcond";
-  case ISD::BR_CC:                      return "br_cc";
-  case ISD::CALLSEQ_START:              return "callseq_start";
-  case ISD::CALLSEQ_END:                return "callseq_end";
+  case ISD::BR:
+    return "br";
+  case ISD::BRIND:
+    return "brind";
+  case ISD::BR_JT:
+    return "br_jt";
+  case ISD::BRCOND:
+    return "brcond";
+  case ISD::BR_CC:
+    return "br_cc";
+  case ISD::CALLSEQ_START:
+    return "callseq_start";
+  case ISD::CALLSEQ_END:
+    return "callseq_end";
 
     // EH instructions
-  case ISD::CATCHRET:                   return "catchret";
-  case ISD::CLEANUPRET:                 return "cleanupret";
+  case ISD::CATCHRET:
+    return "catchret";
+  case ISD::CLEANUPRET:
+    return "cleanupret";
 
     // Other operators
-  case ISD::LOAD:                       return "load";
-  case ISD::STORE:                      return "store";
-  case ISD::MLOAD:                      return "masked_load";
-  case ISD::MSTORE:                     return "masked_store";
-  case ISD::MGATHER:                    return "masked_gather";
-  case ISD::MSCATTER:                   return "masked_scatter";
-  case ISD::VAARG:                      return "vaarg";
-  case ISD::VACOPY:                     return "vacopy";
-  case ISD::VAEND:                      return "vaend";
-  case ISD::VASTART:                    return "vastart";
-  case ISD::DYNAMIC_STACKALLOC:         return "dynamic_stackalloc";
-  case ISD::EXTRACT_ELEMENT:            return "extract_element";
-  case ISD::BUILD_PAIR:                 return "build_pair";
-  case ISD::STACKSAVE:                  return "stacksave";
-  case ISD::STACKRESTORE:               return "stackrestore";
-  case ISD::TRAP:                       return "trap";
-  case ISD::DEBUGTRAP:                  return "debugtrap";
-  case ISD::UBSANTRAP:                  return "ubsantrap";
-  case ISD::LIFETIME_START:             return "lifetime.start";
-  case ISD::LIFETIME_END:               return "lifetime.end";
+  case ISD::LOAD:
+    return "load";
+  case ISD::STORE:
+    return "store";
+  case ISD::MLOAD:
+    return "masked_load";
+  case ISD::MSTORE:
+    return "masked_store";
+  case ISD::MGATHER:
+    return "masked_gather";
+  case ISD::MSCATTER:
+    return "masked_scatter";
+  case ISD::VAARG:
+    return "vaarg";
+  case ISD::VACOPY:
+    return "vacopy";
+  case ISD::VAEND:
+    return "vaend";
+  case ISD::VASTART:
+    return "vastart";
+  case ISD::DYNAMIC_STACKALLOC:
+    return "dynamic_stackalloc";
+  case ISD::EXTRACT_ELEMENT:
+    return "extract_element";
+  case ISD::BUILD_PAIR:
+    return "build_pair";
+  case ISD::STACKSAVE:
+    return "stacksave";
+  case ISD::STACKRESTORE:
+    return "stackrestore";
+  case ISD::TRAP:
+    return "trap";
+  case ISD::DEBUGTRAP:
+    return "debugtrap";
+  case ISD::UBSANTRAP:
+    return "ubsantrap";
+  case ISD::LIFETIME_START:
+    return "lifetime.start";
+  case ISD::LIFETIME_END:
+    return "lifetime.end";
   case ISD::PSEUDO_PROBE:
     return "pseudoprobe";
-  case ISD::GC_TRANSITION_START:        return "gc_transition.start";
-  case ISD::GC_TRANSITION_END:          return "gc_transition.end";
-  case ISD::GET_DYNAMIC_AREA_OFFSET:    return "get.dynamic.area.offset";
-  case ISD::FREEZE:                     return "freeze";
+  case ISD::GC_TRANSITION_START:
+    return "gc_transition.start";
+  case ISD::GC_TRANSITION_END:
+    return "gc_transition.end";
+  case ISD::GET_DYNAMIC_AREA_OFFSET:
+    return "get.dynamic.area.offset";
+  case ISD::FREEZE:
+    return "freeze";
   case ISD::PREALLOCATED_SETUP:
     return "call_setup";
   case ISD::PREALLOCATED_ARG:
     return "call_alloc";
 
   // Floating point environment manipulation
-  case ISD::GET_ROUNDING:               return "get_rounding";
-  case ISD::SET_ROUNDING:               return "set_rounding";
-  case ISD::GET_FPENV:                  return "get_fpenv";
-  case ISD::SET_FPENV:                  return "set_fpenv";
-  case ISD::RESET_FPENV:                return "reset_fpenv";
-  case ISD::GET_FPENV_MEM:              return "get_fpenv_mem";
-  case ISD::SET_FPENV_MEM:              return "set_fpenv_mem";
-  case ISD::GET_FPMODE:                 return "get_fpmode";
-  case ISD::SET_FPMODE:                 return "set_fpmode";
-  case ISD::RESET_FPMODE:               return "reset_fpmode";
+  case ISD::GET_ROUNDING:
+    return "get_rounding";
+  case ISD::SET_ROUNDING:
+    return "set_rounding";
+  case ISD::GET_FPENV:
+    return "get_fpenv";
+  case ISD::SET_FPENV:
+    return "set_fpenv";
+  case ISD::RESET_FPENV:
+    return "reset_fpenv";
+  case ISD::GET_FPENV_MEM:
+    return "get_fpenv_mem";
+  case ISD::SET_FPENV_MEM:
+    return "set_fpenv_mem";
+  case ISD::GET_FPMODE:
+    return "get_fpmode";
+  case ISD::SET_FPMODE:
+    return "set_fpmode";
+  case ISD::RESET_FPMODE:
+    return "reset_fpmode";
 
   // Bit manipulation
-  case ISD::ABS:                        return "abs";
-  case ISD::BITREVERSE:                 return "bitreverse";
-  case ISD::BSWAP:                      return "bswap";
-  case ISD::CTPOP:                      return "ctpop";
-  case ISD::CTTZ:                       return "cttz";
-  case ISD::CTTZ_ZERO_UNDEF:            return "cttz_zero_undef";
-  case ISD::CTLZ:                       return "ctlz";
-  case ISD::CTLZ_ZERO_UNDEF:            return "ctlz_zero_undef";
-  case ISD::PARITY:                     return "parity";
+  case ISD::ABS:
+    return "abs";
+  case ISD::BITREVERSE:
+    return "bitreverse";
+  case ISD::BSWAP:
+    return "bswap";
+  case ISD::CTPOP:
+    return "ctpop";
+  case ISD::CTTZ:
+    return "cttz";
+  case ISD::CTTZ_ZERO_UNDEF:
+    return "cttz_zero_undef";
+  case ISD::CTLZ:
+    return "ctlz";
+  case ISD::CTLZ_ZERO_UNDEF:
+    return "ctlz_zero_undef";
+  case ISD::PARITY:
+    return "parity";
 
   // Trampolines
-  case ISD::INIT_TRAMPOLINE:            return "init_trampoline";
-  case ISD::ADJUST_TRAMPOLINE:          return "adjust_trampoline";
+  case ISD::INIT_TRAMPOLINE:
+    return "init_trampoline";
+  case ISD::ADJUST_TRAMPOLINE:
+    return "adjust_trampoline";
 
   case ISD::CONDCODE:
     switch (cast<CondCodeSDNode>(this)->get()) {
-    default: llvm_unreachable("Unknown setcc condition!");
-    case ISD::SETOEQ:                   return "setoeq";
-    case ISD::SETOGT:                   return "setogt";
-    case ISD::SETOGE:                   return "setoge";
-    case ISD::SETOLT:                   return "setolt";
-    case ISD::SETOLE:                   return "setole";
-    case ISD::SETONE:                   return "setone";
-
-    case ISD::SETO:                     return "seto";
-    case ISD::SETUO:                    return "setuo";
-    case ISD::SETUEQ:                   return "setueq";
-    case ISD::SETUGT:                   return "setugt";
-    case ISD::SETUGE:                   return "setuge";
-    case ISD::SETULT:                   return "setult";
-    case ISD::SETULE:                   return "setule";
-    case ISD::SETUNE:                   return "setune";
-
-    case ISD::SETEQ:                    return "seteq";
-    case ISD::SETGT:                    return "setgt";
-    case ISD::SETGE:                    return "setge";
-    case ISD::SETLT:                    return "setlt";
-    case ISD::SETLE:                    return "setle";
-    case ISD::SETNE:                    return "setne";
-
-    case ISD::SETTRUE:                  return "settrue";
-    case ISD::SETTRUE2:                 return "settrue2";
-    case ISD::SETFALSE:                 return "setfalse";
-    case ISD::SETFALSE2:                return "setfalse2";
+    default:
+      llvm_unreachable("Unknown setcc condition!");
+    case ISD::SETOEQ:
+      return "setoeq";
+    case ISD::SETOGT:
+      return "setogt";
+    case ISD::SETOGE:
+      return "setoge";
+    case ISD::SETOLT:
+      return "setolt";
+    case ISD::SETOLE:
+      return "setole";
+    case ISD::SETONE:
+      return "setone";
+
+    case ISD::SETO:
+      return "seto";
+    case ISD::SETUO:
+      return "setuo";
+    case ISD::SETUEQ:
+      return "setueq";
+    case ISD::SETUGT:
+      return "setugt";
+    case ISD::SETUGE:
+      return "setuge";
+    case ISD::SETULT:
+      return "setult";
+    case ISD::SETULE:
+      return "setule";
+    case ISD::SETUNE:
+      return "setune";
+
+    case ISD::SETEQ:
+      return "seteq";
+    case ISD::SETGT:
+      return "setgt";
+    case ISD::SETGE:
+      return "setge";
+    case ISD::SETLT:
+      return "setlt";
+    case ISD::SETLE:
+      return "setle";
+    case ISD::SETNE:
+      return "setne";
+
+    case ISD::SETTRUE:
+      return "settrue";
+    case ISD::SETTRUE2:
+      return "settrue2";
+    case ISD::SETFALSE:
+      return "setfalse";
+    case ISD::SETFALSE2:
+      return "setfalse2";
     }
-  case ISD::VECREDUCE_FADD:             return "vecreduce_fadd";
-  case ISD::VECREDUCE_SEQ_FADD:         return "vecreduce_seq_fadd";
-  case ISD::VECREDUCE_FMUL:             return "vecreduce_fmul";
-  case ISD::VECREDUCE_SEQ_FMUL:         return "vecreduce_seq_fmul";
-  case ISD::VECREDUCE_ADD:              return "vecreduce_add";
-  case ISD::VECREDUCE_MUL:              return "vecreduce_mul";
-  case ISD::VECREDUCE_AND:              return "vecreduce_and";
-  case ISD::VECREDUCE_OR:               return "vecreduce_or";
-  case ISD::VECREDUCE_XOR:              return "vecreduce_xor";
-  case ISD::VECREDUCE_SMAX:             return "vecreduce_smax";
-  case ISD::VECREDUCE_SMIN:             return "vecreduce_smin";
-  case ISD::VECREDUCE_UMAX:             return "vecreduce_umax";
-  case ISD::VECREDUCE_UMIN:             return "vecreduce_umin";
-  case ISD::VECREDUCE_FMAX:             return "vecreduce_fmax";
-  case ISD::VECREDUCE_FMIN:             return "vecreduce_fmin";
-  case ISD::VECREDUCE_FMAXIMUM:         return "vecreduce_fmaximum";
-  case ISD::VECREDUCE_FMINIMUM:         return "vecreduce_fminimum";
+  case ISD::VECREDUCE_FADD:
+    return "vecreduce_fadd";
+  case ISD::VECREDUCE_SEQ_FADD:
+    return "vecreduce_seq_fadd";
+  case ISD::VECREDUCE_FMUL:
+    return "vecreduce_fmul";
+  case ISD::VECREDUCE_SEQ_FMUL:
+    return "vecreduce_seq_fmul";
+  case ISD::VECREDUCE_ADD:
+    return "vecreduce_add";
+  case ISD::VECREDUCE_MUL:
+    return "vecreduce_mul";
+  case ISD::VECREDUCE_AND:
+    return "vecreduce_and";
+  case ISD::VECREDUCE_OR:
+    return "vecreduce_or";
+  case ISD::VECREDUCE_XOR:
+    return "vecreduce_xor";
+  case ISD::VECREDUCE_SMAX:
+    return "vecreduce_smax";
+  case ISD::VECREDUCE_SMIN:
+    return "vecreduce_smin";
+  case ISD::VECREDUCE_UMAX:
+    return "vecreduce_umax";
+  case ISD::VECREDUCE_UMIN:
+    return "vecreduce_umin";
+  case ISD::VECREDUCE_FMAX:
+    return "vecreduce_fmax";
+  case ISD::VECREDUCE_FMIN:
+    return "vecreduce_fmin";
+  case ISD::VECREDUCE_FMAXIMUM:
+    return "vecreduce_fmaximum";
+  case ISD::VECREDUCE_FMINIMUM:
+    return "vecreduce_fminimum";
   case ISD::STACKMAP:
     return "stackmap";
   case ISD::PATCHPOINT:
@@ -524,11 +891,16 @@ std::string SDNode::getOperationName(const SelectionDAG *G) const {
 
 const char *SDNode::getIndexedModeName(ISD::MemIndexedMode AM) {
   switch (AM) {
-  default:              return "";
-  case ISD::PRE_INC:    return "<pre-inc>";
-  case ISD::PRE_DEC:    return "<pre-dec>";
-  case ISD::POST_INC:   return "<post-inc>";
-  case ISD::POST_DEC:   return "<post-dec>";
+  default:
+    return "";
+  case ISD::PRE_INC:
+    return "<pre-inc>";
+  case ISD::PRE_DEC:
+    return "<pre-dec>";
+  case ISD::POST_INC:
+    return "<post-inc>";
+  case ISD::POST_DEC:
+    return "<post-dec>";
   }
 }
 
@@ -537,7 +909,7 @@ static Printable PrintNodeId(const SDNode &Node) {
 #ifndef NDEBUG
     OS << 't' << Node.PersistentId;
 #else
-    OS << (const void*)&Node;
+    OS << (const void *)&Node;
 #endif
   });
 }
@@ -579,7 +951,8 @@ LLVM_DUMP_METHOD void SDNode::dump(const SelectionDAG *G) const {
 
 void SDNode::print_types(raw_ostream &OS, const SelectionDAG *G) const {
   for (unsigned i = 0, e = getNumValues(); i != e; ++i) {
-    if (i) OS << ",";
+    if (i)
+      OS << ",";
     if (getValueType(i) == MVT::Other)
       OS << "ch";
     else
@@ -626,7 +999,8 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
       OS << "<";
       OS << "Mem:";
       for (MachineSDNode::mmo_iterator i = MN->memoperands_begin(),
-           e = MN->memoperands_end(); i != e; ++i) {
+                                       e = MN->memoperands_end();
+           i != e; ++i) {
         printMemOperand(OS, **i, G);
         if (std::next(i) != e)
           OS << " ";
@@ -634,11 +1008,12 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
       OS << ">";
     }
   } else if (const ShuffleVectorSDNode *SVN =
-               dyn_cast<ShuffleVectorSDNode>(this)) {
+                 dyn_cast<ShuffleVectorSDNode>(this)) {
     OS << "<";
     for (unsigned i = 0, e = ValueList[0].getVectorNumElements(); i != e; ++i) {
       int Idx = SVN->getMaskElt(i);
-      if (i) OS << ",";
+      if (i)
+        OS << ",";
       if (Idx < 0)
         OS << "u";
       else
@@ -658,7 +1033,7 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
       OS << ")>";
     }
   } else if (const GlobalAddressSDNode *GADN =
-             dyn_cast<GlobalAddressSDNode>(this)) {
+                 dyn_cast<GlobalAddressSDNode>(this)) {
     int64_t offset = GADN->getOffset();
     OS << '<';
     GADN->getGlobal()->printAsOperand(OS);
@@ -675,7 +1050,8 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     OS << "<" << JTDN->getIndex() << ">";
     if (unsigned int TF = JTDN->getTargetFlags())
       OS << " [TF=" << TF << ']';
-  } else if (const ConstantPoolSDNode *CP = dyn_cast<ConstantPoolSDNode>(this)){
+  } else if (const ConstantPoolSDNode *CP =
+                 dyn_cast<ConstantPoolSDNode>(this)) {
     int offset = CP->getOffset();
     if (CP->isMachineConstantPoolEntry())
       OS << "<" << *CP->getMachineCPVal() << ">";
@@ -693,15 +1069,16 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
       OS << " [TF=" << TF << ']';
   } else if (const BasicBlockSDNode *BBDN = dyn_cast<BasicBlockSDNode>(this)) {
     OS << "<";
-    const Value *LBB = (const Value*)BBDN->getBasicBlock()->getBasicBlock();
+    const Value *LBB = (const Value *)BBDN->getBasicBlock()->getBasicBlock();
     if (LBB)
       OS << LBB->getName() << " ";
-    OS << (const void*)BBDN->getBasicBlock() << ">";
+    OS << (const void *)BBDN->getBasicBlock() << ">";
   } else if (const RegisterSDNode *R = dyn_cast<RegisterSDNode>(this)) {
-    OS << ' ' << printReg(R->getReg(),
-                          G ? G->getSubtarget().getRegisterInfo() : nullptr);
+    OS << ' '
+       << printReg(R->getReg(),
+                   G ? G->getSubtarget().getRegisterInfo() : nullptr);
   } else if (const ExternalSymbolSDNode *ES =
-             dyn_cast<ExternalSymbolSDNode>(this)) {
+                 dyn_cast<ExternalSymbolSDNode>(this)) {
     OS << "'" << ES->getSymbol() << "'";
     if (unsigned int TF = ES->getTargetFlags())
       OS << " [TF=" << TF << ']';
@@ -717,18 +1094,25 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
       OS << "<null>";
   } else if (const VTSDNode *N = dyn_cast<VTSDNode>(this)) {
     OS << ":" << N->getVT();
-  }
-  else if (const LoadSDNode *LD = dyn_cast<LoadSDNode>(this)) {
+  } else if (const LoadSDNode *LD = dyn_cast<LoadSDNode>(this)) {
     OS << "<";
 
     printMemOperand(OS, *LD->getMemOperand(), G);
 
     bool doExt = true;
     switch (LD->getExtensionType()) {
-    default: doExt = false; break;
-    case ISD::EXTLOAD:  OS << ", anyext"; break;
-    case ISD::SEXTLOAD: OS << ", sext"; break;
-    case ISD::ZEXTLOAD: OS << ", zext"; break;
+    default:
+      doExt = false;
+      break;
+    case ISD::EXTLOAD:
+      OS << ", anyext";
+      break;
+    case ISD::SEXTLOAD:
+      OS << ", sext";
+      break;
+    case ISD::ZEXTLOAD:
+      OS << ", zext";
+      break;
     }
     if (doExt)
       OS << " from " << LD->getMemoryVT();
@@ -757,10 +1141,18 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
 
     bool doExt = true;
     switch (MLd->getExtensionType()) {
-    default: doExt = false; break;
-    case ISD::EXTLOAD:  OS << ", anyext"; break;
-    case ISD::SEXTLOAD: OS << ", sext"; break;
-    case ISD::ZEXTLOAD: OS << ", zext"; break;
+    default:
+      doExt = false;
+      break;
+    case ISD::EXTLOAD:
+      OS << ", anyext";
+      break;
+    case ISD::SEXTLOAD:
+      OS << ", sext";
+      break;
+    case ISD::ZEXTLOAD:
+      OS << ", zext";
+      break;
     }
     if (doExt)
       OS << " from " << MLd->getMemoryVT();
@@ -794,10 +1186,18 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
 
     bool doExt = true;
     switch (MGather->getExtensionType()) {
-    default: doExt = false; break;
-    case ISD::EXTLOAD:  OS << ", anyext"; break;
-    case ISD::SEXTLOAD: OS << ", sext"; break;
-    case ISD::ZEXTLOAD: OS << ", zext"; break;
+    default:
+      doExt = false;
+      break;
+    case ISD::EXTLOAD:
+      OS << ", anyext";
+      break;
+    case ISD::SEXTLOAD:
+      OS << ", sext";
+      break;
+    case ISD::ZEXTLOAD:
+      OS << ", zext";
+      break;
     }
     if (doExt)
       OS << " from " << MGather->getMemoryVT();
@@ -824,7 +1224,7 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     printMemOperand(OS, *M->getMemOperand(), G);
     OS << ">";
   } else if (const BlockAddressSDNode *BA =
-               dyn_cast<BlockAddressSDNode>(this)) {
+                 dyn_cast<BlockAddressSDNode>(this)) {
     int64_t offset = BA->getOffset();
     OS << "<";
     BA->getBlockAddress()->getFunction()->printAsOperand(OS, false);
@@ -838,22 +1238,20 @@ void SDNode::print_details(raw_ostream &OS, const SelectionDAG *G) const {
     if (unsigned int TF = BA->getTargetFlags())
       OS << " [TF=" << TF << ']';
   } else if (const AddrSpaceCastSDNode *ASC =
-               dyn_cast<AddrSpaceCastSDNode>(this)) {
-    OS << '['
-       << ASC->getSrcAddressSpace()
-       << " -> "
-       << ASC->getDestAddressSpace()
-       << ']';
+                 dyn_cast<AddrSpaceCastSDNode>(this)) {
+    OS << '[' << ASC->getSrcAddressSpace() << " -> "
+       << ASC->getDestAddressSpace() << ']';
   } else if (const LifetimeSDNode *LN = dyn_cast<LifetimeSDNode>(this)) {
     if (LN->hasOffset())
-      OS << "<" << LN->getOffset() << " to " << LN->getOffset() + LN->getSize() << ">";
+      OS << "<" << LN->getOffset() << " to " << LN->getOffset() + LN->getSize()
+         << ">";
   } else if (const auto *AA = dyn_cast<AssertAlignSDNode>(this)) {
     OS << '<' << AA->getAlign().value() << '>';
   }
 
   if (VerboseDAGDumping) {
     if (unsigned Order = getIROrder())
-        OS << " [ORD=" << Order << ']';
+      OS << " [ORD=" << Order << ']';
 
     if (getNodeId() != -1)
       OS << " [ID=" << getNodeId() << ']';
@@ -907,7 +1305,8 @@ LLVM_DUMP_METHOD void SDDbgValue::print(raw_ostream &OS) const {
     Comma = true;
   }
   OS << ")";
-  if (isIndirect()) OS << "(Indirect)";
+  if (isIndirect())
+    OS << "(Indirect)";
   if (isVariadic())
     OS << "(Variadic)";
   OS << ":\"" << Var->getName() << '"';
@@ -944,7 +1343,7 @@ static void DumpNodes(const SDNode *N, unsigned indent, const SelectionDAG *G) {
     if (shouldPrintInline(*Op.getNode(), G))
       continue;
     if (Op.getNode()->hasOneUse())
-      DumpNodes(Op.getNode(), indent+2, G);
+      DumpNodes(Op.getNode(), indent + 2, G);
   }
 
   dbgs().indent(indent);
@@ -960,7 +1359,8 @@ LLVM_DUMP_METHOD void SelectionDAG::dump() const {
       DumpNodes(&N, 2, this);
   }
 
-  if (getRoot().getNode()) DumpNodes(getRoot().getNode(), 2, this);
+  if (getRoot().getNode())
+    DumpNodes(getRoot().getNode(), 2, this);
   dbgs() << "\n";
 
   if (VerboseDAGDumping) {
@@ -1018,7 +1418,8 @@ static void DumpNodesr(raw_ostream &OS, const SDNode *N, unsigned indent,
 
   // Having printed this SDNode, walk the children:
   for (unsigned i = 0, e = N->getNumOperands(); i != e; ++i) {
-    if (i) OS << ",";
+    if (i)
+      OS << ",";
     OS << " ";
 
     const SDValue Op = N->getOperand(i);
@@ -1031,7 +1432,7 @@ static void DumpNodesr(raw_ostream &OS, const SDNode *N, unsigned indent,
 
   // Dump children that have grandchildren on their own line(s).
   for (const SDValue &Op : N->op_values())
-    DumpNodesr(OS, Op.getNode(), indent+2, G, once);
+    DumpNodesr(OS, Op.getNode(), indent + 2, G, once);
 }
 
 LLVM_DUMP_METHOD void SDNode::dumpr() const {
@@ -1065,7 +1466,7 @@ static void printrWithDepthHelper(raw_ostream &OS, const SDNode *N,
 }
 
 void SDNode::printrWithDepth(raw_ostream &OS, const SelectionDAG *G,
-                            unsigned depth) const {
+                             unsigned depth) const {
   printrWithDepthHelper(OS, this, G, depth, 0);
 }
 
@@ -1092,7 +1493,10 @@ void SDNode::print(raw_ostream &OS, const SelectionDAG *G) const {
   if (isDivergent() && !VerboseDAGDumping)
     OS << " # D:1";
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
-    if (i) OS << ", "; else OS << " ";
+    if (i)
+      OS << ", ";
+    else
+      OS << " ";
     printOperand(OS, G, getOperand(i));
   }
   if (DebugLoc DL = getDebugLoc()) {
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
index d3456d574666d06..b01d875fd9bc673 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
@@ -118,7 +118,8 @@ STATISTIC(NumFastIselFailures, "Number of instructions fast isel failed on");
 STATISTIC(NumFastIselSuccess, "Number of instructions fast isel selected");
 STATISTIC(NumFastIselBlocks, "Number of blocks selected entirely by fast isel");
 STATISTIC(NumDAGBlocks, "Number of blocks selected using DAG");
-STATISTIC(NumDAGIselRetries,"Number of times dag isel has to try another path");
+STATISTIC(NumDAGIselRetries,
+          "Number of times dag isel has to try another path");
 STATISTIC(NumEntryBlocks, "Number of entry blocks encountered");
 STATISTIC(NumFastIselFailLowerArguments,
           "Number of entry blocks where fast isel failed to lower arguments");
@@ -136,23 +137,22 @@ static cl::opt<bool> EnableFastISelFallbackReport(
     cl::desc("Emit a diagnostic when \"fast\" instruction selection "
              "falls back to SelectionDAG."));
 
-static cl::opt<bool>
-UseMBPI("use-mbpi",
-        cl::desc("use Machine Branch Probability Info"),
-        cl::init(true), cl::Hidden);
+static cl::opt<bool> UseMBPI("use-mbpi",
+                             cl::desc("use Machine Branch Probability Info"),
+                             cl::init(true), cl::Hidden);
 
 #ifndef NDEBUG
-static cl::opt<std::string>
-FilterDAGBasicBlockName("filter-view-dags", cl::Hidden,
-                        cl::desc("Only display the basic block whose name "
-                                 "matches this for all view-*-dags options"));
-static cl::opt<bool>
-ViewDAGCombine1("view-dag-combine1-dags", cl::Hidden,
-          cl::desc("Pop up a window to show dags before the first "
-                   "dag combine pass"));
+static cl::opt<std::string> FilterDAGBasicBlockName(
+    "filter-view-dags", cl::Hidden,
+    cl::desc("Only display the basic block whose name "
+             "matches this for all view-*-dags options"));
 static cl::opt<bool>
-ViewLegalizeTypesDAGs("view-legalize-types-dags", cl::Hidden,
-          cl::desc("Pop up a window to show dags before legalize types"));
+    ViewDAGCombine1("view-dag-combine1-dags", cl::Hidden,
+                    cl::desc("Pop up a window to show dags before the first "
+                             "dag combine pass"));
+static cl::opt<bool> ViewLegalizeTypesDAGs(
+    "view-legalize-types-dags", cl::Hidden,
+    cl::desc("Pop up a window to show dags before legalize types"));
 static cl::opt<bool>
     ViewDAGCombineLT("view-dag-combine-lt-dags", cl::Hidden,
                      cl::desc("Pop up a window to show dags before the post "
@@ -161,18 +161,18 @@ static cl::opt<bool>
     ViewLegalizeDAGs("view-legalize-dags", cl::Hidden,
                      cl::desc("Pop up a window to show dags before legalize"));
 static cl::opt<bool>
-ViewDAGCombine2("view-dag-combine2-dags", cl::Hidden,
-          cl::desc("Pop up a window to show dags before the second "
-                   "dag combine pass"));
-static cl::opt<bool>
-ViewISelDAGs("view-isel-dags", cl::Hidden,
-          cl::desc("Pop up a window to show isel dags as they are selected"));
-static cl::opt<bool>
-ViewSchedDAGs("view-sched-dags", cl::Hidden,
-          cl::desc("Pop up a window to show sched dags as they are processed"));
-static cl::opt<bool>
-ViewSUnitDAGs("view-sunit-dags", cl::Hidden,
-      cl::desc("Pop up a window to show SUnit dags after they are processed"));
+    ViewDAGCombine2("view-dag-combine2-dags", cl::Hidden,
+                    cl::desc("Pop up a window to show dags before the second "
+                             "dag combine pass"));
+static cl::opt<bool> ViewISelDAGs(
+    "view-isel-dags", cl::Hidden,
+    cl::desc("Pop up a window to show isel dags as they are selected"));
+static cl::opt<bool> ViewSchedDAGs(
+    "view-sched-dags", cl::Hidden,
+    cl::desc("Pop up a window to show sched dags as they are processed"));
+static cl::opt<bool> ViewSUnitDAGs(
+    "view-sunit-dags", cl::Hidden,
+    cl::desc("Pop up a window to show SUnit dags after they are processed"));
 #else
 static const bool ViewDAGCombine1 = false, ViewLegalizeTypesDAGs = false,
                   ViewDAGCombineLT = false, ViewLegalizeDAGs = false,
@@ -195,92 +195,90 @@ MachinePassRegistry<RegisterScheduler::FunctionPassCtor>
 //===---------------------------------------------------------------------===//
 static cl::opt<RegisterScheduler::FunctionPassCtor, false,
                RegisterPassParser<RegisterScheduler>>
-ISHeuristic("pre-RA-sched",
-            cl::init(&createDefaultScheduler), cl::Hidden,
-            cl::desc("Instruction schedulers available (before register"
-                     " allocation):"));
+    ISHeuristic("pre-RA-sched", cl::init(&createDefaultScheduler), cl::Hidden,
+                cl::desc("Instruction schedulers available (before register"
+                         " allocation):"));
 
 static RegisterScheduler
-defaultListDAGScheduler("default", "Best scheduler for the target",
-                        createDefaultScheduler);
+    defaultListDAGScheduler("default", "Best scheduler for the target",
+                            createDefaultScheduler);
 
 namespace llvm {
 
-  //===--------------------------------------------------------------------===//
-  /// This class is used by SelectionDAGISel to temporarily override
-  /// the optimization level on a per-function basis.
-  class OptLevelChanger {
-    SelectionDAGISel &IS;
-    CodeGenOpt::Level SavedOptLevel;
-    bool SavedFastISel;
-
-  public:
-    OptLevelChanger(SelectionDAGISel &ISel,
-                    CodeGenOpt::Level NewOptLevel) : IS(ISel) {
-      SavedOptLevel = IS.OptLevel;
-      SavedFastISel = IS.TM.Options.EnableFastISel;
-      if (NewOptLevel == SavedOptLevel)
-        return;
-      IS.OptLevel = NewOptLevel;
-      IS.TM.setOptLevel(NewOptLevel);
-      LLVM_DEBUG(dbgs() << "\nChanging optimization level for Function "
-                        << IS.MF->getFunction().getName() << "\n");
-      LLVM_DEBUG(dbgs() << "\tBefore: -O" << SavedOptLevel << " ; After: -O"
-                        << NewOptLevel << "\n");
-      if (NewOptLevel == CodeGenOpt::None) {
-        IS.TM.setFastISel(IS.TM.getO0WantsFastISel());
-        LLVM_DEBUG(
-            dbgs() << "\tFastISel is "
-                   << (IS.TM.Options.EnableFastISel ? "enabled" : "disabled")
-                   << "\n");
-      }
-    }
+//===--------------------------------------------------------------------===//
+/// This class is used by SelectionDAGISel to temporarily override
+/// the optimization level on a per-function basis.
+class OptLevelChanger {
+  SelectionDAGISel &IS;
+  CodeGenOpt::Level SavedOptLevel;
+  bool SavedFastISel;
 
-    ~OptLevelChanger() {
-      if (IS.OptLevel == SavedOptLevel)
-        return;
-      LLVM_DEBUG(dbgs() << "\nRestoring optimization level for Function "
-                        << IS.MF->getFunction().getName() << "\n");
-      LLVM_DEBUG(dbgs() << "\tBefore: -O" << IS.OptLevel << " ; After: -O"
-                        << SavedOptLevel << "\n");
-      IS.OptLevel = SavedOptLevel;
-      IS.TM.setOptLevel(SavedOptLevel);
-      IS.TM.setFastISel(SavedFastISel);
+public:
+  OptLevelChanger(SelectionDAGISel &ISel, CodeGenOpt::Level NewOptLevel)
+      : IS(ISel) {
+    SavedOptLevel = IS.OptLevel;
+    SavedFastISel = IS.TM.Options.EnableFastISel;
+    if (NewOptLevel == SavedOptLevel)
+      return;
+    IS.OptLevel = NewOptLevel;
+    IS.TM.setOptLevel(NewOptLevel);
+    LLVM_DEBUG(dbgs() << "\nChanging optimization level for Function "
+                      << IS.MF->getFunction().getName() << "\n");
+    LLVM_DEBUG(dbgs() << "\tBefore: -O" << SavedOptLevel << " ; After: -O"
+                      << NewOptLevel << "\n");
+    if (NewOptLevel == CodeGenOpt::None) {
+      IS.TM.setFastISel(IS.TM.getO0WantsFastISel());
+      LLVM_DEBUG(
+          dbgs() << "\tFastISel is "
+                 << (IS.TM.Options.EnableFastISel ? "enabled" : "disabled")
+                 << "\n");
     }
-  };
+  }
 
-  //===--------------------------------------------------------------------===//
-  /// createDefaultScheduler - This creates an instruction scheduler appropriate
-  /// for the target.
-  ScheduleDAGSDNodes* createDefaultScheduler(SelectionDAGISel *IS,
-                                             CodeGenOpt::Level OptLevel) {
-    const TargetLowering *TLI = IS->TLI;
-    const TargetSubtargetInfo &ST = IS->MF->getSubtarget();
-
-    // Try first to see if the Target has its own way of selecting a scheduler
-    if (auto *SchedulerCtor = ST.getDAGScheduler(OptLevel)) {
-      return SchedulerCtor(IS, OptLevel);
-    }
+  ~OptLevelChanger() {
+    if (IS.OptLevel == SavedOptLevel)
+      return;
+    LLVM_DEBUG(dbgs() << "\nRestoring optimization level for Function "
+                      << IS.MF->getFunction().getName() << "\n");
+    LLVM_DEBUG(dbgs() << "\tBefore: -O" << IS.OptLevel << " ; After: -O"
+                      << SavedOptLevel << "\n");
+    IS.OptLevel = SavedOptLevel;
+    IS.TM.setOptLevel(SavedOptLevel);
+    IS.TM.setFastISel(SavedFastISel);
+  }
+};
 
-    if (OptLevel == CodeGenOpt::None ||
-        (ST.enableMachineScheduler() && ST.enableMachineSchedDefaultSched()) ||
-        TLI->getSchedulingPreference() == Sched::Source)
-      return createSourceListDAGScheduler(IS, OptLevel);
-    if (TLI->getSchedulingPreference() == Sched::RegPressure)
-      return createBURRListDAGScheduler(IS, OptLevel);
-    if (TLI->getSchedulingPreference() == Sched::Hybrid)
-      return createHybridListDAGScheduler(IS, OptLevel);
-    if (TLI->getSchedulingPreference() == Sched::VLIW)
-      return createVLIWDAGScheduler(IS, OptLevel);
-    if (TLI->getSchedulingPreference() == Sched::Fast)
-      return createFastDAGScheduler(IS, OptLevel);
-    if (TLI->getSchedulingPreference() == Sched::Linearize)
-      return createDAGLinearizer(IS, OptLevel);
-    assert(TLI->getSchedulingPreference() == Sched::ILP &&
-           "Unknown sched type!");
-    return createILPListDAGScheduler(IS, OptLevel);
+//===--------------------------------------------------------------------===//
+/// createDefaultScheduler - This creates an instruction scheduler appropriate
+/// for the target.
+ScheduleDAGSDNodes *createDefaultScheduler(SelectionDAGISel *IS,
+                                           CodeGenOpt::Level OptLevel) {
+  const TargetLowering *TLI = IS->TLI;
+  const TargetSubtargetInfo &ST = IS->MF->getSubtarget();
+
+  // Try first to see if the Target has its own way of selecting a scheduler
+  if (auto *SchedulerCtor = ST.getDAGScheduler(OptLevel)) {
+    return SchedulerCtor(IS, OptLevel);
   }
 
+  if (OptLevel == CodeGenOpt::None ||
+      (ST.enableMachineScheduler() && ST.enableMachineSchedDefaultSched()) ||
+      TLI->getSchedulingPreference() == Sched::Source)
+    return createSourceListDAGScheduler(IS, OptLevel);
+  if (TLI->getSchedulingPreference() == Sched::RegPressure)
+    return createBURRListDAGScheduler(IS, OptLevel);
+  if (TLI->getSchedulingPreference() == Sched::Hybrid)
+    return createHybridListDAGScheduler(IS, OptLevel);
+  if (TLI->getSchedulingPreference() == Sched::VLIW)
+    return createVLIWDAGScheduler(IS, OptLevel);
+  if (TLI->getSchedulingPreference() == Sched::Fast)
+    return createFastDAGScheduler(IS, OptLevel);
+  if (TLI->getSchedulingPreference() == Sched::Linearize)
+    return createDAGLinearizer(IS, OptLevel);
+  assert(TLI->getSchedulingPreference() == Sched::ILP && "Unknown sched type!");
+  return createILPListDAGScheduler(IS, OptLevel);
+}
+
 } // end namespace llvm
 
 // EmitInstrWithCustomInserter - This method should be implemented by targets
@@ -297,8 +295,8 @@ TargetLowering::EmitInstrWithCustomInserter(MachineInstr &MI,
                                             MachineBasicBlock *MBB) const {
 #ifndef NDEBUG
   dbgs() << "If a target marks an instruction with "
-          "'usesCustomInserter', it must implement "
-          "TargetLowering::EmitInstrWithCustomInserter!\n";
+            "'usesCustomInserter', it must implement "
+            "TargetLowering::EmitInstrWithCustomInserter!\n";
 #endif
   llvm_unreachable(nullptr);
 }
@@ -414,7 +412,8 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
   LibInfo = &getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(Fn);
   GFI = Fn.hasGC() ? &getAnalysis<GCModuleInfo>().getFunctionInfo(Fn) : nullptr;
   ORE = std::make_unique<OptimizationRemarkEmitter>(&Fn);
-  AC = &getAnalysis<AssumptionCacheTracker>().getAssumptionCache(mf.getFunction());
+  AC = &getAnalysis<AssumptionCacheTracker>().getAssumptionCache(
+      mf.getFunction());
   auto *PSI = &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI();
   BlockFrequencyInfo *BFI = nullptr;
   if (PSI && PSI->hasProfileSummary() && OptLevel != CodeGenOpt::None)
@@ -526,7 +525,7 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
 
   // Insert copies in the entry block and the return blocks.
   if (FuncInfo->SplitCSR) {
-    SmallVector<MachineBasicBlock*, 4> Returns;
+    SmallVector<MachineBasicBlock *, 4> Returns;
     // Collect all the return blocks.
     for (MachineBasicBlock &MBB : mf) {
       if (!MBB.succ_empty())
@@ -599,15 +598,19 @@ bool SelectionDAGISel::runOnMachineFunction(MachineFunction &mf) {
       // user of LDI->second.
       MachineInstr *CopyUseMI = nullptr;
       for (MachineRegisterInfo::use_instr_iterator
-           UI = RegInfo->use_instr_begin(LDI->second),
-           E = RegInfo->use_instr_end(); UI != E; ) {
+               UI = RegInfo->use_instr_begin(LDI->second),
+               E = RegInfo->use_instr_end();
+           UI != E;) {
         MachineInstr *UseMI = &*(UI++);
-        if (UseMI->isDebugValue()) continue;
+        if (UseMI->isDebugValue())
+          continue;
         if (UseMI->isCopy() && !CopyUseMI && UseMI->getParent() == EntryMBB) {
-          CopyUseMI = UseMI; continue;
+          CopyUseMI = UseMI;
+          continue;
         }
         // Otherwise this is another use or second copy use.
-        CopyUseMI = nullptr; break;
+        CopyUseMI = nullptr;
+        break;
       }
       if (CopyUseMI &&
           TRI.getRegSizeInBits(LDI->second, MRI) ==
@@ -686,7 +689,8 @@ void SelectionDAGISel::SelectBasicBlock(BasicBlock::const_iterator Begin,
 
   // Lower the instructions. If a call is emitted as a tail call, cease emitting
   // nodes for this block.
-  for (BasicBlock::const_iterator I = Begin; I != End && !SDB->HasTailCall; ++I) {
+  for (BasicBlock::const_iterator I = Begin; I != End && !SDB->HasTailCall;
+       ++I) {
     if (!ElidedArgCopyInstrs.count(&*I))
       SDB->visit(*I);
   }
@@ -703,7 +707,7 @@ void SelectionDAGISel::SelectBasicBlock(BasicBlock::const_iterator Begin,
 
 void SelectionDAGISel::ComputeLiveOutVRegInfo() {
   SmallPtrSet<SDNode *, 16> Added;
-  SmallVector<SDNode*, 128> Worklist;
+  SmallVector<SDNode *, 128> Worklist;
 
   Worklist.push_back(CurDAG->getRoot().getNode());
   Added.insert(CurDAG->getRoot().getNode());
@@ -742,7 +746,8 @@ void SelectionDAGISel::CodeGenAndEmitDAG() {
   StringRef GroupName = "sdag";
   StringRef GroupDescription = "Instruction Selection and Scheduling";
   std::string BlockName;
-  bool MatchFilterBB = false; (void)MatchFilterBB;
+  bool MatchFilterBB = false;
+  (void)MatchFilterBB;
 #ifndef NDEBUG
   TargetTransformInfo &TTI =
       getAnalysis<TargetTransformInfoWrapperPass>().getTTI(*FuncInfo->Fn);
@@ -752,9 +757,9 @@ void SelectionDAGISel::CodeGenAndEmitDAG() {
   CurDAG->NewNodesMustHaveLegalTypes = false;
 
 #ifndef NDEBUG
-  MatchFilterBB = (FilterDAGBasicBlockName.empty() ||
-                   FilterDAGBasicBlockName ==
-                       FuncInfo->MBB->getBasicBlock()->getName());
+  MatchFilterBB =
+      (FilterDAGBasicBlockName.empty() ||
+       FilterDAGBasicBlockName == FuncInfo->MBB->getBasicBlock()->getName());
 #endif
 #ifdef NDEBUG
   if (ViewDAGCombine1 || ViewLegalizeTypesDAGs || ViewDAGCombineLT ||
@@ -1005,7 +1010,7 @@ class ISelUpdater : public SelectionDAG::DAGUpdateListener {
 
 public:
   ISelUpdater(SelectionDAG &DAG, SelectionDAG::allnodes_iterator &isp)
-    : SelectionDAG::DAGUpdateListener(DAG), ISelPosition(isp) {}
+      : SelectionDAG::DAGUpdateListener(DAG), ISelPosition(isp) {}
 
   /// NodeDeleted - Handle nodes deleted from the graph. If the node being
   /// deleted is the current ISelPosition node, update ISelPosition.
@@ -1095,7 +1100,7 @@ void SelectionDAGISel::DoInstructionSelection() {
     // a reference to the root node, preventing it from being deleted,
     // and tracking any changes of the root.
     HandleSDNode Dummy(CurDAG->getRoot());
-    SelectionDAG::allnodes_iterator ISelPosition (CurDAG->getRoot().getNode());
+    SelectionDAG::allnodes_iterator ISelPosition(CurDAG->getRoot().getNode());
     ++ISelPosition;
 
     // Make sure that ISelPosition gets properly updated when nodes are deleted
@@ -1167,8 +1172,8 @@ void SelectionDAGISel::DoInstructionSelection() {
           ActionVT = Node->getValueType(0);
           break;
         }
-        if (TLI->getOperationAction(Node->getOpcode(), ActionVT)
-            == TargetLowering::Expand)
+        if (TLI->getOperationAction(Node->getOpcode(), ActionVT) ==
+            TargetLowering::Expand)
           Node = CurDAG->mutateStrictFPToFP(Node);
       }
 
@@ -1267,8 +1272,7 @@ bool SelectionDAGISel::PrepareEHLandingPad() {
   MCSymbol *Label = MF->addLandingPad(MBB);
 
   const MCInstrDesc &II = TII->get(TargetOpcode::EH_LABEL);
-  BuildMI(*MBB, FuncInfo->InsertPt, SDB->getCurDebugLoc(), II)
-    .addSym(Label);
+  BuildMI(*MBB, FuncInfo->InsertPt, SDB->getCurDebugLoc(), II).addSym(Label);
 
   // If the unwinder does not preserve all registers, ensure that the
   // function marks the clobbered registers as used.
@@ -1450,7 +1454,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(const Function &Fn) {
     FastIS = TLI->createFastISel(*FuncInfo, LibInfo);
   }
 
-  ReversePostOrderTraversal<const Function*> RPOT(&Fn);
+  ReversePostOrderTraversal<const Function *> RPOT(&Fn);
 
   // Lower arguments up front. An RPO iteration always visits the entry block
   // first.
@@ -1474,8 +1478,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(const Function &Fn) {
       ++NumFastIselFailLowerArguments;
 
       OptimizationRemarkMissed R("sdagisel", "FastISelFailure",
-                                 Fn.getSubprogram(),
-                                 &Fn.getEntryBlock());
+                                 Fn.getSubprogram(), &Fn.getEntryBlock());
       R << "FastISel didn't lower all arguments: "
         << ore::NV("Prototype", Fn.getFunctionType());
       reportFastISelFailure(*MF, *ORE, R, EnableFastISelAbort > 1);
@@ -1728,8 +1731,7 @@ void SelectionDAGISel::SelectAllBasicBlocks(const Function &Fn) {
   SDB->SPDescriptor.resetPerFunctionState();
 }
 
-void
-SelectionDAGISel::FinishBasicBlock() {
+void SelectionDAGISel::FinishBasicBlock() {
   LLVM_DEBUG(dbgs() << "Total amount of phi nodes to update: "
                     << FuncInfo->PHINodesToUpdate.size() << "\n";
              for (unsigned i = 0, e = FuncInfo->PHINodesToUpdate.size(); i != e;
@@ -1756,8 +1758,7 @@ SelectionDAGISel::FinishBasicBlock() {
 
     // Add load and check to the basicblock.
     FuncInfo->MBB = ParentMBB;
-    FuncInfo->InsertPt =
-        findSplitPointForStackProtector(ParentMBB, *TII);
+    FuncInfo->InsertPt = findSplitPointForStackProtector(ParentMBB, *TII);
     SDB->visitSPDescriptorParent(SDB->SPDescriptor, ParentMBB);
     CurDAG->setRoot(SDB->getRoot());
     SDB->clear();
@@ -1779,8 +1780,7 @@ SelectionDAGISel::FinishBasicBlock() {
         findSplitPointForStackProtector(ParentMBB, *TII);
 
     // Splice the terminator of ParentMBB into SuccessMBB.
-    SuccessMBB->splice(SuccessMBB->end(), ParentMBB,
-                       SplitPoint,
+    SuccessMBB->splice(SuccessMBB->end(), ParentMBB, SplitPoint,
                        ParentMBB->end());
 
     // Add compare/jump on neq/jump to the parent BB.
@@ -1876,11 +1876,11 @@ SelectionDAGISel::FinishBasicBlock() {
         PHI.addReg(P.second).addMBB(BTB.Parent);
         if (!BTB.ContiguousRange) {
           PHI.addReg(P.second).addMBB(BTB.Cases.back().ThisBB);
-         }
+        }
       }
       // One of "cases" BB.
       for (const SwitchCG::BitTestCase &BT : BTB.Cases) {
-        MachineBasicBlock* cBB = BT.ThisBB;
+        MachineBasicBlock *cBB = BT.ThisBB;
         if (cBB->isSuccessor(PHIBB))
           PHI.addReg(P.second).addMBB(cBB);
       }
@@ -1915,8 +1915,8 @@ SelectionDAGISel::FinishBasicBlock() {
     CodeGenAndEmitDAG();
 
     // Update PHI Nodes
-    for (unsigned pi = 0, pe = FuncInfo->PHINodesToUpdate.size();
-         pi != pe; ++pi) {
+    for (unsigned pi = 0, pe = FuncInfo->PHINodesToUpdate.size(); pi != pe;
+         ++pi) {
       MachineInstrBuilder PHI(*MF, FuncInfo->PHINodesToUpdate[pi].first);
       MachineBasicBlock *PHIBB = PHI->getParent();
       assert(PHI->isPHI() &&
@@ -1924,7 +1924,7 @@ SelectionDAGISel::FinishBasicBlock() {
       // "default" BB. We can go there only from header BB.
       if (PHIBB == SDB->SL->JTCases[i].second.Default)
         PHI.addReg(FuncInfo->PHINodesToUpdate[pi].second)
-           .addMBB(SDB->SL->JTCases[i].first.HeaderBB);
+            .addMBB(SDB->SL->JTCases[i].first.HeaderBB);
       // JT BB. Just iterate over successors here
       if (FuncInfo->MBB->isSuccessor(PHIBB))
         PHI.addReg(FuncInfo->PHINodesToUpdate[pi].second).addMBB(FuncInfo->MBB);
@@ -1965,12 +1965,12 @@ SelectionDAGISel::FinishBasicBlock() {
       // FuncInfo->MBB may have been removed from the CFG if a branch was
       // constant folded.
       if (ThisBB->isSuccessor(FuncInfo->MBB)) {
-        for (MachineBasicBlock::iterator
-             MBBI = FuncInfo->MBB->begin(), MBBE = FuncInfo->MBB->end();
+        for (MachineBasicBlock::iterator MBBI = FuncInfo->MBB->begin(),
+                                         MBBE = FuncInfo->MBB->end();
              MBBI != MBBE && MBBI->isPHI(); ++MBBI) {
           MachineInstrBuilder PHI(*MF, MBBI);
           // This value for this PHI node is recorded in PHINodesToUpdate.
-          for (unsigned pn = 0; ; ++pn) {
+          for (unsigned pn = 0;; ++pn) {
             assert(pn != FuncInfo->PHINodesToUpdate.size() &&
                    "Didn't find PHI entry!");
             if (FuncInfo->PHINodesToUpdate[pn].first == PHI) {
@@ -2072,15 +2072,16 @@ void SelectionDAGISel::SelectInlineAsmMemoryOperands(std::vector<SDValue> &Ops,
   Ops.push_back(InOps[InlineAsm::Op_ExtraInfo]);  // 3 (SideEffect, AlignStack)
 
   unsigned i = InlineAsm::Op_FirstOperand, e = InOps.size();
-  if (InOps[e-1].getValueType() == MVT::Glue)
-    --e;  // Don't process a glue operand if it is here.
+  if (InOps[e - 1].getValueType() == MVT::Glue)
+    --e; // Don't process a glue operand if it is here.
 
   while (i != e) {
     unsigned Flags = cast<ConstantSDNode>(InOps[i])->getZExtValue();
     if (!InlineAsm::isMemKind(Flags) && !InlineAsm::isFuncKind(Flags)) {
       // Just skip over this operand, copying the operands verbatim.
-      Ops.insert(Ops.end(), InOps.begin()+i,
-                 InOps.begin()+i+InlineAsm::getNumOperandRegisters(Flags) + 1);
+      Ops.insert(Ops.end(), InOps.begin() + i,
+                 InOps.begin() + i + InlineAsm::getNumOperandRegisters(Flags) +
+                     1);
       i += InlineAsm::getNumOperandRegisters(Flags) + 1;
     } else {
       assert(InlineAsm::getNumOperandRegisters(Flags) == 1 &&
@@ -2092,7 +2093,7 @@ void SelectionDAGISel::SelectInlineAsmMemoryOperands(std::vector<SDValue> &Ops,
         unsigned CurOp = InlineAsm::Op_FirstOperand;
         Flags = cast<ConstantSDNode>(InOps[CurOp])->getZExtValue();
         for (; TiedToOperand; --TiedToOperand) {
-          CurOp += InlineAsm::getNumOperandRegisters(Flags)+1;
+          CurOp += InlineAsm::getNumOperandRegisters(Flags) + 1;
           Flags = cast<ConstantSDNode>(InOps[CurOp])->getZExtValue();
         }
       }
@@ -2100,7 +2101,7 @@ void SelectionDAGISel::SelectInlineAsmMemoryOperands(std::vector<SDValue> &Ops,
       // Otherwise, this is a memory operand.  Ask the target to select it.
       std::vector<SDValue> SelOps;
       unsigned ConstraintID = InlineAsm::getMemoryConstraintID(Flags);
-      if (SelectInlineAsmMemoryOperand(InOps[i+1], ConstraintID, SelOps))
+      if (SelectInlineAsmMemoryOperand(InOps[i + 1], ConstraintID, SelOps))
         report_fatal_error("Could not match memory address.  Inline asm"
                            " failure!");
 
@@ -2125,7 +2126,7 @@ void SelectionDAGISel::SelectInlineAsmMemoryOperands(std::vector<SDValue> &Ops,
 /// SDNode.
 ///
 static SDNode *findGlueUse(SDNode *N) {
-  unsigned FlagResNo = N->getNumValues()-1;
+  unsigned FlagResNo = N->getNumValues() - 1;
   for (SDNode::use_iterator I = N->use_begin(), E = N->use_end(); I != E; ++I) {
     SDUse &Use = I.getUse();
     if (Use.getResNo() == FlagResNo)
@@ -2178,7 +2179,8 @@ static bool findNonImmUse(SDNode *Root, SDNode *Def, SDNode *ImmedUse,
 /// operand node N of U during instruction selection that starts at Root.
 bool SelectionDAGISel::IsProfitableToFold(SDValue N, SDNode *U,
                                           SDNode *Root) const {
-  if (OptLevel == CodeGenOpt::None) return false;
+  if (OptLevel == CodeGenOpt::None)
+    return false;
   return N.hasOneUse();
 }
 
@@ -2187,7 +2189,8 @@ bool SelectionDAGISel::IsProfitableToFold(SDValue N, SDNode *U,
 bool SelectionDAGISel::IsLegalToFold(SDValue N, SDNode *U, SDNode *Root,
                                      CodeGenOpt::Level OptLevel,
                                      bool IgnoreChains) {
-  if (OptLevel == CodeGenOpt::None) return false;
+  if (OptLevel == CodeGenOpt::None)
+    return false;
 
   // If Root use can somehow reach N through a path that doesn't contain
   // U then folding N would create a cycle. e.g. In the following
@@ -2233,13 +2236,13 @@ bool SelectionDAGISel::IsLegalToFold(SDValue N, SDNode *U, SDNode *Root,
 
   // If the node has glue, walk down the graph to the "lowest" node in the
   // glueged set.
-  EVT VT = Root->getValueType(Root->getNumValues()-1);
+  EVT VT = Root->getValueType(Root->getNumValues() - 1);
   while (VT == MVT::Glue) {
     SDNode *GU = findGlueUse(Root);
     if (!GU)
       break;
     Root = GU;
-    VT = Root->getValueType(Root->getNumValues()-1);
+    VT = Root->getValueType(Root->getNumValues() - 1);
 
     // If our query node has a glue result with a use, we've walked up it.  If
     // the user (which has already been selected) has a chain or indirectly uses
@@ -2271,11 +2274,10 @@ void SelectionDAGISel::Select_READ_REGISTER(SDNode *Op) {
 
   EVT VT = Op->getValueType(0);
   LLT Ty = VT.isSimple() ? getLLTForMVT(VT.getSimpleVT()) : LLT();
-  Register Reg =
-      TLI->getRegisterByName(RegStr->getString().data(), Ty,
-                             CurDAG->getMachineFunction());
-  SDValue New = CurDAG->getCopyFromReg(
-                        Op->getOperand(0), dl, Reg, Op->getValueType(0));
+  Register Reg = TLI->getRegisterByName(RegStr->getString().data(), Ty,
+                                        CurDAG->getMachineFunction());
+  SDValue New =
+      CurDAG->getCopyFromReg(Op->getOperand(0), dl, Reg, Op->getValueType(0));
   New->setNodeId(-1);
   ReplaceUses(Op, New.getNode());
   CurDAG->RemoveDeadNode(Op);
@@ -2291,8 +2293,8 @@ void SelectionDAGISel::Select_WRITE_REGISTER(SDNode *Op) {
 
   Register Reg = TLI->getRegisterByName(RegStr->getString().data(), Ty,
                                         CurDAG->getMachineFunction());
-  SDValue New = CurDAG->getCopyToReg(
-                        Op->getOperand(0), dl, Reg, Op->getOperand(2));
+  SDValue New =
+      CurDAG->getCopyToReg(Op->getOperand(0), dl, Reg, Op->getOperand(2));
   New->setNodeId(-1);
   ReplaceUses(Op, New.getNode());
   CurDAG->RemoveDeadNode(Op);
@@ -2424,13 +2426,13 @@ void SelectionDAGISel::Select_PATCHPOINT(SDNode *N) {
 LLVM_ATTRIBUTE_ALWAYS_INLINE static uint64_t
 GetVBR(uint64_t Val, const unsigned char *MatcherTable, unsigned &Idx) {
   assert(Val >= 128 && "Not a VBR");
-  Val &= 127;  // Remove first vbr bit.
+  Val &= 127; // Remove first vbr bit.
 
   unsigned Shift = 7;
   uint64_t NextBits;
   do {
     NextBits = MatcherTable[Idx++];
-    Val |= (NextBits&127) << Shift;
+    Val |= (NextBits & 127) << Shift;
     Shift += 7;
   } while (NextBits & 128);
 
@@ -2449,7 +2451,7 @@ void SelectionDAGISel::Select_JUMP_TABLE_DEBUG_INFO(SDNode *N) {
 void SelectionDAGISel::UpdateChains(
     SDNode *NodeToMatch, SDValue InputChain,
     SmallVectorImpl<SDNode *> &ChainNodesMatched, bool isMorphNodeTo) {
-  SmallVector<SDNode*, 4> NowDeadNodes;
+  SmallVector<SDNode *, 4> NowDeadNodes;
 
   // Now that all the normal results are replaced, we replace the chain and
   // glue results if present.
@@ -2473,9 +2475,9 @@ void SelectionDAGISel::UpdateChains(
       if (ChainNode == NodeToMatch && isMorphNodeTo)
         continue;
 
-      SDValue ChainVal = SDValue(ChainNode, ChainNode->getNumValues()-1);
+      SDValue ChainVal = SDValue(ChainNode, ChainNode->getNumValues() - 1);
       if (ChainVal.getValueType() == MVT::Glue)
-        ChainVal = ChainVal.getValue(ChainVal->getNumValues()-2);
+        ChainVal = ChainVal.getValue(ChainVal->getNumValues() - 2);
       assert(ChainVal.getValueType() == MVT::Other && "Not a chain?");
       SelectionDAG::DAGNodeDeletedListener NDL(
           *CurDAG, [&](SDNode *N, SDNode *E) {
@@ -2505,7 +2507,7 @@ void SelectionDAGISel::UpdateChains(
 /// induce cycles in the DAG) and if so, creating a TokenFactor node. that will
 /// be used as the input node chain for the generated nodes.
 static SDValue
-HandleMergeInputChains(SmallVectorImpl<SDNode*> &ChainNodesMatched,
+HandleMergeInputChains(SmallVectorImpl<SDNode *> &ChainNodesMatched,
                        SelectionDAG *CurDAG) {
 
   SmallPtrSet<const SDNode *, 16> Visited;
@@ -2564,9 +2566,9 @@ HandleMergeInputChains(SmallVectorImpl<SDNode*> &ChainNodesMatched,
 }
 
 /// MorphNode - Handle morphing a node in place for the selector.
-SDNode *SelectionDAGISel::
-MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
-          ArrayRef<SDValue> Ops, unsigned EmitNodeInfo) {
+SDNode *SelectionDAGISel::MorphNode(SDNode *Node, unsigned TargetOpc,
+                                    SDVTList VTList, ArrayRef<SDValue> Ops,
+                                    unsigned EmitNodeInfo) {
   // It is possible we're using MorphNodeTo to replace a node with no
   // normal results with one that has a normal result (or we could be
   // adding a chain) and the input could have glue and chains as well.
@@ -2576,13 +2578,13 @@ MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
   int OldGlueResultNo = -1, OldChainResultNo = -1;
 
   unsigned NTMNumResults = Node->getNumValues();
-  if (Node->getValueType(NTMNumResults-1) == MVT::Glue) {
-    OldGlueResultNo = NTMNumResults-1;
+  if (Node->getValueType(NTMNumResults - 1) == MVT::Glue) {
+    OldGlueResultNo = NTMNumResults - 1;
     if (NTMNumResults != 1 &&
-        Node->getValueType(NTMNumResults-2) == MVT::Other)
-      OldChainResultNo = NTMNumResults-2;
-  } else if (Node->getValueType(NTMNumResults-1) == MVT::Other)
-    OldChainResultNo = NTMNumResults-1;
+        Node->getValueType(NTMNumResults - 2) == MVT::Other)
+      OldChainResultNo = NTMNumResults - 2;
+  } else if (Node->getValueType(NTMNumResults - 1) == MVT::Other)
+    OldChainResultNo = NTMNumResults - 1;
 
   // Call the underlying SelectionDAG routine to do the transmogrification. Note
   // that this deletes operands of the old node that become dead.
@@ -2600,7 +2602,7 @@ MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
   unsigned ResNumResults = Res->getNumValues();
   // Move the glue if needed.
   if ((EmitNodeInfo & OPFL_GlueOutput) && OldGlueResultNo != -1 &&
-      (unsigned)OldGlueResultNo != ResNumResults-1)
+      (unsigned)OldGlueResultNo != ResNumResults - 1)
     ReplaceUses(SDValue(Node, OldGlueResultNo),
                 SDValue(Res, ResNumResults - 1));
 
@@ -2609,7 +2611,7 @@ MorphNode(SDNode *Node, unsigned TargetOpc, SDVTList VTList,
 
   // Move the chain reference if needed.
   if ((EmitNodeInfo & OPFL_Chain) && OldChainResultNo != -1 &&
-      (unsigned)OldChainResultNo != ResNumResults-1)
+      (unsigned)OldChainResultNo != ResNumResults - 1)
     ReplaceUses(SDValue(Node, OldChainResultNo),
                 SDValue(Res, ResNumResults - 1));
 
@@ -2640,7 +2642,7 @@ LLVM_ATTRIBUTE_ALWAYS_INLINE static bool CheckChildSame(
     const SmallVectorImpl<std::pair<SDValue, SDNode *>> &RecordedNodes,
     unsigned ChildNo) {
   if (ChildNo >= N.getNumOperands())
-    return false;  // Match fails if out of range child #.
+    return false; // Match fails if out of range child #.
   return ::CheckSame(MatcherTable, MatcherIndex, N.getOperand(ChildNo),
                      RecordedNodes);
 }
@@ -2674,7 +2676,8 @@ LLVM_ATTRIBUTE_ALWAYS_INLINE static bool
 CheckType(const unsigned char *MatcherTable, unsigned &MatcherIndex, SDValue N,
           const TargetLowering *TLI, const DataLayout &DL) {
   MVT::SimpleValueType VT = (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
-  if (N.getValueType() == VT) return true;
+  if (N.getValueType() == VT)
+    return true;
 
   // Handle the case when VT is iPTR.
   return VT == MVT::iPTR && N.getValueType() == TLI->getPointerTy(DL);
@@ -2685,7 +2688,7 @@ CheckChildType(const unsigned char *MatcherTable, unsigned &MatcherIndex,
                SDValue N, const TargetLowering *TLI, const DataLayout &DL,
                unsigned ChildNo) {
   if (ChildNo >= N.getNumOperands())
-    return false;  // Match fails if out of range child #.
+    return false; // Match fails if out of range child #.
   return ::CheckType(MatcherTable, MatcherIndex, N.getOperand(ChildNo), TLI,
                      DL);
 }
@@ -2694,7 +2697,7 @@ LLVM_ATTRIBUTE_ALWAYS_INLINE static bool
 CheckCondCode(const unsigned char *MatcherTable, unsigned &MatcherIndex,
               SDValue N) {
   return cast<CondCodeSDNode>(N)->get() ==
-      (ISD::CondCode)MatcherTable[MatcherIndex++];
+         (ISD::CondCode)MatcherTable[MatcherIndex++];
 }
 
 LLVM_ATTRIBUTE_ALWAYS_INLINE static bool
@@ -2744,7 +2747,7 @@ LLVM_ATTRIBUTE_ALWAYS_INLINE static bool
 CheckChildInteger(const unsigned char *MatcherTable, unsigned &MatcherIndex,
                   SDValue N, unsigned ChildNo) {
   if (ChildNo >= N.getNumOperands())
-    return false;  // Match fails if out of range child #.
+    return false; // Match fails if out of range child #.
   return ::CheckInteger(MatcherTable, MatcherIndex, N.getOperand(ChildNo));
 }
 
@@ -2755,7 +2758,8 @@ CheckAndImm(const unsigned char *MatcherTable, unsigned &MatcherIndex,
   if (Val & 128)
     Val = GetVBR(Val, MatcherTable, MatcherIndex);
 
-  if (N->getOpcode() != ISD::AND) return false;
+  if (N->getOpcode() != ISD::AND)
+    return false;
 
   ConstantSDNode *C = dyn_cast<ConstantSDNode>(N->getOperand(1));
   return C && SDISel.CheckAndMask(N.getOperand(0), C, Val);
@@ -2768,7 +2772,8 @@ CheckOrImm(const unsigned char *MatcherTable, unsigned &MatcherIndex, SDValue N,
   if (Val & 128)
     Val = GetVBR(Val, MatcherTable, MatcherIndex);
 
-  if (N->getOpcode() != ISD::OR) return false;
+  if (N->getOpcode() != ISD::OR)
+    return false;
 
   ConstantSDNode *C = dyn_cast<ConstantSDNode>(N->getOperand(1));
   return C && SDISel.CheckOrMask(N.getOperand(0), C, Val);
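Several of the hunks above reformat callers of `GetVBR`, which decodes the matcher table's variable-width integers: bit 7 of each byte flags a continuation, and the low 7 bits carry the payload, least-significant group first. A rough sketch of that decoding, for orientation only (this is not LLVM's implementation; the function names and table layout are assumptions inferred from the call pattern `if (Val & 128) Val = GetVBR(...)` above):

```python
def get_vbr(first_byte, table, index):
    """Decode a VBR integer whose first byte has already been read.

    Bit 7 of each byte means "more bytes follow"; the low 7 bits of
    each byte carry the value, least-significant 7-bit group first.
    """
    val = first_byte & 0x7F  # strip the continuation bit of the first byte
    shift = 7
    while True:
        b = table[index]
        index += 1
        val |= (b & 0x7F) << shift
        shift += 7
        if not (b & 0x80):
            break
    return val, index

def read_int(table, index):
    # Caller pattern mirroring the C++ above: only fall into get_vbr
    # when the first byte has its continuation bit set.
    val = table[index]
    index += 1
    if val & 128:
        val, index = get_vbr(val, table, index)
    return val, index
```

For example, 300 encodes as the two bytes `[172, 2]` (44 plus the continuation bit, then 2), and `read_int` recovers 300 while advancing the index past both bytes.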
@@ -2780,15 +2785,14 @@ CheckOrImm(const unsigned char *MatcherTable, unsigned &MatcherIndex, SDValue N,
 /// known to pass, set Result=false and return the MatcherIndex to continue
 /// with.  If the current predicate is unknown, set Result=false and return the
 /// MatcherIndex to continue with.
-static unsigned IsPredicateKnownToFail(const unsigned char *Table,
-                                       unsigned Index, SDValue N,
-                                       bool &Result,
-                                       const SelectionDAGISel &SDISel,
-                  SmallVectorImpl<std::pair<SDValue, SDNode*>> &RecordedNodes) {
+static unsigned IsPredicateKnownToFail(
+    const unsigned char *Table, unsigned Index, SDValue N, bool &Result,
+    const SelectionDAGISel &SDISel,
+    SmallVectorImpl<std::pair<SDValue, SDNode *>> &RecordedNodes) {
   switch (Table[Index++]) {
   default:
     Result = false;
-    return Index-1;  // Could not evaluate this predicate.
+    return Index - 1; // Could not evaluate this predicate.
   case SelectionDAGISel::OPC_CheckSame:
     Result = !::CheckSame(Table, Index, N, RecordedNodes);
     return Index;
@@ -2797,7 +2801,8 @@ static unsigned IsPredicateKnownToFail(const unsigned char *Table,
   case SelectionDAGISel::OPC_CheckChild2Same:
   case SelectionDAGISel::OPC_CheckChild3Same:
     Result = !::CheckChildSame(Table, Index, N, RecordedNodes,
-                        Table[Index-1] - SelectionDAGISel::OPC_CheckChild0Same);
+                               Table[Index - 1] -
+                                   SelectionDAGISel::OPC_CheckChild0Same);
     return Index;
   case SelectionDAGISel::OPC_CheckPatternPredicate:
   case SelectionDAGISel::OPC_CheckPatternPredicate2:
@@ -2830,8 +2835,8 @@ static unsigned IsPredicateKnownToFail(const unsigned char *Table,
   case SelectionDAGISel::OPC_CheckChild6Type:
   case SelectionDAGISel::OPC_CheckChild7Type:
     Result = !::CheckChildType(
-                 Table, Index, N, SDISel.TLI, SDISel.CurDAG->getDataLayout(),
-                 Table[Index - 1] - SelectionDAGISel::OPC_CheckChild0Type);
+        Table, Index, N, SDISel.TLI, SDISel.CurDAG->getDataLayout(),
+        Table[Index - 1] - SelectionDAGISel::OPC_CheckChild0Type);
     return Index;
   case SelectionDAGISel::OPC_CheckCondCode:
     Result = !::CheckCondCode(Table, Index, N);
@@ -2852,7 +2857,8 @@ static unsigned IsPredicateKnownToFail(const unsigned char *Table,
   case SelectionDAGISel::OPC_CheckChild3Integer:
   case SelectionDAGISel::OPC_CheckChild4Integer:
     Result = !::CheckChildInteger(Table, Index, N,
-                     Table[Index-1] - SelectionDAGISel::OPC_CheckChild0Integer);
+                                  Table[Index - 1] -
+                                      SelectionDAGISel::OPC_CheckChild0Integer);
     return Index;
   case SelectionDAGISel::OPC_CheckAndImm:
     Result = !::CheckAndImm(Table, Index, N, SDISel);
@@ -2889,8 +2895,7 @@ struct MatchScope {
 /// (i.e. RecordedNodes and MatchScope) uptodate if the target is allowed to
 /// change the DAG while matching.  X86 addressing mode matcher is an example
 /// for this.
-class MatchStateUpdater : public SelectionDAG::DAGUpdateListener
-{
+class MatchStateUpdater : public SelectionDAG::DAGUpdateListener {
   SDNode **NodeToMatch;
   SmallVectorImpl<std::pair<SDValue, SDNode *>> &RecordedNodes;
   SmallVectorImpl<MatchScope> &MatchScopes;
@@ -2936,7 +2941,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
   switch (NodeToMatch->getOpcode()) {
   default:
     break;
-  case ISD::EntryToken:       // These nodes remain the same.
+  case ISD::EntryToken: // These nodes remain the same.
   case ISD::BasicBlock:
   case ISD::Register:
   case ISD::RegisterMask:
@@ -3015,11 +3020,11 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
   // RecordedNodes - This is the set of nodes that have been recorded by the
   // state machine.  The second value is the parent of the node, or null if the
   // root is recorded.
-  SmallVector<std::pair<SDValue, SDNode*>, 8> RecordedNodes;
+  SmallVector<std::pair<SDValue, SDNode *>, 8> RecordedNodes;
 
   // MatchedMemRefs - This is the set of MemRef's we've seen in the input
   // pattern.
-  SmallVector<MachineMemOperand*, 2> MatchedMemRefs;
+  SmallVector<MachineMemOperand *, 2> MatchedMemRefs;
 
   // These are the current input chain and glue for use when generating nodes.
   // Various Emit operations change these.  For example, emitting a copytoreg
@@ -3030,7 +3035,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
   // chains, the OPC_EmitMergeInputChains operation is emitted which indicates
   // which ones they are.  The result is captured into this list so that we can
   // update the chain results when the pattern is complete.
-  SmallVector<SDNode*, 3> ChainNodesMatched;
+  SmallVector<SDNode *, 3> ChainNodesMatched;
 
   LLVM_DEBUG(dbgs() << "ISEL: Starting pattern match\n");
 
@@ -3056,13 +3061,14 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       unsigned CaseSize = MatcherTable[Idx++];
       if (CaseSize & 128)
         CaseSize = GetVBR(CaseSize, MatcherTable, Idx);
-      if (CaseSize == 0) break;
+      if (CaseSize == 0)
+        break;
 
       // Get the opcode, add the index to the table.
       uint16_t Opc = MatcherTable[Idx++];
       Opc |= (unsigned short)MatcherTable[Idx++] << 8;
       if (Opc >= OpcodeOffset.size())
-        OpcodeOffset.resize((Opc+1)*2);
+        OpcodeOffset.resize((Opc + 1) * 2);
       OpcodeOffset[Opc] = Idx;
       Idx += CaseSize;
     }
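The loop above also shows how opcodes are laid out in the matcher table: each opcode occupies two bytes, low byte first, and `OpcodeOffset` is then indexed by the decoded 16-bit value. A minimal sketch of that little-endian read (illustrative; the helper name is made up):

```python
def read_opc16(table, idx):
    # Opcodes are stored as two bytes in the table, least-significant
    # byte first, exactly like the `Opc |= ... << 8` pattern above.
    opc = table[idx] | (table[idx + 1] << 8)
    return opc, idx + 2
```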
@@ -3097,7 +3103,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
           break;
         }
 
-        FailIndex = MatcherIndex+NumToSkip;
+        FailIndex = MatcherIndex + NumToSkip;
 
         unsigned MatcherIndexOfPredicate = MatcherIndex;
         (void)MatcherIndexOfPredicate; // silence warning.
@@ -3123,7 +3129,8 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       }
 
       // If the whole scope failed to match, bail.
-      if (FailIndex == 0) break;
+      if (FailIndex == 0)
+        break;
 
       // Push a MatchScope which indicates where to go if the first child fails
       // to match.
@@ -3142,21 +3149,25 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       // Remember this node, it may end up being an operand in the pattern.
       SDNode *Parent = nullptr;
       if (NodeStack.size() > 1)
-        Parent = NodeStack[NodeStack.size()-2].getNode();
+        Parent = NodeStack[NodeStack.size() - 2].getNode();
       RecordedNodes.push_back(std::make_pair(N, Parent));
       continue;
     }
 
-    case OPC_RecordChild0: case OPC_RecordChild1:
-    case OPC_RecordChild2: case OPC_RecordChild3:
-    case OPC_RecordChild4: case OPC_RecordChild5:
-    case OPC_RecordChild6: case OPC_RecordChild7: {
-      unsigned ChildNo = Opcode-OPC_RecordChild0;
+    case OPC_RecordChild0:
+    case OPC_RecordChild1:
+    case OPC_RecordChild2:
+    case OPC_RecordChild3:
+    case OPC_RecordChild4:
+    case OPC_RecordChild5:
+    case OPC_RecordChild6:
+    case OPC_RecordChild7: {
+      unsigned ChildNo = Opcode - OPC_RecordChild0;
       if (ChildNo >= N.getNumOperands())
-        break;  // Match fails if out of range child #.
+        break; // Match fails if out of range child #.
 
-      RecordedNodes.push_back(std::make_pair(N->getOperand(ChildNo),
-                                             N.getNode()));
+      RecordedNodes.push_back(
+          std::make_pair(N->getOperand(ChildNo), N.getNode()));
       continue;
     }
     case OPC_RecordMemRef:
@@ -3172,26 +3183,30 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
     case OPC_CaptureGlueInput:
       // If the current node has an input glue, capture it in InputGlue.
       if (N->getNumOperands() != 0 &&
-          N->getOperand(N->getNumOperands()-1).getValueType() == MVT::Glue)
-        InputGlue = N->getOperand(N->getNumOperands()-1);
+          N->getOperand(N->getNumOperands() - 1).getValueType() == MVT::Glue)
+        InputGlue = N->getOperand(N->getNumOperands() - 1);
       continue;
 
     case OPC_MoveChild: {
       unsigned ChildNo = MatcherTable[MatcherIndex++];
       if (ChildNo >= N.getNumOperands())
-        break;  // Match fails if out of range child #.
+        break; // Match fails if out of range child #.
       N = N.getOperand(ChildNo);
       NodeStack.push_back(N);
       continue;
     }
 
-    case OPC_MoveChild0: case OPC_MoveChild1:
-    case OPC_MoveChild2: case OPC_MoveChild3:
-    case OPC_MoveChild4: case OPC_MoveChild5:
-    case OPC_MoveChild6: case OPC_MoveChild7: {
-      unsigned ChildNo = Opcode-OPC_MoveChild0;
+    case OPC_MoveChild0:
+    case OPC_MoveChild1:
+    case OPC_MoveChild2:
+    case OPC_MoveChild3:
+    case OPC_MoveChild4:
+    case OPC_MoveChild5:
+    case OPC_MoveChild6:
+    case OPC_MoveChild7: {
+      unsigned ChildNo = Opcode - OPC_MoveChild0;
       if (ChildNo >= N.getNumOperands())
-        break;  // Match fails if out of range child #.
+        break; // Match fails if out of range child #.
       N = N.getOperand(ChildNo);
       NodeStack.push_back(N);
       continue;
@@ -3205,13 +3220,16 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       continue;
 
     case OPC_CheckSame:
-      if (!::CheckSame(MatcherTable, MatcherIndex, N, RecordedNodes)) break;
+      if (!::CheckSame(MatcherTable, MatcherIndex, N, RecordedNodes))
+        break;
       continue;
 
-    case OPC_CheckChild0Same: case OPC_CheckChild1Same:
-    case OPC_CheckChild2Same: case OPC_CheckChild3Same:
+    case OPC_CheckChild0Same:
+    case OPC_CheckChild1Same:
+    case OPC_CheckChild2Same:
+    case OPC_CheckChild3Same:
       if (!::CheckChildSame(MatcherTable, MatcherIndex, N, RecordedNodes,
-                            Opcode-OPC_CheckChild0Same))
+                            Opcode - OPC_CheckChild0Same))
         break;
       continue;
 
@@ -3222,8 +3240,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
         break;
       continue;
     case OPC_CheckPredicate:
-      if (!::CheckNodePredicate(MatcherTable, MatcherIndex, *this,
-                                N.getNode()))
+      if (!::CheckNodePredicate(MatcherTable, MatcherIndex, *this, N.getNode()))
         break;
       continue;
     case OPC_CheckPredicateWithOperands: {
@@ -3257,7 +3274,8 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       continue;
     }
     case OPC_CheckOpcode:
-      if (!::CheckOpcode(MatcherTable, MatcherIndex, N.getNode())) break;
+      if (!::CheckOpcode(MatcherTable, MatcherIndex, N.getNode()))
+        break;
       continue;
 
     case OPC_CheckType:
@@ -3276,14 +3294,16 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
 
     case OPC_SwitchOpcode: {
       unsigned CurNodeOpcode = N.getOpcode();
-      unsigned SwitchStart = MatcherIndex-1; (void)SwitchStart;
+      unsigned SwitchStart = MatcherIndex - 1;
+      (void)SwitchStart;
       unsigned CaseSize;
       while (true) {
         // Get the size of this case.
         CaseSize = MatcherTable[MatcherIndex++];
         if (CaseSize & 128)
           CaseSize = GetVBR(CaseSize, MatcherTable, MatcherIndex);
-        if (CaseSize == 0) break;
+        if (CaseSize == 0)
+          break;
 
         uint16_t Opc = MatcherTable[MatcherIndex++];
         Opc |= (unsigned short)MatcherTable[MatcherIndex++] << 8;
@@ -3297,7 +3317,8 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       }
 
       // If no cases matched, bail out.
-      if (CaseSize == 0) break;
+      if (CaseSize == 0)
+        break;
 
       // Otherwise, execute the case we found.
       LLVM_DEBUG(dbgs() << "  OpcodeSwitch from " << SwitchStart << " to "
@@ -3307,14 +3328,16 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
 
     case OPC_SwitchType: {
       MVT CurNodeVT = N.getSimpleValueType();
-      unsigned SwitchStart = MatcherIndex-1; (void)SwitchStart;
+      unsigned SwitchStart = MatcherIndex - 1;
+      (void)SwitchStart;
       unsigned CaseSize;
       while (true) {
         // Get the size of this case.
         CaseSize = MatcherTable[MatcherIndex++];
         if (CaseSize & 128)
           CaseSize = GetVBR(CaseSize, MatcherTable, MatcherIndex);
-        if (CaseSize == 0) break;
+        if (CaseSize == 0)
+          break;
 
         MVT CaseVT = (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
         if (CaseVT == MVT::iPTR)
@@ -3329,28 +3352,34 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       }
 
       // If no cases matched, bail out.
-      if (CaseSize == 0) break;
+      if (CaseSize == 0)
+        break;
 
       // Otherwise, execute the case we found.
-      LLVM_DEBUG(dbgs() << "  TypeSwitch[" << CurNodeVT
-                        << "] from " << SwitchStart << " to " << MatcherIndex
-                        << '\n');
+      LLVM_DEBUG(dbgs() << "  TypeSwitch[" << CurNodeVT << "] from "
+                        << SwitchStart << " to " << MatcherIndex << '\n');
       continue;
     }
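Both `OPC_SwitchOpcode` and `OPC_SwitchType` walk the same case layout as the reformatted loops above: each case starts with a (possibly VBR-encoded) size, followed by the key bytes and the case body, and a zero size terminates the list. A simplified scan, assuming single-byte sizes and a two-byte opcode key, and assuming the size counts only the body bytes after the key (a sketch of my reading of the code, not the LLVM implementation):

```python
def find_case(table, idx, node_opc):
    # Each case: [size][key_lo][key_hi][body ...]; size counts the body
    # bytes after the key. A zero size byte ends the case list.
    while True:
        size = table[idx]
        idx += 1
        if size == 0:
            return None  # no case matched; the matcher bails out
        key = table[idx] | (table[idx + 1] << 8)
        idx += 2
        if key == node_opc:
            return idx  # index of the matching case's body
        idx += size  # skip this case's body and try the next one
```

This skip-by-size scheme is what lets the matcher jump over entire unmatched cases without interpreting their contents.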
-    case OPC_CheckChild0Type: case OPC_CheckChild1Type:
-    case OPC_CheckChild2Type: case OPC_CheckChild3Type:
-    case OPC_CheckChild4Type: case OPC_CheckChild5Type:
-    case OPC_CheckChild6Type: case OPC_CheckChild7Type:
+    case OPC_CheckChild0Type:
+    case OPC_CheckChild1Type:
+    case OPC_CheckChild2Type:
+    case OPC_CheckChild3Type:
+    case OPC_CheckChild4Type:
+    case OPC_CheckChild5Type:
+    case OPC_CheckChild6Type:
+    case OPC_CheckChild7Type:
       if (!::CheckChildType(MatcherTable, MatcherIndex, N, TLI,
                             CurDAG->getDataLayout(),
                             Opcode - OPC_CheckChild0Type))
         break;
       continue;
     case OPC_CheckCondCode:
-      if (!::CheckCondCode(MatcherTable, MatcherIndex, N)) break;
+      if (!::CheckCondCode(MatcherTable, MatcherIndex, N))
+        break;
       continue;
     case OPC_CheckChild2CondCode:
-      if (!::CheckChild2CondCode(MatcherTable, MatcherIndex, N)) break;
+      if (!::CheckChild2CondCode(MatcherTable, MatcherIndex, N))
+        break;
       continue;
     case OPC_CheckValueType:
       if (!::CheckValueType(MatcherTable, MatcherIndex, N, TLI,
@@ -3358,19 +3387,25 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
         break;
       continue;
     case OPC_CheckInteger:
-      if (!::CheckInteger(MatcherTable, MatcherIndex, N)) break;
+      if (!::CheckInteger(MatcherTable, MatcherIndex, N))
+        break;
       continue;
-    case OPC_CheckChild0Integer: case OPC_CheckChild1Integer:
-    case OPC_CheckChild2Integer: case OPC_CheckChild3Integer:
+    case OPC_CheckChild0Integer:
+    case OPC_CheckChild1Integer:
+    case OPC_CheckChild2Integer:
+    case OPC_CheckChild3Integer:
     case OPC_CheckChild4Integer:
       if (!::CheckChildInteger(MatcherTable, MatcherIndex, N,
-                               Opcode-OPC_CheckChild0Integer)) break;
+                               Opcode - OPC_CheckChild0Integer))
+        break;
       continue;
     case OPC_CheckAndImm:
-      if (!::CheckAndImm(MatcherTable, MatcherIndex, N, *this)) break;
+      if (!::CheckAndImm(MatcherTable, MatcherIndex, N, *this))
+        break;
       continue;
     case OPC_CheckOrImm:
-      if (!::CheckOrImm(MatcherTable, MatcherIndex, N, *this)) break;
+      if (!::CheckOrImm(MatcherTable, MatcherIndex, N, *this))
+        break;
       continue;
     case OPC_CheckImmAllOnesV:
       if (!ISD::isConstantSplatVectorAllOnes(N.getNode()))
@@ -3386,7 +3421,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       // Verify that all intermediate nodes between the root and this one have
       // a single use (ignoring chains, which are handled in UpdateChains).
       bool HasMultipleUses = false;
-      for (unsigned i = 1, e = NodeStack.size()-1; i != e; ++i) {
+      for (unsigned i = 1, e = NodeStack.size() - 1; i != e; ++i) {
         unsigned NNonChainUses = 0;
         SDNode *NS = NodeStack[i].getNode();
         for (auto UI = NS->use_begin(), UE = NS->use_end(); UI != UE; ++UI)
@@ -3395,17 +3430,19 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
               HasMultipleUses = true;
               break;
             }
-        if (HasMultipleUses) break;
+        if (HasMultipleUses)
+          break;
       }
-      if (HasMultipleUses) break;
+      if (HasMultipleUses)
+        break;
 
       // Check to see that the target thinks this is profitable to fold and that
       // we can fold it without inducing cycles in the graph.
-      if (!IsProfitableToFold(N, NodeStack[NodeStack.size()-2].getNode(),
+      if (!IsProfitableToFold(N, NodeStack[NodeStack.size() - 2].getNode(),
                               NodeToMatch) ||
-          !IsLegalToFold(N, NodeStack[NodeStack.size()-2].getNode(),
+          !IsLegalToFold(N, NodeStack[NodeStack.size() - 2].getNode(),
                          NodeToMatch, OptLevel,
-                         true/*We validate our own chains*/))
+                         true /*We validate our own chains*/))
         break;
 
       continue;
@@ -3413,23 +3450,22 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
     case OPC_EmitInteger:
     case OPC_EmitStringInteger: {
       MVT::SimpleValueType VT =
-        (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
+          (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
       int64_t Val = MatcherTable[MatcherIndex++];
       if (Val & 128)
         Val = GetVBR(Val, MatcherTable, MatcherIndex);
       if (Opcode == OPC_EmitInteger)
         Val = decodeSignRotatedValue(Val);
-      RecordedNodes.push_back(std::pair<SDValue, SDNode*>(
-                              CurDAG->getTargetConstant(Val, SDLoc(NodeToMatch),
-                                                        VT), nullptr));
+      RecordedNodes.push_back(std::pair<SDValue, SDNode *>(
+          CurDAG->getTargetConstant(Val, SDLoc(NodeToMatch), VT), nullptr));
       continue;
     }
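`OPC_EmitInteger` runs the decoded value through `decodeSignRotatedValue`: the sign bit is rotated into the low bit so that small negative numbers stay small under the VBR encoding above. A sketch of the decode direction (illustrative; this mirrors the sign-rotation scheme LLVM's bitcode format documents, with the 64-bit "most negative" special case assumed):

```python
def decode_sign_rotated(v):
    # The sign lives in bit 0: even encodings are non-negative values
    # (v >> 1); odd encodings are negative (-(v >> 1)). The encoding 1
    # would otherwise mean "-0", so it is reserved for the most
    # negative 64-bit integer.
    if (v & 1) == 0:
        return v >> 1
    if v != 1:
        return -(v >> 1)
    return -(1 << 63)
```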
     case OPC_EmitRegister: {
       MVT::SimpleValueType VT =
-        (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
+          (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
       unsigned RegNo = MatcherTable[MatcherIndex++];
-      RecordedNodes.push_back(std::pair<SDValue, SDNode*>(
-                              CurDAG->getRegister(RegNo, VT), nullptr));
+      RecordedNodes.push_back(std::pair<SDValue, SDNode *>(
+          CurDAG->getRegister(RegNo, VT), nullptr));
       continue;
     }
     case OPC_EmitRegister2: {
@@ -3437,26 +3473,28 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       // values are stored in two bytes in the matcher table (just like
       // opcodes).
       MVT::SimpleValueType VT =
-        (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
+          (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
       unsigned RegNo = MatcherTable[MatcherIndex++];
       RegNo |= MatcherTable[MatcherIndex++] << 8;
-      RecordedNodes.push_back(std::pair<SDValue, SDNode*>(
-                              CurDAG->getRegister(RegNo, VT), nullptr));
+      RecordedNodes.push_back(std::pair<SDValue, SDNode *>(
+          CurDAG->getRegister(RegNo, VT), nullptr));
       continue;
     }
 
-    case OPC_EmitConvertToTarget:  {
+    case OPC_EmitConvertToTarget: {
       // Convert from IMM/FPIMM to target version.
       unsigned RecNo = MatcherTable[MatcherIndex++];
       assert(RecNo < RecordedNodes.size() && "Invalid EmitConvertToTarget");
       SDValue Imm = RecordedNodes[RecNo].first;
 
       if (Imm->getOpcode() == ISD::Constant) {
-        const ConstantInt *Val=cast<ConstantSDNode>(Imm)->getConstantIntValue();
+        const ConstantInt *Val =
+            cast<ConstantSDNode>(Imm)->getConstantIntValue();
         Imm = CurDAG->getTargetConstant(*Val, SDLoc(NodeToMatch),
                                         Imm.getValueType());
       } else if (Imm->getOpcode() == ISD::ConstantFP) {
-        const ConstantFP *Val=cast<ConstantFPSDNode>(Imm)->getConstantFPValue();
+        const ConstantFP *Val =
+            cast<ConstantFPSDNode>(Imm)->getConstantFPValue();
         Imm = CurDAG->getTargetConstantFP(*Val, SDLoc(NodeToMatch),
                                           Imm.getValueType());
       }
@@ -3465,9 +3503,9 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       continue;
     }
 
-    case OPC_EmitMergeInputChains1_0:    // OPC_EmitMergeInputChains, 1, 0
-    case OPC_EmitMergeInputChains1_1:    // OPC_EmitMergeInputChains, 1, 1
-    case OPC_EmitMergeInputChains1_2: {  // OPC_EmitMergeInputChains, 1, 2
+    case OPC_EmitMergeInputChains1_0:   // OPC_EmitMergeInputChains, 1, 0
+    case OPC_EmitMergeInputChains1_1:   // OPC_EmitMergeInputChains, 1, 1
+    case OPC_EmitMergeInputChains1_2: { // OPC_EmitMergeInputChains, 1, 2
       // These are space-optimized forms of OPC_EmitMergeInputChains.
       assert(!InputChain.getNode() &&
              "EmitMergeInputChains should be the first chain producing node");
@@ -3493,7 +3531,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       InputChain = HandleMergeInputChains(ChainNodesMatched, CurDAG);
 
       if (!InputChain.getNode())
-        break;  // Failed to merge.
+        break; // Failed to merge.
       continue;
     }
 
@@ -3537,7 +3575,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       InputChain = HandleMergeInputChains(ChainNodesMatched, CurDAG);
 
       if (!InputChain.getNode())
-        break;  // Failed to merge.
+        break; // Failed to merge.
 
       continue;
     }
@@ -3553,9 +3591,9 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       if (!InputChain.getNode())
         InputChain = CurDAG->getEntryNode();
 
-      InputChain = CurDAG->getCopyToReg(InputChain, SDLoc(NodeToMatch),
-                                        DestPhysReg, RecordedNodes[RecNo].first,
-                                        InputGlue);
+      InputChain =
+          CurDAG->getCopyToReg(InputChain, SDLoc(NodeToMatch), DestPhysReg,
+                               RecordedNodes[RecNo].first, InputGlue);
 
       InputGlue = InputChain.getValue(1);
       continue;
@@ -3566,7 +3604,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       unsigned RecNo = MatcherTable[MatcherIndex++];
       assert(RecNo < RecordedNodes.size() && "Invalid EmitNodeXForm");
       SDValue Res = RunSDNodeXForm(RecordedNodes[RecNo].first, XFormNo);
-      RecordedNodes.push_back(std::pair<SDValue,SDNode*>(Res, nullptr));
+      RecordedNodes.push_back(std::pair<SDValue, SDNode *>(Res, nullptr));
       continue;
     }
     case OPC_Coverage: {
@@ -3579,9 +3617,14 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       continue;
     }
 
-    case OPC_EmitNode:     case OPC_MorphNodeTo:
-    case OPC_EmitNode0:    case OPC_EmitNode1:    case OPC_EmitNode2:
-    case OPC_MorphNodeTo0: case OPC_MorphNodeTo1: case OPC_MorphNodeTo2: {
+    case OPC_EmitNode:
+    case OPC_MorphNodeTo:
+    case OPC_EmitNode0:
+    case OPC_EmitNode1:
+    case OPC_EmitNode2:
+    case OPC_MorphNodeTo0:
+    case OPC_MorphNodeTo1:
+    case OPC_MorphNodeTo2: {
       uint16_t TargetOpc = MatcherTable[MatcherIndex++];
       TargetOpc |= (unsigned short)MatcherTable[MatcherIndex++] << 8;
       unsigned EmitNodeInfo = MatcherTable[MatcherIndex++];
@@ -3598,7 +3641,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       SmallVector<EVT, 4> VTs;
       for (unsigned i = 0; i != NumVTs; ++i) {
         MVT::SimpleValueType VT =
-          (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
+            (MVT::SimpleValueType)MatcherTable[MatcherIndex++];
         if (VT == MVT::iPTR)
           VT = TLI->getPointerTy(CurDAG->getDataLayout()).SimpleTy;
         VTs.push_back(VT);
@@ -3643,7 +3686,8 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
         for (unsigned i = FirstOpToCopy, e = NodeToMatch->getNumOperands();
              i != e; ++i) {
           SDValue V = NodeToMatch->getOperand(i);
-          if (V.getValueType() == MVT::Glue) break;
+          if (V.getValueType() == MVT::Glue)
+            break;
           Ops.push_back(V);
         }
       }
@@ -3665,33 +3709,35 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
 
       // Create the node.
       MachineSDNode *Res = nullptr;
-      bool IsMorphNodeTo = Opcode == OPC_MorphNodeTo ||
-                     (Opcode >= OPC_MorphNodeTo0 && Opcode <= OPC_MorphNodeTo2);
+      bool IsMorphNodeTo =
+          Opcode == OPC_MorphNodeTo ||
+          (Opcode >= OPC_MorphNodeTo0 && Opcode <= OPC_MorphNodeTo2);
       if (!IsMorphNodeTo) {
         // If this is a normal EmitNode command, just create the new node and
         // add the results to the RecordedNodes list.
-        Res = CurDAG->getMachineNode(TargetOpc, SDLoc(NodeToMatch),
-                                     VTList, Ops);
+        Res =
+            CurDAG->getMachineNode(TargetOpc, SDLoc(NodeToMatch), VTList, Ops);
 
         // Add all the non-glue/non-chain results to the RecordedNodes list.
         for (unsigned i = 0, e = VTs.size(); i != e; ++i) {
-          if (VTs[i] == MVT::Other || VTs[i] == MVT::Glue) break;
-          RecordedNodes.push_back(std::pair<SDValue,SDNode*>(SDValue(Res, i),
-                                                             nullptr));
+          if (VTs[i] == MVT::Other || VTs[i] == MVT::Glue)
+            break;
+          RecordedNodes.push_back(
+              std::pair<SDValue, SDNode *>(SDValue(Res, i), nullptr));
         }
       } else {
         assert(NodeToMatch->getOpcode() != ISD::DELETED_NODE &&
                "NodeToMatch was removed partway through selection");
-        SelectionDAG::DAGNodeDeletedListener NDL(*CurDAG, [&](SDNode *N,
-                                                              SDNode *E) {
-          CurDAG->salvageDebugInfo(*N);
-          auto &Chain = ChainNodesMatched;
-          assert((!E || !is_contained(Chain, N)) &&
-                 "Chain node replaced during MorphNode");
-          llvm::erase_value(Chain, N);
-        });
-        Res = cast<MachineSDNode>(MorphNode(NodeToMatch, TargetOpc, VTList,
-                                            Ops, EmitNodeInfo));
+        SelectionDAG::DAGNodeDeletedListener NDL(
+            *CurDAG, [&](SDNode *N, SDNode *E) {
+              CurDAG->salvageDebugInfo(*N);
+              auto &Chain = ChainNodesMatched;
+              assert((!E || !is_contained(Chain, N)) &&
+                     "Chain node replaced during MorphNode");
+              llvm::erase_value(Chain, N);
+            });
+        Res = cast<MachineSDNode>(
+            MorphNode(NodeToMatch, TargetOpc, VTList, Ops, EmitNodeInfo));
       }
 
       // Set the NoFPExcept flag when no original matched node could
@@ -3705,11 +3751,11 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       // If the node had chain/glue results, update our notion of the current
       // chain and glue.
       if (EmitNodeInfo & OPFL_GlueOutput) {
-        InputGlue = SDValue(Res, VTs.size()-1);
+        InputGlue = SDValue(Res, VTs.size() - 1);
         if (EmitNodeInfo & OPFL_Chain)
-          InputChain = SDValue(Res, VTs.size()-2);
+          InputChain = SDValue(Res, VTs.size() - 2);
       } else if (EmitNodeInfo & OPFL_Chain)
-        InputChain = SDValue(Res, VTs.size()-1);
+        InputChain = SDValue(Res, VTs.size() - 1);
 
       // If the OPFL_MemRefs glue is set on this node, slap all of the
       // accumulated memrefs onto it.
@@ -3846,7 +3892,7 @@ void SelectionDAGISel::SelectCodeCommon(SDNode *NodeToMatch,
       // If we have another child in this scope to match, update FailIndex and
       // try it.
       if (NumToSkip != 0) {
-        LastScope.FailIndex = MatcherIndex+NumToSkip;
+        LastScope.FailIndex = MatcherIndex + NumToSkip;
         break;
       }
 
@@ -3903,7 +3949,7 @@ void SelectionDAGISel::CannotYetSelect(SDNode *N) {
   } else {
     bool HasInputChain = N->getOperand(0).getValueType() == MVT::Other;
     unsigned iid =
-      cast<ConstantSDNode>(N->getOperand(HasInputChain))->getZExtValue();
+        cast<ConstantSDNode>(N->getOperand(HasInputChain))->getZExtValue();
     if (iid < Intrinsic::num_intrinsics)
       Msg << "intrinsic %" << Intrinsic::getBaseName((Intrinsic::ID)iid);
     else if (const TargetIntrinsicInfo *TII = TM.getIntrinsicInfo())
diff --git a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
index b66eeb6d2bb1aa6..4635ac26392d886 100644
--- a/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/SelectionDAGPrinter.cpp
@@ -23,142 +23,134 @@ using namespace llvm;
 #define DEBUG_TYPE "dag-printer"
 
 namespace llvm {
-  template<>
-  struct DOTGraphTraits<SelectionDAG*> : public DefaultDOTGraphTraits {
+template <>
+struct DOTGraphTraits<SelectionDAG *> : public DefaultDOTGraphTraits {
 
-    explicit DOTGraphTraits(bool isSimple=false) :
-      DefaultDOTGraphTraits(isSimple) {}
+  explicit DOTGraphTraits(bool isSimple = false)
+      : DefaultDOTGraphTraits(isSimple) {}
 
-    static bool hasEdgeDestLabels() {
-      return true;
-    }
+  static bool hasEdgeDestLabels() { return true; }
 
-    static unsigned numEdgeDestLabels(const void *Node) {
-      return ((const SDNode *) Node)->getNumValues();
-    }
+  static unsigned numEdgeDestLabels(const void *Node) {
+    return ((const SDNode *)Node)->getNumValues();
+  }
 
-    static std::string getEdgeDestLabel(const void *Node, unsigned i) {
-      return ((const SDNode *) Node)->getValueType(i).getEVTString();
-    }
+  static std::string getEdgeDestLabel(const void *Node, unsigned i) {
+    return ((const SDNode *)Node)->getValueType(i).getEVTString();
+  }
 
-    template<typename EdgeIter>
-    static std::string getEdgeSourceLabel(const void *Node, EdgeIter I) {
-      return itostr(I - SDNodeIterator::begin((const SDNode *) Node));
-    }
+  template <typename EdgeIter>
+  static std::string getEdgeSourceLabel(const void *Node, EdgeIter I) {
+    return itostr(I - SDNodeIterator::begin((const SDNode *)Node));
+  }
 
-    /// edgeTargetsEdgeSource - This method returns true if this outgoing edge
-    /// should actually target another edge source, not a node.  If this method
-    /// is implemented, getEdgeTarget should be implemented.
-    template<typename EdgeIter>
-    static bool edgeTargetsEdgeSource(const void *Node, EdgeIter I) {
-      return true;
-    }
+  /// edgeTargetsEdgeSource - This method returns true if this outgoing edge
+  /// should actually target another edge source, not a node.  If this method
+  /// is implemented, getEdgeTarget should be implemented.
+  template <typename EdgeIter>
+  static bool edgeTargetsEdgeSource(const void *Node, EdgeIter I) {
+    return true;
+  }
 
-    /// getEdgeTarget - If edgeTargetsEdgeSource returns true, this method is
-    /// called to determine which outgoing edge of Node is the target of this
-    /// edge.
-    template<typename EdgeIter>
-    static EdgeIter getEdgeTarget(const void *Node, EdgeIter I) {
-      SDNode *TargetNode = *I;
-      SDNodeIterator NI = SDNodeIterator::begin(TargetNode);
-      std::advance(NI, I.getNode()->getOperand(I.getOperand()).getResNo());
-      return NI;
-    }
+  /// getEdgeTarget - If edgeTargetsEdgeSource returns true, this method is
+  /// called to determine which outgoing edge of Node is the target of this
+  /// edge.
+  template <typename EdgeIter>
+  static EdgeIter getEdgeTarget(const void *Node, EdgeIter I) {
+    SDNode *TargetNode = *I;
+    SDNodeIterator NI = SDNodeIterator::begin(TargetNode);
+    std::advance(NI, I.getNode()->getOperand(I.getOperand()).getResNo());
+    return NI;
+  }
 
-    static std::string getGraphName(const SelectionDAG *G) {
-      return std::string(G->getMachineFunction().getName());
-    }
+  static std::string getGraphName(const SelectionDAG *G) {
+    return std::string(G->getMachineFunction().getName());
+  }
 
-    static bool renderGraphFromBottomUp() {
-      return true;
-    }
+  static bool renderGraphFromBottomUp() { return true; }
 
-    static std::string getNodeIdentifierLabel(const SDNode *Node,
-                                              const SelectionDAG *Graph) {
-      std::string R;
-      raw_string_ostream OS(R);
+  static std::string getNodeIdentifierLabel(const SDNode *Node,
+                                            const SelectionDAG *Graph) {
+    std::string R;
+    raw_string_ostream OS(R);
 #ifndef NDEBUG
-      OS << 't' << Node->PersistentId;
+    OS << 't' << Node->PersistentId;
 #else
-      OS << static_cast<const void *>(Node);
+    OS << static_cast<const void *>(Node);
 #endif
-      return R;
-    }
-
-    /// If you want to override the dot attributes printed for a particular
-    /// edge, override this method.
-    template<typename EdgeIter>
-    static std::string getEdgeAttributes(const void *Node, EdgeIter EI,
-                                         const SelectionDAG *Graph) {
-      SDValue Op = EI.getNode()->getOperand(EI.getOperand());
-      EVT VT = Op.getValueType();
-      if (VT == MVT::Glue)
-        return "color=red,style=bold";
-      else if (VT == MVT::Other)
-        return "color=blue,style=dashed";
-      return "";
-    }
+    return R;
+  }
 
+  /// If you want to override the dot attributes printed for a particular
+  /// edge, override this method.
+  template <typename EdgeIter>
+  static std::string getEdgeAttributes(const void *Node, EdgeIter EI,
+                                       const SelectionDAG *Graph) {
+    SDValue Op = EI.getNode()->getOperand(EI.getOperand());
+    EVT VT = Op.getValueType();
+    if (VT == MVT::Glue)
+      return "color=red,style=bold";
+    else if (VT == MVT::Other)
+      return "color=blue,style=dashed";
+    return "";
+  }
 
-    static std::string getSimpleNodeLabel(const SDNode *Node,
-                                          const SelectionDAG *G) {
-      std::string Result = Node->getOperationName(G);
-      {
-        raw_string_ostream OS(Result);
-        Node->print_details(OS, G);
-      }
-      return Result;
+  static std::string getSimpleNodeLabel(const SDNode *Node,
+                                        const SelectionDAG *G) {
+    std::string Result = Node->getOperationName(G);
+    {
+      raw_string_ostream OS(Result);
+      Node->print_details(OS, G);
     }
-    std::string getNodeLabel(const SDNode *Node, const SelectionDAG *Graph);
-    static std::string getNodeAttributes(const SDNode *N,
-                                         const SelectionDAG *Graph) {
+    return Result;
+  }
+  std::string getNodeLabel(const SDNode *Node, const SelectionDAG *Graph);
+  static std::string getNodeAttributes(const SDNode *N,
+                                       const SelectionDAG *Graph) {
 #ifndef NDEBUG
-      const std::string &Attrs = Graph->getGraphAttrs(N);
-      if (!Attrs.empty()) {
-        if (Attrs.find("shape=") == std::string::npos)
-          return std::string("shape=Mrecord,") + Attrs;
-        else
-          return Attrs;
-      }
-#endif
-      return "shape=Mrecord";
+    const std::string &Attrs = Graph->getGraphAttrs(N);
+    if (!Attrs.empty()) {
+      if (Attrs.find("shape=") == std::string::npos)
+        return std::string("shape=Mrecord,") + Attrs;
+      else
+        return Attrs;
     }
+#endif
+    return "shape=Mrecord";
+  }
 
-    static void addCustomGraphFeatures(SelectionDAG *G,
-                                       GraphWriter<SelectionDAG*> &GW) {
-      GW.emitSimpleNode(nullptr, "plaintext=circle", "GraphRoot");
-      if (G->getRoot().getNode())
-        GW.emitEdge(nullptr, -1, G->getRoot().getNode(), G->getRoot().getResNo(),
-                    "color=blue,style=dashed");
-    }
-  };
-}
+  static void addCustomGraphFeatures(SelectionDAG *G,
+                                     GraphWriter<SelectionDAG *> &GW) {
+    GW.emitSimpleNode(nullptr, "plaintext=circle", "GraphRoot");
+    if (G->getRoot().getNode())
+      GW.emitEdge(nullptr, -1, G->getRoot().getNode(), G->getRoot().getResNo(),
+                  "color=blue,style=dashed");
+  }
+};
+} // namespace llvm
 
-std::string DOTGraphTraits<SelectionDAG*>::getNodeLabel(const SDNode *Node,
-                                                        const SelectionDAG *G) {
-  return DOTGraphTraits<SelectionDAG*>::getSimpleNodeLabel(Node, G);
+std::string
+DOTGraphTraits<SelectionDAG *>::getNodeLabel(const SDNode *Node,
+                                             const SelectionDAG *G) {
+  return DOTGraphTraits<SelectionDAG *>::getSimpleNodeLabel(Node, G);
 }
 
-
 /// viewGraph - Pop up a ghostview window with the reachable parts of the DAG
 /// rendered using 'dot'.
 ///
 void SelectionDAG::viewGraph(const std::string &Title) {
 // This code is only for debugging!
 #ifndef NDEBUG
-  ViewGraph(this, "dag." + getMachineFunction().getName(),
-            false, Title);
+  ViewGraph(this, "dag." + getMachineFunction().getName(), false, Title);
 #else
   errs() << "SelectionDAG::viewGraph is only available in debug builds on "
          << "systems with Graphviz or gv!\n";
-#endif  // NDEBUG
+#endif // NDEBUG
 }
 
 // This overload is defined out-of-line here instead of just using a
 // default parameter because this is easiest for gdb to call.
-void SelectionDAG::viewGraph() {
-  viewGraph("");
-}
+void SelectionDAG::viewGraph() { viewGraph(""); }
 
 /// Just dump dot graph to a user-provided path and title.
 /// This doesn't open the dot viewer program and
@@ -185,7 +177,6 @@ void SelectionDAG::clearGraphAttrs() {
 #endif
 }
 
-
 /// setGraphAttrs - Set graph attributes for a node. (eg. "color=red".)
 ///
 void SelectionDAG::setGraphAttrs(const SDNode *N, const char *Attrs) {
@@ -197,13 +188,12 @@ void SelectionDAG::setGraphAttrs(const SDNode *N, const char *Attrs) {
 #endif
 }
 
-
 /// getGraphAttrs - Get graph attributes for a node. (eg. "color=red".)
 /// Used from getNodeAttributes.
 std::string SelectionDAG::getGraphAttrs(const SDNode *N) const {
 #if LLVM_ENABLE_ABI_BREAKING_CHECKS
   std::map<const SDNode *, std::string>::const_iterator I =
-    NodeGraphAttrs.find(N);
+      NodeGraphAttrs.find(N);
 
   if (I != NodeGraphAttrs.end())
     return I->second;
@@ -230,7 +220,8 @@ void SelectionDAG::setGraphColor(const SDNode *N, const char *Color) {
 /// setSubgraphColorHelper - Implement setSubgraphColor.  Return
 /// whether we truncated the search.
 ///
-bool SelectionDAG::setSubgraphColorHelper(SDNode *N, const char *Color, DenseSet<SDNode *> &visited,
+bool SelectionDAG::setSubgraphColorHelper(SDNode *N, const char *Color,
+                                          DenseSet<SDNode *> &visited,
                                           int level, bool &printed) {
   bool hit_limit = false;
 
@@ -247,10 +238,12 @@ bool SelectionDAG::setSubgraphColorHelper(SDNode *N, const char *Color, DenseSet
   visited.insert(N);
   if (visited.size() != oldSize) {
     setGraphColor(N, Color);
-    for(SDNodeIterator i = SDNodeIterator::begin(N), iend = SDNodeIterator::end(N);
-        i != iend;
-        ++i) {
-      hit_limit = setSubgraphColorHelper(*i, Color, visited, level+1, printed) || hit_limit;
+    for (SDNodeIterator i = SDNodeIterator::begin(N),
+                        iend = SDNodeIterator::end(N);
+         i != iend; ++i) {
+      hit_limit =
+          setSubgraphColorHelper(*i, Color, visited, level + 1, printed) ||
+          hit_limit;
     }
   }
 #else
@@ -290,8 +283,8 @@ std::string ScheduleDAGSDNodes::getGraphNodeLabel(const SUnit *SU) const {
     for (SDNode *N = SU->getNode(); N; N = N->getGluedNode())
       GluedNodes.push_back(N);
     while (!GluedNodes.empty()) {
-      O << DOTGraphTraits<SelectionDAG*>
-        ::getSimpleNodeLabel(GluedNodes.back(), DAG);
+      O << DOTGraphTraits<SelectionDAG *>::getSimpleNodeLabel(GluedNodes.back(),
+                                                              DAG);
       GluedNodes.pop_back();
       if (!GluedNodes.empty())
         O << "\n    ";
@@ -302,7 +295,8 @@ std::string ScheduleDAGSDNodes::getGraphNodeLabel(const SUnit *SU) const {
   return O.str();
 }
 
-void ScheduleDAGSDNodes::getCustomGraphFeatures(GraphWriter<ScheduleDAG*> &GW) const {
+void ScheduleDAGSDNodes::getCustomGraphFeatures(
+    GraphWriter<ScheduleDAG *> &GW) const {
   if (DAG) {
     // Draw a special "GraphRoot" node to indicate the root of the graph.
     GW.emitSimpleNode(nullptr, "plaintext=circle", "GraphRoot");
diff --git a/llvm/lib/CodeGen/SelectionDAG/StatepointLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/StatepointLowering.cpp
index 5ea05028cee0826..410b1dcc40e30eb 100644
--- a/llvm/lib/CodeGen/SelectionDAG/StatepointLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/StatepointLowering.cpp
@@ -76,11 +76,11 @@ cl::opt<unsigned> MaxRegistersForGCPointers(
 
 typedef FunctionLoweringInfo::StatepointRelocationRecord RecordType;
 
-static void pushStackMapConstant(SmallVectorImpl<SDValue>& Ops,
+static void pushStackMapConstant(SmallVectorImpl<SDValue> &Ops,
                                  SelectionDAGBuilder &Builder, uint64_t Value) {
   SDLoc L = Builder.getCurSDLoc();
-  Ops.push_back(Builder.DAG.getTargetConstant(StackMaps::ConstantOp, L,
-                                              MVT::i64));
+  Ops.push_back(
+      Builder.DAG.getTargetConstant(StackMaps::ConstantOp, L, MVT::i64));
   Ops.push_back(Builder.DAG.getTargetConstant(Value, L, MVT::i64));
 }
 
@@ -123,7 +123,7 @@ StatepointLoweringState::allocateStackSlot(EVT ValueType,
   assert(NextSlotToAllocate <= NumSlots && "Broken invariant");
 
   assert(AllocatedStackSlots.size() ==
-         Builder.FuncInfo.StatepointStackSlots.size() &&
+             Builder.FuncInfo.StatepointStackSlots.size() &&
          "Broken invariant");
 
   for (; NextSlotToAllocate < NumSlots; NextSlotToAllocate++) {
@@ -144,9 +144,9 @@ StatepointLoweringState::allocateStackSlot(EVT ValueType,
   MFI.markAsStatepointSpillSlotObjectIndex(FI);
 
   Builder.FuncInfo.StatepointStackSlots.push_back(FI);
-  AllocatedStackSlots.resize(AllocatedStackSlots.size()+1, true);
+  AllocatedStackSlots.resize(AllocatedStackSlots.size() + 1, true);
   assert(AllocatedStackSlots.size() ==
-         Builder.FuncInfo.StatepointStackSlots.size() &&
+             Builder.FuncInfo.StatepointStackSlots.size() &&
          "Broken invariant");
 
   StatepointMaxSlotsRequired.updateMax(
@@ -173,8 +173,9 @@ static std::optional<int> findPreviousSpillSlot(const Value *Val,
     if (isa<UndefValue>(Statepoint))
       return std::nullopt;
 
-    const auto &RelocationMap = Builder.FuncInfo.StatepointRelocationMaps
-                                    [cast<GCStatepointInst>(Statepoint)];
+    const auto &RelocationMap =
+        Builder.FuncInfo
+            .StatepointRelocationMaps[cast<GCStatepointInst>(Statepoint)];
 
     auto It = RelocationMap.find(Relocate);
     if (It == RelocationMap.end())
@@ -353,11 +354,11 @@ static std::pair<SDValue, SDNode *> lowerCallFromStatepointLoweringInfo(
   return std::make_pair(ReturnValue, CallEnd->getOperand(0).getNode());
 }
 
-static MachineMemOperand* getMachineMemOperand(MachineFunction &MF,
+static MachineMemOperand *getMachineMemOperand(MachineFunction &MF,
                                                FrameIndexSDNode &FI) {
   auto PtrInfo = MachinePointerInfo::getFixedStack(MF, FI.getIndex());
-  auto MMOFlags = MachineMemOperand::MOStore |
-    MachineMemOperand::MOLoad | MachineMemOperand::MOVolatile;
+  auto MMOFlags = MachineMemOperand::MOStore | MachineMemOperand::MOLoad |
+                  MachineMemOperand::MOVolatile;
   auto &MFI = MF.getFrameInfo();
   return MF.getMachineMemOperand(PtrInfo, MMOFlags,
                                  MFI.getObjectSize(FI.getIndex()),
@@ -370,11 +371,11 @@ static MachineMemOperand* getMachineMemOperand(MachineFunction &MF,
 /// is a null constant. Return pair with first element being frame index
 /// containing saved value and second element with outgoing chain from the
 /// emitted store
-static std::tuple<SDValue, SDValue, MachineMemOperand*>
+static std::tuple<SDValue, SDValue, MachineMemOperand *>
 spillIncomingStatepointValue(SDValue Incoming, SDValue Chain,
                              SelectionDAGBuilder &Builder) {
   SDValue Loc = Builder.StatepointLowering.getLocation(Incoming);
-  MachineMemOperand* MMO = nullptr;
+  MachineMemOperand *MMO = nullptr;
 
   // Emit new store if we didn't do it for this ptr before
   if (!Loc.getNode()) {
@@ -423,7 +424,7 @@ lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
                              SmallVectorImpl<SDValue> &Ops,
                              SmallVectorImpl<MachineMemOperand *> &MemRefs,
                              SelectionDAGBuilder &Builder) {
-  
+
   if (willLowerDirectly(Incoming)) {
     if (FrameIndexSDNode *FI = dyn_cast<FrameIndexSDNode>(Incoming)) {
       // This handles allocas as arguments to the statepoint (this is only
@@ -441,7 +442,7 @@ lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
     }
 
     assert(Incoming.getValueType().getSizeInBits() <= 64);
-    
+
     if (Incoming.isUndef()) {
       // Put an easily recognized constant that's unlikely to be a valid
       // value so that uses of undef by the consumer of the stackmap is
@@ -467,8 +468,6 @@ lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
     llvm_unreachable("unhandled direct lowering case");
   }
 
-
-
   if (!RequireSpillSlot) {
     // If this value is live in (not live-on-return, or live-through), we can
     // treat it the same way patchpoint treats it's "live in" values.  We'll
@@ -483,7 +482,7 @@ lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
     // found by the runtime later.  Note: We know all of these spills are
     // independent, but don't bother to exploit that chain wise.  DAGCombine
     // will happily do so as needed, so doing it here would be a small compile
-    // time win at most. 
+    // time win at most.
     SDValue Chain = Builder.getRoot();
     auto Res = spillIncomingStatepointValue(Incoming, Chain, Builder);
     Ops.push_back(std::get<0>(Res));
@@ -492,7 +491,6 @@ lowerIncomingStatepointValue(SDValue Incoming, bool RequireSpillSlot,
     Chain = std::get<1>(Res);
     Builder.DAG.setRoot(Chain);
   }
-
 }
 
 /// Return true if value V represents the GC value. The behavior is conservative
@@ -535,7 +533,7 @@ lowerStatepointMetaArgs(SmallVectorImpl<SDValue> &Ops,
   // assumptions about the code generator producing the callee, we could
   // potentially allow live-through values in callee saved registers.
   const bool LiveInDeopt =
-    SI.StatepointFlags & (uint64_t)StatepointFlags::DeoptLiveIn;
+      SI.StatepointFlags & (uint64_t)StatepointFlags::DeoptLiveIn;
 
   // Decide which deriver pointers will go on VRegs
   unsigned MaxVRegPtrs = MaxRegistersForGCPointers.getValue();
@@ -600,7 +598,7 @@ lowerStatepointMetaArgs(SmallVectorImpl<SDValue> &Ops,
 
   auto requireSpillSlot = [&](const Value *V) {
     if (!Builder.DAG.getTargetLoweringInfo().isTypeLegal(
-             Builder.getValue(V).getValueType()))
+            Builder.getValue(V).getValueType()))
       return true;
     if (isGCValue(V, Builder))
       return !LowerAsVReg.count(Builder.getValue(V));
@@ -729,7 +727,7 @@ SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
   SmallVector<SDValue, 10> LoweredMetaArgs;
   // Lowered GC pointers (subset of above).
   SmallVector<SDValue, 16> LoweredGCArgs;
-  SmallVector<MachineMemOperand*, 16> MemRefs;
+  SmallVector<MachineMemOperand *, 16> MemRefs;
   // Maps derived pointer SDValue to statepoint result of relocated pointer.
   DenseMap<SDValue, int> LowerAsVReg;
   lowerStatepointMetaArgs(LoweredMetaArgs, MemRefs, LoweredGCArgs, LowerAsVReg,
@@ -742,7 +740,8 @@ SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
   // Get call node, we will replace it later with statepoint
   SDValue ReturnVal;
   SDNode *CallNode;
-  std::tie(ReturnVal, CallNode) = lowerCallFromStatepointLoweringInfo(SI, *this);
+  std::tie(ReturnVal, CallNode) =
+      lowerCallFromStatepointLoweringInfo(SI, *this);
 
   // Construct the actual GC_TRANSITION_START, STATEPOINT, and GC_TRANSITION_END
   // nodes with all the appropriate arguments and return values.
@@ -853,13 +852,14 @@ SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
     NodeTys.push_back(SD.getValueType());
   }
   LLVM_DEBUG(dbgs() << "Statepoint has " << NodeTys.size() << " results\n");
-  assert(NodeTys.size() == LowerAsVReg.size() && "Inconsistent GC Ptr lowering");
+  assert(NodeTys.size() == LowerAsVReg.size() &&
+         "Inconsistent GC Ptr lowering");
   NodeTys.push_back(MVT::Other);
   NodeTys.push_back(MVT::Glue);
 
   unsigned NumResults = NodeTys.size();
   MachineSDNode *StatepointMCNode =
-    DAG.getMachineNode(TargetOpcode::STATEPOINT, getCurSDLoc(), NodeTys, Ops);
+      DAG.getMachineNode(TargetOpcode::STATEPOINT, getCurSDLoc(), NodeTys, Ops);
   DAG.setNodeMemRefs(StatepointMCNode, MemRefs);
 
   // For values lowered to tied-defs, create the virtual registers if used
@@ -934,8 +934,6 @@ SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
     RelocationMap[Relocate] = Record;
   }
 
-  
-
   SDNode *SinkNode = StatepointMCNode;
 
   // Build the GC_TRANSITION_END node if necessary.
@@ -992,9 +990,9 @@ SDValue SelectionDAGBuilder::LowerAsSTATEPOINT(
 /// Return two gc.results if present.  First result is a block local
 /// gc.result, second result is a non-block local gc.result.  Corresponding
 /// entry will be nullptr if not present.
-static std::pair<const GCResultInst*, const GCResultInst*>
+static std::pair<const GCResultInst *, const GCResultInst *>
 getGCResultLocality(const GCStatepointInst &S) {
-  std::pair<const GCResultInst *, const GCResultInst*> Res(nullptr, nullptr);
+  std::pair<const GCResultInst *, const GCResultInst *> Res(nullptr, nullptr);
   for (const auto *U : S.users()) {
     auto *GRI = dyn_cast<GCResultInst>(U);
     if (!GRI)
@@ -1007,9 +1005,8 @@ getGCResultLocality(const GCStatepointInst &S) {
   return Res;
 }
 
-void
-SelectionDAGBuilder::LowerStatepoint(const GCStatepointInst &I,
-                                     const BasicBlock *EHPadBB /*= nullptr*/) {
+void SelectionDAGBuilder::LowerStatepoint(
+    const GCStatepointInst &I, const BasicBlock *EHPadBB /*= nullptr*/) {
   assert(I.getCallingConv() != CallingConv::AnyReg &&
          "anyregcc is not supported on statepoints!");
 
@@ -1104,7 +1101,7 @@ SelectionDAGBuilder::LowerStatepoint(const GCStatepointInst &I,
   if (GCResultLocality.first) {
     // Result value will be used in a same basic block. Don't export it or
     // perform any explicit register copies. The gc_result will simply grab
-    // this value. 
+    // this value.
     setValue(&I, ReturnValue);
   }
 
@@ -1121,10 +1118,9 @@ SelectionDAGBuilder::LowerStatepoint(const GCStatepointInst &I,
   Type *RetTy = GCResultLocality.second->getType();
   Register Reg = FuncInfo.CreateRegs(RetTy);
   RegsForValue RFV(*DAG.getContext(), DAG.getTargetLoweringInfo(),
-                   DAG.getDataLayout(), Reg, RetTy,
-                   I.getCallingConv());
+                   DAG.getDataLayout(), Reg, RetTy, I.getCallingConv());
   SDValue Chain = DAG.getEntryNode();
-  
+
   RFV.getCopyToRegs(ReturnValue, DAG, getCurSDLoc(), Chain, nullptr);
   PendingExports.push_back(Chain);
   FuncInfo.ValueMap[&I] = Reg;
@@ -1192,7 +1188,7 @@ void SelectionDAGBuilder::visitGCResult(const GCResultInst &CI) {
   // which is always i32 in our case.
   Type *RetTy = CI.getType();
   SDValue CopyFromReg = getCopyFromRegs(SI, RetTy);
-  
+
   assert(CopyFromReg.getNode());
   setValue(&CI, CopyFromReg);
 }
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index bd1940994a87f0f..203e712480a8a29 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -79,8 +79,8 @@ bool TargetLowering::isInTailCallPosition(SelectionDAG &DAG, SDNode *Node,
   return isUsedByReturnOnly(Node, Chain);
 }
 
-bool TargetLowering::parametersInCSRMatch(const MachineRegisterInfo &MRI,
-    const uint32_t *CallerPreservedMask,
+bool TargetLowering::parametersInCSRMatch(
+    const MachineRegisterInfo &MRI, const uint32_t *CallerPreservedMask,
     const SmallVectorImpl<CCValAssign> &ArgLocs,
     const SmallVectorImpl<SDValue> &OutVals) const {
   for (unsigned I = 0, E = ArgLocs.size(); I != E; ++I) {
@@ -141,12 +141,9 @@ void TargetLoweringBase::ArgListEntry::setAttributes(const CallBase *Call,
 
 /// Generate a libcall taking the given operands as arguments and returning a
 /// result of type RetVT.
-std::pair<SDValue, SDValue>
-TargetLowering::makeLibCall(SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT,
-                            ArrayRef<SDValue> Ops,
-                            MakeLibCallOptions CallOptions,
-                            const SDLoc &dl,
-                            SDValue InChain) const {
+std::pair<SDValue, SDValue> TargetLowering::makeLibCall(
+    SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT, ArrayRef<SDValue> Ops,
+    MakeLibCallOptions CallOptions, const SDLoc &dl, SDValue InChain) const {
   if (!InChain)
     InChain = DAG.getEntryNode();
 
@@ -158,8 +155,8 @@ TargetLowering::makeLibCall(SelectionDAG &DAG, RTLIB::Libcall LC, EVT RetVT,
     SDValue NewOp = Ops[i];
     Entry.Node = NewOp;
     Entry.Ty = Entry.Node.getValueType().getTypeForEVT(*DAG.getContext());
-    Entry.IsSExt = shouldSignExtendTypeInLibCall(NewOp.getValueType(),
-                                                 CallOptions.IsSExt);
+    Entry.IsSExt =
+        shouldSignExtendTypeInLibCall(NewOp.getValueType(), CallOptions.IsSExt);
     Entry.IsZExt = !Entry.IsSExt;
 
     if (CallOptions.IsSoften &&
@@ -289,8 +286,8 @@ bool TargetLowering::findOptimalMemOpLowering(
 /// SELECT_CC, and SETCC handlers.
 void TargetLowering::softenSetCCOperands(SelectionDAG &DAG, EVT VT,
                                          SDValue &NewLHS, SDValue &NewRHS,
-                                         ISD::CondCode &CCCode,
-                                         const SDLoc &dl, const SDValue OldLHS,
+                                         ISD::CondCode &CCCode, const SDLoc &dl,
+                                         const SDValue OldLHS,
                                          const SDValue OldRHS) const {
   SDValue Chain;
   return softenSetCCOperands(DAG, VT, NewLHS, NewRHS, CCCode, dl, OldLHS,
@@ -299,17 +296,17 @@ void TargetLowering::softenSetCCOperands(SelectionDAG &DAG, EVT VT,
 
 void TargetLowering::softenSetCCOperands(SelectionDAG &DAG, EVT VT,
                                          SDValue &NewLHS, SDValue &NewRHS,
-                                         ISD::CondCode &CCCode,
-                                         const SDLoc &dl, const SDValue OldLHS,
-                                         const SDValue OldRHS,
-                                         SDValue &Chain,
+                                         ISD::CondCode &CCCode, const SDLoc &dl,
+                                         const SDValue OldLHS,
+                                         const SDValue OldRHS, SDValue &Chain,
                                          bool IsSignaling) const {
   // FIXME: Currently we cannot really respect all IEEE predicates due to libgcc
   // not supporting it. We can update this code when libgcc provides such
   // functions.
 
-  assert((VT == MVT::f32 || VT == MVT::f64 || VT == MVT::f128 || VT == MVT::ppcf128)
-         && "Unsupported setcc type!");
+  assert((VT == MVT::f32 || VT == MVT::f64 || VT == MVT::f128 ||
+          VT == MVT::ppcf128) &&
+         "Unsupported setcc type!");
 
   // Expand into one or more soft-fp libcall(s).
   RTLIB::Libcall LC1 = RTLIB::UNKNOWN_LIBCALL, LC2 = RTLIB::UNKNOWN_LIBCALL;
@@ -317,85 +314,99 @@ void TargetLowering::softenSetCCOperands(SelectionDAG &DAG, EVT VT,
   switch (CCCode) {
   case ISD::SETEQ:
   case ISD::SETOEQ:
-    LC1 = (VT == MVT::f32) ? RTLIB::OEQ_F32 :
-          (VT == MVT::f64) ? RTLIB::OEQ_F64 :
-          (VT == MVT::f128) ? RTLIB::OEQ_F128 : RTLIB::OEQ_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::OEQ_F32
+          : (VT == MVT::f64)  ? RTLIB::OEQ_F64
+          : (VT == MVT::f128) ? RTLIB::OEQ_F128
+                              : RTLIB::OEQ_PPCF128;
     break;
   case ISD::SETNE:
   case ISD::SETUNE:
-    LC1 = (VT == MVT::f32) ? RTLIB::UNE_F32 :
-          (VT == MVT::f64) ? RTLIB::UNE_F64 :
-          (VT == MVT::f128) ? RTLIB::UNE_F128 : RTLIB::UNE_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::UNE_F32
+          : (VT == MVT::f64)  ? RTLIB::UNE_F64
+          : (VT == MVT::f128) ? RTLIB::UNE_F128
+                              : RTLIB::UNE_PPCF128;
     break;
   case ISD::SETGE:
   case ISD::SETOGE:
-    LC1 = (VT == MVT::f32) ? RTLIB::OGE_F32 :
-          (VT == MVT::f64) ? RTLIB::OGE_F64 :
-          (VT == MVT::f128) ? RTLIB::OGE_F128 : RTLIB::OGE_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::OGE_F32
+          : (VT == MVT::f64)  ? RTLIB::OGE_F64
+          : (VT == MVT::f128) ? RTLIB::OGE_F128
+                              : RTLIB::OGE_PPCF128;
     break;
   case ISD::SETLT:
   case ISD::SETOLT:
-    LC1 = (VT == MVT::f32) ? RTLIB::OLT_F32 :
-          (VT == MVT::f64) ? RTLIB::OLT_F64 :
-          (VT == MVT::f128) ? RTLIB::OLT_F128 : RTLIB::OLT_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::OLT_F32
+          : (VT == MVT::f64)  ? RTLIB::OLT_F64
+          : (VT == MVT::f128) ? RTLIB::OLT_F128
+                              : RTLIB::OLT_PPCF128;
     break;
   case ISD::SETLE:
   case ISD::SETOLE:
-    LC1 = (VT == MVT::f32) ? RTLIB::OLE_F32 :
-          (VT == MVT::f64) ? RTLIB::OLE_F64 :
-          (VT == MVT::f128) ? RTLIB::OLE_F128 : RTLIB::OLE_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::OLE_F32
+          : (VT == MVT::f64)  ? RTLIB::OLE_F64
+          : (VT == MVT::f128) ? RTLIB::OLE_F128
+                              : RTLIB::OLE_PPCF128;
     break;
   case ISD::SETGT:
   case ISD::SETOGT:
-    LC1 = (VT == MVT::f32) ? RTLIB::OGT_F32 :
-          (VT == MVT::f64) ? RTLIB::OGT_F64 :
-          (VT == MVT::f128) ? RTLIB::OGT_F128 : RTLIB::OGT_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::OGT_F32
+          : (VT == MVT::f64)  ? RTLIB::OGT_F64
+          : (VT == MVT::f128) ? RTLIB::OGT_F128
+                              : RTLIB::OGT_PPCF128;
     break;
   case ISD::SETO:
     ShouldInvertCC = true;
     [[fallthrough]];
   case ISD::SETUO:
-    LC1 = (VT == MVT::f32) ? RTLIB::UO_F32 :
-          (VT == MVT::f64) ? RTLIB::UO_F64 :
-          (VT == MVT::f128) ? RTLIB::UO_F128 : RTLIB::UO_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::UO_F32
+          : (VT == MVT::f64)  ? RTLIB::UO_F64
+          : (VT == MVT::f128) ? RTLIB::UO_F128
+                              : RTLIB::UO_PPCF128;
     break;
   case ISD::SETONE:
     // SETONE = O && UNE
     ShouldInvertCC = true;
     [[fallthrough]];
   case ISD::SETUEQ:
-    LC1 = (VT == MVT::f32) ? RTLIB::UO_F32 :
-          (VT == MVT::f64) ? RTLIB::UO_F64 :
-          (VT == MVT::f128) ? RTLIB::UO_F128 : RTLIB::UO_PPCF128;
-    LC2 = (VT == MVT::f32) ? RTLIB::OEQ_F32 :
-          (VT == MVT::f64) ? RTLIB::OEQ_F64 :
-          (VT == MVT::f128) ? RTLIB::OEQ_F128 : RTLIB::OEQ_PPCF128;
+    LC1 = (VT == MVT::f32)    ? RTLIB::UO_F32
+          : (VT == MVT::f64)  ? RTLIB::UO_F64
+          : (VT == MVT::f128) ? RTLIB::UO_F128
+                              : RTLIB::UO_PPCF128;
+    LC2 = (VT == MVT::f32)    ? RTLIB::OEQ_F32
+          : (VT == MVT::f64)  ? RTLIB::OEQ_F64
+          : (VT == MVT::f128) ? RTLIB::OEQ_F128
+                              : RTLIB::OEQ_PPCF128;
     break;
   default:
     // Invert CC for unordered comparisons
     ShouldInvertCC = true;
     switch (CCCode) {
     case ISD::SETULT:
-      LC1 = (VT == MVT::f32) ? RTLIB::OGE_F32 :
-            (VT == MVT::f64) ? RTLIB::OGE_F64 :
-            (VT == MVT::f128) ? RTLIB::OGE_F128 : RTLIB::OGE_PPCF128;
+      LC1 = (VT == MVT::f32)    ? RTLIB::OGE_F32
+            : (VT == MVT::f64)  ? RTLIB::OGE_F64
+            : (VT == MVT::f128) ? RTLIB::OGE_F128
+                                : RTLIB::OGE_PPCF128;
       break;
     case ISD::SETULE:
-      LC1 = (VT == MVT::f32) ? RTLIB::OGT_F32 :
-            (VT == MVT::f64) ? RTLIB::OGT_F64 :
-            (VT == MVT::f128) ? RTLIB::OGT_F128 : RTLIB::OGT_PPCF128;
+      LC1 = (VT == MVT::f32)    ? RTLIB::OGT_F32
+            : (VT == MVT::f64)  ? RTLIB::OGT_F64
+            : (VT == MVT::f128) ? RTLIB::OGT_F128
+                                : RTLIB::OGT_PPCF128;
       break;
     case ISD::SETUGT:
-      LC1 = (VT == MVT::f32) ? RTLIB::OLE_F32 :
-            (VT == MVT::f64) ? RTLIB::OLE_F64 :
-            (VT == MVT::f128) ? RTLIB::OLE_F128 : RTLIB::OLE_PPCF128;
+      LC1 = (VT == MVT::f32)    ? RTLIB::OLE_F32
+            : (VT == MVT::f64)  ? RTLIB::OLE_F64
+            : (VT == MVT::f128) ? RTLIB::OLE_F128
+                                : RTLIB::OLE_PPCF128;
       break;
     case ISD::SETUGE:
-      LC1 = (VT == MVT::f32) ? RTLIB::OLT_F32 :
-            (VT == MVT::f64) ? RTLIB::OLT_F64 :
-            (VT == MVT::f128) ? RTLIB::OLT_F128 : RTLIB::OLT_PPCF128;
+      LC1 = (VT == MVT::f32)    ? RTLIB::OLT_F32
+            : (VT == MVT::f64)  ? RTLIB::OLT_F64
+            : (VT == MVT::f128) ? RTLIB::OLT_F128
+                                : RTLIB::OLT_PPCF128;
       break;
-    default: llvm_unreachable("Do not know how to soften this setcc!");
+    default:
+      llvm_unreachable("Do not know how to soften this setcc!");
     }
   }
 
@@ -403,8 +414,7 @@ void TargetLowering::softenSetCCOperands(SelectionDAG &DAG, EVT VT,
   EVT RetVT = getCmpLibcallReturnType();
   SDValue Ops[2] = {NewLHS, NewRHS};
   TargetLowering::MakeLibCallOptions CallOptions;
-  EVT OpsVT[2] = { OldLHS.getValueType(),
-                   OldRHS.getValueType() };
+  EVT OpsVT[2] = {OldLHS.getValueType(), OldRHS.getValueType()};
   CallOptions.setTypeListBeforeSoften(OpsVT, RetVT, true);
   auto Call = makeLibCall(DAG, LC1, RetVT, Ops, CallOptions, dl, Chain);
   NewLHS = Call.first;
@@ -466,9 +476,8 @@ SDValue TargetLowering::getPICJumpTableRelocBase(SDValue Table,
 
 /// This returns the relocation base for the given PIC jumptable, the same as
 /// getPICJumpTableRelocBase, but as an MCExpr.
-const MCExpr *
-TargetLowering::getPICJumpTableRelocBaseExpr(const MachineFunction *MF,
-                                             unsigned JTI,MCContext &Ctx) const{
+const MCExpr *TargetLowering::getPICJumpTableRelocBaseExpr(
+    const MachineFunction *MF, unsigned JTI, MCContext &Ctx) const {
   // The normal PIC reloc base is the label at the start of the jump table.
   return MCSymbolRefExpr::create(MF->getJTISymbol(JTI, Ctx), Ctx);
 }
@@ -484,8 +493,7 @@ SDValue TargetLowering::expandIndirectJTBranch(const SDLoc &dl, SDValue Value,
   return DAG.getNode(ISD::BRIND, dl, MVT::Other, Chain, Addr);
 }
 
-bool
-TargetLowering::isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const {
+bool TargetLowering::isOffsetFoldingLegal(const GlobalAddressSDNode *GA) const {
   const TargetMachine &TM = getTargetMachine();
   const GlobalValue *GV = GA->getGlobal();
 
@@ -940,8 +948,7 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedVectorElts(
 static SDValue combineShiftToAVG(SDValue Op, SelectionDAG &DAG,
                                  const TargetLowering &TLI,
                                  const APInt &DemandedBits,
-                                 const APInt &DemandedElts,
-                                 unsigned Depth) {
+                                 const APInt &DemandedElts, unsigned Depth) {
   assert((Op.getOpcode() == ISD::SRL || Op.getOpcode() == ISD::SRA) &&
          "SRL or SRA node is required here!");
   // Is the right shift using an immediate value of 1?
@@ -980,11 +987,12 @@ static SDValue combineShiftToAVG(SDValue Op, SelectionDAG &DAG,
     }
     return false;
   };
-  bool IsCeil =
-      (ExtOpA.getOpcode() == ISD::ADD &&
-       MatchOperands(ExtOpA.getOperand(0), ExtOpA.getOperand(1), ExtOpB, ExtOpA)) ||
-      (ExtOpB.getOpcode() == ISD::ADD &&
-       MatchOperands(ExtOpB.getOperand(0), ExtOpB.getOperand(1), ExtOpA, ExtOpB));
+  bool IsCeil = (ExtOpA.getOpcode() == ISD::ADD &&
+                 MatchOperands(ExtOpA.getOperand(0), ExtOpA.getOperand(1),
+                               ExtOpB, ExtOpA)) ||
+                (ExtOpB.getOpcode() == ISD::ADD &&
+                 MatchOperands(ExtOpB.getOperand(0), ExtOpB.getOperand(1),
+                               ExtOpA, ExtOpB));
 
   // If the shift is signed (sra):
   //  - Needs >= 2 sign bit for both operands.
@@ -2103,8 +2111,8 @@ bool TargetLowering::SimplifyDemandedBits(
     // For pow-2 bitwidths we only demand the bottom modulo amt bits.
     if (isPowerOf2_32(BitWidth)) {
       APInt DemandedAmtBits(Op2.getScalarValueSizeInBits(), BitWidth - 1);
-      if (SimplifyDemandedBits(Op2, DemandedAmtBits, DemandedElts,
-                               Known2, TLO, Depth + 1))
+      if (SimplifyDemandedBits(Op2, DemandedAmtBits, DemandedElts, Known2, TLO,
+                               Depth + 1))
         return true;
     }
     break;
@@ -2137,12 +2145,14 @@ bool TargetLowering::SimplifyDemandedBits(
       // See if we don't demand either half of the rotated bits.
       if ((!TLO.LegalOperations() || isOperationLegal(ISD::SHL, VT)) &&
           DemandedBits.countr_zero() >= (IsROTL ? Amt : RevAmt)) {
-        Op1 = TLO.DAG.getConstant(IsROTL ? Amt : RevAmt, dl, Op1.getValueType());
+        Op1 =
+            TLO.DAG.getConstant(IsROTL ? Amt : RevAmt, dl, Op1.getValueType());
         return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::SHL, dl, VT, Op0, Op1));
       }
       if ((!TLO.LegalOperations() || isOperationLegal(ISD::SRL, VT)) &&
           DemandedBits.countl_zero() >= (IsROTL ? RevAmt : Amt)) {
-        Op1 = TLO.DAG.getConstant(IsROTL ? RevAmt : Amt, dl, Op1.getValueType());
+        Op1 =
+            TLO.DAG.getConstant(IsROTL ? RevAmt : Amt, dl, Op1.getValueType());
         return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::SRL, dl, VT, Op0, Op1));
       }
     }
@@ -2259,8 +2269,8 @@ bool TargetLowering::SimplifyDemandedBits(
     // op legalization.
     // FIXME: Limit to scalars for now.
     if (DemandedBits.isOne() && !TLO.LegalOps && !VT.isVector())
-      return TLO.CombineTo(Op, TLO.DAG.getNode(ISD::PARITY, dl, VT,
-                                               Op.getOperand(0)));
+      return TLO.CombineTo(
+          Op, TLO.DAG.getNode(ISD::PARITY, dl, VT, Op.getOperand(0)));
 
     Known = TLO.DAG.computeKnownBits(Op, DemandedElts, Depth);
     break;
@@ -2345,7 +2355,8 @@ bool TargetLowering::SimplifyDemandedBits(
     SDValue Src = Op.getOperand(0);
     EVT SrcVT = Src.getValueType();
     unsigned InBits = SrcVT.getScalarSizeInBits();
-    unsigned InElts = SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
+    unsigned InElts =
+        SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
     bool IsVecInReg = Op.getOpcode() == ISD::ZERO_EXTEND_VECTOR_INREG;
 
     // If none of the top bits are demanded, convert this into an any_extend.
@@ -2385,7 +2396,8 @@ bool TargetLowering::SimplifyDemandedBits(
     SDValue Src = Op.getOperand(0);
     EVT SrcVT = Src.getValueType();
     unsigned InBits = SrcVT.getScalarSizeInBits();
-    unsigned InElts = SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
+    unsigned InElts =
+        SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
     bool IsVecInReg = Op.getOpcode() == ISD::SIGN_EXTEND_VECTOR_INREG;
 
     // If none of the top bits are demanded, convert this into an any_extend.
@@ -2440,7 +2452,8 @@ bool TargetLowering::SimplifyDemandedBits(
     SDValue Src = Op.getOperand(0);
     EVT SrcVT = Src.getValueType();
     unsigned InBits = SrcVT.getScalarSizeInBits();
-    unsigned InElts = SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
+    unsigned InElts =
+        SrcVT.isFixedLengthVector() ? SrcVT.getVectorNumElements() : 1;
     bool IsVecInReg = Op.getOpcode() == ISD::ANY_EXTEND_VECTOR_INREG;
 
     // If we only need the bottom element then we can just bitcast.
@@ -2787,7 +2800,8 @@ bool TargetLowering::SimplifyDemandedBits(
       return 0;
     };
 
-    auto foldMul = [&](ISD::NodeType NT, SDValue X, SDValue Y, unsigned ShlAmt) {
+    auto foldMul = [&](ISD::NodeType NT, SDValue X, SDValue Y,
+                       unsigned ShlAmt) {
       EVT ShiftAmtTy = getShiftAmountTy(VT, TLO.DAG.getDataLayout());
       SDValue ShlAmtC = TLO.DAG.getConstant(ShlAmt, dl, ShiftAmtTy);
       SDValue Shl = TLO.DAG.getNode(ISD::SHL, dl, VT, X, ShlAmtC);
@@ -3622,14 +3636,14 @@ void TargetLowering::computeKnownBitsForTargetInstr(
 }
 
 void TargetLowering::computeKnownBitsForFrameIndex(
-  const int FrameIdx, KnownBits &Known, const MachineFunction &MF) const {
+    const int FrameIdx, KnownBits &Known, const MachineFunction &MF) const {
   // The low bits are known zero if the pointer is aligned.
   Known.Zero.setLowBits(Log2(MF.getFrameInfo().getObjectAlign(FrameIdx)));
 }
 
 Align TargetLowering::computeKnownAlignForTargetInstr(
-  GISelKnownBits &Analysis, Register R, const MachineRegisterInfo &MRI,
-  unsigned Depth) const {
+    GISelKnownBits &Analysis, Register R, const MachineRegisterInfo &MRI,
+    unsigned Depth) const {
   return Align(1);
 }
 
@@ -3649,8 +3663,8 @@ unsigned TargetLowering::ComputeNumSignBitsForTargetNode(SDValue Op,
 }
 
 unsigned TargetLowering::computeNumSignBitsForTargetInstr(
-  GISelKnownBits &Analysis, Register R, const APInt &DemandedElts,
-  const MachineRegisterInfo &MRI, unsigned Depth) const {
+    GISelKnownBits &Analysis, Register R, const APInt &DemandedElts,
+    const MachineRegisterInfo &MRI, unsigned Depth) const {
   return 1;
 }
 
@@ -3692,10 +3706,10 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBitsForTargetNode(
   return SDValue();
 }
 
-SDValue
-TargetLowering::buildLegalVectorShuffle(EVT VT, const SDLoc &DL, SDValue N0,
-                                        SDValue N1, MutableArrayRef<int> Mask,
-                                        SelectionDAG &DAG) const {
+SDValue TargetLowering::buildLegalVectorShuffle(EVT VT, const SDLoc &DL,
+                                                SDValue N0, SDValue N1,
+                                                MutableArrayRef<int> Mask,
+                                                SelectionDAG &DAG) const {
   bool LegalMask = isShuffleMaskLegal(Mask, VT);
   if (!LegalMask) {
     std::swap(N0, N1);
@@ -3709,7 +3723,7 @@ TargetLowering::buildLegalVectorShuffle(EVT VT, const SDLoc &DL, SDValue N0,
   return DAG.getVectorShuffle(VT, DL, N0, N1, Mask);
 }
 
-const Constant *TargetLowering::getTargetConstantFromLoad(LoadSDNode*) const {
+const Constant *TargetLowering::getTargetConstantFromLoad(LoadSDNode *) const {
   return nullptr;
 }
 
@@ -4138,8 +4152,8 @@ SDValue TargetLowering::foldSetCCWithBinOp(EVT VT, SDValue N0, SDValue N1,
     return SDValue();
 
   // (X - Y) == Y --> X == Y << 1
-  EVT ShiftVT = getShiftAmountTy(OpVT, DAG.getDataLayout(),
-                                 !DCI.isBeforeLegalize());
+  EVT ShiftVT =
+      getShiftAmountTy(OpVT, DAG.getDataLayout(), !DCI.isBeforeLegalize());
   SDValue One = DAG.getConstant(1, DL, ShiftVT);
   SDValue YShl1 = DAG.getNode(ISD::SHL, DL, N1.getValueType(), Y, One);
   if (!DCI.isCalledByLegalizer())
@@ -4155,7 +4169,8 @@ static SDValue simplifySetCCWithCTPOP(const TargetLowering &TLI, EVT VT,
   // FIXME: Add vector support? Need to be careful with setcc result type below.
   SDValue CTPOP = N0;
   if (N0.getOpcode() == ISD::TRUNCATE && N0.hasOneUse() && !VT.isVector() &&
-      N0.getScalarValueSizeInBits() > Log2_32(N0.getOperand(0).getScalarValueSizeInBits()))
+      N0.getScalarValueSizeInBits() >
+          Log2_32(N0.getOperand(0).getScalarValueSizeInBits()))
     CTPOP = N0.getOperand(0);
 
   if (CTPOP.getOpcode() != ISD::CTPOP || !CTPOP.hasOneUse())
@@ -4429,8 +4444,8 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
 
     // (zext x) == C --> x == (trunc C)
     // (sext x) == C --> x == (trunc C)
-    if ((Cond == ISD::SETEQ || Cond == ISD::SETNE) &&
-        DCI.isBeforeLegalize() && N0->hasOneUse()) {
+    if ((Cond == ISD::SETEQ || Cond == ISD::SETNE) && DCI.isBeforeLegalize() &&
+        N0->hasOneUse()) {
       unsigned MinBits = N0.getValueSizeInBits();
       SDValue PreExt;
       bool Signed = false;
@@ -4441,7 +4456,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
       } else if (N0->getOpcode() == ISD::AND) {
         // DAGCombine turns costly ZExts into ANDs
         if (auto *C = dyn_cast<ConstantSDNode>(N0->getOperand(1)))
-          if ((C->getAPIntValue()+1).isPowerOf2()) {
+          if ((C->getAPIntValue() + 1).isPowerOf2()) {
             MinBits = C->getAPIntValue().countr_one();
             PreExt = N0->getOperand(0);
           }
@@ -4466,9 +4481,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
       unsigned ReqdBits = Signed ? C1.getSignificantBits() : C1.getActiveBits();
 
       // Make sure we're not losing bits from the constant.
-      if (MinBits > 0 &&
-          MinBits < C1.getBitWidth() &&
-          MinBits >= ReqdBits) {
+      if (MinBits > 0 && MinBits < C1.getBitWidth() && MinBits >= ReqdBits) {
         EVT MinVT = EVT::getIntegerVT(*DAG.getContext(), MinBits);
         if (isTypeDesirableForOp(ISD::SETCC, MinVT)) {
           // Will get folded away.
@@ -4508,8 +4521,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
               cast<CondCodeSDNode>(TopSetCC.getOperand(2))->get(),
               TopSetCC.getOperand(0).getValueType());
           return DAG.getSetCC(dl, VT, TopSetCC.getOperand(0),
-                                      TopSetCC.getOperand(1),
-                                      InvCond);
+                              TopSetCC.getOperand(1), InvCond);
         }
       }
     }
@@ -4517,10 +4529,8 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
     // If the LHS is '(and load, const)', the RHS is 0, the test is for
     // equality or unsigned, and all 1 bits of the const are in the same
     // partial word, see if we can shorten the load.
-    if (DCI.isBeforeLegalize() &&
-        !ISD::isSignedIntSetCC(Cond) &&
-        N0.getOpcode() == ISD::AND && C1 == 0 &&
-        N0.getNode()->hasOneUse() &&
+    if (DCI.isBeforeLegalize() && !ISD::isSignedIntSetCC(Cond) &&
+        N0.getOpcode() == ISD::AND && C1 == 0 && N0.getNode()->hasOneUse() &&
         isa<LoadSDNode>(N0.getOperand(0)) &&
         N0.getOperand(0).getNode()->hasOneUse() &&
         isa<ConstantSDNode>(N0.getOperand(1))) {
@@ -4535,15 +4545,15 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
         if (Lod->getExtensionType() != ISD::NON_EXTLOAD)
           origWidth = Lod->getMemoryVT().getSizeInBits();
         const APInt &Mask = N0.getConstantOperandAPInt(1);
-        for (unsigned width = origWidth / 2; width>=8; width /= 2) {
+        for (unsigned width = origWidth / 2; width >= 8; width /= 2) {
           APInt newMask = APInt::getLowBitsSet(maskWidth, width);
-          for (unsigned offset=0; offset<origWidth/width; offset++) {
+          for (unsigned offset = 0; offset < origWidth / width; offset++) {
             if (Mask.isSubsetOf(newMask)) {
               if (Layout.isLittleEndian())
-                bestOffset = (uint64_t)offset * (width/8);
+                bestOffset = (uint64_t)offset * (width / 8);
               else
-                bestOffset = (origWidth/width - offset - 1) * (width/8);
-              bestMask = Mask.lshr(offset * (width/8) * 8);
+                bestOffset = (origWidth / width - offset - 1) * (width / 8);
+              bestMask = Mask.lshr(offset * (width / 8) * 8);
               bestWidth = width;
               break;
             }
@@ -4563,11 +4573,12 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
               DAG.getLoad(newVT, dl, Lod->getChain(), Ptr,
                           Lod->getPointerInfo().getWithOffset(bestOffset),
                           Lod->getOriginalAlign());
-          return DAG.getSetCC(dl, VT,
-                              DAG.getNode(ISD::AND, dl, newVT, NewLoad,
-                                      DAG.getConstant(bestMask.trunc(bestWidth),
-                                                      dl, newVT)),
-                              DAG.getConstant(0LL, dl, newVT), Cond);
+          return DAG.getSetCC(
+              dl, VT,
+              DAG.getNode(
+                  ISD::AND, dl, newVT, NewLoad,
+                  DAG.getConstant(bestMask.trunc(bestWidth), dl, newVT)),
+              DAG.getConstant(0LL, dl, newVT), Cond);
         }
       }
     }
@@ -4617,8 +4628,8 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
           EVT NewSetCCVT = getSetCCResultType(Layout, *DAG.getContext(), newVT);
           SDValue NewConst = DAG.getConstant(C1.trunc(InSize), dl, newVT);
 
-          SDValue NewSetCC = DAG.getSetCC(dl, NewSetCCVT, N0.getOperand(0),
-                                          NewConst, Cond);
+          SDValue NewSetCC =
+              DAG.getSetCC(dl, NewSetCCVT, N0.getOperand(0), NewConst, Cond);
           return DAG.getBoolExtOrTrunc(NewSetCC, dl, VT, N0.getValueType());
         }
         break;
@@ -4653,11 +4664,11 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
     } else if ((N1C->isZero() || N1C->isOne()) &&
                (Cond == ISD::SETEQ || Cond == ISD::SETNE)) {
       // SETCC (SETCC), [0|1], [EQ|NE]  -> SETCC
-      if (N0.getOpcode() == ISD::SETCC &&
-          isTypeLegal(VT) && VT.bitsLE(N0.getValueType()) &&
+      if (N0.getOpcode() == ISD::SETCC && isTypeLegal(VT) &&
+          VT.bitsLE(N0.getValueType()) &&
           (N0.getValueType() == MVT::i1 ||
            getBooleanContents(N0.getOperand(0).getValueType()) ==
-                       ZeroOrOneBooleanContent)) {
+               ZeroOrOneBooleanContent)) {
         bool TrueWhenTrue = (Cond == ISD::SETEQ) ^ (!N1C->isOne());
         if (TrueWhenTrue)
           return DAG.getNode(ISD::TRUNCATE, dl, VT, N0);
@@ -4677,20 +4688,18 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
         // If this is (X^1) == 0/1, swap the RHS and eliminate the xor.  We
         // can only do this if the top bits are known zero.
         unsigned BitWidth = N0.getValueSizeInBits();
-        if (DAG.MaskedValueIsZero(N0,
-                                  APInt::getHighBitsSet(BitWidth,
-                                                        BitWidth-1))) {
+        if (DAG.MaskedValueIsZero(
+                N0, APInt::getHighBitsSet(BitWidth, BitWidth - 1))) {
           // Okay, get the un-inverted input value.
           SDValue Val;
           if (N0.getOpcode() == ISD::XOR) {
             Val = N0.getOperand(0);
           } else {
             assert(N0.getOpcode() == ISD::AND &&
-                    N0.getOperand(0).getOpcode() == ISD::XOR);
+                   N0.getOperand(0).getOpcode() == ISD::XOR);
             // ((X^1)&1)^1 -> X & 1
             Val = DAG.getNode(ISD::AND, dl, N0.getValueType(),
-                              N0.getOperand(0).getOperand(0),
-                              N0.getOperand(1));
+                              N0.getOperand(0).getOperand(0), N0.getOperand(1));
           }
 
           return DAG.getSetCC(dl, VT, Val, N1,
@@ -4709,9 +4718,9 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
           // Ensure that the input setccs return an i1 type or 0/1 value.
           if (Op0.getValueType() == MVT::i1 ||
               (getBooleanContents(XorLHS.getOperand(0).getValueType()) ==
-                      ZeroOrOneBooleanContent &&
+                   ZeroOrOneBooleanContent &&
                getBooleanContents(XorRHS.getOperand(0).getValueType()) ==
-                        ZeroOrOneBooleanContent)) {
+                   ZeroOrOneBooleanContent)) {
             // (xor (setcc), (setcc)) == / != 1 -> (setcc) != / == (setcc)
             Cond = (Cond == ISD::SETEQ) ? ISD::SETNE : ISD::SETEQ;
             return DAG.getSetCC(dl, VT, XorLHS, XorRHS, Cond);
@@ -4720,13 +4729,15 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
         if (Op0.getOpcode() == ISD::AND && isOneConstant(Op0.getOperand(1))) {
           // If this is (X&1) == / != 1, normalize it to (X&1) != / == 0.
           if (Op0.getValueType().bitsGT(VT))
-            Op0 = DAG.getNode(ISD::AND, dl, VT,
-                          DAG.getNode(ISD::TRUNCATE, dl, VT, Op0.getOperand(0)),
-                          DAG.getConstant(1, dl, VT));
+            Op0 = DAG.getNode(
+                ISD::AND, dl, VT,
+                DAG.getNode(ISD::TRUNCATE, dl, VT, Op0.getOperand(0)),
+                DAG.getConstant(1, dl, VT));
           else if (Op0.getValueType().bitsLT(VT))
-            Op0 = DAG.getNode(ISD::AND, dl, VT,
-                        DAG.getNode(ISD::ANY_EXTEND, dl, VT, Op0.getOperand(0)),
-                        DAG.getConstant(1, dl, VT));
+            Op0 = DAG.getNode(
+                ISD::AND, dl, VT,
+                DAG.getNode(ISD::ANY_EXTEND, dl, VT, Op0.getOperand(0)),
+                DAG.getConstant(1, dl, VT));
 
           return DAG.getSetCC(dl, VT, Op0,
                               DAG.getConstant(0, dl, Op0.getValueType()),
@@ -4798,8 +4809,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
             (!N1C->isOpaque() || (C.getBitWidth() <= 64 &&
                                   isLegalICmpImmediate(C.getSExtValue())))) {
           return DAG.getSetCC(dl, VT, N0,
-                              DAG.getConstant(C, dl, N1.getValueType()),
-                              NewCC);
+                              DAG.getConstant(C, dl, N1.getValueType()), NewCC);
         }
       }
     }
@@ -4818,8 +4828,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
             (!N1C->isOpaque() || (C.getBitWidth() <= 64 &&
                                   isLegalICmpImmediate(C.getSExtValue())))) {
           return DAG.getSetCC(dl, VT, N0,
-                              DAG.getConstant(C, dl, N1.getValueType()),
-                              NewCC);
+                              DAG.getConstant(C, dl, N1.getValueType()), NewCC);
         }
       }
     }
@@ -4835,7 +4844,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
           return DAG.getSetCC(dl, VT, N0, N1, ISD::SETNE);
 
         // If we have setult X, 1, turn it into seteq X, 0
-        if (C1 == MinVal+1)
+        if (C1 == MinVal + 1)
           return DAG.getSetCC(dl, VT, N0,
                               DAG.getConstant(MinVal, dl, N0.getValueType()),
                               ISD::SETEQ);
@@ -4853,7 +4862,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
           return DAG.getSetCC(dl, VT, N0, N1, ISD::SETNE);
 
         // If we have setugt X, Max-1, turn it into seteq X, Max
-        if (C1 == MaxVal-1)
+        if (C1 == MaxVal - 1)
           return DAG.getSetCC(dl, VT, N0,
                               DAG.getConstant(MaxVal, dl, N0.getValueType()),
                               ISD::SETEQ);
@@ -4937,9 +4946,8 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
       // SETUGE X, SINTMIN -> SETLT X, 0
       if ((Cond == ISD::SETUGT && C1.isMaxSignedValue()) ||
           (Cond == ISD::SETUGE && C1.isMinSignedValue()))
-        return DAG.getSetCC(dl, VT, N0,
-                            DAG.getConstant(0, dl, N1.getValueType()),
-                            ISD::SETLT);
+        return DAG.getSetCC(
+            dl, VT, N0, DAG.getConstant(0, dl, N1.getValueType()), ISD::SETLT);
 
       // SETULT X, SINTMIN  -> SETGT X, -1
       // SETULE X, SINTMAX  -> SETGT X, -1
@@ -4970,7 +4978,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
       if (auto *AndRHS = dyn_cast<ConstantSDNode>(N0.getOperand(1))) {
         EVT ShiftTy =
             getShiftAmountTy(ShValTy, Layout, !DCI.isBeforeLegalize());
-        if (Cond == ISD::SETNE && C1 == 0) {// (X & 8) != 0  -->  (X & 8) >> 3
+        if (Cond == ISD::SETNE && C1 == 0) { // (X & 8) != 0  -->  (X & 8) >> 3
           // Perform the xform if the AND RHS is a single bit.
           unsigned ShCt = AndRHS->getAPIntValue().logBase2();
           if (AndRHS->getAPIntValue().isPowerOf2() &&
@@ -5005,8 +5013,8 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
             unsigned ShiftBits = AndRHSC.countr_zero();
             if (!TLI.shouldAvoidTransformToShift(ShValTy, ShiftBits)) {
               SDValue Shift =
-                DAG.getNode(ISD::SRL, dl, ShValTy, N0.getOperand(0),
-                            DAG.getConstant(ShiftBits, dl, ShiftTy));
+                  DAG.getNode(ISD::SRL, dl, ShValTy, N0.getOperand(0),
+                              DAG.getConstant(ShiftBits, dl, ShiftTy));
               SDValue CmpRHS = DAG.getConstant(C1.lshr(ShiftBits), dl, ShValTy);
               return DAG.getSetCC(dl, VT, Shift, CmpRHS, Cond);
             }
@@ -5073,11 +5081,20 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
         bool IsNegInf = CFP->getValueAPF().isNegative();
         ISD::CondCode NewCond = ISD::SETCC_INVALID;
         switch (Cond) {
-        case ISD::SETOEQ: NewCond = IsNegInf ? ISD::SETOLE : ISD::SETOGE; break;
-        case ISD::SETUEQ: NewCond = IsNegInf ? ISD::SETULE : ISD::SETUGE; break;
-        case ISD::SETUNE: NewCond = IsNegInf ? ISD::SETUGT : ISD::SETULT; break;
-        case ISD::SETONE: NewCond = IsNegInf ? ISD::SETOGT : ISD::SETOLT; break;
-        default: break;
+        case ISD::SETOEQ:
+          NewCond = IsNegInf ? ISD::SETOLE : ISD::SETOGE;
+          break;
+        case ISD::SETUEQ:
+          NewCond = IsNegInf ? ISD::SETULE : ISD::SETUGE;
+          break;
+        case ISD::SETUNE:
+          NewCond = IsNegInf ? ISD::SETUGT : ISD::SETULT;
+          break;
+        case ISD::SETONE:
+          NewCond = IsNegInf ? ISD::SETOGT : ISD::SETOLT;
+          break;
+        default:
+          break;
         }
         if (NewCond != ISD::SETCC_INVALID &&
             isCondCodeLegal(NewCond, N0.getSimpleValueType()))
@@ -5101,8 +5118,7 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
     // Otherwise, we can't fold it.  However, we can simplify it to SETUO/SETO
     // if it is not already.
     ISD::CondCode NewCond = UOF == 0 ? ISD::SETO : ISD::SETUO;
-    if (NewCond != Cond &&
-        (DCI.isBeforeLegalizeOps() ||
+    if (NewCond != Cond && (DCI.isBeforeLegalizeOps() ||
                             isCondCodeLegal(NewCond, N0.getSimpleValueType())))
       return DAG.getSetCC(dl, VT, N0, N1, NewCond);
   }
@@ -5221,14 +5237,15 @@ SDValue TargetLowering::SimplifySetCC(EVT VT, SDValue N0, SDValue N1,
   if (N0.getValueType().getScalarType() == MVT::i1 && foldBooleans) {
     SDValue Temp;
     switch (Cond) {
-    default: llvm_unreachable("Unknown integer setcc!");
-    case ISD::SETEQ:  // X == Y  -> ~(X^Y)
+    default:
+      llvm_unreachable("Unknown integer setcc!");
+    case ISD::SETEQ: // X == Y  -> ~(X^Y)
       Temp = DAG.getNode(ISD::XOR, dl, OpVT, N0, N1);
       N0 = DAG.getNOT(dl, Temp, OpVT);
       if (!DCI.isCalledByLegalizer())
         DCI.AddToWorklist(Temp.getNode());
       break;
-    case ISD::SETNE:  // X != Y   -->  (X^Y)
+    case ISD::SETNE: // X != Y   -->  (X^Y)
       N0 = DAG.getNode(ISD::XOR, dl, OpVT, N0, N1);
       break;
     case ISD::SETGT:  // X >s Y   -->  X == 0 & Y == 1  -->  ~X & Y
@@ -5320,7 +5337,8 @@ TargetLowering::getConstraintType(StringRef Constraint) const {
 
   if (S == 1) {
     switch (Constraint[0]) {
-    default: break;
+    default:
+      break;
     case 'r':
       return C_RegisterClass;
     case 'm': // memory
@@ -5382,15 +5400,17 @@ void TargetLowering::LowerAsmOperandForConstraint(SDValue Op,
                                                   std::vector<SDValue> &Ops,
                                                   SelectionDAG &DAG) const {
 
-  if (Constraint.length() > 1) return;
+  if (Constraint.length() > 1)
+    return;
 
   char ConstraintLetter = Constraint[0];
   switch (ConstraintLetter) {
-  default: break;
-  case 'X':    // Allows any operand
-  case 'i':    // Simple Integer or Relocatable Constant
-  case 'n':    // Simple Integer
-  case 's': {  // Relocatable Constant
+  default:
+    break;
+  case 'X':   // Allows any operand
+  case 'i':   // Simple Integer or Relocatable Constant
+  case 'n':   // Simple Integer
+  case 's': { // Relocatable Constant
 
     ConstantSDNode *C;
     uint64_t Offset = 0;
@@ -5530,8 +5550,8 @@ TargetLowering::ParseConstraints(const DataLayout &DL,
 
   // Do a prepass over the constraints, canonicalizing them, and building up the
   // ConstraintOperands list.
-  unsigned ArgNo = 0; // ArgNo - The argument of the CallInst.
-  unsigned ResNo = 0; // ResNo - The result number of the next output.
+  unsigned ArgNo = 0;   // ArgNo - The argument of the CallInst.
+  unsigned ResNo = 0;   // ResNo - The result number of the next output.
   unsigned LabelNo = 0; // LabelNo - CallBr indirect dest number.
 
   for (InlineAsm::ConstraintInfo &CI : IA->ParseConstraints()) {
@@ -5595,7 +5615,8 @@ TargetLowering::ParseConstraints(const DataLayout &DL,
       if (!OpTy->isSingleValueType() && OpTy->isSized()) {
         unsigned BitSize = DL.getTypeSizeInBits(OpTy);
         switch (BitSize) {
-        default: break;
+        default:
+          break;
         case 1:
         case 8:
         case 16:
@@ -5723,8 +5744,8 @@ static unsigned getConstraintGenerality(TargetLowering::ConstraintType CT) {
 /// This object must already have been set up with the operand type
 /// and the current alternative constraint selected.
 TargetLowering::ConstraintWeight
-  TargetLowering::getMultipleConstraintMatchWeight(
-    AsmOperandInfo &info, int maIndex) const {
+TargetLowering::getMultipleConstraintMatchWeight(AsmOperandInfo &info,
+                                                 int maIndex) const {
   InlineAsm::ConstraintCodeVector *rCodes;
   if (maIndex >= (int)info.multipleAlternatives.size())
     rCodes = &info.Codes;
@@ -5747,44 +5768,44 @@ TargetLowering::ConstraintWeight
 /// This object must already have been set up with the operand type
 /// and the current alternative constraint selected.
 TargetLowering::ConstraintWeight
-  TargetLowering::getSingleConstraintMatchWeight(
-    AsmOperandInfo &info, const char *constraint) const {
+TargetLowering::getSingleConstraintMatchWeight(AsmOperandInfo &info,
+                                               const char *constraint) const {
   ConstraintWeight weight = CW_Invalid;
   Value *CallOperandVal = info.CallOperandVal;
-    // If we don't have a value, we can't do a match,
-    // but allow it at the lowest weight.
+  // If we don't have a value, we can't do a match,
+  // but allow it at the lowest weight.
   if (!CallOperandVal)
     return CW_Default;
   // Look at the constraint type.
   switch (*constraint) {
-    case 'i': // immediate integer.
-    case 'n': // immediate integer with a known value.
-      if (isa<ConstantInt>(CallOperandVal))
-        weight = CW_Constant;
-      break;
-    case 's': // non-explicit intregal immediate.
-      if (isa<GlobalValue>(CallOperandVal))
-        weight = CW_Constant;
-      break;
-    case 'E': // immediate float if host format.
-    case 'F': // immediate float.
-      if (isa<ConstantFP>(CallOperandVal))
-        weight = CW_Constant;
-      break;
-    case '<': // memory operand with autodecrement.
-    case '>': // memory operand with autoincrement.
-    case 'm': // memory operand.
-    case 'o': // offsettable memory operand
-    case 'V': // non-offsettable memory operand
-      weight = CW_Memory;
-      break;
-    case 'r': // general register.
-    case 'g': // general register, memory operand or immediate integer.
-              // note: Clang converts "g" to "imr".
-      if (CallOperandVal->getType()->isIntegerTy())
-        weight = CW_Register;
-      break;
-    case 'X': // any operand.
+  case 'i': // immediate integer.
+  case 'n': // immediate integer with a known value.
+    if (isa<ConstantInt>(CallOperandVal))
+      weight = CW_Constant;
+    break;
+  case 's': // non-explicit integral immediate.
+    if (isa<GlobalValue>(CallOperandVal))
+      weight = CW_Constant;
+    break;
+  case 'E': // immediate float if host format.
+  case 'F': // immediate float.
+    if (isa<ConstantFP>(CallOperandVal))
+      weight = CW_Constant;
+    break;
+  case '<': // memory operand with autodecrement.
+  case '>': // memory operand with autoincrement.
+  case 'm': // memory operand.
+  case 'o': // offsettable memory operand
+  case 'V': // non-offsettable memory operand
+    weight = CW_Memory;
+    break;
+  case 'r': // general register.
+  case 'g': // general register, memory operand or immediate integer.
+            // note: Clang converts "g" to "imr".
+    if (CallOperandVal->getType()->isIntegerTy())
+      weight = CW_Register;
+    break;
+  case 'X': // any operand.
   default:
     weight = CW_Default;
     break;
@@ -5813,8 +5834,8 @@ TargetLowering::ConstraintWeight
 ///     'm' over 'r', for example.
 ///
 static void ChooseConstraint(TargetLowering::AsmOperandInfo &OpInfo,
-                             const TargetLowering &TLI,
-                             SDValue Op, SelectionDAG *DAG) {
+                             const TargetLowering &TLI, SDValue Op,
+                             SelectionDAG *DAG) {
   assert(OpInfo.Codes.size() > 1 && "Doesn't have multiple constraint options");
   unsigned BestIdx = 0;
   TargetLowering::ConstraintType BestType = TargetLowering::C_Unknown;
@@ -5823,7 +5844,7 @@ static void ChooseConstraint(TargetLowering::AsmOperandInfo &OpInfo,
   // Loop over the options, keeping track of the most general one.
   for (unsigned i = 0, e = OpInfo.Codes.size(); i != e; ++i) {
     TargetLowering::ConstraintType CType =
-      TLI.getConstraintType(OpInfo.Codes[i]);
+        TLI.getConstraintType(OpInfo.Codes[i]);
 
     // Indirect 'other' or 'immediate' constraints are not allowed.
     if (OpInfo.isIndirect && !(CType == TargetLowering::C_Memory ||
@@ -5836,12 +5857,12 @@ static void ChooseConstraint(TargetLowering::AsmOperandInfo &OpInfo,
     // the operand is an integer in the range [0..31] we want to use I (saving a
     // load of a register), otherwise we must use 'r'.
     if ((CType == TargetLowering::C_Other ||
-         CType == TargetLowering::C_Immediate) && Op.getNode()) {
+         CType == TargetLowering::C_Immediate) &&
+        Op.getNode()) {
       assert(OpInfo.Codes[i].size() == 1 &&
              "Unhandled multi-letter 'other' constraint");
       std::vector<SDValue> ResultOps;
-      TLI.LowerAsmOperandForConstraint(Op, OpInfo.Codes[i],
-                                       ResultOps, *DAG);
+      TLI.LowerAsmOperandForConstraint(Op, OpInfo.Codes[i], ResultOps, *DAG);
       if (!ResultOps.empty()) {
         BestType = CType;
         BestIdx = i;
@@ -5869,8 +5890,7 @@ static void ChooseConstraint(TargetLowering::AsmOperandInfo &OpInfo,
 
 /// Determines the constraint code and constraint type to use for the specific
 /// AsmOperandInfo, setting OpInfo.ConstraintCode and OpInfo.ConstraintType.
-void TargetLowering::ComputeConstraintToUse(AsmOperandInfo &OpInfo,
-                                            SDValue Op,
+void TargetLowering::ComputeConstraintToUse(AsmOperandInfo &OpInfo, SDValue Op,
                                             SelectionDAG *DAG) const {
   assert(!OpInfo.Codes.empty() && "Must have at least one constraint");
 
@@ -5974,7 +5994,8 @@ static SDValue BuildExactSDIV(const TargetLowering &TLI, SDNode *N,
   return DAG.getNode(ISD::MUL, dl, VT, Res, Factor);
 }
 
-SDValue TargetLowering::BuildSDIVPow2(SDNode *N, const APInt &Divisor,
+SDValue
+TargetLowering::BuildSDIVPow2(SDNode *N, const APInt &Divisor,
                               SelectionDAG &DAG,
                               SmallVectorImpl<SDNode *> &Created) const {
   AttributeList Attr = DAG.getMachineFunction().getFunction().getAttributes();
@@ -6039,7 +6060,8 @@ SDValue TargetLowering::BuildSDIV(SDNode *N, SelectionDAG &DAG,
       return false;
 
     const APInt &Divisor = C->getAPIntValue();
-    SignedDivisionByConstantInfo magics = SignedDivisionByConstantInfo::get(Divisor);
+    SignedDivisionByConstantInfo magics =
+        SignedDivisionByConstantInfo::get(Divisor);
     int NumeratorFactor = 0;
     int ShiftMask = -1;
 
@@ -6212,7 +6234,7 @@ SDValue TargetLowering::BuildUDIV(SDNode *N, SelectionDAG &DAG,
   auto BuildUDIVPattern = [&](ConstantSDNode *C) {
     if (C->isZero())
       return false;
-    const APInt& Divisor = C->getAPIntValue();
+    const APInt &Divisor = C->getAPIntValue();
 
     SDValue PreShift, MagicFactor, NPQFactor, PostShift;
 
@@ -6231,8 +6253,7 @@ SDValue TargetLowering::BuildUDIV(SDNode *N, SelectionDAG &DAG,
              "We shouldn't generate an undefined shift!");
       assert(magics.PostShift < Divisor.getBitWidth() &&
              "We shouldn't generate an undefined shift!");
-      assert((!magics.IsAdd || magics.PreShift == 0) &&
-             "Unexpected pre-shift");
+      assert((!magics.IsAdd || magics.PreShift == 0) && "Unexpected pre-shift");
       PreShift = DAG.getConstant(magics.PreShift, dl, ShSVT);
       PostShift = DAG.getConstant(magics.PostShift, dl, ShSVT);
       NPQFactor = DAG.getConstant(
@@ -6901,8 +6922,8 @@ TargetLowering::prepareSREMEqFold(EVT SETCCVT, SDValue REMNode,
   return Blended;
 }
 
-bool TargetLowering::
-verifyReturnAddressArgumentIsConstant(SDValue Op, SelectionDAG &DAG) const {
+bool TargetLowering::verifyReturnAddressArgumentIsConstant(
+    SDValue Op, SelectionDAG &DAG) const {
   if (!isa<ConstantSDNode>(Op.getOperand(0))) {
     DAG.getContext()->emitError("argument to '__builtin_return_address' must "
                                 "be a constant integer");
@@ -7947,8 +7968,7 @@ bool TargetLowering::expandFP_TO_SINT(SDNode *Node, SDValue &Result,
 }
 
 bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
-                                      SDValue &Chain,
-                                      SelectionDAG &DAG) const {
+                                      SDValue &Chain, SelectionDAG &DAG) const {
   SDLoc dl(SDValue(Node, 0));
   unsigned OpNo = Node->isStrictFPOpcode() ? 1 : 0;
   SDValue Src = Node->getOperand(OpNo);
@@ -7961,8 +7981,8 @@ bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
       getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), DstVT);
 
   // Only expand vector types if we have the appropriate vector bit operations.
-  unsigned SIntOpcode = Node->isStrictFPOpcode() ? ISD::STRICT_FP_TO_SINT :
-                                                   ISD::FP_TO_SINT;
+  unsigned SIntOpcode =
+      Node->isStrictFPOpcode() ? ISD::STRICT_FP_TO_SINT : ISD::FP_TO_SINT;
   if (DstVT.isVector() && (!isOperationLegalOrCustom(SIntOpcode, DstVT) ||
                            !isOperationLegalOrCustomOrPromote(ISD::XOR, SrcVT)))
     return false;
@@ -7976,8 +7996,8 @@ bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
   if (APFloat::opOverflow &
       APF.convertFromAPInt(SignMask, false, APFloat::rmNearestTiesToEven)) {
     if (Node->isStrictFPOpcode()) {
-      Result = DAG.getNode(ISD::STRICT_FP_TO_SINT, dl, { DstVT, MVT::Other },
-                           { Node->getOperand(0), Src });
+      Result = DAG.getNode(ISD::STRICT_FP_TO_SINT, dl, {DstVT, MVT::Other},
+                           {Node->getOperand(0), Src});
       Chain = Result.getValue(1);
     } else
       Result = DAG.getNode(ISD::FP_TO_SINT, dl, DstVT, Src);
@@ -7993,8 +8013,8 @@ bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
   SDValue Sel;
 
   if (Node->isStrictFPOpcode()) {
-    Sel = DAG.getSetCC(dl, SetCCVT, Src, Cst, ISD::SETLT,
-                       Node->getOperand(0), /*IsSignaling*/ true);
+    Sel = DAG.getSetCC(dl, SetCCVT, Src, Cst, ISD::SETLT, Node->getOperand(0),
+                       /*IsSignaling*/ true);
     Chain = Sel.getValue(1);
   } else {
     Sel = DAG.getSetCC(dl, SetCCVT, Src, Cst, ISD::SETLT);
@@ -8012,18 +8032,18 @@ bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
     // Result = fp_to_sint(Src - FltOfs) ^ IntOfs
 
     // TODO: Should any fast-math-flags be set for the FSUB?
-    SDValue FltOfs = DAG.getSelect(dl, SrcVT, Sel,
-                                   DAG.getConstantFP(0.0, dl, SrcVT), Cst);
+    SDValue FltOfs =
+        DAG.getSelect(dl, SrcVT, Sel, DAG.getConstantFP(0.0, dl, SrcVT), Cst);
     Sel = DAG.getBoolExtOrTrunc(Sel, dl, DstSetCCVT, DstVT);
-    SDValue IntOfs = DAG.getSelect(dl, DstVT, Sel,
-                                   DAG.getConstant(0, dl, DstVT),
-                                   DAG.getConstant(SignMask, dl, DstVT));
+    SDValue IntOfs =
+        DAG.getSelect(dl, DstVT, Sel, DAG.getConstant(0, dl, DstVT),
+                      DAG.getConstant(SignMask, dl, DstVT));
     SDValue SInt;
     if (Node->isStrictFPOpcode()) {
-      SDValue Val = DAG.getNode(ISD::STRICT_FSUB, dl, { SrcVT, MVT::Other },
-                                { Chain, Src, FltOfs });
-      SInt = DAG.getNode(ISD::STRICT_FP_TO_SINT, dl, { DstVT, MVT::Other },
-                         { Val.getValue(1), Val });
+      SDValue Val = DAG.getNode(ISD::STRICT_FSUB, dl, {SrcVT, MVT::Other},
+                                {Chain, Src, FltOfs});
+      SInt = DAG.getNode(ISD::STRICT_FP_TO_SINT, dl, {DstVT, MVT::Other},
+                         {Val.getValue(1), Val});
       Chain = SInt.getValue(1);
     } else {
       SDValue Val = DAG.getNode(ISD::FSUB, dl, SrcVT, Src, FltOfs);
@@ -8049,8 +8069,7 @@ bool TargetLowering::expandFP_TO_UINT(SDNode *Node, SDValue &Result,
 }
 
 bool TargetLowering::expandUINT_TO_FP(SDNode *Node, SDValue &Result,
-                                      SDValue &Chain,
-                                      SelectionDAG &DAG) const {
+                                      SDValue &Chain, SelectionDAG &DAG) const {
   // This transform is not correct for converting 0 when rounding mode is set
   // to round toward negative infinity which will produce -0.0. So disable under
   // strictfp.
@@ -8093,8 +8112,7 @@ bool TargetLowering::expandUINT_TO_FP(SDNode *Node, SDValue &Result,
   SDValue HiOr = DAG.getNode(ISD::OR, dl, SrcVT, Hi, TwoP84);
   SDValue LoFlt = DAG.getBitcast(DstVT, LoOr);
   SDValue HiFlt = DAG.getBitcast(DstVT, HiOr);
-  SDValue HiSub =
-      DAG.getNode(ISD::FSUB, dl, DstVT, HiFlt, TwoP84PlusTwoP52);
+  SDValue HiSub = DAG.getNode(ISD::FSUB, dl, DstVT, HiFlt, TwoP84PlusTwoP52);
   Result = DAG.getNode(ISD::FADD, dl, DstVT, LoFlt, HiSub);
   return true;
 }
@@ -8126,8 +8144,8 @@ TargetLowering::createSelectForFMINNUM_FMAXNUM(SDNode *Node,
 SDValue TargetLowering::expandFMINNUM_FMAXNUM(SDNode *Node,
                                               SelectionDAG &DAG) const {
   SDLoc dl(Node);
-  unsigned NewOp = Node->getOpcode() == ISD::FMINNUM ?
-    ISD::FMINNUM_IEEE : ISD::FMAXNUM_IEEE;
+  unsigned NewOp =
+      Node->getOpcode() == ISD::FMINNUM ? ISD::FMINNUM_IEEE : ISD::FMAXNUM_IEEE;
   EVT VT = Node->getValueType(0);
 
   if (VT.isScalableVector())
@@ -8142,12 +8160,12 @@ SDValue TargetLowering::expandFMINNUM_FMAXNUM(SDNode *Node,
       // Insert canonicalizes if it's possible we need to quiet to get correct
       // sNaN behavior.
       if (!DAG.isKnownNeverSNaN(Quiet0)) {
-        Quiet0 = DAG.getNode(ISD::FCANONICALIZE, dl, VT, Quiet0,
-                             Node->getFlags());
+        Quiet0 =
+            DAG.getNode(ISD::FCANONICALIZE, dl, VT, Quiet0, Node->getFlags());
       }
       if (!DAG.isKnownNeverSNaN(Quiet1)) {
-        Quiet1 = DAG.getNode(ISD::FCANONICALIZE, dl, VT, Quiet1,
-                             Node->getFlags());
+        Quiet1 =
+            DAG.getNode(ISD::FCANONICALIZE, dl, VT, Quiet1, Node->getFlags());
       }
     }
 
@@ -8525,10 +8543,10 @@ SDValue TargetLowering::expandCTPOP(SDNode *Node, SelectionDAG &DAG) const {
   if (Len == 16 && !VT.isVector()) {
     // v = (v + (v >> 8)) & 0x00FF;
     return DAG.getNode(ISD::AND, dl, VT,
-                     DAG.getNode(ISD::ADD, dl, VT, Op,
-                                 DAG.getNode(ISD::SRL, dl, VT, Op,
-                                             DAG.getConstant(8, dl, ShVT))),
-                     DAG.getConstant(0xFF, dl, VT));
+                       DAG.getNode(ISD::ADD, dl, VT, Op,
+                                   DAG.getNode(ISD::SRL, dl, VT, Op,
+                                               DAG.getConstant(8, dl, ShVT))),
+                       DAG.getConstant(0xFF, dl, VT));
   }
 
   // v = (v * 0x01010101...) >> (Len - 8)
@@ -8709,12 +8727,11 @@ SDValue TargetLowering::CTTZTableLookup(SDNode *Node, SelectionDAG &DAG,
   if (Node->getOpcode() == ISD::CTTZ_ZERO_UNDEF)
     return ExtLoad;
 
-  EVT SetCCVT =
-      getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT);
+  EVT SetCCVT = getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT);
   SDValue Zero = DAG.getConstant(0, DL, VT);
   SDValue SrcIsZero = DAG.getSetCC(DL, SetCCVT, Op, Zero, ISD::SETEQ);
-  return DAG.getSelect(DL, VT, SrcIsZero,
-                       DAG.getConstant(BitWidth, DL, VT), ExtLoad);
+  return DAG.getSelect(DL, VT, SrcIsZero, DAG.getConstant(BitWidth, DL, VT),
+                       ExtLoad);
 }
 
 SDValue TargetLowering::expandCTTZ(SDNode *Node, SelectionDAG &DAG) const {
@@ -8894,8 +8911,7 @@ SDValue TargetLowering::expandBSWAP(SDNode *N, SelectionDAG &DAG) const {
     return DAG.getNode(ISD::ROTL, dl, VT, Op, DAG.getConstant(8, dl, SHVT));
   case MVT::i32:
     Tmp4 = DAG.getNode(ISD::SHL, dl, VT, Op, DAG.getConstant(24, dl, SHVT));
-    Tmp3 = DAG.getNode(ISD::AND, dl, VT, Op,
-                       DAG.getConstant(0xFF00, dl, VT));
+    Tmp3 = DAG.getNode(ISD::AND, dl, VT, Op, DAG.getConstant(0xFF00, dl, VT));
     Tmp3 = DAG.getNode(ISD::SHL, dl, VT, Tmp3, DAG.getConstant(8, dl, SHVT));
     Tmp2 = DAG.getNode(ISD::SRL, dl, VT, Op, DAG.getConstant(8, dl, SHVT));
     Tmp2 = DAG.getNode(ISD::AND, dl, VT, Tmp2, DAG.getConstant(0xFF00, dl, VT));
@@ -8905,24 +8921,24 @@ SDValue TargetLowering::expandBSWAP(SDNode *N, SelectionDAG &DAG) const {
     return DAG.getNode(ISD::OR, dl, VT, Tmp4, Tmp2);
   case MVT::i64:
     Tmp8 = DAG.getNode(ISD::SHL, dl, VT, Op, DAG.getConstant(56, dl, SHVT));
-    Tmp7 = DAG.getNode(ISD::AND, dl, VT, Op,
-                       DAG.getConstant(255ULL<<8, dl, VT));
+    Tmp7 =
+        DAG.getNode(ISD::AND, dl, VT, Op, DAG.getConstant(255ULL << 8, dl, VT));
     Tmp7 = DAG.getNode(ISD::SHL, dl, VT, Tmp7, DAG.getConstant(40, dl, SHVT));
     Tmp6 = DAG.getNode(ISD::AND, dl, VT, Op,
-                       DAG.getConstant(255ULL<<16, dl, VT));
+                       DAG.getConstant(255ULL << 16, dl, VT));
     Tmp6 = DAG.getNode(ISD::SHL, dl, VT, Tmp6, DAG.getConstant(24, dl, SHVT));
     Tmp5 = DAG.getNode(ISD::AND, dl, VT, Op,
-                       DAG.getConstant(255ULL<<24, dl, VT));
+                       DAG.getConstant(255ULL << 24, dl, VT));
     Tmp5 = DAG.getNode(ISD::SHL, dl, VT, Tmp5, DAG.getConstant(8, dl, SHVT));
     Tmp4 = DAG.getNode(ISD::SRL, dl, VT, Op, DAG.getConstant(8, dl, SHVT));
     Tmp4 = DAG.getNode(ISD::AND, dl, VT, Tmp4,
-                       DAG.getConstant(255ULL<<24, dl, VT));
+                       DAG.getConstant(255ULL << 24, dl, VT));
     Tmp3 = DAG.getNode(ISD::SRL, dl, VT, Op, DAG.getConstant(24, dl, SHVT));
     Tmp3 = DAG.getNode(ISD::AND, dl, VT, Tmp3,
-                       DAG.getConstant(255ULL<<16, dl, VT));
+                       DAG.getConstant(255ULL << 16, dl, VT));
     Tmp2 = DAG.getNode(ISD::SRL, dl, VT, Op, DAG.getConstant(40, dl, SHVT));
     Tmp2 = DAG.getNode(ISD::AND, dl, VT, Tmp2,
-                       DAG.getConstant(255ULL<<8, dl, VT));
+                       DAG.getConstant(255ULL << 8, dl, VT));
     Tmp1 = DAG.getNode(ISD::SRL, dl, VT, Op, DAG.getConstant(56, dl, SHVT));
     Tmp8 = DAG.getNode(ISD::OR, dl, VT, Tmp8, Tmp7);
     Tmp6 = DAG.getNode(ISD::OR, dl, VT, Tmp6, Tmp5);
@@ -9055,7 +9071,7 @@ SDValue TargetLowering::expandBITREVERSE(SDNode *N, SelectionDAG &DAG) const {
   }
 
   Tmp = DAG.getConstant(0, dl, VT);
-  for (unsigned I = 0, J = Sz-1; I < Sz; ++I, --J) {
+  for (unsigned I = 0, J = Sz - 1; I < Sz; ++I, --J) {
     if (I < J)
       Tmp2 =
           DAG.getNode(ISD::SHL, dl, VT, Op, DAG.getConstant(J - I, dl, SHVT));
@@ -9134,8 +9150,7 @@ SDValue TargetLowering::expandVPBITREVERSE(SDNode *N, SelectionDAG &DAG) const {
 }
 
 std::pair<SDValue, SDValue>
-TargetLowering::scalarizeVectorLoad(LoadSDNode *LD,
-                                    SelectionDAG &DAG) const {
+TargetLowering::scalarizeVectorLoad(LoadSDNode *LD, SelectionDAG &DAG) const {
   SDLoc SL(LD);
   SDValue Chain = LD->getChain();
   SDValue BasePTR = LD->getBasePtr();
@@ -9315,20 +9330,19 @@ TargetLowering::expandUnalignedLoad(LoadSDNode *LD, SelectionDAG &DAG) const {
   if (VT.isFloatingPoint() || VT.isVector()) {
     EVT intVT = EVT::getIntegerVT(*DAG.getContext(), LoadedVT.getSizeInBits());
     if (isTypeLegal(intVT) && isTypeLegal(LoadedVT)) {
-      if (!isOperationLegalOrCustom(ISD::LOAD, intVT) &&
-          LoadedVT.isVector()) {
+      if (!isOperationLegalOrCustom(ISD::LOAD, intVT) && LoadedVT.isVector()) {
         // Scalarize the load and let the individual components be handled.
         return scalarizeVectorLoad(LD, DAG);
       }
 
       // Expand to a (misaligned) integer load of the same size,
       // then bitconvert to floating point or vector.
-      SDValue newLoad = DAG.getLoad(intVT, dl, Chain, Ptr,
-                                    LD->getMemOperand());
+      SDValue newLoad = DAG.getLoad(intVT, dl, Chain, Ptr, LD->getMemOperand());
       SDValue Result = DAG.getNode(ISD::BITCAST, dl, LoadedVT, newLoad);
       if (LoadedVT != VT)
-        Result = DAG.getNode(VT.isFloatingPoint() ? ISD::FP_EXTEND :
-                             ISD::ANY_EXTEND, dl, VT, Result);
+        Result =
+            DAG.getNode(VT.isFloatingPoint() ? ISD::FP_EXTEND : ISD::ANY_EXTEND,
+                        dl, VT, Result);
 
       return std::make_pair(Result, newLoad.getValue(1));
     }
@@ -9372,8 +9386,8 @@ TargetLowering::expandUnalignedLoad(LoadSDNode *LD, SelectionDAG &DAG) const {
     }
 
     // The last copy may be partial.  Do an extending load.
-    EVT MemVT = EVT::getIntegerVT(*DAG.getContext(),
-                                  8 * (LoadedBytes - Offset));
+    EVT MemVT =
+        EVT::getIntegerVT(*DAG.getContext(), 8 * (LoadedBytes - Offset));
     SDValue Load =
         DAG.getExtLoad(ISD::EXTLOAD, dl, RegVT, Chain, Ptr,
                        LD->getPointerInfo().getWithOffset(Offset), MemVT,
@@ -9405,7 +9419,7 @@ TargetLowering::expandUnalignedLoad(LoadSDNode *LD, SelectionDAG &DAG) const {
   // integer MVT.
   unsigned NumBits = LoadedVT.getSizeInBits();
   EVT NewLoadedVT;
-  NewLoadedVT = EVT::getIntegerVT(*DAG.getContext(), NumBits/2);
+  NewLoadedVT = EVT::getIntegerVT(*DAG.getContext(), NumBits / 2);
   NumBits >>= 1;
 
   Align Alignment = LD->getOriginalAlign();
@@ -9441,14 +9455,13 @@ TargetLowering::expandUnalignedLoad(LoadSDNode *LD, SelectionDAG &DAG) const {
   }
 
   // aggregate the two parts
-  SDValue ShiftAmount =
-      DAG.getConstant(NumBits, dl, getShiftAmountTy(Hi.getValueType(),
-                                                    DAG.getDataLayout()));
+  SDValue ShiftAmount = DAG.getConstant(
+      NumBits, dl, getShiftAmountTy(Hi.getValueType(), DAG.getDataLayout()));
   SDValue Result = DAG.getNode(ISD::SHL, dl, VT, Hi, ShiftAmount);
   Result = DAG.getNode(ISD::OR, dl, VT, Result, Lo);
 
   SDValue TF = DAG.getNode(ISD::TokenFactor, dl, MVT::Other, Lo.getValue(1),
-                             Hi.getValue(1));
+                           Hi.getValue(1));
 
   return std::make_pair(Result, TF);
 }
@@ -9578,11 +9591,10 @@ SDValue TargetLowering::expandUnalignedStore(StoreSDNode *ST,
   return Result;
 }
 
-SDValue
-TargetLowering::IncrementMemoryAddress(SDValue Addr, SDValue Mask,
-                                       const SDLoc &DL, EVT DataVT,
-                                       SelectionDAG &DAG,
-                                       bool IsCompressedMemory) const {
+SDValue TargetLowering::IncrementMemoryAddress(SDValue Addr, SDValue Mask,
+                                               const SDLoc &DL, EVT DataVT,
+                                               SelectionDAG &DAG,
+                                               bool IsCompressedMemory) const {
   SDValue Increment;
   EVT AddrVT = Addr.getValueType();
   EVT MaskVT = Mask.getValueType();
@@ -9593,7 +9605,8 @@ TargetLowering::IncrementMemoryAddress(SDValue Addr, SDValue Mask,
       report_fatal_error(
           "Cannot currently handle compressed memory with scalable vectors");
     // Incrementing the pointer according to number of '1's in the mask.
-    EVT MaskIntVT = EVT::getIntegerVT(*DAG.getContext(), MaskVT.getSizeInBits());
+    EVT MaskIntVT =
+        EVT::getIntegerVT(*DAG.getContext(), MaskVT.getSizeInBits());
     SDValue MaskInIntReg = DAG.getBitcast(MaskIntVT, Mask);
     if (MaskIntVT.getSizeInBits() < 32) {
       MaskInIntReg = DAG.getNode(ISD::ZERO_EXTEND, DL, MVT::i32, MaskInIntReg);
@@ -9604,8 +9617,8 @@ TargetLowering::IncrementMemoryAddress(SDValue Addr, SDValue Mask,
     Increment = DAG.getNode(ISD::CTPOP, DL, MaskIntVT, MaskInIntReg);
     Increment = DAG.getZExtOrTrunc(Increment, DL, AddrVT);
     // Scale is an element size in bytes.
-    SDValue Scale = DAG.getConstant(DataVT.getScalarSizeInBits() / 8, DL,
-                                    AddrVT);
+    SDValue Scale =
+        DAG.getConstant(DataVT.getScalarSizeInBits() / 8, DL, AddrVT);
     Increment = DAG.getNode(ISD::MUL, DL, AddrVT, Increment, Scale);
   } else if (DataVT.isScalableVector()) {
     Increment = DAG.getVScale(DL, AddrVT,
@@ -9671,7 +9684,8 @@ SDValue TargetLowering::getVectorSubVecPointer(SelectionDAG &DAG,
   EVT EltVT = VecVT.getVectorElementType();
 
   // Calculate the element offset and add it to the pointer.
-  unsigned EltSize = EltVT.getFixedSizeInBits() / 8; // FIXME: should be ABI size.
+  unsigned EltSize =
+      EltVT.getFixedSizeInBits() / 8; // FIXME: should be ABI size.
   assert(EltSize * 8 == EltVT.getFixedSizeInBits() &&
          "Converting bits to bytes lost precision");
   assert(SubVecVT.getVectorElementType() == EltVT &&
@@ -9705,7 +9719,7 @@ SDValue TargetLowering::LowerToTLSEmulatedModel(const GlobalAddressSDNode *GA,
   ArgListTy Args;
   ArgListEntry Entry;
   std::string NameString = ("__emutls_v." + GA->getGlobal()->getName()).str();
-  Module *VariableModule = const_cast<Module*>(GA->getGlobal()->getParent());
+  Module *VariableModule = const_cast<Module *>(GA->getGlobal()->getParent());
   StringRef EmuTlsVarName(NameString);
   GlobalVariable *EmuTlsVar = VariableModule->getNamedGlobal(EmuTlsVarName);
   assert(EmuTlsVar && "Cannot find EmuTlsVar ");
@@ -9884,7 +9898,8 @@ SDValue TargetLowering::expandAddSubSat(SDNode *Node, SelectionDAG &DAG) const {
 
   unsigned BitWidth = LHS.getScalarValueSizeInBits();
   EVT BoolVT = getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(), VT);
-  SDValue Result = DAG.getNode(OverflowOp, dl, DAG.getVTList(VT, BoolVT), LHS, RHS);
+  SDValue Result =
+      DAG.getNode(OverflowOp, dl, DAG.getVTList(VT, BoolVT), LHS, RHS);
   SDValue SumDiff = Result.getValue(0);
   SDValue Overflow = Result.getValue(1);
   SDValue Zero = DAG.getConstant(0, dl, VT);
@@ -9961,7 +9976,7 @@ SDValue TargetLowering::expandShlSat(SDNode *Node, SelectionDAG &DAG) const {
 
   assert((Node->getOpcode() == ISD::SSHLSAT ||
           Node->getOpcode() == ISD::USHLSAT) &&
-          "Expected a SHLSAT opcode");
+         "Expected a SHLSAT opcode");
   assert(VT == RHS.getValueType() && "Expected operands to be the same type");
   assert(VT.isInteger() && "Expected operands to be integers");
 
@@ -9990,8 +10005,8 @@ SDValue TargetLowering::expandShlSat(SDNode *Node, SelectionDAG &DAG) const {
   return DAG.getSelect(dl, VT, Cond, SatVal, Result);
 }
 
-SDValue
-TargetLowering::expandFixedPointMul(SDNode *Node, SelectionDAG &DAG) const {
+SDValue TargetLowering::expandFixedPointMul(SDNode *Node,
+                                            SelectionDAG &DAG) const {
   assert((Node->getOpcode() == ISD::SMULFIX ||
           Node->getOpcode() == ISD::UMULFIX ||
           Node->getOpcode() == ISD::SMULFIXSAT ||
@@ -10089,11 +10104,10 @@ TargetLowering::expandFixedPointMul(SDNode *Node, SelectionDAG &DAG) const {
     // Saturate to max if ((Hi >> Scale) != 0),
     // which is the same as if (Hi > ((1 << Scale) - 1))
     APInt MaxVal = APInt::getMaxValue(VTSize);
-    SDValue LowMask = DAG.getConstant(APInt::getLowBitsSet(VTSize, Scale),
-                                      dl, VT);
-    Result = DAG.getSelectCC(dl, Hi, LowMask,
-                             DAG.getConstant(MaxVal, dl, VT), Result,
-                             ISD::SETUGT);
+    SDValue LowMask =
+        DAG.getConstant(APInt::getLowBitsSet(VTSize, Scale), dl, VT);
+    Result = DAG.getSelectCC(dl, Hi, LowMask, DAG.getConstant(MaxVal, dl, VT),
+                             Result, ISD::SETUGT);
 
     return Result;
   }
@@ -10111,8 +10125,8 @@ TargetLowering::expandFixedPointMul(SDNode *Node, SelectionDAG &DAG) const {
     // Saturated to SatMin if wide product is negative, and SatMax if wide
     // product is positive ...
     SDValue Zero = DAG.getConstant(0, dl, VT);
-    SDValue ResultIfOverflow = DAG.getSelectCC(dl, Hi, Zero, SatMin, SatMax,
-                                               ISD::SETLT);
+    SDValue ResultIfOverflow =
+        DAG.getSelectCC(dl, Hi, Zero, SatMin, SatMax, ISD::SETLT);
     // ... but only if we overflowed.
     return DAG.getSelect(dl, VT, Overflow, ResultIfOverflow, Result);
   }
@@ -10121,22 +10135,21 @@ TargetLowering::expandFixedPointMul(SDNode *Node, SelectionDAG &DAG) const {
 
   // Saturate to max if ((Hi >> (Scale - 1)) > 0),
   // which is the same as if (Hi > (1 << (Scale - 1)) - 1)
-  SDValue LowMask = DAG.getConstant(APInt::getLowBitsSet(VTSize, Scale - 1),
-                                    dl, VT);
+  SDValue LowMask =
+      DAG.getConstant(APInt::getLowBitsSet(VTSize, Scale - 1), dl, VT);
   Result = DAG.getSelectCC(dl, Hi, LowMask, SatMax, Result, ISD::SETGT);
   // Saturate to min if (Hi >> (Scale - 1)) < -1),
   // which is the same as if (HI < (-1 << (Scale - 1))
-  SDValue HighMask =
-      DAG.getConstant(APInt::getHighBitsSet(VTSize, VTSize - Scale + 1),
-                      dl, VT);
+  SDValue HighMask = DAG.getConstant(
+      APInt::getHighBitsSet(VTSize, VTSize - Scale + 1), dl, VT);
   Result = DAG.getSelectCC(dl, Hi, HighMask, SatMin, Result, ISD::SETLT);
   return Result;
 }
 
-SDValue
-TargetLowering::expandFixedPointDiv(unsigned Opcode, const SDLoc &dl,
-                                    SDValue LHS, SDValue RHS,
-                                    unsigned Scale, SelectionDAG &DAG) const {
+SDValue TargetLowering::expandFixedPointDiv(unsigned Opcode, const SDLoc &dl,
+                                            SDValue LHS, SDValue RHS,
+                                            unsigned Scale,
+                                            SelectionDAG &DAG) const {
   assert((Opcode == ISD::SDIVFIX || Opcode == ISD::SDIVFIXSAT ||
           Opcode == ISD::UDIVFIX || Opcode == ISD::UDIVFIXSAT) &&
          "Expected a fixed point division opcode");
@@ -10190,38 +10203,33 @@ TargetLowering::expandFixedPointDiv(unsigned Opcode, const SDLoc &dl,
     // FIXME: Ideally we would always produce an SDIVREM here, but if the
     // type isn't legal, SDIVREM cannot be expanded. There is no reason why
     // we couldn't just form a libcall, but the type legalizer doesn't do it.
-    if (isTypeLegal(VT) &&
-        isOperationLegalOrCustom(ISD::SDIVREM, VT)) {
-      Quot = DAG.getNode(ISD::SDIVREM, dl,
-                         DAG.getVTList(VT, VT),
-                         LHS, RHS);
+    if (isTypeLegal(VT) && isOperationLegalOrCustom(ISD::SDIVREM, VT)) {
+      Quot = DAG.getNode(ISD::SDIVREM, dl, DAG.getVTList(VT, VT), LHS, RHS);
       Rem = Quot.getValue(1);
       Quot = Quot.getValue(0);
     } else {
-      Quot = DAG.getNode(ISD::SDIV, dl, VT,
-                         LHS, RHS);
-      Rem = DAG.getNode(ISD::SREM, dl, VT,
-                        LHS, RHS);
+      Quot = DAG.getNode(ISD::SDIV, dl, VT, LHS, RHS);
+      Rem = DAG.getNode(ISD::SREM, dl, VT, LHS, RHS);
     }
     SDValue Zero = DAG.getConstant(0, dl, VT);
     SDValue RemNonZero = DAG.getSetCC(dl, BoolVT, Rem, Zero, ISD::SETNE);
     SDValue LHSNeg = DAG.getSetCC(dl, BoolVT, LHS, Zero, ISD::SETLT);
     SDValue RHSNeg = DAG.getSetCC(dl, BoolVT, RHS, Zero, ISD::SETLT);
     SDValue QuotNeg = DAG.getNode(ISD::XOR, dl, BoolVT, LHSNeg, RHSNeg);
-    SDValue Sub1 = DAG.getNode(ISD::SUB, dl, VT, Quot,
-                               DAG.getConstant(1, dl, VT));
+    SDValue Sub1 =
+        DAG.getNode(ISD::SUB, dl, VT, Quot, DAG.getConstant(1, dl, VT));
     Quot = DAG.getSelect(dl, VT,
                          DAG.getNode(ISD::AND, dl, BoolVT, RemNonZero, QuotNeg),
                          Sub1, Quot);
   } else
-    Quot = DAG.getNode(ISD::UDIV, dl, VT,
-                       LHS, RHS);
+    Quot = DAG.getNode(ISD::UDIV, dl, VT, LHS, RHS);
 
   return Quot;
 }
 
-void TargetLowering::expandUADDSUBO(
-    SDNode *Node, SDValue &Result, SDValue &Overflow, SelectionDAG &DAG) const {
+void TargetLowering::expandUADDSUBO(SDNode *Node, SDValue &Result,
+                                    SDValue &Overflow,
+                                    SelectionDAG &DAG) const {
   SDLoc dl(Node);
   SDValue LHS = Node->getOperand(0);
   SDValue RHS = Node->getOperand(1);
@@ -10231,19 +10239,19 @@ void TargetLowering::expandUADDSUBO(
   unsigned OpcCarry = IsAdd ? ISD::UADDO_CARRY : ISD::USUBO_CARRY;
   if (isOperationLegalOrCustom(OpcCarry, Node->getValueType(0))) {
     SDValue CarryIn = DAG.getConstant(0, dl, Node->getValueType(1));
-    SDValue NodeCarry = DAG.getNode(OpcCarry, dl, Node->getVTList(),
-                                    { LHS, RHS, CarryIn });
+    SDValue NodeCarry =
+        DAG.getNode(OpcCarry, dl, Node->getVTList(), {LHS, RHS, CarryIn});
     Result = SDValue(NodeCarry.getNode(), 0);
     Overflow = SDValue(NodeCarry.getNode(), 1);
     return;
   }
 
-  Result = DAG.getNode(IsAdd ? ISD::ADD : ISD::SUB, dl,
-                            LHS.getValueType(), LHS, RHS);
+  Result = DAG.getNode(IsAdd ? ISD::ADD : ISD::SUB, dl, LHS.getValueType(), LHS,
+                       RHS);
 
   EVT ResultType = Node->getValueType(1);
-  EVT SetCCType = getSetCCResultType(
-      DAG.getDataLayout(), *DAG.getContext(), Node->getValueType(0));
+  EVT SetCCType = getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+                                     Node->getValueType(0));
   SDValue SetCC;
   if (IsAdd && isOneConstant(RHS)) {
     // Special case: uaddo X, 1 overflowed if X+1 is 0. This potential reduces
@@ -10266,19 +10274,20 @@ void TargetLowering::expandUADDSUBO(
   Overflow = DAG.getBoolExtOrTrunc(SetCC, dl, ResultType, ResultType);
 }
 
-void TargetLowering::expandSADDSUBO(
-    SDNode *Node, SDValue &Result, SDValue &Overflow, SelectionDAG &DAG) const {
+void TargetLowering::expandSADDSUBO(SDNode *Node, SDValue &Result,
+                                    SDValue &Overflow,
+                                    SelectionDAG &DAG) const {
   SDLoc dl(Node);
   SDValue LHS = Node->getOperand(0);
   SDValue RHS = Node->getOperand(1);
   bool IsAdd = Node->getOpcode() == ISD::SADDO;
 
-  Result = DAG.getNode(IsAdd ? ISD::ADD : ISD::SUB, dl,
-                            LHS.getValueType(), LHS, RHS);
+  Result = DAG.getNode(IsAdd ? ISD::ADD : ISD::SUB, dl, LHS.getValueType(), LHS,
+                       RHS);
 
   EVT ResultType = Node->getValueType(1);
-  EVT OType = getSetCCResultType(
-      DAG.getDataLayout(), *DAG.getContext(), Node->getValueType(0));
+  EVT OType = getSetCCResultType(DAG.getDataLayout(), *DAG.getContext(),
+                                 Node->getValueType(0));
 
   // If SADDSAT/SSUBSAT is legal, compare results to detect overflow.
   unsigned OpcSat = IsAdd ? ISD::SADDSAT : ISD::SSUBSAT;
@@ -10326,37 +10335,39 @@ bool TargetLowering::expandMULO(SDNode *Node, SDValue &Result,
       SDValue ShiftAmt = DAG.getConstant(C.logBase2(), dl, ShiftAmtTy);
       Result = DAG.getNode(ISD::SHL, dl, VT, LHS, ShiftAmt);
       Overflow = DAG.getSetCC(dl, SetCCVT,
-          DAG.getNode(UseArithShift ? ISD::SRA : ISD::SRL,
-                      dl, VT, Result, ShiftAmt),
-          LHS, ISD::SETNE);
+                              DAG.getNode(UseArithShift ? ISD::SRA : ISD::SRL,
+                                          dl, VT, Result, ShiftAmt),
+                              LHS, ISD::SETNE);
       return true;
     }
   }
 
-  EVT WideVT = EVT::getIntegerVT(*DAG.getContext(), VT.getScalarSizeInBits() * 2);
+  EVT WideVT =
+      EVT::getIntegerVT(*DAG.getContext(), VT.getScalarSizeInBits() * 2);
   if (VT.isVector())
     WideVT =
         EVT::getVectorVT(*DAG.getContext(), WideVT, VT.getVectorElementCount());
 
   SDValue BottomHalf;
   SDValue TopHalf;
-  static const unsigned Ops[2][3] =
-      { { ISD::MULHU, ISD::UMUL_LOHI, ISD::ZERO_EXTEND },
-        { ISD::MULHS, ISD::SMUL_LOHI, ISD::SIGN_EXTEND }};
+  static const unsigned Ops[2][3] = {
+      {ISD::MULHU, ISD::UMUL_LOHI, ISD::ZERO_EXTEND},
+      {ISD::MULHS, ISD::SMUL_LOHI, ISD::SIGN_EXTEND}};
   if (isOperationLegalOrCustom(Ops[isSigned][0], VT)) {
     BottomHalf = DAG.getNode(ISD::MUL, dl, VT, LHS, RHS);
     TopHalf = DAG.getNode(Ops[isSigned][0], dl, VT, LHS, RHS);
   } else if (isOperationLegalOrCustom(Ops[isSigned][1], VT)) {
-    BottomHalf = DAG.getNode(Ops[isSigned][1], dl, DAG.getVTList(VT, VT), LHS,
-                             RHS);
+    BottomHalf =
+        DAG.getNode(Ops[isSigned][1], dl, DAG.getVTList(VT, VT), LHS, RHS);
     TopHalf = BottomHalf.getValue(1);
   } else if (isTypeLegal(WideVT)) {
     LHS = DAG.getNode(Ops[isSigned][2], dl, WideVT, LHS);
     RHS = DAG.getNode(Ops[isSigned][2], dl, WideVT, RHS);
     SDValue Mul = DAG.getNode(ISD::MUL, dl, WideVT, LHS, RHS);
     BottomHalf = DAG.getNode(ISD::TRUNCATE, dl, VT, Mul);
-    SDValue ShiftAmt = DAG.getConstant(VT.getScalarSizeInBits(), dl,
-        getShiftAmountTy(WideVT, DAG.getDataLayout()));
+    SDValue ShiftAmt =
+        DAG.getConstant(VT.getScalarSizeInBits(), dl,
+                        getShiftAmountTy(WideVT, DAG.getDataLayout()));
     TopHalf = DAG.getNode(ISD::TRUNCATE, dl, VT,
                           DAG.getNode(ISD::SRL, dl, WideVT, Mul, ShiftAmt));
   } else {
@@ -10384,17 +10395,15 @@ bool TargetLowering::expandMULO(SDNode *Node, SDValue &Result,
       // The high part is obtained by SRA'ing all but one of the bits of low
       // part.
       unsigned LoSize = VT.getFixedSizeInBits();
-      HiLHS =
-          DAG.getNode(ISD::SRA, dl, VT, LHS,
-                      DAG.getConstant(LoSize - 1, dl,
-                                      getPointerTy(DAG.getDataLayout())));
-      HiRHS =
-          DAG.getNode(ISD::SRA, dl, VT, RHS,
-                      DAG.getConstant(LoSize - 1, dl,
-                                      getPointerTy(DAG.getDataLayout())));
+      HiLHS = DAG.getNode(
+          ISD::SRA, dl, VT, LHS,
+          DAG.getConstant(LoSize - 1, dl, getPointerTy(DAG.getDataLayout())));
+      HiRHS = DAG.getNode(
+          ISD::SRA, dl, VT, RHS,
+          DAG.getConstant(LoSize - 1, dl, getPointerTy(DAG.getDataLayout())));
     } else {
-        HiLHS = DAG.getConstant(0, dl, VT);
-        HiRHS = DAG.getConstant(0, dl, VT);
+      HiLHS = DAG.getConstant(0, dl, VT);
+      HiRHS = DAG.getConstant(0, dl, VT);
     }
 
     // Here we're passing the 2 arguments explicitly as 4 arguments that are
@@ -10410,10 +10419,10 @@ bool TargetLowering::expandMULO(SDNode *Node, SDValue &Result,
       // depending on platform endianness. This is usually handled by
       // the C calling convention, but we can't defer to it in
       // the legalizer.
-      SDValue Args[] = { LHS, HiLHS, RHS, HiRHS };
+      SDValue Args[] = {LHS, HiLHS, RHS, HiRHS};
       Ret = makeLibCall(DAG, LC, WideVT, Args, CallOptions, dl).first;
     } else {
-      SDValue Args[] = { HiLHS, LHS, HiRHS, RHS };
+      SDValue Args[] = {HiLHS, LHS, HiRHS, RHS};
       Ret = makeLibCall(DAG, LC, WideVT, Args, CallOptions, dl).first;
     }
     assert(Ret.getOpcode() == ISD::MERGE_VALUES &&
@@ -10436,8 +10445,8 @@ bool TargetLowering::expandMULO(SDNode *Node, SDValue &Result,
     SDValue Sign = DAG.getNode(ISD::SRA, dl, VT, BottomHalf, ShiftAmt);
     Overflow = DAG.getSetCC(dl, SetCCVT, TopHalf, Sign, ISD::SETNE);
   } else {
-    Overflow = DAG.getSetCC(dl, SetCCVT, TopHalf,
-                            DAG.getConstant(0, dl, VT), ISD::SETNE);
+    Overflow = DAG.getSetCC(dl, SetCCVT, TopHalf, DAG.getConstant(0, dl, VT),
+                            ISD::SETNE);
   }
 
   // Truncate the result if SetCC returns a larger type than needed.
@@ -10490,7 +10499,8 @@ SDValue TargetLowering::expandVecReduce(SDNode *Node, SelectionDAG &DAG) const {
   return Res;
 }
 
-SDValue TargetLowering::expandVecReduceSeq(SDNode *Node, SelectionDAG &DAG) const {
+SDValue TargetLowering::expandVecReduceSeq(SDNode *Node,
+                                           SelectionDAG &DAG) const {
   SDLoc dl(Node);
   SDValue AccOp = Node->getOperand(0);
   SDValue VecOp = Node->getOperand(1);
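The `expandUADDSUBO`/`expandMULO` hunks above only reflow the code, but the underlying expansions are the standard overflow checks: unsigned add wrapped iff the sum is smaller than an operand (with the `uaddo X, 1` special case comparing against zero), and multiply overflow detected by computing in twice the width and inspecting the top half. A minimal scalar C++ sketch of those checks (function names are illustrative, not the DAG-level API):

```cpp
#include <cassert>
#include <cstdint>

// Unsigned add overflow: the sum wrapped iff it is smaller than an operand.
bool uaddo(uint32_t X, uint32_t Y, uint32_t &Sum) {
  Sum = X + Y;
  return Sum < X;
}

// The uaddo(X, 1) special case from expandUADDSUBO: X+1 wraps iff the result
// is 0, which compares against a constant instead of keeping X live.
bool uaddo_one(uint32_t X, uint32_t &Sum) {
  Sum = X + 1;
  return Sum == 0;
}

// expandMULO's widening strategy for unsigned multiply: compute in twice the
// width; overflow iff the top half is nonzero (the SETNE-against-0 path).
bool umulo(uint32_t X, uint32_t Y, uint32_t &Prod) {
  uint64_t Wide = uint64_t(X) * uint64_t(Y);
  Prod = uint32_t(Wide);
  return (Wide >> 32) != 0;
}

// Signed variant: overflow iff the top half is not the sign-extension of the
// bottom half, mirroring the SRA-by-(bits-1)/SETNE check in the patch.
bool smulo(int32_t X, int32_t Y, int32_t &Prod) {
  int64_t Wide = int64_t(X) * int64_t(Y);
  Prod = int32_t(Wide);
  return (Wide >> 32) != (int64_t(Prod) >> 31);
}
```

The same width-doubling trick is what forces the libcall path below to split each operand into lo/hi halves ordered by platform endianness.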
diff --git a/llvm/lib/CodeGen/ShadowStackGCLowering.cpp b/llvm/lib/CodeGen/ShadowStackGCLowering.cpp
index 153fe77b8b4ae4a..7da63479c249457 100644
--- a/llvm/lib/CodeGen/ShadowStackGCLowering.cpp
+++ b/llvm/lib/CodeGen/ShadowStackGCLowering.cpp
@@ -82,8 +82,8 @@ class ShadowStackGCLowering : public FunctionPass {
                                       Type *Ty, Value *BasePtr, int Idx1,
                                       const char *Name);
   static GetElementPtrInst *CreateGEP(LLVMContext &Context, IRBuilder<> &B,
-                                      Type *Ty, Value *BasePtr, int Idx1, int Idx2,
-                                      const char *Name);
+                                      Type *Ty, Value *BasePtr, int Idx1,
+                                      int Idx2, const char *Name);
 };
 
 } // end anonymous namespace
@@ -98,7 +98,9 @@ INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
 INITIALIZE_PASS_END(ShadowStackGCLowering, DEBUG_TYPE,
                     "Shadow Stack GC Lowering", false, false)
 
-FunctionPass *llvm::createShadowStackGCLoweringPass() { return new ShadowStackGCLowering(); }
+FunctionPass *llvm::createShadowStackGCLoweringPass() {
+  return new ShadowStackGCLowering();
+}
 
 ShadowStackGCLowering::ShadowStackGCLowering() : FunctionPass(ID) {
   initializeShadowStackGCLoweringPass(*PassRegistry::getPassRegistry());
@@ -274,8 +276,9 @@ GetElementPtrInst *ShadowStackGCLowering::CreateGEP(LLVMContext &Context,
 }
 
 GetElementPtrInst *ShadowStackGCLowering::CreateGEP(LLVMContext &Context,
-                                            IRBuilder<> &B, Type *Ty, Value *BasePtr,
-                                            int Idx, const char *Name) {
+                                                    IRBuilder<> &B, Type *Ty,
+                                                    Value *BasePtr, int Idx,
+                                                    const char *Name) {
   Value *Indices[] = {ConstantInt::get(Type::getInt32Ty(Context), 0),
                       ConstantInt::get(Type::getInt32Ty(Context), Idx)};
   Value *Val = B.CreateGEP(Ty, BasePtr, Indices, Name);
@@ -292,8 +295,7 @@ void ShadowStackGCLowering::getAnalysisUsage(AnalysisUsage &AU) const {
 /// runOnFunction - Insert code to maintain the shadow stack.
 bool ShadowStackGCLowering::runOnFunction(Function &F) {
   // Quick exit for functions that do not use the shadow stack GC.
-  if (!F.hasGC() ||
-      F.getGC() != std::string("shadow-stack"))
+  if (!F.hasGC() || F.getGC() != std::string("shadow-stack"))
     return false;
 
   LLVMContext &Context = F.getContext();
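The reflowed `CreateGEP` overloads above build a two-index GEP, `{0, Idx}`, into the shadow-stack frame. As a reminder of why the leading zero is there: the first index steps through the base pointer itself (scaled by the whole aggregate's size), and only the second selects a field. A hedged C++ analog of the address arithmetic (the struct layout here is illustrative, not the actual shadow-stack frame map):

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in aggregate; the real frame layout is built per-function.
struct Frame {
  int32_t NumRoots;
  int32_t Meta;
  void *Roots[4];
};

// getelementptr %Frame, ptr %Base, i32 0, i32 <n> lowers to exactly this
// byte arithmetic: Base + 0 * sizeof(Frame) + offsetof(Frame, field n).
void *fieldAddr(Frame *Base, size_t FieldByteOffset) {
  return reinterpret_cast<char *>(Base) + FieldByteOffset;
}
```

A nonzero first index would instead mean `Base + k * sizeof(Frame)`, i.e. array indexing over whole frames.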
diff --git a/llvm/lib/CodeGen/ShrinkWrap.cpp b/llvm/lib/CodeGen/ShrinkWrap.cpp
index 4b1d3637a7462ec..f94497e02850de1 100644
--- a/llvm/lib/CodeGen/ShrinkWrap.cpp
+++ b/llvm/lib/CodeGen/ShrinkWrap.cpp
@@ -96,8 +96,8 @@ STATISTIC(NumCandidatesDropped,
           "Number of shrink-wrapping candidates dropped because of frequency");
 
 static cl::opt<cl::boolOrDefault>
-EnableShrinkWrapOpt("enable-shrink-wrap", cl::Hidden,
-                    cl::desc("enable the shrink-wrapping pass"));
+    EnableShrinkWrapOpt("enable-shrink-wrap", cl::Hidden,
+                        cl::desc("enable the shrink-wrapping pass"));
 static cl::opt<bool> EnablePostShrinkWrapOpt(
     "enable-shrink-wrap-region-split", cl::init(true), cl::Hidden,
     cl::desc("enable splitting of the restore block if possible"));
@@ -269,7 +269,7 @@ class ShrinkWrap : public MachineFunctionPass {
 
   MachineFunctionProperties getRequiredProperties() const override {
     return MachineFunctionProperties().set(
-      MachineFunctionProperties::Property::NoVRegs);
+        MachineFunctionProperties::Property::NoVRegs);
   }
 
   StringRef getPassName() const override { return "Shrink Wrapping analysis"; }
@@ -775,12 +775,12 @@ void ShrinkWrap::updateSaveRestorePoints(MachineBasicBlock &MBB,
       } else {
         // If the loop does not exit, there is no point in looking
         // for a post-dominator outside the loop.
-        SmallVector<MachineBasicBlock*, 4> ExitBlocks;
+        SmallVector<MachineBasicBlock *, 4> ExitBlocks;
         MLI->getLoopFor(Restore)->getExitingBlocks(ExitBlocks);
         // Push Restore outside of this loop.
         // Look for the immediate post-dominator of the loop exits.
         MachineBasicBlock *IPdom = Restore;
-        for (MachineBasicBlock *LoopExitBB: ExitBlocks) {
+        for (MachineBasicBlock *LoopExitBB : ExitBlocks) {
           IPdom = FindIDom<>(*IPdom, LoopExitBB->successors(), *MPDT);
           if (!IPdom)
             break;
diff --git a/llvm/lib/CodeGen/SjLjEHPrepare.cpp b/llvm/lib/CodeGen/SjLjEHPrepare.cpp
index 7994821ae7c0a14..50bfe1c63eb14d5 100644
--- a/llvm/lib/CodeGen/SjLjEHPrepare.cpp
+++ b/llvm/lib/CodeGen/SjLjEHPrepare.cpp
@@ -77,8 +77,8 @@ class SjLjEHPrepare : public FunctionPass {
 } // end anonymous namespace
 
 char SjLjEHPrepare::ID = 0;
-INITIALIZE_PASS(SjLjEHPrepare, DEBUG_TYPE, "Prepare SjLj exceptions",
-                false, false)
+INITIALIZE_PASS(SjLjEHPrepare, DEBUG_TYPE, "Prepare SjLj exceptions", false,
+                false)
 
 // Public Interface To the SjLjEHPrepare pass.
 FunctionPass *llvm::createSjLjEHPreparePass(const TargetMachine *TM) {
@@ -116,7 +116,7 @@ void SjLjEHPrepare::insertCallSiteStore(Instruction *I, int Number) {
   Type *Int32Ty = Type::getInt32Ty(I->getContext());
   Value *Zero = ConstantInt::get(Int32Ty, 0);
   Value *One = ConstantInt::get(Int32Ty, 1);
-  Value *Idxs[2] = { Zero, One };
+  Value *Idxs[2] = {Zero, One};
   Value *CallSite =
       Builder.CreateGEP(FunctionContextTy, FuncCtx, Idxs, "call_site");
 
@@ -132,7 +132,7 @@ static void MarkBlocksLiveIn(BasicBlock *BB,
   if (!LiveBBs.insert(BB).second)
     return; // already been here.
 
-  df_iterator_default_set<BasicBlock*> Visited;
+  df_iterator_default_set<BasicBlock *> Visited;
 
   for (BasicBlock *B : inverse_depth_first_ext(BB, Visited))
     LiveBBs.insert(B);
@@ -484,9 +484,9 @@ bool SjLjEHPrepare::setupEntryBlockAndCallSites(Function &F) {
 
 bool SjLjEHPrepare::runOnFunction(Function &F) {
   Module &M = *F.getParent();
-  RegisterFn = M.getOrInsertFunction(
-      "_Unwind_SjLj_Register", Type::getVoidTy(M.getContext()),
-      PointerType::getUnqual(FunctionContextTy));
+  RegisterFn = M.getOrInsertFunction("_Unwind_SjLj_Register",
+                                     Type::getVoidTy(M.getContext()),
+                                     PointerType::getUnqual(FunctionContextTy));
   UnregisterFn = M.getOrInsertFunction(
       "_Unwind_SjLj_Unregister", Type::getVoidTy(M.getContext()),
       PointerType::getUnqual(FunctionContextTy));
@@ -500,7 +500,7 @@ bool SjLjEHPrepare::runOnFunction(Function &F) {
   StackRestoreFn =
       Intrinsic::getDeclaration(&M, Intrinsic::stackrestore, {AllocaPtrTy});
   BuiltinSetupDispatchFn =
-    Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_setup_dispatch);
+      Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_setup_dispatch);
   LSDAAddrFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_lsda);
   CallSiteFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_callsite);
   FuncCtxFn = Intrinsic::getDeclaration(&M, Intrinsic::eh_sjlj_functioncontext);
diff --git a/llvm/lib/CodeGen/SlotIndexes.cpp b/llvm/lib/CodeGen/SlotIndexes.cpp
index 47ee36971d0eae9..c8f1fc41a3a8921 100644
--- a/llvm/lib/CodeGen/SlotIndexes.cpp
+++ b/llvm/lib/CodeGen/SlotIndexes.cpp
@@ -29,10 +29,9 @@ SlotIndexes::~SlotIndexes() {
   indexList.clearAndLeakNodesUnsafely();
 }
 
-INITIALIZE_PASS(SlotIndexes, DEBUG_TYPE,
-                "Slot index numbering", false, false)
+INITIALIZE_PASS(SlotIndexes, DEBUG_TYPE, "Slot index numbering", false, false)
 
-STATISTIC(NumLocalRenum,  "Number of local renumberings");
+STATISTIC(NumLocalRenum, "Number of local renumberings");
 
 void SlotIndexes::getAnalysisUsage(AnalysisUsage &au) const {
   au.setPreservesAll();
@@ -98,8 +97,8 @@ bool SlotIndexes::runOnMachineFunction(MachineFunction &fn) {
     indexList.push_back(createEntry(nullptr, index += SlotIndex::InstrDist));
 
     MBBRanges[MBB.getNumber()].first = blockStartIndex;
-    MBBRanges[MBB.getNumber()].second = SlotIndex(&indexList.back(),
-                                                   SlotIndex::Slot_Block);
+    MBBRanges[MBB.getNumber()].second =
+        SlotIndex(&indexList.back(), SlotIndex::Slot_Block);
     idx2MBBMap.push_back(IdxMBBPair(blockStartIndex, &MBB));
   }
 
@@ -159,7 +158,7 @@ void SlotIndexes::removeSingleMachineInstrFromMaps(MachineInstr &MI) {
 // index.
 void SlotIndexes::renumberIndexes(IndexList::iterator curItr) {
   // Number indexes with half the default spacing so we can catch up quickly.
-  const unsigned Space = SlotIndex::InstrDist/2;
+  const unsigned Space = SlotIndex::InstrDist / 2;
   static_assert((Space & 3) == 0, "InstrDist must be a multiple of 2*NUM");
 
   IndexList::iterator startItr = std::prev(curItr);
diff --git a/llvm/lib/CodeGen/SpillPlacement.cpp b/llvm/lib/CodeGen/SpillPlacement.cpp
index 91da5e49713c1aa..7ef70510eb867ea 100644
--- a/llvm/lib/CodeGen/SpillPlacement.cpp
+++ b/llvm/lib/CodeGen/SpillPlacement.cpp
@@ -53,8 +53,8 @@ INITIALIZE_PASS_BEGIN(SpillPlacement, DEBUG_TYPE,
                       "Spill Code Placement Analysis", true, true)
 INITIALIZE_PASS_DEPENDENCY(EdgeBundles)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
-INITIALIZE_PASS_END(SpillPlacement, DEBUG_TYPE,
-                    "Spill Code Placement Analysis", true, true)
+INITIALIZE_PASS_END(SpillPlacement, DEBUG_TYPE, "Spill Code Placement Analysis",
+                    true, true)
 
 void SpillPlacement::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.setPreservesAll();
@@ -341,7 +341,7 @@ void SpillPlacement::iterate() {
   // Update the network energy starting at this new frontier.
   // The call to ::update will add the nodes that changed into the todolist.
   unsigned Limit = bundles->getNumBundles() * 10;
-  while(Limit-- > 0 && !TodoList.empty()) {
+  while (Limit-- > 0 && !TodoList.empty()) {
     unsigned n = TodoList.pop_back_val();
     if (!update(n))
       continue;
@@ -359,8 +359,7 @@ void SpillPlacement::prepare(BitVector &RegBundles) {
   ActiveNodes->resize(bundles->getNumBundles());
 }
 
-bool
-SpillPlacement::finish() {
+bool SpillPlacement::finish() {
   assert(ActiveNodes && "Call prepare() first");
 
   // Write preferences back to ActiveNodes.
@@ -376,20 +375,23 @@ SpillPlacement::finish() {
 
 void SpillPlacement::BlockConstraint::print(raw_ostream &OS) const {
   auto toString = [](BorderConstraint C) -> StringRef {
-    switch(C) {
-    case DontCare: return "DontCare";
-    case PrefReg: return "PrefReg";
-    case PrefSpill: return "PrefSpill";
-    case PrefBoth: return "PrefBoth";
-    case MustSpill: return "MustSpill";
+    switch (C) {
+    case DontCare:
+      return "DontCare";
+    case PrefReg:
+      return "PrefReg";
+    case PrefSpill:
+      return "PrefSpill";
+    case PrefBoth:
+      return "PrefBoth";
+    case MustSpill:
+      return "MustSpill";
     };
     llvm_unreachable("uncovered switch");
   };
 
-  dbgs() << "{" << Number << ", "
-         << toString(Entry) << ", "
-         << toString(Exit) << ", "
-         << (ChangesValue ? "changes" : "no change") << "}";
+  dbgs() << "{" << Number << ", " << toString(Entry) << ", " << toString(Exit)
+         << ", " << (ChangesValue ? "changes" : "no change") << "}";
 }
 
 void SpillPlacement::BlockConstraint::dump() const {
diff --git a/llvm/lib/CodeGen/SplitKit.cpp b/llvm/lib/CodeGen/SplitKit.cpp
index 1664c304f643c3f..4cddabe6da352bf 100644
--- a/llvm/lib/CodeGen/SplitKit.cpp
+++ b/llvm/lib/CodeGen/SplitKit.cpp
@@ -46,9 +46,9 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 STATISTIC(NumFinished, "Number of splits finished");
-STATISTIC(NumSimple,   "Number of splits that were simple");
-STATISTIC(NumCopies,   "Number of copies inserted for splitting");
-STATISTIC(NumRemats,   "Number of rematerialized defs for splitting");
+STATISTIC(NumSimple, "Number of splits that were simple");
+STATISTIC(NumCopies, "Number of copies inserted for splitting");
+STATISTIC(NumRemats, "Number of rematerialized defs for splitting");
 
 //===----------------------------------------------------------------------===//
 //                     Last Insert Point Analysis
@@ -179,9 +179,9 @@ void SplitAnalysis::analyzeUses() {
 
   // Remove duplicates, keeping the smaller slot for each instruction.
   // That is what we want for early clobbers.
-  UseSlots.erase(std::unique(UseSlots.begin(), UseSlots.end(),
-                             SlotIndex::isSameInstr),
-                 UseSlots.end());
+  UseSlots.erase(
+      std::unique(UseSlots.begin(), UseSlots.end(), SlotIndex::isSameInstr),
+      UseSlots.end());
 
   // Compute per-live block info.
   calcLiveBlockInfo();
@@ -228,7 +228,8 @@ void SplitAnalysis::calcLiveBlockInfo() {
       // This block has uses. Find the first and last uses in the block.
       BI.FirstInstr = *UseI;
       assert(BI.FirstInstr >= Start);
-      do ++UseI;
+      do
+        ++UseI;
       while (UseI != UseE && *UseI < Stop);
       BI.LastInstr = UseI[-1];
       assert(BI.LastInstr < Stop);
@@ -299,7 +300,7 @@ void SplitAnalysis::calcLiveBlockInfo() {
 unsigned SplitAnalysis::countLiveBlocks(const LiveInterval *cli) const {
   if (cli->empty())
     return 0;
-  LiveInterval *li = const_cast<LiveInterval*>(cli);
+  LiveInterval *li = const_cast<LiveInterval *>(cli);
   LiveInterval::iterator LVI = li->begin();
   LiveInterval::iterator LVE = li->end();
   unsigned Count = 0;
@@ -457,10 +458,8 @@ void SplitEditor::addDeadDef(LiveInterval &LI, VNInfo *VNI, bool Original) {
   }
 }
 
-VNInfo *SplitEditor::defValue(unsigned RegIdx,
-                              const VNInfo *ParentVNI,
-                              SlotIndex Idx,
-                              bool Original) {
+VNInfo *SplitEditor::defValue(unsigned RegIdx, const VNInfo *ParentVNI,
+                              SlotIndex Idx, bool Original) {
   assert(ParentVNI && "Mapping  NULL value");
   assert(Idx.isValid() && "Invalid SlotIndex");
   assert(Edit->getParent().getVNInfoAt(Idx) == ParentVNI && "Bad Parent VNI");
@@ -473,7 +472,7 @@ VNInfo *SplitEditor::defValue(unsigned RegIdx,
   ValueForcePair FP(Force ? nullptr : VNI, Force);
   // Use insert for lookup, so we can add missing values with a second lookup.
   std::pair<ValueMap::iterator, bool> InsP =
-    Values.insert(std::make_pair(std::make_pair(RegIdx, ParentVNI->id), FP));
+      Values.insert(std::make_pair(std::make_pair(RegIdx, ParentVNI->id), FP));
 
   // This was the first time (RegIdx, ParentVNI) was mapped, and it is not
   // forced. Keep it as a simple def without any liveness.
@@ -518,10 +517,13 @@ SlotIndex SplitEditor::buildSingleSubRegCopy(
     MachineBasicBlock::iterator InsertBefore, unsigned SubIdx,
     LiveInterval &DestLI, bool Late, SlotIndex Def, const MCInstrDesc &Desc) {
   bool FirstCopy = !Def.isValid();
-  MachineInstr *CopyMI = BuildMI(MBB, InsertBefore, DebugLoc(), Desc)
-      .addReg(ToReg, RegState::Define | getUndefRegState(FirstCopy)
-              | getInternalReadRegState(!FirstCopy), SubIdx)
-      .addReg(FromReg, 0, SubIdx);
+  MachineInstr *CopyMI =
+      BuildMI(MBB, InsertBefore, DebugLoc(), Desc)
+          .addReg(ToReg,
+                  RegState::Define | getUndefRegState(FirstCopy) |
+                      getInternalReadRegState(!FirstCopy),
+                  SubIdx)
+          .addReg(FromReg, 0, SubIdx);
 
   SlotIndexes &Indexes = *LIS.getSlotIndexes();
   if (FirstCopy) {
@@ -533,8 +535,9 @@ SlotIndex SplitEditor::buildSingleSubRegCopy(
 }
 
 SlotIndex SplitEditor::buildCopy(Register FromReg, Register ToReg,
-    LaneBitmask LaneMask, MachineBasicBlock &MBB,
-    MachineBasicBlock::iterator InsertBefore, bool Late, unsigned RegIdx) {
+                                 LaneBitmask LaneMask, MachineBasicBlock &MBB,
+                                 MachineBasicBlock::iterator InsertBefore,
+                                 bool Late, unsigned RegIdx) {
   const MCInstrDesc &Desc =
       TII.get(TII.getLiveRangeSplitOpcode(FromReg, *MBB.getParent()));
   SlotIndexes &Indexes = *LIS.getSlotIndexes();
@@ -838,7 +841,7 @@ void SplitEditor::overlapIntv(SlotIndex Start, SlotIndex End) {
 //                                  Spill modes
 //===----------------------------------------------------------------------===//
 
-void SplitEditor::removeBackCopies(SmallVectorImpl<VNInfo*> &Copies) {
+void SplitEditor::removeBackCopies(SmallVectorImpl<VNInfo *> &Copies) {
   LiveInterval *LI = &LIS.getInterval(Edit->get(0));
   LLVM_DEBUG(dbgs() << "Removing " << Copies.size() << " back-copies.\n");
   RegAssignMap::iterator AssignI;
@@ -852,7 +855,8 @@ void SplitEditor::removeBackCopies(SmallVectorImpl<VNInfo*> &Copies) {
     MachineBasicBlock *MBB = MI->getParent();
     MachineBasicBlock::iterator MBBI(MI);
     bool AtBegin;
-    do AtBegin = MBBI == MBB->begin();
+    do
+      AtBegin = MBBI == MBB->begin();
     while (!AtBegin && (--MBBI)->isDebugOrPseudoInstr());
 
     LLVM_DEBUG(dbgs() << "Removing " << Def << '\t' << *MI);
@@ -887,7 +891,7 @@ void SplitEditor::removeBackCopies(SmallVectorImpl<VNInfo*> &Copies) {
   }
 }
 
-MachineBasicBlock*
+MachineBasicBlock *
 SplitEditor::findShallowDominator(MachineBasicBlock *MBB,
                                   MachineBasicBlock *DefMBB) {
   if (MBB == DefMBB)
@@ -1054,7 +1058,7 @@ void SplitEditor::hoistCopies() {
     } else {
       // Different basic blocks. Check if one dominates.
       MachineBasicBlock *Near =
-        MDT.findNearestCommonDominator(Dom.first, ValMBB);
+          MDT.findNearestCommonDominator(Dom.first, ValMBB);
       if (Near == ValMBB)
         // Def ValMBB dominates.
         Dom = DomPair(ValMBB, VNI->def);
@@ -1092,12 +1096,13 @@ void SplitEditor::hoistCopies() {
       continue;
     }
     Dom.second = defFromParent(0, ParentVNI, LSP, *Dom.first,
-                               SA.getLastSplitPointIter(Dom.first))->def;
+                               SA.getLastSplitPointIter(Dom.first))
+                     ->def;
   }
 
   // Remove redundant back-copies that are now known to be dominated by another
   // def with the same value.
-  SmallVector<VNInfo*, 8> BackCopies;
+  SmallVector<VNInfo *, 8> BackCopies;
   for (VNInfo *VNI : LI->valnos) {
     if (VNI->isUnused())
       continue;
@@ -1304,14 +1309,14 @@ void SplitEditor::extendPHIKillRanges() {
 void SplitEditor::rewriteAssigned(bool ExtendRanges) {
   struct ExtPoint {
     ExtPoint(const MachineOperand &O, unsigned R, SlotIndex N)
-      : MO(O), RegIdx(R), Next(N) {}
+        : MO(O), RegIdx(R), Next(N) {}
 
     MachineOperand MO;
     unsigned RegIdx;
     SlotIndex Next;
   };
 
-  SmallVector<ExtPoint,4> ExtPoints;
+  SmallVector<ExtPoint, 4> ExtPoints;
 
   for (MachineOperand &MO :
        llvm::make_early_inc_range(MRI.reg_operands(Edit->getReg()))) {
@@ -1425,7 +1430,7 @@ void SplitEditor::rewriteAssigned(bool ExtendRanges) {
 }
 
 void SplitEditor::deleteRematVictims() {
-  SmallVector<MachineInstr*, 8> Dead;
+  SmallVector<MachineInstr *, 8> Dead;
   for (const Register &R : *Edit) {
     LiveInterval *LI = &LIS.getInterval(R);
     for (const LiveRange::Segment &S : LI->segments) {
@@ -1484,7 +1489,7 @@ void SplitEditor::forceRecomputeVNI(const VNInfo &ParentVNI) {
       if (Visited.insert(PredVNI).second)
         WorkList.push_back(PredVNI);
     }
-  } while(!WorkList.empty());
+  } while (!WorkList.empty());
 }
 
 void SplitEditor::finish(SmallVectorImpl<unsigned> *LRMap) {
@@ -1551,7 +1556,7 @@ void SplitEditor::finish(SmallVectorImpl<unsigned> *LRMap) {
     // Don't use iterators, they are invalidated by create() below.
     Register VReg = Edit->get(i);
     LiveInterval &LI = LIS.getInterval(VReg);
-    SmallVector<LiveInterval*, 8> SplitLIs;
+    SmallVector<LiveInterval *, 8> SplitLIs;
     LIS.splitSeparateComponents(LI, SplitLIs);
     Register Original = VRM.getOriginal(VReg);
     for (LiveInterval *SplitLI : SplitLIs)
@@ -1595,12 +1600,11 @@ bool SplitAnalysis::shouldSplitSingleBlock(const BlockInfo &BI,
 void SplitEditor::splitSingleBlock(const SplitAnalysis::BlockInfo &BI) {
   openIntv();
   SlotIndex LastSplitPoint = SA.getLastSplitPoint(BI.MBB);
-  SlotIndex SegStart = enterIntvBefore(std::min(BI.FirstInstr,
-    LastSplitPoint));
+  SlotIndex SegStart = enterIntvBefore(std::min(BI.FirstInstr, LastSplitPoint));
   if (!BI.LiveOut || BI.LastInstr < LastSplitPoint) {
     useIntv(SegStart, leaveIntvAfter(BI.LastInstr));
   } else {
-      // The last use is after the last valid split point.
+    // The last use is after the last valid split point.
     SlotIndex SegStop = leaveIntvBefore(LastSplitPoint);
     useIntv(SegStart, SegStop);
     overlapIntv(SegStop, BI.LastInstr);
@@ -1618,9 +1622,9 @@ void SplitEditor::splitSingleBlock(const SplitAnalysis::BlockInfo &BI) {
 // Note that splitSingleBlock is also useful for blocks where both CFG edges
 // are on the stack.
 
-void SplitEditor::splitLiveThroughBlock(unsigned MBBNum,
-                                        unsigned IntvIn, SlotIndex LeaveBefore,
-                                        unsigned IntvOut, SlotIndex EnterAfter){
+void SplitEditor::splitLiveThroughBlock(unsigned MBBNum, unsigned IntvIn,
+                                        SlotIndex LeaveBefore, unsigned IntvOut,
+                                        SlotIndex EnterAfter) {
   SlotIndex Start, Stop;
   std::tie(Start, Stop) = LIS.getSlotIndexes()->getMBBRange(MBBNum);
 
@@ -1679,8 +1683,9 @@ void SplitEditor::splitLiveThroughBlock(unsigned MBBNum,
   SlotIndex LSP = SA.getLastSplitPoint(MBBNum);
   assert((!IntvOut || !EnterAfter || EnterAfter < LSP) && "Impossible intf");
 
-  if (IntvIn != IntvOut && (!LeaveBefore || !EnterAfter ||
-                  LeaveBefore.getBaseIndex() > EnterAfter.getBoundaryIndex())) {
+  if (IntvIn != IntvOut &&
+      (!LeaveBefore || !EnterAfter ||
+       LeaveBefore.getBaseIndex() > EnterAfter.getBoundaryIndex())) {
     LLVM_DEBUG(dbgs() << ", switch avoiding interference.\n");
     //
     //    >>>>     <<<<    Non-overlapping EnterAfter/LeaveBefore interference.
@@ -1879,9 +1884,8 @@ void SplitEditor::splitRegOutBlock(const SplitAnalysis::BlockInfo &BI,
 void SplitAnalysis::BlockInfo::print(raw_ostream &OS) const {
   OS << "{" << printMBBReference(*MBB) << ", "
      << "uses " << FirstInstr << " to " << LastInstr << ", "
-     << "1st def " << FirstDef << ", "
-     << (LiveIn ? "live in" : "dead in") << ", "
-     << (LiveOut ? "live out" : "dead out") << "}";
+     << "1st def " << FirstDef << ", " << (LiveIn ? "live in" : "dead in")
+     << ", " << (LiveOut ? "live out" : "dead out") << "}";
 }
 
 void SplitAnalysis::BlockInfo::dump() const {
diff --git a/llvm/lib/CodeGen/SplitKit.h b/llvm/lib/CodeGen/SplitKit.h
index 1174e392e4e4424..83cdf6a2444eef8 100644
--- a/llvm/lib/CodeGen/SplitKit.h
+++ b/llvm/lib/CodeGen/SplitKit.h
@@ -88,7 +88,6 @@ class LLVM_LIBRARY_VISIBILITY InsertPointAnalysis {
     }
     return Res;
   }
-
 };
 
 /// SplitAnalysis - Analyze a LiveInterval, looking for live range splitting
@@ -381,7 +380,7 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
 
   /// removeBackCopies - Remove the copy instructions that defines the values
   /// in the vector in the complement interval.
-  void removeBackCopies(SmallVectorImpl<VNInfo*> &Copies);
+  void removeBackCopies(SmallVectorImpl<VNInfo *> &Copies);
 
   /// getShallowDominator - Returns the least busy dominator of MBB that is
   /// also dominated by DefMBB.  Busy is measured by loop depth.
@@ -424,8 +423,9 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
   /// \p InsertBefore. This can be invoked with a \p LaneMask which may make it
   /// necessary to construct a sequence of copies to cover it exactly.
   SlotIndex buildCopy(Register FromReg, Register ToReg, LaneBitmask LaneMask,
-      MachineBasicBlock &MBB, MachineBasicBlock::iterator InsertBefore,
-      bool Late, unsigned RegIdx);
+                      MachineBasicBlock &MBB,
+                      MachineBasicBlock::iterator InsertBefore, bool Late,
+                      unsigned RegIdx);
 
   SlotIndex buildSingleSubRegCopy(Register FromReg, Register ToReg,
                                   MachineBasicBlock &MB,
@@ -442,7 +442,7 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
               VirtRegAuxInfo &VRAI);
 
   /// reset - Prepare for a new split.
-  void reset(LiveRangeEdit&, ComplementSpillMode = SM_Partition);
+  void reset(LiveRangeEdit &, ComplementSpillMode = SM_Partition);
 
   /// Create a new virtual register and live interval.
   /// Return the interval index, starting from 1. Interval index 0 is the
@@ -527,9 +527,9 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
   /// @param LeaveBefore When set, leave IntvIn before this point.
   /// @param IntvOut     Interval index leaving the block.
   /// @param EnterAfter  When set, enter IntvOut after this point.
-  void splitLiveThroughBlock(unsigned MBBNum,
-                             unsigned IntvIn, SlotIndex LeaveBefore,
-                             unsigned IntvOut, SlotIndex EnterAfter);
+  void splitLiveThroughBlock(unsigned MBBNum, unsigned IntvIn,
+                             SlotIndex LeaveBefore, unsigned IntvOut,
+                             SlotIndex EnterAfter);
 
   /// splitRegInBlock - Split CurLI in the given block such that it enters the
   /// block in IntvIn and leaves it on the stack (or not at all). Split points
@@ -539,8 +539,8 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
   /// @param BI          Block descriptor.
   /// @param IntvIn      Interval index entering the block. Not 0.
   /// @param LeaveBefore When set, leave IntvIn before this point.
-  void splitRegInBlock(const SplitAnalysis::BlockInfo &BI,
-                       unsigned IntvIn, SlotIndex LeaveBefore);
+  void splitRegInBlock(const SplitAnalysis::BlockInfo &BI, unsigned IntvIn,
+                       SlotIndex LeaveBefore);
 
   /// splitRegOutBlock - Split CurLI in the given block such that it enters the
   /// block on the stack (or isn't live-in at all) and leaves it in IntvOut.
@@ -551,8 +551,8 @@ class LLVM_LIBRARY_VISIBILITY SplitEditor {
   /// @param BI          Block descriptor.
   /// @param IntvOut     Interval index leaving the block.
   /// @param EnterAfter  When set, enter IntvOut after this point.
-  void splitRegOutBlock(const SplitAnalysis::BlockInfo &BI,
-                        unsigned IntvOut, SlotIndex EnterAfter);
+  void splitRegOutBlock(const SplitAnalysis::BlockInfo &BI, unsigned IntvOut,
+                        SlotIndex EnterAfter);
 };
 
 } // end namespace llvm
diff --git a/llvm/lib/CodeGen/StackColoring.cpp b/llvm/lib/CodeGen/StackColoring.cpp
index 4d69b6104f68372..5738d39aafd98c3 100644
--- a/llvm/lib/CodeGen/StackColoring.cpp
+++ b/llvm/lib/CodeGen/StackColoring.cpp
@@ -63,10 +63,9 @@ using namespace llvm;
 
 #define DEBUG_TYPE "stack-coloring"
 
-static cl::opt<bool>
-DisableColoring("no-stack-coloring",
-        cl::init(false), cl::Hidden,
-        cl::desc("Disable stack coloring"));
+static cl::opt<bool> DisableColoring("no-stack-coloring", cl::init(false),
+                                     cl::Hidden,
+                                     cl::desc("Disable stack coloring"));
 
 /// The user may write code that uses allocas outside of the declared lifetime
 /// zone. This can happen when the user returns a reference to a local
@@ -74,22 +73,22 @@ DisableColoring("no-stack-coloring",
 /// code. If this flag is enabled, we try to save the user. This option
 /// is treated as overriding LifetimeStartOnFirstUse below.
 static cl::opt<bool>
-ProtectFromEscapedAllocas("protect-from-escaped-allocas",
-                          cl::init(false), cl::Hidden,
-                          cl::desc("Do not optimize lifetime zones that "
-                                   "are broken"));
+    ProtectFromEscapedAllocas("protect-from-escaped-allocas", cl::init(false),
+                              cl::Hidden,
+                              cl::desc("Do not optimize lifetime zones that "
+                                       "are broken"));
 
 /// Enable enhanced dataflow scheme for lifetime analysis (treat first
 /// use of stack slot as start of slot lifetime, as opposed to looking
 /// for LIFETIME_START marker). See "Implementation notes" below for
 /// more info.
 static cl::opt<bool>
-LifetimeStartOnFirstUse("stackcoloring-lifetime-start-on-first-use",
-        cl::init(true), cl::Hidden,
-        cl::desc("Treat stack lifetimes as starting on first use, not on START marker."));
-
+    LifetimeStartOnFirstUse("stackcoloring-lifetime-start-on-first-use",
+                            cl::init(true), cl::Hidden,
+                            cl::desc("Treat stack lifetimes as starting on "
+                                     "first use, not on START marker."));
 
-STATISTIC(NumMarkerSeen,  "Number of lifetime markers found.");
+STATISTIC(NumMarkerSeen, "Number of lifetime markers found.");
 STATISTIC(StackSpaceSaved, "Number of bytes saved due to merging slots.");
 STATISTIC(StackSlotMerged, "Number of stack slot merged.");
 STATISTIC(EscapedAllocas, "Number of allocas that escaped the lifetime region");
@@ -452,7 +451,7 @@ class StackColoring : public MachineFunctionPass {
 
   /// The list of lifetime markers found. These markers are to be removed
   /// once the coloring is done.
-  SmallVector<MachineInstr*, 8> Markers;
+  SmallVector<MachineInstr *, 8> Markers;
 
   /// Record the FI slots for which we have seen some sort of
   /// lifetime marker (either start or end).
@@ -504,7 +503,8 @@ class StackColoring : public MachineFunctionPass {
   void calculateLocalLiveness();
 
   /// Returns TRUE if we're using the first-use-begins-lifetime method for
-  /// this slot (if FALSE, then the start marker is treated as start of lifetime).
+  /// this slot (if FALSE, then the start marker is treated as start of
+  /// lifetime).
   bool applyFirstUse(int Slot) {
     if (!LifetimeStartOnFirstUse || ProtectFromEscapedAllocas)
       return false;
@@ -518,8 +518,7 @@ class StackColoring : public MachineFunctionPass {
   /// starting or ending are added to the vector "slots" and "isStart" is set
   /// accordingly.
   /// \returns True if inst contains a lifetime start or end
-  bool isLifetimeStartOrEnd(const MachineInstr &MI,
-                            SmallVector<int, 4> &slots,
+  bool isLifetimeStartOrEnd(const MachineInstr &MI, SmallVector<int, 4> &slots,
                             bool &isStart);
 
   /// Construct the LiveIntervals for the slots.
@@ -548,11 +547,11 @@ char StackColoring::ID = 0;
 
 char &llvm::StackColoringID = StackColoring::ID;
 
-INITIALIZE_PASS_BEGIN(StackColoring, DEBUG_TYPE,
-                      "Merge disjoint stack slots", false, false)
+INITIALIZE_PASS_BEGIN(StackColoring, DEBUG_TYPE, "Merge disjoint stack slots",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
-INITIALIZE_PASS_END(StackColoring, DEBUG_TYPE,
-                    "Merge disjoint stack slots", false, false)
+INITIALIZE_PASS_END(StackColoring, DEBUG_TYPE, "Merge disjoint stack slots",
+                    false, false)
 
 void StackColoring::getAnalysisUsage(AnalysisUsage &AU) const {
   AU.addRequired<SlotIndexes>();
@@ -581,8 +580,8 @@ LLVM_DUMP_METHOD void StackColoring::dumpBB(MachineBasicBlock *MBB) const {
 
 LLVM_DUMP_METHOD void StackColoring::dump() const {
   for (MachineBasicBlock *MBB : depth_first(MF)) {
-    dbgs() << "Inspecting block #" << MBB->getNumber() << " ["
-           << MBB->getName() << "]\n";
+    dbgs() << "Inspecting block #" << MBB->getNumber() << " [" << MBB->getName()
+           << "]\n";
     dumpBB(MBB);
   }
 }
@@ -595,8 +594,7 @@ LLVM_DUMP_METHOD void StackColoring::dumpIntervals() const {
 }
 #endif
 
-static inline int getStartOrEndSlot(const MachineInstr &MI)
-{
+static inline int getStartOrEndSlot(const MachineInstr &MI) {
   assert((MI.getOpcode() == TargetOpcode::LIFETIME_START ||
           MI.getOpcode() == TargetOpcode::LIFETIME_END) &&
          "Expected LIFETIME_START or LIFETIME_END op");
@@ -637,7 +635,7 @@ bool StackColoring::isLifetimeStartOrEnd(const MachineInstr &MI,
         if (!MO.isFI())
           continue;
         int Slot = MO.getIndex();
-        if (Slot<0)
+        if (Slot < 0)
           continue;
         if (InterestingSlots.test(Slot) && applyFirstUse(Slot)) {
           slots.push_back(Slot);
@@ -719,7 +717,7 @@ unsigned StackColoring::collectMarkers(unsigned NumSlot) {
           int Slot = MO.getIndex();
           if (Slot < 0)
             continue;
-          if (! BetweenStartEnd.test(Slot)) {
+          if (!BetweenStartEnd.test(Slot)) {
             ConservativeSlots.set(Slot);
           }
           // Here we check the StoreSlots to screen catch point out. For more
@@ -947,10 +945,10 @@ void StackColoring::remapInstructions(DenseMap<int, int> &SlotRemap) {
   }
 
   // Keep a list of *allocas* which need to be remapped.
-  DenseMap<const AllocaInst*, const AllocaInst*> Allocas;
+  DenseMap<const AllocaInst *, const AllocaInst *> Allocas;
 
   // Keep a list of allocas which has been affected by the remap.
-  SmallPtrSet<const AllocaInst*, 32> MergedAllocas;
+  SmallPtrSet<const AllocaInst *, 32> MergedAllocas;
 
   for (const std::pair<int, int> &SI : SlotRemap) {
     const AllocaInst *From = MFI->getObjectAllocation(SI.first);
@@ -961,7 +959,7 @@ void StackColoring::remapInstructions(DenseMap<int, int> &SlotRemap) {
    // If From is before To, it's possible that there is a use of From between
    // them.
     if (From->comesBefore(To))
-      const_cast<AllocaInst*>(To)->moveBefore(const_cast<AllocaInst*>(From));
+      const_cast<AllocaInst *>(To)->moveBefore(const_cast<AllocaInst *>(From));
 
     // AA might be used later for instruction scheduling, and we need it to be
    // able to deduce the correct aliasing relationships between pointers
@@ -983,8 +981,8 @@ void StackColoring::remapInstructions(DenseMap<int, int> &SlotRemap) {
     // Transfer the stack protector layout tag, but make sure that SSPLK_AddrOf
     // does not overwrite SSPLK_SmallArray or SSPLK_LargeArray, and make sure
     // that SSPLK_SmallArray does not overwrite SSPLK_LargeArray.
-    MachineFrameInfo::SSPLayoutKind FromKind
-        = MFI->getObjectSSPLayout(SI.first);
+    MachineFrameInfo::SSPLayoutKind FromKind =
+        MFI->getObjectSSPLayout(SI.first);
     MachineFrameInfo::SSPLayoutKind ToKind = MFI->getObjectSSPLayout(SI.second);
     if (FromKind != MachineFrameInfo::SSPLK_None &&
         (ToKind == MachineFrameInfo::SSPLK_None ||
@@ -1041,20 +1039,20 @@ void StackColoring::remapInstructions(DenseMap<int, int> &SlotRemap) {
         int FromSlot = MO.getIndex();
 
         // Don't touch arguments.
-        if (FromSlot<0)
+        if (FromSlot < 0)
           continue;
 
         // Only look at mapped slots.
         if (!SlotRemap.count(FromSlot))
           continue;
 
-        // In a debug build, check that the instruction that we are modifying is
-        // inside the expected live range. If the instruction is not inside
-        // the calculated range then it means that the alloca usage moved
-        // outside of the lifetime markers, or that the user has a bug.
-        // NOTE: Alloca address calculations which happen outside the lifetime
-        // zone are okay, despite the fact that we don't have a good way
-        // for validating all of the usages of the calculation.
+          // In a debug build, check that the instruction that we are modifying
+          // is inside the expected live range. If the instruction is not inside
+          // the calculated range then it means that the alloca usage moved
+          // outside of the lifetime markers, or that the user has a bug.
+          // NOTE: Alloca address calculations which happen outside the lifetime
+          // zone are okay, despite the fact that we don't have a good way
+          // for validating all of the usages of the calculation.
 #ifndef NDEBUG
         bool TouchesMemory = I.mayLoadOrStore();
         // If we *don't* protect the user from escaped allocas, don't bother
@@ -1144,9 +1142,9 @@ void StackColoring::remapInstructions(DenseMap<int, int> &SlotRemap) {
   LLVM_DEBUG(dbgs() << "Fixed " << FixedMemOp << " machine memory operands.\n");
   LLVM_DEBUG(dbgs() << "Fixed " << FixedDbg << " debug locations.\n");
   LLVM_DEBUG(dbgs() << "Fixed " << FixedInstr << " machine instructions.\n");
-  (void) FixedMemOp;
-  (void) FixedDbg;
-  (void) FixedInstr;
+  (void)FixedMemOp;
+  (void)FixedDbg;
+  (void)FixedInstr;
 }
 
 void StackColoring::removeInvalidSlotRanges() {
@@ -1172,7 +1170,7 @@ void StackColoring::removeInvalidSlotRanges() {
 
         int Slot = MO.getIndex();
 
-        if (Slot<0)
+        if (Slot < 0)
           continue;
 
         if (Intervals[Slot]->empty())
@@ -1194,7 +1192,7 @@ void StackColoring::removeInvalidSlotRanges() {
 void StackColoring::expungeSlotMap(DenseMap<int, int> &SlotRemap,
                                    unsigned NumSlots) {
   // Expunge slot remap map.
-  for (unsigned i=0; i < NumSlots; ++i) {
+  for (unsigned i = 0; i < NumSlots; ++i) {
     // If we are remapping i
     if (SlotRemap.count(i)) {
       int Target = SlotRemap[i];
@@ -1239,7 +1237,7 @@ bool StackColoring::runOnMachineFunction(MachineFunction &Func) {
                     << " slots\n");
   LLVM_DEBUG(dbgs() << "Slot structure:\n");
 
-  for (int i=0; i < MFI->getObjectIndexEnd(); ++i) {
+  for (int i = 0; i < MFI->getObjectIndexEnd(); ++i) {
     LLVM_DEBUG(dbgs() << "Slot #" << i << " - " << MFI->getObjectSize(i)
                       << " bytes.\n");
     TotalSize += MFI->getObjectSize(i);
@@ -1255,7 +1253,7 @@ bool StackColoring::runOnMachineFunction(MachineFunction &Func) {
     return removeAllMarkers();
   }
 
-  for (unsigned i=0; i < NumSlots; ++i) {
+  for (unsigned i = 0; i < NumSlots; ++i) {
     std::unique_ptr<LiveInterval> LI(new LiveInterval(i, 0));
     LI->getNextValue(Indexes->getZeroIndex(), VNInfoAllocator);
     Intervals.push_back(std::move(LI));
@@ -1315,7 +1313,7 @@ bool StackColoring::runOnMachineFunction(MachineFunction &Func) {
       if (SortedSlots[I] == -1)
         continue;
 
-      for (unsigned J=I+1; J < NumSlots; ++J) {
+      for (unsigned J = I + 1; J < NumSlots; ++J) {
         if (SortedSlots[J] == -1)
           continue;
 
@@ -1352,17 +1350,17 @@ bool StackColoring::runOnMachineFunction(MachineFunction &Func) {
                                         MFI->getObjectAlign(SecondSlot));
 
           assert(MFI->getObjectSize(FirstSlot) >=
-                 MFI->getObjectSize(SecondSlot) &&
+                     MFI->getObjectSize(SecondSlot) &&
                  "Merging a small object into a larger one");
 
-          RemovedSlots+=1;
+          RemovedSlots += 1;
           ReducedSize += MFI->getObjectSize(SecondSlot);
           MFI->setObjectAlignment(FirstSlot, MaxAlignment);
           MFI->RemoveStackObject(SecondSlot);
         }
       }
     }
-  }// While changed.
+  } // While changed.
 
   // Record statistics.
   StackSpaceSaved += ReducedSize;
diff --git a/llvm/lib/CodeGen/StackMapLivenessAnalysis.cpp b/llvm/lib/CodeGen/StackMapLivenessAnalysis.cpp
index 778ac1f5701ca3f..3ef05b06efacf12 100644
--- a/llvm/lib/CodeGen/StackMapLivenessAnalysis.cpp
+++ b/llvm/lib/CodeGen/StackMapLivenessAnalysis.cpp
@@ -1,4 +1,5 @@
-//===-- StackMapLivenessAnalysis.cpp - StackMap live Out Analysis ----------===//
+//===-- StackMapLivenessAnalysis.cpp - StackMap live Out Analysis
+//----------===//
 //
 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
 // See https://llvm.org/LICENSE.txt for license information.
@@ -34,9 +35,9 @@ static cl::opt<bool> EnablePatchPointLiveness(
 
 STATISTIC(NumStackMapFuncVisited, "Number of functions visited");
 STATISTIC(NumStackMapFuncSkipped, "Number of functions skipped");
-STATISTIC(NumBBsVisited,          "Number of basic blocks visited");
-STATISTIC(NumBBsHaveNoStackmap,   "Number of basic blocks with no stackmap");
-STATISTIC(NumStackMaps,           "Number of StackMaps visited");
+STATISTIC(NumBBsVisited, "Number of basic blocks visited");
+STATISTIC(NumBBsHaveNoStackmap, "Number of basic blocks with no stackmap");
+STATISTIC(NumStackMaps, "Number of StackMaps visited");
 
 namespace {
 /// This pass calculates the liveness information for each basic block in
diff --git a/llvm/lib/CodeGen/StackMaps.cpp b/llvm/lib/CodeGen/StackMaps.cpp
index f9115e434878440..c87e48502afa047 100644
--- a/llvm/lib/CodeGen/StackMaps.cpp
+++ b/llvm/lib/CodeGen/StackMaps.cpp
@@ -53,10 +53,8 @@ static uint64_t getConstMetaVal(const MachineInstr &MI, unsigned Idx) {
   return MO.getImm();
 }
 
-StackMapOpers::StackMapOpers(const MachineInstr *MI)
-  : MI(MI) {
-  assert(getVarIdx() <= MI->getNumOperands() &&
-         "invalid stackmap definition");
+StackMapOpers::StackMapOpers(const MachineInstr *MI) : MI(MI) {
+  assert(getVarIdx() <= MI->getNumOperands() && "invalid stackmap definition");
 }
 
 PatchPointOpers::PatchPointOpers(const MachineInstr *MI)
@@ -80,11 +78,10 @@ unsigned PatchPointOpers::getNextScratchIdx(unsigned StartIdx) const {
 
   // Find the next scratch register (implicit def and early clobber)
   unsigned ScratchIdx = StartIdx, e = MI->getNumOperands();
-  while (ScratchIdx < e &&
-         !(MI->getOperand(ScratchIdx).isReg() &&
-           MI->getOperand(ScratchIdx).isDef() &&
-           MI->getOperand(ScratchIdx).isImplicit() &&
-           MI->getOperand(ScratchIdx).isEarlyClobber()))
+  while (ScratchIdx < e && !(MI->getOperand(ScratchIdx).isReg() &&
+                             MI->getOperand(ScratchIdx).isDef() &&
+                             MI->getOperand(ScratchIdx).isImplicit() &&
+                             MI->getOperand(ScratchIdx).isEarlyClobber()))
     ++ScratchIdx;
 
   assert(ScratchIdx != e && "No scratch register available");
@@ -272,8 +269,8 @@ StackMaps::parseOperand(MachineInstr::const_mop_iterator MOI,
     if (SubRegIdx)
       Offset = TRI->getSubRegIdxOffset(SubRegIdx);
 
-    Locs.emplace_back(Location::Register, TRI->getSpillSize(*RC),
-                      DwarfRegNum, Offset);
+    Locs.emplace_back(Location::Register, TRI->getSpillSize(*RC), DwarfRegNum,
+                      Offset);
     return ++MOI;
   }
 
@@ -546,8 +543,8 @@ void StackMaps::recordStackMap(const MCSymbol &L, const MachineInstr &MI) {
 
   StackMapOpers opers(&MI);
   const int64_t ID = MI.getOperand(PatchPointOpers::IDPos).getImm();
-  recordStackMapOpers(L, MI, ID, std::next(MI.operands_begin(),
-                                           opers.getVarIdx()),
+  recordStackMapOpers(L, MI, ID,
+                      std::next(MI.operands_begin(), opers.getVarIdx()),
                       MI.operands_end());
 }
 
@@ -700,7 +697,7 @@ void StackMaps::emitCallsiteEntries(MCStreamer &OS) {
 
     for (const auto &Loc : CSLocs) {
       OS.emitIntValue(Loc.Type, 1);
-      OS.emitIntValue(0, 1);  // Reserved
+      OS.emitIntValue(0, 1); // Reserved
       OS.emitInt16(Loc.Size);
       OS.emitInt16(Loc.Reg);
       OS.emitInt16(0); // Reserved
diff --git a/llvm/lib/CodeGen/StackProtector.cpp b/llvm/lib/CodeGen/StackProtector.cpp
index 387b653f8815367..b8ace2125297cca 100644
--- a/llvm/lib/CodeGen/StackProtector.cpp
+++ b/llvm/lib/CodeGen/StackProtector.cpp
@@ -70,12 +70,12 @@ StackProtector::StackProtector() : FunctionPass(ID) {
   initializeStackProtectorPass(*PassRegistry::getPassRegistry());
 }
 
-INITIALIZE_PASS_BEGIN(StackProtector, DEBUG_TYPE,
-                      "Insert stack protectors", false, true)
+INITIALIZE_PASS_BEGIN(StackProtector, DEBUG_TYPE, "Insert stack protectors",
+                      false, true)
 INITIALIZE_PASS_DEPENDENCY(TargetPassConfig)
 INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
-INITIALIZE_PASS_END(StackProtector, DEBUG_TYPE,
-                    "Insert stack protectors", false, true)
+INITIALIZE_PASS_END(StackProtector, DEBUG_TYPE, "Insert stack protectors",
+                    false, true)
 
 FunctionPass *llvm::createStackProtectorPass() { return new StackProtector(); }
 
@@ -122,9 +122,9 @@ bool StackProtector::runOnFunction(Function &Fn) {
 /// \param [out] IsLarge is set to true if a protectable array is found and
 /// it is "large" ( >= ssp-buffer-size).  In the case of a structure with
 /// multiple arrays, this gets set if any of them is large.
-static bool ContainsProtectableArray(Type *Ty, Module *M, unsigned SSPBufferSize,
-                                     bool &IsLarge, bool Strong,
-                                     bool InStruct) {
+static bool ContainsProtectableArray(Type *Ty, Module *M,
+                                     unsigned SSPBufferSize, bool &IsLarge,
+                                     bool Strong, bool InStruct) {
   if (!Ty)
     return false;
   if (ArrayType *AT = dyn_cast<ArrayType>(Ty)) {
@@ -370,9 +370,9 @@ bool StackProtector::requiresStackProtector(Function *F, SSPLayoutMap *Layout) {
                                      IsLarge, Strong, false)) {
           if (!Layout)
             return true;
-          Layout->insert(std::make_pair(
-              AI, IsLarge ? MachineFrameInfo::SSPLK_LargeArray
-                          : MachineFrameInfo::SSPLK_SmallArray));
+          Layout->insert(
+              std::make_pair(AI, IsLarge ? MachineFrameInfo::SSPLK_LargeArray
+                                         : MachineFrameInfo::SSPLK_SmallArray));
           ORE.emit([&]() {
             return OptimizationRemark(DEBUG_TYPE, "StackProtectorBuffer", &I)
                    << "Stack protection applied to function "
diff --git a/llvm/lib/CodeGen/StackSlotColoring.cpp b/llvm/lib/CodeGen/StackSlotColoring.cpp
index 16a1203262e5a31..e614dd246158bac 100644
--- a/llvm/lib/CodeGen/StackSlotColoring.cpp
+++ b/llvm/lib/CodeGen/StackSlotColoring.cpp
@@ -47,124 +47,123 @@ using namespace llvm;
 #define DEBUG_TYPE "stack-slot-coloring"
 
 static cl::opt<bool>
-DisableSharing("no-stack-slot-sharing",
-             cl::init(false), cl::Hidden,
-             cl::desc("Suppress slot sharing during stack coloring"));
+    DisableSharing("no-stack-slot-sharing", cl::init(false), cl::Hidden,
+                   cl::desc("Suppress slot sharing during stack coloring"));
 
 static cl::opt<int> DCELimit("ssc-dce-limit", cl::init(-1), cl::Hidden);
 
 STATISTIC(NumEliminated, "Number of stack slots eliminated due to coloring");
-STATISTIC(NumDead,       "Number of trivially dead stack accesses eliminated");
+STATISTIC(NumDead, "Number of trivially dead stack accesses eliminated");
 
 namespace {
 
-  class StackSlotColoring : public MachineFunctionPass {
-    LiveStacks *LS = nullptr;
-    MachineFrameInfo *MFI = nullptr;
-    const TargetInstrInfo *TII = nullptr;
-    const MachineBlockFrequencyInfo *MBFI = nullptr;
-
-    // SSIntervals - Spill slot intervals.
-    std::vector<LiveInterval*> SSIntervals;
-
-    // SSRefs - Keep a list of MachineMemOperands for each spill slot.
-    // MachineMemOperands can be shared between instructions, so we need
-    // to be careful that renames like [FI0, FI1] -> [FI1, FI2] do not
-    // become FI0 -> FI1 -> FI2.
-    SmallVector<SmallVector<MachineMemOperand *, 8>, 16> SSRefs;
-
-    // OrigAlignments - Alignments of stack objects before coloring.
-    SmallVector<Align, 16> OrigAlignments;
-
-    // OrigSizes - Sizes of stack objects before coloring.
-    SmallVector<unsigned, 16> OrigSizes;
-
-    // AllColors - If index is set, it's a spill slot, i.e. color.
-    // FIXME: This assumes PEI locate spill slot with smaller indices
-    // closest to stack pointer / frame pointer. Therefore, smaller
-    // index == better color. This is per stack ID.
-    SmallVector<BitVector, 2> AllColors;
-
-    // NextColor - Next "color" that's not yet used. This is per stack ID.
-    SmallVector<int, 2> NextColors = { -1 };
-
-    // UsedColors - "Colors" that have been assigned. This is per stack ID
-    SmallVector<BitVector, 2> UsedColors;
-
-    // Join all intervals sharing one color into a single LiveIntervalUnion to
-    // speedup range overlap test.
-    class ColorAssignmentInfo {
-      // Single liverange (used to avoid creation of LiveIntervalUnion).
-      LiveInterval *SingleLI = nullptr;
-      // LiveIntervalUnion to perform overlap test.
-      LiveIntervalUnion *LIU = nullptr;
-      // LiveIntervalUnion has a parameter in its constructor so doing this
-      // dirty magic.
-      uint8_t LIUPad[sizeof(LiveIntervalUnion)];
-
-    public:
-      ~ColorAssignmentInfo() {
-        if (LIU)
-          LIU->~LiveIntervalUnion(); // Dirty magic again.
-      }
+class StackSlotColoring : public MachineFunctionPass {
+  LiveStacks *LS = nullptr;
+  MachineFrameInfo *MFI = nullptr;
+  const TargetInstrInfo *TII = nullptr;
+  const MachineBlockFrequencyInfo *MBFI = nullptr;
+
+  // SSIntervals - Spill slot intervals.
+  std::vector<LiveInterval *> SSIntervals;
+
+  // SSRefs - Keep a list of MachineMemOperands for each spill slot.
+  // MachineMemOperands can be shared between instructions, so we need
+  // to be careful that renames like [FI0, FI1] -> [FI1, FI2] do not
+  // become FI0 -> FI1 -> FI2.
+  SmallVector<SmallVector<MachineMemOperand *, 8>, 16> SSRefs;
+
+  // OrigAlignments - Alignments of stack objects before coloring.
+  SmallVector<Align, 16> OrigAlignments;
+
+  // OrigSizes - Sizes of stack objects before coloring.
+  SmallVector<unsigned, 16> OrigSizes;
+
+  // AllColors - If index is set, it's a spill slot, i.e. color.
+  // FIXME: This assumes PEI locate spill slot with smaller indices
+  // closest to stack pointer / frame pointer. Therefore, smaller
+  // index == better color. This is per stack ID.
+  SmallVector<BitVector, 2> AllColors;
+
+  // NextColor - Next "color" that's not yet used. This is per stack ID.
+  SmallVector<int, 2> NextColors = {-1};
+
+  // UsedColors - "Colors" that have been assigned. This is per stack ID
+  SmallVector<BitVector, 2> UsedColors;
+
+  // Join all intervals sharing one color into a single LiveIntervalUnion to
+  // speedup range overlap test.
+  class ColorAssignmentInfo {
+    // Single liverange (used to avoid creation of LiveIntervalUnion).
+    LiveInterval *SingleLI = nullptr;
+    // LiveIntervalUnion to perform overlap test.
+    LiveIntervalUnion *LIU = nullptr;
+    // LiveIntervalUnion has a parameter in its constructor so doing this
+    // dirty magic.
+    uint8_t LIUPad[sizeof(LiveIntervalUnion)];
 
-      // Return true if LiveInterval overlaps with any
-      // intervals that have already been assigned to this color.
-      bool overlaps(LiveInterval *LI) const {
-        if (LIU)
-          return LiveIntervalUnion::Query(*LI, *LIU).checkInterference();
-        return SingleLI ? SingleLI->overlaps(*LI) : false;
-      }
+  public:
+    ~ColorAssignmentInfo() {
+      if (LIU)
+        LIU->~LiveIntervalUnion(); // Dirty magic again.
+    }
 
-      // Add new LiveInterval to this color.
-      void add(LiveInterval *LI, LiveIntervalUnion::Allocator &Alloc) {
-        assert(!overlaps(LI));
-        if (LIU) {
-          LIU->unify(*LI, *LI);
-        } else if (SingleLI) {
-          LIU = new (LIUPad) LiveIntervalUnion(Alloc);
-          LIU->unify(*SingleLI, *SingleLI);
-          LIU->unify(*LI, *LI);
-          SingleLI = nullptr;
-        } else
-          SingleLI = LI;
-      }
-    };
+    // Return true if LiveInterval overlaps with any
+    // intervals that have already been assigned to this color.
+    bool overlaps(LiveInterval *LI) const {
+      if (LIU)
+        return LiveIntervalUnion::Query(*LI, *LIU).checkInterference();
+      return SingleLI ? SingleLI->overlaps(*LI) : false;
+    }
 
-    LiveIntervalUnion::Allocator LIUAlloc;
+    // Add new LiveInterval to this color.
+    void add(LiveInterval *LI, LiveIntervalUnion::Allocator &Alloc) {
+      assert(!overlaps(LI));
+      if (LIU) {
+        LIU->unify(*LI, *LI);
+      } else if (SingleLI) {
+        LIU = new (LIUPad) LiveIntervalUnion(Alloc);
+        LIU->unify(*SingleLI, *SingleLI);
+        LIU->unify(*LI, *LI);
+        SingleLI = nullptr;
+      } else
+        SingleLI = LI;
+    }
+  };
 
-    // Assignments - Color to intervals mapping.
-    SmallVector<ColorAssignmentInfo, 16> Assignments;
+  LiveIntervalUnion::Allocator LIUAlloc;
 
-  public:
-    static char ID; // Pass identification
+  // Assignments - Color to intervals mapping.
+  SmallVector<ColorAssignmentInfo, 16> Assignments;
 
-    StackSlotColoring() : MachineFunctionPass(ID) {
-      initializeStackSlotColoringPass(*PassRegistry::getPassRegistry());
-    }
+public:
+  static char ID; // Pass identification
 
-    void getAnalysisUsage(AnalysisUsage &AU) const override {
-      AU.setPreservesCFG();
-      AU.addRequired<SlotIndexes>();
-      AU.addPreserved<SlotIndexes>();
-      AU.addRequired<LiveStacks>();
-      AU.addRequired<MachineBlockFrequencyInfo>();
-      AU.addPreserved<MachineBlockFrequencyInfo>();
-      AU.addPreservedID(MachineDominatorsID);
-      MachineFunctionPass::getAnalysisUsage(AU);
-    }
+  StackSlotColoring() : MachineFunctionPass(ID) {
+    initializeStackSlotColoringPass(*PassRegistry::getPassRegistry());
+  }
+
+  void getAnalysisUsage(AnalysisUsage &AU) const override {
+    AU.setPreservesCFG();
+    AU.addRequired<SlotIndexes>();
+    AU.addPreserved<SlotIndexes>();
+    AU.addRequired<LiveStacks>();
+    AU.addRequired<MachineBlockFrequencyInfo>();
+    AU.addPreserved<MachineBlockFrequencyInfo>();
+    AU.addPreservedID(MachineDominatorsID);
+    MachineFunctionPass::getAnalysisUsage(AU);
+  }
 
-    bool runOnMachineFunction(MachineFunction &MF) override;
+  bool runOnMachineFunction(MachineFunction &MF) override;
 
-  private:
-    void InitializeSlots();
-    void ScanForSpillSlotRefs(MachineFunction &MF);
-    int ColorSlot(LiveInterval *li);
-    bool ColorSlots(MachineFunction &MF);
-    void RewriteInstruction(MachineInstr &MI, SmallVectorImpl<int> &SlotMapping,
-                            MachineFunction &MF);
-    bool RemoveDeadStores(MachineBasicBlock* MBB);
-  };
+private:
+  void InitializeSlots();
+  void ScanForSpillSlotRefs(MachineFunction &MF);
+  int ColorSlot(LiveInterval *li);
+  bool ColorSlots(MachineFunction &MF);
+  void RewriteInstruction(MachineInstr &MI, SmallVectorImpl<int> &SlotMapping,
+                          MachineFunction &MF);
+  bool RemoveDeadStores(MachineBasicBlock *MBB);
+};
 
 } // end anonymous namespace
 
@@ -172,20 +171,20 @@ char StackSlotColoring::ID = 0;
 
 char &llvm::StackSlotColoringID = StackSlotColoring::ID;
 
-INITIALIZE_PASS_BEGIN(StackSlotColoring, DEBUG_TYPE,
-                "Stack Slot Coloring", false, false)
+INITIALIZE_PASS_BEGIN(StackSlotColoring, DEBUG_TYPE, "Stack Slot Coloring",
+                      false, false)
 INITIALIZE_PASS_DEPENDENCY(SlotIndexes)
 INITIALIZE_PASS_DEPENDENCY(LiveStacks)
 INITIALIZE_PASS_DEPENDENCY(MachineLoopInfo)
-INITIALIZE_PASS_END(StackSlotColoring, DEBUG_TYPE,
-                "Stack Slot Coloring", false, false)
+INITIALIZE_PASS_END(StackSlotColoring, DEBUG_TYPE, "Stack Slot Coloring", false,
+                    false)
 
 namespace {
 
 // IntervalSorter - Comparison predicate that sort live intervals by
 // their weight.
 struct IntervalSorter {
-  bool operator()(LiveInterval* LHS, LiveInterval* RHS) const {
+  bool operator()(LiveInterval *LHS, LiveInterval *RHS) const {
     return LHS->weight() > RHS->weight();
   }
 };
@@ -218,8 +217,8 @@ void StackSlotColoring::ScanForSpillSlotRefs(MachineFunction &MF) {
            MMOI != EE; ++MMOI) {
         MachineMemOperand *MMO = *MMOI;
         if (const FixedStackPseudoSourceValue *FSV =
-            dyn_cast_or_null<FixedStackPseudoSourceValue>(
-                MMO->getPseudoValue())) {
+                dyn_cast_or_null<FixedStackPseudoSourceValue>(
+                    MMO->getPseudoValue())) {
           int FI = FSV->getFrameIndex();
           if (FI >= 0)
             SSRefs[FI].push_back(MMO);
@@ -265,7 +264,7 @@ void StackSlotColoring::InitializeSlots() {
 
     SSIntervals.push_back(&li);
     OrigAlignments[FI] = MFI->getObjectAlign(FI);
-    OrigSizes[FI]      = MFI->getObjectSize(FI);
+    OrigSizes[FI] = MFI->getObjectSize(FI);
 
     auto StackID = MFI->getStackID(FI);
     if (StackID != 0) {
@@ -404,7 +403,8 @@ bool StackSlotColoring::ColorSlots(MachineFunction &MF) {
   for (int StackID = 0, E = AllColors.size(); StackID != E; ++StackID) {
     int NextColor = NextColors[StackID];
     while (NextColor != -1) {
-      LLVM_DEBUG(dbgs() << "Removing unused stack object fi#" << NextColor << "\n");
+      LLVM_DEBUG(dbgs() << "Removing unused stack object fi#" << NextColor
+                        << "\n");
       MFI->RemoveStackObject(NextColor);
       NextColor = AllColors[StackID].find_next(NextColor);
     }
@@ -441,15 +441,15 @@ void StackSlotColoring::RewriteInstruction(MachineInstr &MI,
 /// definitely dead.  This could obviously be much more aggressive (consider
 /// pairs with instructions between them), but such extensions might have a
 /// considerable compile time impact.
-bool StackSlotColoring::RemoveDeadStores(MachineBasicBlock* MBB) {
+bool StackSlotColoring::RemoveDeadStores(MachineBasicBlock *MBB) {
   // FIXME: This could be much more aggressive, but we need to investigate
   // the compile time impact of doing so.
   bool changed = false;
 
-  SmallVector<MachineInstr*, 4> toErase;
+  SmallVector<MachineInstr *, 4> toErase;
 
-  for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end();
-       I != E; ++I) {
+  for (MachineBasicBlock::iterator I = MBB->begin(), E = MBB->end(); I != E;
+       ++I) {
     if (DCELimit != -1 && (int)NumDead >= DCELimit)
       break;
     int FirstSS, SecondSS;
@@ -475,7 +475,8 @@ bool StackSlotColoring::RemoveDeadStores(MachineBasicBlock* MBB) {
       ++NextMI;
       ++I;
     }
-    if (NextMI == E) continue;
+    if (NextMI == E)
+      continue;
     if (!(StoreReg = TII->isStoreToStackSlot(*NextMI, SecondSS, StoreSize)))
       continue;
     if (FirstSS != SecondSS || LoadReg != StoreReg || FirstSS == -1 ||
diff --git a/llvm/lib/CodeGen/SwiftErrorValueTracking.cpp b/llvm/lib/CodeGen/SwiftErrorValueTracking.cpp
index 74a94d6110f4186..fd08b90e912fc43 100644
--- a/llvm/lib/CodeGen/SwiftErrorValueTracking.cpp
+++ b/llvm/lib/CodeGen/SwiftErrorValueTracking.cpp
@@ -183,8 +183,8 @@ void SwiftErrorValueTracking::propagateVRegs() {
       for (auto *Pred : MBB->predecessors()) {
         if (!Visited.insert(Pred).second)
           continue;
-        VRegs.push_back(std::make_pair(
-            Pred, getOrCreateVReg(Pred, SwiftErrorVal)));
+        VRegs.push_back(
+            std::make_pair(Pred, getOrCreateVReg(Pred, SwiftErrorVal)));
         if (Pred != MBB)
           continue;
         // We have a self-edge.
@@ -240,9 +240,8 @@ void SwiftErrorValueTracking::propagateVRegs() {
       auto const *RC = TLI->getRegClassFor(TLI->getPointerTy(DL));
       Register PHIVReg =
           UpwardsUse ? UUseVReg : MF->getRegInfo().createVirtualRegister(RC);
-      MachineInstrBuilder PHI =
-          BuildMI(*MBB, MBB->getFirstNonPHI(), DLoc,
-                  TII->get(TargetOpcode::PHI), PHIVReg);
+      MachineInstrBuilder PHI = BuildMI(*MBB, MBB->getFirstNonPHI(), DLoc,
+                                        TII->get(TargetOpcode::PHI), PHIVReg);
       for (auto BBRegPair : VRegs) {
         PHI.addReg(BBRegPair.second).addMBB(BBRegPair.first);
       }
@@ -274,9 +273,9 @@ void SwiftErrorValueTracking::propagateVRegs() {
   }
 }
 
-void SwiftErrorValueTracking::preassignVRegs(
-    MachineBasicBlock *MBB, BasicBlock::const_iterator Begin,
-    BasicBlock::const_iterator End) {
+void SwiftErrorValueTracking::preassignVRegs(MachineBasicBlock *MBB,
+                                             BasicBlock::const_iterator Begin,
+                                             BasicBlock::const_iterator End) {
   if (!TLI->supportSwiftError() || SwiftErrorVals.empty())
     return;
 
diff --git a/llvm/lib/CodeGen/SwitchLoweringUtils.cpp b/llvm/lib/CodeGen/SwitchLoweringUtils.cpp
index 36a02d5beb4b240..179f7375fa647df 100644
--- a/llvm/lib/CodeGen/SwitchLoweringUtils.cpp
+++ b/llvm/lib/CodeGen/SwitchLoweringUtils.cpp
@@ -79,7 +79,7 @@ void SwitchCG::SwitchLowering::findJumpTables(CaseClusterVector &Clusters,
       TotalCases[i] += TotalCases[i - 1];
   }
 
-  uint64_t Range = getJumpTableRange(Clusters,0, N - 1);
+  uint64_t Range = getJumpTableRange(Clusters, 0, N - 1);
   uint64_t NumCases = getJumpTableNumCases(TotalCases, 0, N - 1);
   assert(NumCases < UINT64_MAX / 100);
   assert(Range >= NumCases);
@@ -196,8 +196,8 @@ bool SwitchCG::SwitchLowering::buildJumpTable(const CaseClusterVector &Clusters,
 
   auto Prob = BranchProbability::getZero();
   unsigned NumCmps = 0;
-  std::vector<MachineBasicBlock*> Table;
-  DenseMap<MachineBasicBlock*, BranchProbability> JTProbs;
+  std::vector<MachineBasicBlock *> Table;
+  DenseMap<MachineBasicBlock *, BranchProbability> JTProbs;
 
   // Initialize probabilities in JTProbs.
   for (unsigned I = First; I <= Last; ++I)
@@ -274,7 +274,7 @@ void SwitchCG::SwitchLowering::findBitTestClusters(CaseClusterVector &Clusters,
   for (const CaseCluster &C : Clusters)
     assert(C.Kind == CC_Range || C.Kind == CC_JumpTable);
   for (unsigned i = 1; i < Clusters.size(); ++i)
-    assert(Clusters[i-1].High->getValue().slt(Clusters[i].Low->getValue()));
+    assert(Clusters[i - 1].High->getValue().slt(Clusters[i].Low->getValue()));
 #endif
 
   // The algorithm below is not suitable for -O0.
diff --git a/llvm/lib/CodeGen/TailDuplication.cpp b/llvm/lib/CodeGen/TailDuplication.cpp
index bf3d2088e196c46..1f5c3cc869bdc74 100644
--- a/llvm/lib/CodeGen/TailDuplication.cpp
+++ b/llvm/lib/CodeGen/TailDuplication.cpp
@@ -33,9 +33,10 @@ class TailDuplicateBase : public MachineFunctionPass {
   TailDuplicator Duplicator;
   std::unique_ptr<MBFIWrapper> MBFIW;
   bool PreRegAlloc;
+
 public:
   TailDuplicateBase(char &PassID, bool PreRegAlloc)
-    : MachineFunctionPass(PassID), PreRegAlloc(PreRegAlloc) {}
+      : MachineFunctionPass(PassID), PreRegAlloc(PreRegAlloc) {}
 
   bool runOnMachineFunction(MachineFunction &MF) override;
 
@@ -63,8 +64,8 @@ class EarlyTailDuplicate : public TailDuplicateBase {
   }
 
   MachineFunctionProperties getClearedProperties() const override {
-    return MachineFunctionProperties()
-      .set(MachineFunctionProperties::Property::NoPHIs);
+    return MachineFunctionProperties().set(
+        MachineFunctionProperties::Property::NoPHIs);
   }
 };
 
@@ -86,9 +87,9 @@ bool TailDuplicateBase::runOnMachineFunction(MachineFunction &MF) {
 
   auto MBPI = &getAnalysis<MachineBranchProbabilityInfo>();
   auto *PSI = &getAnalysis<ProfileSummaryInfoWrapperPass>().getPSI();
-  auto *MBFI = (PSI && PSI->hasProfileSummary()) ?
-               &getAnalysis<LazyMachineBlockFrequencyInfoPass>().getBFI() :
-               nullptr;
+  auto *MBFI = (PSI && PSI->hasProfileSummary())
+                   ? &getAnalysis<LazyMachineBlockFrequencyInfoPass>().getBFI()
+                   : nullptr;
   if (MBFI)
     MBFIW = std::make_unique<MBFIWrapper>(*MBFI);
   Duplicator.initMF(MF, PreRegAlloc, MBPI, MBFI ? MBFIW.get() : nullptr, PSI,
diff --git a/llvm/lib/CodeGen/TailDuplicator.cpp b/llvm/lib/CodeGen/TailDuplicator.cpp
index 5ed67bd0a121ed6..e6c665eacf2c921 100644
--- a/llvm/lib/CodeGen/TailDuplicator.cpp
+++ b/llvm/lib/CodeGen/TailDuplicator.cpp
@@ -65,8 +65,8 @@ static cl::opt<unsigned> TailDuplicateSize(
 static cl::opt<unsigned> TailDupIndirectBranchSize(
     "tail-dup-indirect-size",
     cl::desc("Maximum instructions to consider tail duplicating blocks that "
-             "end with indirect branches."), cl::init(20),
-    cl::Hidden);
+             "end with indirect branches."),
+    cl::init(20), cl::Hidden);
 
 static cl::opt<bool>
     TailDupVerify("tail-dup-verify",
@@ -78,8 +78,7 @@ static cl::opt<unsigned> TailDupLimit("tail-dup-limit", cl::init(~0U),
 
 void TailDuplicator::initMF(MachineFunction &MFin, bool PreRegAlloc,
                             const MachineBranchProbabilityInfo *MBPIin,
-                            MBFIWrapper *MBFIin,
-                            ProfileSummaryInfo *PSIin,
+                            MBFIWrapper *MBFIin, ProfileSummaryInfo *PSIin,
                             bool LayoutModeIn, unsigned TailDupSizeIn) {
   MF = &MFin;
   TII = MF->getSubtarget().getInstrInfo();
@@ -153,9 +152,8 @@ static void VerifyPHIs(MachineFunction &MF, bool CheckExtra) {
 ///     all Preds that received a copy of \p MBB.
 /// \p RemovalCallback - if non-null, called just before MBB is deleted.
 bool TailDuplicator::tailDuplicateAndUpdate(
-    bool IsSimple, MachineBasicBlock *MBB,
-    MachineBasicBlock *ForcedLayoutPred,
-    SmallVectorImpl<MachineBasicBlock*> *DuplicatedPreds,
+    bool IsSimple, MachineBasicBlock *MBB, MachineBasicBlock *ForcedLayoutPred,
+    SmallVectorImpl<MachineBasicBlock *> *DuplicatedPreds,
     function_ref<void(MachineBasicBlock *)> *RemovalCallback,
     SmallVectorImpl<MachineBasicBlock *> *CandidatePtr) {
   // Save the successors list.
@@ -164,8 +162,8 @@ bool TailDuplicator::tailDuplicateAndUpdate(
 
   SmallVector<MachineBasicBlock *, 8> TDBBs;
   SmallVector<MachineInstr *, 16> Copies;
-  if (!tailDuplicate(IsSimple, MBB, ForcedLayoutPred,
-                     TDBBs, Copies, CandidatePtr))
+  if (!tailDuplicate(IsSimple, MBB, ForcedLayoutPred, TDBBs, Copies,
+                     CandidatePtr))
     return false;
 
   ++NumTails;
@@ -609,10 +607,11 @@ bool TailDuplicator::shouldTailDuplicate(bool IsSimple,
     // CFI instructions are marked as non-duplicable, because Darwin compact
     // unwind info emission can't handle multiple prologue setups. In case of
     // DWARF, allow them be duplicated, so that their existence doesn't prevent
-    // tail duplication of some basic blocks, that would be duplicated otherwise.
+    // tail duplication of some basic blocks, that would be duplicated
+    // otherwise.
     if (MI.isNotDuplicable() &&
         (TailBB.getParent()->getTarget().getTargetTriple().isOSDarwin() ||
-        !MI.isCFIInstruction()))
+         !MI.isCFIInstruction()))
       return false;
 
     // Convergent instructions can be duplicated only if doing so doesn't add
@@ -825,11 +824,12 @@ bool TailDuplicator::canTailDuplicate(MachineBasicBlock *TailBB,
 ///                      into.
 /// \p Copies            A vector of copy instructions inserted. Used later to
 ///                      walk all the inserted copies and remove redundant ones.
-bool TailDuplicator::tailDuplicate(bool IsSimple, MachineBasicBlock *TailBB,
-                          MachineBasicBlock *ForcedLayoutPred,
-                          SmallVectorImpl<MachineBasicBlock *> &TDBBs,
-                          SmallVectorImpl<MachineInstr *> &Copies,
-                          SmallVectorImpl<MachineBasicBlock *> *CandidatePtr) {
+bool TailDuplicator::tailDuplicate(
+    bool IsSimple, MachineBasicBlock *TailBB,
+    MachineBasicBlock *ForcedLayoutPred,
+    SmallVectorImpl<MachineBasicBlock *> &TDBBs,
+    SmallVectorImpl<MachineInstr *> &Copies,
+    SmallVectorImpl<MachineBasicBlock *> *CandidatePtr) {
   LLVM_DEBUG(dbgs() << "\n*** Tail-duplicating " << printMBBReference(*TailBB)
                     << '\n');
 
@@ -926,10 +926,8 @@ bool TailDuplicator::tailDuplicate(bool IsSimple, MachineBasicBlock *TailBB,
       // Layout preds are not always CFG preds. Check.
       *PrevBB->succ_begin() == TailBB &&
       !TII->analyzeBranch(*PrevBB, PriorTBB, PriorFBB, PriorCond) &&
-      PriorCond.empty() &&
-      (!PriorTBB || PriorTBB == TailBB) &&
-      TailBB->pred_size() == 1 &&
-      !TailBB->hasAddressTaken()) {
+      PriorCond.empty() && (!PriorTBB || PriorTBB == TailBB) &&
+      TailBB->pred_size() == 1 && !TailBB->hasAddressTaken()) {
     LLVM_DEBUG(dbgs() << "\nMerging into block: " << *PrevBB
                       << "From MBB: " << *TailBB);
     // There may be a branch to the layout successor. This is unlikely but it
@@ -1033,14 +1031,15 @@ bool TailDuplicator::tailDuplicate(bool IsSimple, MachineBasicBlock *TailBB,
 
 /// At the end of the block \p MBB generate COPY instructions between registers
 /// described by \p CopyInfos. Append resulting instructions to \p Copies.
-void TailDuplicator::appendCopies(MachineBasicBlock *MBB,
-      SmallVectorImpl<std::pair<Register, RegSubRegPair>> &CopyInfos,
-      SmallVectorImpl<MachineInstr*> &Copies) {
+void TailDuplicator::appendCopies(
+    MachineBasicBlock *MBB,
+    SmallVectorImpl<std::pair<Register, RegSubRegPair>> &CopyInfos,
+    SmallVectorImpl<MachineInstr *> &Copies) {
   MachineBasicBlock::iterator Loc = MBB->getFirstTerminator();
   const MCInstrDesc &CopyD = TII->get(TargetOpcode::COPY);
   for (auto &CI : CopyInfos) {
     auto C = BuildMI(*MBB, Loc, DebugLoc(), CopyD, CI.first)
-                .addReg(CI.second.Reg, 0, CI.second.SubReg);
+                 .addReg(CI.second.Reg, 0, CI.second.SubReg);
     Copies.push_back(C);
   }
 }
diff --git a/llvm/lib/CodeGen/TargetFrameLoweringImpl.cpp b/llvm/lib/CodeGen/TargetFrameLoweringImpl.cpp
index 48a2094f5d451c9..795134b3b4f6acb 100644
--- a/llvm/lib/CodeGen/TargetFrameLoweringImpl.cpp
+++ b/llvm/lib/CodeGen/TargetFrameLoweringImpl.cpp
@@ -29,7 +29,8 @@ using namespace llvm;
 
 TargetFrameLowering::~TargetFrameLowering() = default;
 
-bool TargetFrameLowering::enableCalleeSaveSkip(const MachineFunction &MF) const {
+bool TargetFrameLowering::enableCalleeSaveSkip(
+    const MachineFunction &MF) const {
   assert(MF.getFunction().hasFnAttribute(Attribute::NoReturn) &&
          MF.getFunction().hasFnAttribute(Attribute::NoUnwind) &&
          !MF.getFunction().hasFnAttribute(Attribute::UWTable));
@@ -115,9 +116,9 @@ void TargetFrameLowering::determineCalleeSaves(MachineFunction &MF,
   // execution we do not need the CSR spills either: setjmp stores all CSRs
   // it was called with into the jmp_buf, which longjmp then restores.
   if (MF.getFunction().hasFnAttribute(Attribute::NoReturn) &&
-        MF.getFunction().hasFnAttribute(Attribute::NoUnwind) &&
-        !MF.getFunction().hasFnAttribute(Attribute::UWTable) &&
-        enableCalleeSaveSkip(MF))
+      MF.getFunction().hasFnAttribute(Attribute::NoUnwind) &&
+      !MF.getFunction().hasFnAttribute(Attribute::UWTable) &&
+      enableCalleeSaveSkip(MF))
     return;
 
   // Functions which call __builtin_unwind_init get all their registers saved.
@@ -131,7 +132,7 @@ void TargetFrameLowering::determineCalleeSaves(MachineFunction &MF,
 }
 
 bool TargetFrameLowering::allocateScavengingFrameIndexesNearIncomingSP(
-  const MachineFunction &MF) const {
+    const MachineFunction &MF) const {
   if (!hasFP(MF))
     return false;
 
diff --git a/llvm/lib/CodeGen/TargetInstrInfo.cpp b/llvm/lib/CodeGen/TargetInstrInfo.cpp
index 16b937cd1f68412..3ac1236b2c84fbd 100644
--- a/llvm/lib/CodeGen/TargetInstrInfo.cpp
+++ b/llvm/lib/CodeGen/TargetInstrInfo.cpp
@@ -38,12 +38,12 @@
 using namespace llvm;
 
 static cl::opt<bool> DisableHazardRecognizer(
-  "disable-sched-hazard", cl::Hidden, cl::init(false),
-  cl::desc("Disable hazard detection during preRA scheduling"));
+    "disable-sched-hazard", cl::Hidden, cl::init(false),
+    cl::desc("Disable hazard detection during preRA scheduling"));
 
 TargetInstrInfo::~TargetInstrInfo() = default;
 
-const TargetRegisterClass*
+const TargetRegisterClass *
 TargetInstrInfo::getRegClass(const MCInstrDesc &MCID, unsigned OpNum,
                              const TargetRegisterInfo *TRI,
                              const MachineFunction &MF) const {
@@ -97,9 +97,9 @@ static bool isAsmComment(const char *Str, const MCAsmInfo &MAI) {
 /// simple--i.e. not a logical or arithmetic expression--size values without
 /// the optional fill value. This is primarily used for creating arbitrary
 /// sized inline asm blocks for testing purposes.
-unsigned TargetInstrInfo::getInlineAsmLength(
-  const char *Str,
-  const MCAsmInfo &MAI, const TargetSubtargetInfo *STI) const {
+unsigned
+TargetInstrInfo::getInlineAsmLength(const char *Str, const MCAsmInfo &MAI,
+                                    const TargetSubtargetInfo *STI) const {
   // Count the number of instructions in the asm.
   bool AtInsnStart = true;
   unsigned Length = 0;
@@ -137,9 +137,8 @@ unsigned TargetInstrInfo::getInlineAsmLength(
 
 /// ReplaceTailWithBranchTo - Delete the instruction OldInst and everything
 /// after it, replacing it with an unconditional branch to NewDest.
-void
-TargetInstrInfo::ReplaceTailWithBranchTo(MachineBasicBlock::iterator Tail,
-                                         MachineBasicBlock *NewDest) const {
+void TargetInstrInfo::ReplaceTailWithBranchTo(
+    MachineBasicBlock::iterator Tail, MachineBasicBlock *NewDest) const {
   MachineBasicBlock *MBB = Tail->getParent();
 
   // Remove all the old successors of MBB from the CFG.
@@ -173,8 +172,10 @@ MachineInstr *TargetInstrInfo::commuteInstructionImpl(MachineInstr &MI,
     // No idea how to commute this instruction. Target should implement its own.
     return nullptr;
 
-  unsigned CommutableOpIdx1 = Idx1; (void)CommutableOpIdx1;
-  unsigned CommutableOpIdx2 = Idx2; (void)CommutableOpIdx2;
+  unsigned CommutableOpIdx1 = Idx1;
+  (void)CommutableOpIdx1;
+  unsigned CommutableOpIdx2 = Idx2;
+  (void)CommutableOpIdx2;
   assert(findCommutedOpIndices(MI, CommutableOpIdx1, CommutableOpIdx2) &&
          CommutableOpIdx1 == Idx1 && CommutableOpIdx2 == Idx2 &&
          "TargetInstrInfo::CommuteInstructionImpl(): not commutable operands.");
@@ -305,8 +306,8 @@ bool TargetInstrInfo::findCommutedOpIndices(const MachineInstr &MI,
   // is not true, then the target must implement this.
   unsigned CommutableOpIdx1 = MCID.getNumDefs();
   unsigned CommutableOpIdx2 = CommutableOpIdx1 + 1;
-  if (!fixCommutedOpIndices(SrcOpIdx1, SrcOpIdx2,
-                            CommutableOpIdx1, CommutableOpIdx2))
+  if (!fixCommutedOpIndices(SrcOpIdx1, SrcOpIdx2, CommutableOpIdx1,
+                            CommutableOpIdx2))
     return false;
 
   if (!MI.getOperand(SrcOpIdx1).isReg() || !MI.getOperand(SrcOpIdx2).isReg())
@@ -316,7 +317,8 @@ bool TargetInstrInfo::findCommutedOpIndices(const MachineInstr &MI,
 }
 
 bool TargetInstrInfo::isUnpredicatedTerminator(const MachineInstr &MI) const {
-  if (!MI.isTerminator()) return false;
+  if (!MI.isTerminator())
+    return false;
 
   // Conditional branch is a special case.
   if (MI.isBranch() && !MI.isBarrier())
@@ -430,8 +432,10 @@ bool TargetInstrInfo::produceSameValue(const MachineInstr &MI0,
   return MI0.isIdenticalTo(MI1, MachineInstr::IgnoreVRegDefs);
 }
 
-MachineInstr &TargetInstrInfo::duplicate(MachineBasicBlock &MBB,
-    MachineBasicBlock::iterator InsertBefore, const MachineInstr &Orig) const {
+MachineInstr &
+TargetInstrInfo::duplicate(MachineBasicBlock &MBB,
+                           MachineBasicBlock::iterator InsertBefore,
+                           const MachineInstr &Orig) const {
   assert(!Orig.isNotDuplicable() && "Instruction cannot be duplicated");
   MachineFunction &MF = *MBB.getParent();
   return MF.cloneMachineInstrBundle(MBB, InsertBefore, Orig);
@@ -445,7 +449,7 @@ static const TargetRegisterClass *canFoldCopy(const MachineInstr &MI,
   assert(TII.isCopyInstr(MI) && "MI must be a COPY instruction");
   if (MI.getNumOperands() != 2)
     return nullptr;
-  assert(FoldIdx<2 && "FoldIdx refers no nonexistent operand");
+  assert(FoldIdx < 2 && "FoldIdx refers no nonexistent operand");
 
   const MachineOperand &FoldOp = MI.getOperand(FoldIdx);
   const MachineOperand &LiveOp = MI.getOperand(1 - FoldIdx);
@@ -533,8 +537,7 @@ static MachineInstr *foldPatchpoint(MachineFunction &MF, MachineInstr &MI,
       unsigned SpillSize;
       unsigned SpillOffset;
       // Compute the spill slot size and offset.
-      const TargetRegisterClass *RC =
-        MF.getRegInfo().getRegClass(MO.getReg());
+      const TargetRegisterClass *RC = MF.getRegInfo().getRegClass(MO.getReg());
       bool Valid =
           TII.getStackSlotRange(RC, MO.getSubReg(), SpillSize, SpillOffset, MF);
       if (!Valid)
@@ -611,11 +614,9 @@ MachineInstr *TargetInstrInfo::foldMemoryOperand(MachineInstr &MI,
   if (NewMI) {
     NewMI->setMemRefs(MF, MI.memoperands());
     // Add a memory operand, foldMemoryOperandImpl doesn't do that.
-    assert((!(Flags & MachineMemOperand::MOStore) ||
-            NewMI->mayStore()) &&
+    assert((!(Flags & MachineMemOperand::MOStore) || NewMI->mayStore()) &&
            "Folded a def to a non-store!");
-    assert((!(Flags & MachineMemOperand::MOLoad) ||
-            NewMI->mayLoad()) &&
+    assert((!(Flags & MachineMemOperand::MOLoad) || NewMI->mayLoad()) &&
            "Folded a use to a non-load!");
     assert(MFI.getObjectOffset(FI) != -1);
     MachineMemOperand *MMO =
@@ -862,8 +863,8 @@ bool TargetInstrInfo::getMachineCombinerPatterns(
 }
 
 /// Return true when a code sequence can improve loop throughput.
-bool
-TargetInstrInfo::isThroughputPattern(MachineCombinerPattern Pattern) const {
+bool TargetInstrInfo::isThroughputPattern(
+    MachineCombinerPattern Pattern) const {
   return false;
 }
 
@@ -975,8 +976,7 @@ static std::pair<bool, bool> mustSwapOperands(MachineCombinerPattern Pattern) {
 /// Attempt the reassociation transformation to reduce critical path length.
 /// See the above comments before getMachineCombinerPatterns().
 void TargetInstrInfo::reassociateOps(
-    MachineInstr &Root, MachineInstr &Prev,
-    MachineCombinerPattern Pattern,
+    MachineInstr &Root, MachineInstr &Prev, MachineCombinerPattern Pattern,
     SmallVectorImpl<MachineInstr *> &InsInstrs,
     SmallVectorImpl<MachineInstr *> &DelInstrs,
     DenseMap<unsigned, unsigned> &InstrIdxForVirtReg) const {
@@ -990,19 +990,24 @@ void TargetInstrInfo::reassociateOps(
   // operands may be commuted. Each row corresponds to a pattern value,
   // and each column specifies the index of A, B, X, Y.
   unsigned OpIdx[4][4] = {
-    { 1, 1, 2, 2 },
-    { 1, 2, 2, 1 },
-    { 2, 1, 1, 2 },
-    { 2, 2, 1, 1 }
-  };
+      {1, 1, 2, 2}, {1, 2, 2, 1}, {2, 1, 1, 2}, {2, 2, 1, 1}};
 
   int Row;
   switch (Pattern) {
-  case MachineCombinerPattern::REASSOC_AX_BY: Row = 0; break;
-  case MachineCombinerPattern::REASSOC_AX_YB: Row = 1; break;
-  case MachineCombinerPattern::REASSOC_XA_BY: Row = 2; break;
-  case MachineCombinerPattern::REASSOC_XA_YB: Row = 3; break;
-  default: llvm_unreachable("unexpected MachineCombinerPattern");
+  case MachineCombinerPattern::REASSOC_AX_BY:
+    Row = 0;
+    break;
+  case MachineCombinerPattern::REASSOC_AX_YB:
+    Row = 1;
+    break;
+  case MachineCombinerPattern::REASSOC_XA_BY:
+    Row = 2;
+    break;
+  case MachineCombinerPattern::REASSOC_XA_YB:
+    Row = 3;
+    break;
+  default:
+    llvm_unreachable("unexpected MachineCombinerPattern");
   }
 
   MachineOperand &OpA = Prev.getOperand(OpIdx[Row][0]);
@@ -1161,7 +1166,8 @@ bool TargetInstrInfo::isReallyTriviallyReMaterializable(
   // If any of the registers accessed are non-constant, conservatively assume
   // the instruction is not rematerializable.
   for (const MachineOperand &MO : MI.operands()) {
-    if (!MO.isReg()) continue;
+    if (!MO.isReg())
+      continue;
     Register Reg = MO.getReg();
     if (Reg == 0)
       continue;
@@ -1201,7 +1207,7 @@ int TargetInstrInfo::getSPAdjust(const MachineInstr &MI) const {
   const MachineFunction *MF = MI.getMF();
   const TargetFrameLowering *TFI = MF->getSubtarget().getFrameLowering();
   bool StackGrowsDown =
-    TFI->getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
+      TFI->getStackGrowthDirection() == TargetFrameLowering::StackGrowsDown;
 
   unsigned FrameSetupOpcode = getCallFrameSetupOpcode();
   unsigned FrameDestroyOpcode = getCallFrameDestroyOpcode();
@@ -1249,9 +1255,9 @@ bool TargetInstrInfo::usePreRAHazardRecognizer() const {
 }
 
 // Default implementation of CreateTargetRAHazardRecognizer.
-ScheduleHazardRecognizer *TargetInstrInfo::
-CreateTargetHazardRecognizer(const TargetSubtargetInfo *STI,
-                             const ScheduleDAG *DAG) const {
+ScheduleHazardRecognizer *
+TargetInstrInfo::CreateTargetHazardRecognizer(const TargetSubtargetInfo *STI,
+                                              const ScheduleDAG *DAG) const {
   // Dummy hazard recognizer allows all instructions to issue.
   return new ScheduleHazardRecognizer();
 }
@@ -1263,9 +1269,8 @@ ScheduleHazardRecognizer *TargetInstrInfo::CreateTargetMIHazardRecognizer(
 }
 
 // Default implementation of CreateTargetPostRAHazardRecognizer.
-ScheduleHazardRecognizer *TargetInstrInfo::
-CreateTargetPostRAHazardRecognizer(const InstrItineraryData *II,
-                                   const ScheduleDAG *DAG) const {
+ScheduleHazardRecognizer *TargetInstrInfo::CreateTargetPostRAHazardRecognizer(
+    const InstrItineraryData *II, const ScheduleDAG *DAG) const {
   return new ScoreboardHazardRecognizer(II, DAG, "post-RA-sched");
 }
 
@@ -1287,10 +1292,9 @@ bool TargetInstrInfo::getMemOperandWithOffset(
 //  SelectionDAG latency interface.
 //===----------------------------------------------------------------------===//
 
-int
-TargetInstrInfo::getOperandLatency(const InstrItineraryData *ItinData,
-                                   SDNode *DefNode, unsigned DefIdx,
-                                   SDNode *UseNode, unsigned UseIdx) const {
+int TargetInstrInfo::getOperandLatency(const InstrItineraryData *ItinData,
+                                       SDNode *DefNode, unsigned DefIdx,
+                                       SDNode *UseNode, unsigned UseIdx) const {
   if (!ItinData || ItinData->isEmpty())
     return -1;
 
@@ -1451,7 +1455,8 @@ TargetInstrInfo::describeLoadedValue(const MachineInstr &MI,
     // TODO: Can currently only handle mem instructions with a single define.
     // An example from the x86 target:
     //    ...
-    //    DIV64m $rsp, 1, $noreg, 24, $noreg, implicit-def dead $rax, implicit-def $rdx
+    //    DIV64m $rsp, 1, $noreg, 24, $noreg, implicit-def dead $rax,
+    //    implicit-def $rdx
     //    ...
     //
     if (MI.getNumExplicitDefs() != 1)
@@ -1501,8 +1506,8 @@ int TargetInstrInfo::getOperandLatency(const InstrItineraryData *ItinData,
 bool TargetInstrInfo::getRegSequenceInputs(
     const MachineInstr &MI, unsigned DefIdx,
     SmallVectorImpl<RegSubRegPairAndIdx> &InputRegs) const {
-  assert((MI.isRegSequence() ||
-          MI.isRegSequenceLike()) && "Instruction do not have the proper type");
+  assert((MI.isRegSequence() || MI.isRegSequenceLike()) &&
+         "Instruction do not have the proper type");
 
   if (!MI.isRegSequence())
     return getRegSequenceLikeInputs(MI, DefIdx, InputRegs);
@@ -1528,8 +1533,8 @@ bool TargetInstrInfo::getRegSequenceInputs(
 bool TargetInstrInfo::getExtractSubregInputs(
     const MachineInstr &MI, unsigned DefIdx,
     RegSubRegPairAndIdx &InputReg) const {
-  assert((MI.isExtractSubreg() ||
-      MI.isExtractSubregLike()) && "Instruction do not have the proper type");
+  assert((MI.isExtractSubreg() || MI.isExtractSubregLike()) &&
+         "Instruction do not have the proper type");
 
   if (!MI.isExtractSubreg())
     return getExtractSubregLikeInputs(MI, DefIdx, InputReg);
@@ -1551,10 +1556,10 @@ bool TargetInstrInfo::getExtractSubregInputs(
 }
 
 bool TargetInstrInfo::getInsertSubregInputs(
-    const MachineInstr &MI, unsigned DefIdx,
-    RegSubRegPair &BaseReg, RegSubRegPairAndIdx &InsertedReg) const {
-  assert((MI.isInsertSubreg() ||
-      MI.isInsertSubregLike()) && "Instruction do not have the proper type");
+    const MachineInstr &MI, unsigned DefIdx, RegSubRegPair &BaseReg,
+    RegSubRegPairAndIdx &InsertedReg) const {
+  assert((MI.isInsertSubreg() || MI.isInsertSubregLike()) &&
+         "Instruction do not have the proper type");
 
   if (!MI.isInsertSubreg())
     return getInsertSubregLikeInputs(MI, DefIdx, BaseReg, InsertedReg);
@@ -1656,8 +1661,9 @@ void TargetInstrInfo::mergeOutliningCandidateAttributes(
     F.addFnAttr(Attribute::NoUnwind);
 }
 
-outliner::InstrType TargetInstrInfo::getOutliningType(
-    MachineBasicBlock::iterator &MIT, unsigned Flags) const {
+outliner::InstrType
+TargetInstrInfo::getOutliningType(MachineBasicBlock::iterator &MIT,
+                                  unsigned Flags) const {
   MachineInstr &MI = *MIT;
 
   // NOTE: MI.isMetaInstruction() will match CFI_INSTRUCTION, but some targets
@@ -1680,13 +1686,13 @@ outliner::InstrType TargetInstrInfo::getOutliningType(
 
   // Some other special cases.
   switch (MI.getOpcode()) {
-    case TargetOpcode::IMPLICIT_DEF:
-    case TargetOpcode::KILL:
-    case TargetOpcode::LIFETIME_START:
-    case TargetOpcode::LIFETIME_END:
-      return outliner::InstrType::Invisible;
-    default:
-      break;
+  case TargetOpcode::IMPLICIT_DEF:
+  case TargetOpcode::KILL:
+  case TargetOpcode::LIFETIME_START:
+  case TargetOpcode::LIFETIME_END:
+    return outliner::InstrType::Invisible;
+  default:
+    break;
   }
 
   // Is this a terminator for a basic block?
diff --git a/llvm/lib/CodeGen/TargetLoweringBase.cpp b/llvm/lib/CodeGen/TargetLoweringBase.cpp
index 3e4bff5ddce1264..812964dc4f0b189 100644
--- a/llvm/lib/CodeGen/TargetLoweringBase.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringBase.cpp
@@ -70,13 +70,13 @@ static cl::opt<bool> JumpIsExpensiveOverride(
     cl::desc("Do not create extra branches to split comparison logic."),
     cl::Hidden);
 
-static cl::opt<unsigned> MinimumJumpTableEntries
-  ("min-jump-table-entries", cl::init(4), cl::Hidden,
-   cl::desc("Set minimum number of entries to use a jump table."));
+static cl::opt<unsigned> MinimumJumpTableEntries(
+    "min-jump-table-entries", cl::init(4), cl::Hidden,
+    cl::desc("Set minimum number of entries to use a jump table."));
 
-static cl::opt<unsigned> MaximumJumpTableSize
-  ("max-jump-table-size", cl::init(UINT_MAX), cl::Hidden,
-   cl::desc("Set maximum size of jump tables."));
+static cl::opt<unsigned>
+    MaximumJumpTableSize("max-jump-table-size", cl::init(UINT_MAX), cl::Hidden,
+                         cl::desc("Set maximum size of jump tables."));
 
 /// Minimum jump table density for normal functions.
 static cl::opt<unsigned>
@@ -94,9 +94,10 @@ static cl::opt<unsigned> OptsizeJumpTableDensity(
 // correctly by preventing mutating strict fp operation to normal fp operation
 // during development. When the backend supports strict float operation, this
 // option will be meaningless.
-static cl::opt<bool> DisableStrictNodeMutation("disable-strictnode-mutation",
-       cl::desc("Don't mutate strict-float node to a legalize node"),
-       cl::init(false), cl::Hidden);
+static cl::opt<bool> DisableStrictNodeMutation(
+    "disable-strictnode-mutation",
+    cl::desc("Don't mutate strict-float node to a legalize node"),
+    cl::init(false), cl::Hidden);
 
 static bool darwinHasSinCos(const Triple &TT) {
   assert(TT.isOSDarwin() && "should be called with darwin triple");
@@ -114,8 +115,7 @@ static bool darwinHasSinCos(const Triple &TT) {
 }
 
 void TargetLoweringBase::InitLibcalls(const Triple &TT) {
-#define HANDLE_LIBCALL(code, name) \
-  setLibcallName(RTLIB::code, name);
+#define HANDLE_LIBCALL(code, name) setLibcallName(RTLIB::code, name);
 #include "llvm/IR/RuntimeLibcalls.def"
 #undef HANDLE_LIBCALL
   // Initialize calling conventions to their default.
@@ -225,19 +225,17 @@ void TargetLoweringBase::InitLibcalls(const Triple &TT) {
 
 /// GetFPLibCall - Helper to return the right libcall for the given floating
 /// point type, or UNKNOWN_LIBCALL if there is none.
-RTLIB::Libcall RTLIB::getFPLibCall(EVT VT,
-                                   RTLIB::Libcall Call_F32,
+RTLIB::Libcall RTLIB::getFPLibCall(EVT VT, RTLIB::Libcall Call_F32,
                                    RTLIB::Libcall Call_F64,
                                    RTLIB::Libcall Call_F80,
                                    RTLIB::Libcall Call_F128,
                                    RTLIB::Libcall Call_PPCF128) {
-  return
-    VT == MVT::f32 ? Call_F32 :
-    VT == MVT::f64 ? Call_F64 :
-    VT == MVT::f80 ? Call_F80 :
-    VT == MVT::f128 ? Call_F128 :
-    VT == MVT::ppcf128 ? Call_PPCF128 :
-    RTLIB::UNKNOWN_LIBCALL;
+  return VT == MVT::f32       ? Call_F32
+         : VT == MVT::f64     ? Call_F64
+         : VT == MVT::f80     ? Call_F80
+         : VT == MVT::f128    ? Call_F128
+         : VT == MVT::ppcf128 ? Call_PPCF128
+                              : RTLIB::UNKNOWN_LIBCALL;
 }
 
 /// getFPEXT - Return the FPEXT_*_* value for the given types, or
@@ -652,7 +650,8 @@ RTLIB::Libcall RTLIB::getMEMCPY_ELEMENT_UNORDERED_ATOMIC(uint64_t ElementSize) {
   }
 }
 
-RTLIB::Libcall RTLIB::getMEMMOVE_ELEMENT_UNORDERED_ATOMIC(uint64_t ElementSize) {
+RTLIB::Libcall
+RTLIB::getMEMMOVE_ELEMENT_UNORDERED_ATOMIC(uint64_t ElementSize) {
   switch (ElementSize) {
   case 1:
     return MEMMOVE_ELEMENT_UNORDERED_ATOMIC_1;
@@ -755,7 +754,8 @@ TargetLoweringBase::TargetLoweringBase(const TargetMachine &tm) : TM(tm) {
   MinCmpXchgSizeInBits = 0;
   SupportsUnalignedAtomics = false;
 
-  std::fill(std::begin(LibcallRoutineNames), std::end(LibcallRoutineNames), nullptr);
+  std::fill(std::begin(LibcallRoutineNames), std::end(LibcallRoutineNames),
+            nullptr);
 
   InitLibcalls(TM.getTargetTriple());
   InitCmpLibcallCCs(CmpLibcallCCs);
@@ -769,8 +769,8 @@ void TargetLoweringBase::initActions() {
   memset(IndexedModeActions, 0, sizeof(IndexedModeActions));
   memset(CondCodeActions, 0, sizeof(CondCodeActions));
   std::fill(std::begin(RegClassForVT), std::end(RegClassForVT), nullptr);
-  std::fill(std::begin(TargetDAGCombineArray),
-            std::end(TargetDAGCombineArray), 0);
+  std::fill(std::begin(TargetDAGCombineArray), std::end(TargetDAGCombineArray),
+            0);
 
   // We're somewhat special casing MVT::i2 and MVT::i4. Ideally we want to
   // remove this and targets should individually set these types if not legal.
@@ -879,9 +879,9 @@ void TargetLoweringBase::initActions() {
                           ISD::ZERO_EXTEND_VECTOR_INREG, ISD::SPLAT_VECTOR},
                          VT, Expand);
 
-    // Constrained floating-point operations default to expand.
+      // Constrained floating-point operations default to expand.
 #define DAG_INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC, DAGN)               \
-    setOperationAction(ISD::STRICT_##DAGN, VT, Expand);
+  setOperationAction(ISD::STRICT_##DAGN, VT, Expand);
 #include "llvm/IR/ConstrainedOps.def"
 
     // For most targets @llvm.get.dynamic.area.offset just returns 0.
@@ -902,7 +902,7 @@ void TargetLoweringBase::initActions() {
 
     // VP operations default to expand.
 #define BEGIN_REGISTER_VP_SDNODE(SDOPC, ...)                                   \
-    setOperationAction(ISD::SDOPC, VT, Expand);
+  setOperationAction(ISD::SDOPC, VT, Expand);
 #include "llvm/IR/VPIntrinsics.def"
 
     // FP environment operations default to expand.
@@ -920,9 +920,9 @@ void TargetLoweringBase::initActions() {
   // ConstantFP nodes default to expand.  Targets can either change this to
   // Legal, in which case all fp constants are legal, or use isFPImmLegal()
   // to optimize expansions for certain constants.
-  setOperationAction(ISD::ConstantFP,
-                     {MVT::bf16, MVT::f16, MVT::f32, MVT::f64, MVT::f80, MVT::f128},
-                     Expand);
+  setOperationAction(
+      ISD::ConstantFP,
+      {MVT::bf16, MVT::f16, MVT::f32, MVT::f64, MVT::f80, MVT::f128}, Expand);
 
   // These library functions default to expand.
   setOperationAction({ISD::FCBRT, ISD::FLOG, ISD::FLOG2, ISD::FLOG10, ISD::FEXP,
@@ -1182,7 +1182,7 @@ static unsigned getVectorTypeBreakdownMVT(MVT VT, MVT &IntermediateVT,
 
   MVT DestVT = TLI->getRegisterType(NewVT);
   RegisterVT = DestVT;
-  if (EVT(DestVT).bitsLT(NewVT))    // Value is expanded, e.g. i64 -> i16.
+  if (EVT(DestVT).bitsLT(NewVT)) // Value is expanded, e.g. i64 -> i16.
     return NumVectorRegs * (LaneSizeInBits / DestVT.getScalarSizeInBits());
 
   // Otherwise, promotion or legal types use the same number of registers as
@@ -1345,7 +1345,7 @@ void TargetLoweringBase::computeRegisterProperties(
   // many registers to represent as the previous ValueType.
   for (unsigned ExpandedReg = LargestIntReg + 1;
        ExpandedReg <= MVT::LAST_INTEGER_VALUETYPE; ++ExpandedReg) {
-    NumRegistersForVT[ExpandedReg] = 2*NumRegistersForVT[ExpandedReg-1];
+    NumRegistersForVT[ExpandedReg] = 2 * NumRegistersForVT[ExpandedReg - 1];
     RegisterTypeForVT[ExpandedReg] = (MVT::SimpleValueType)LargestIntReg;
     TransformToType[ExpandedReg] = (MVT::SimpleValueType)(ExpandedReg - 1);
     ValueTypeActions.setTypeAction((MVT::SimpleValueType)ExpandedReg,
@@ -1355,14 +1355,14 @@ void TargetLoweringBase::computeRegisterProperties(
   // Inspect all of the ValueType's smaller than the largest integer
   // register to see which ones need promotion.
   unsigned LegalIntReg = LargestIntReg;
-  for (unsigned IntReg = LargestIntReg - 1;
-       IntReg >= (unsigned)MVT::i1; --IntReg) {
+  for (unsigned IntReg = LargestIntReg - 1; IntReg >= (unsigned)MVT::i1;
+       --IntReg) {
     MVT IVT = (MVT::SimpleValueType)IntReg;
     if (isTypeLegal(IVT)) {
       LegalIntReg = IntReg;
     } else {
       RegisterTypeForVT[IntReg] = TransformToType[IntReg] =
-        (MVT::SimpleValueType)LegalIntReg;
+          (MVT::SimpleValueType)LegalIntReg;
       ValueTypeActions.setTypeAction(IVT, TypePromoteInteger);
     }
   }
@@ -1370,7 +1370,7 @@ void TargetLoweringBase::computeRegisterProperties(
   // ppcf128 type is really two f64's.
   if (!isTypeLegal(MVT::ppcf128)) {
     if (isTypeLegal(MVT::f64)) {
-      NumRegistersForVT[MVT::ppcf128] = 2*NumRegistersForVT[MVT::f64];
+      NumRegistersForVT[MVT::ppcf128] = 2 * NumRegistersForVT[MVT::f64];
       RegisterTypeForVT[MVT::ppcf128] = MVT::f64;
       TransformToType[MVT::ppcf128] = MVT::f64;
       ValueTypeActions.setTypeAction(MVT::ppcf128, TypeExpandFloat);
@@ -1394,7 +1394,7 @@ void TargetLoweringBase::computeRegisterProperties(
   // Decide how to handle f80. If the target does not have native f80 support,
   // expand it to i96 and we will be generating soft float library calls.
   if (!isTypeLegal(MVT::f80)) {
-    NumRegistersForVT[MVT::f80] = 3*NumRegistersForVT[MVT::i32];
+    NumRegistersForVT[MVT::f80] = 3 * NumRegistersForVT[MVT::i32];
     RegisterTypeForVT[MVT::f80] = RegisterTypeForVT[MVT::i32];
     TransformToType[MVT::f80] = MVT::i32;
     ValueTypeActions.setTypeAction(MVT::f80, TypeSoftenFloat);
@@ -1449,7 +1449,7 @@ void TargetLoweringBase::computeRegisterProperties(
   // Loop over all of the vector value types to see which need transformations.
   for (unsigned i = MVT::FIRST_VECTOR_VALUETYPE;
        i <= (unsigned)MVT::LAST_VECTOR_VALUETYPE; ++i) {
-    MVT VT = (MVT::SimpleValueType) i;
+    MVT VT = (MVT::SimpleValueType)i;
     if (isTypeLegal(VT))
       continue;
 
@@ -1460,14 +1460,13 @@ void TargetLoweringBase::computeRegisterProperties(
     LegalizeTypeAction PreferredAction = getPreferredVectorAction(VT);
     switch (PreferredAction) {
     case TypePromoteInteger: {
-      MVT::SimpleValueType EndVT = IsScalable ?
-                                   MVT::LAST_INTEGER_SCALABLE_VECTOR_VALUETYPE :
-                                   MVT::LAST_INTEGER_FIXEDLEN_VECTOR_VALUETYPE;
+      MVT::SimpleValueType EndVT =
+          IsScalable ? MVT::LAST_INTEGER_SCALABLE_VECTOR_VALUETYPE
+                     : MVT::LAST_INTEGER_FIXEDLEN_VECTOR_VALUETYPE;
       // Try to promote the elements of integer vectors. If no legal
       // promotion was found, fall through to the widen-vector method.
-      for (unsigned nVT = i + 1;
-           (MVT::SimpleValueType)nVT <= EndVT; ++nVT) {
-        MVT SVT = (MVT::SimpleValueType) nVT;
+      for (unsigned nVT = i + 1; (MVT::SimpleValueType)nVT <= EndVT; ++nVT) {
+        MVT SVT = (MVT::SimpleValueType)nVT;
         // Promote vectors of integers to vectors with the same number
         // of elements, with a wider element type.
         if (SVT.getScalarSizeInBits() > EltVT.getFixedSizeInBits() &&
@@ -1489,7 +1488,7 @@ void TargetLoweringBase::computeRegisterProperties(
       if (isPowerOf2_32(EC.getKnownMinValue())) {
         // Try to widen the vector.
         for (unsigned nVT = i + 1; nVT <= MVT::LAST_VECTOR_VALUETYPE; ++nVT) {
-          MVT SVT = (MVT::SimpleValueType) nVT;
+          MVT SVT = (MVT::SimpleValueType)nVT;
           if (SVT.getVectorElementType() == EltVT &&
               SVT.isScalableVector() == IsScalable &&
               SVT.getVectorElementCount().getKnownMinValue() >
@@ -1523,8 +1522,8 @@ void TargetLoweringBase::computeRegisterProperties(
       MVT IntermediateVT;
       MVT RegisterVT;
       unsigned NumIntermediates;
-      unsigned NumRegisters = getVectorTypeBreakdownMVT(VT, IntermediateVT,
-          NumIntermediates, RegisterVT, this);
+      unsigned NumRegisters = getVectorTypeBreakdownMVT(
+          VT, IntermediateVT, NumIntermediates, RegisterVT, this);
       NumRegistersForVT[i] = NumRegisters;
       assert(NumRegistersForVT[i] == NumRegisters &&
              "NumRegistersForVT size cannot represent NumRegisters!");
@@ -1561,7 +1560,7 @@ void TargetLoweringBase::computeRegisterProperties(
   // a group of value types. For example, on i386, i8, i16, and i32
   // representative would be GR32; while on x86_64 it's GR64.
   for (unsigned i = 0; i != MVT::VALUETYPE_SIZE; ++i) {
-    const TargetRegisterClass* RRC;
+    const TargetRegisterClass *RRC;
     uint8_t Cost;
     std::tie(RRC, Cost) = findRepresentativeClass(TRI, (MVT::SimpleValueType)i);
     RepRegClassForVT[i] = RRC;
@@ -1664,12 +1663,12 @@ unsigned TargetLoweringBase::getVectorTypeBreakdown(LLVMContext &Context,
   MVT DestVT = getRegisterType(Context, NewVT);
   RegisterVT = DestVT;
 
-  if (EVT(DestVT).bitsLT(NewVT)) {  // Value is expanded, e.g. i64 -> i16.
+  if (EVT(DestVT).bitsLT(NewVT)) { // Value is expanded, e.g. i64 -> i16.
     TypeSize NewVTSize = NewVT.getSizeInBits();
     // Convert sizes such as i33 to i64.
     if (!llvm::has_single_bit<uint32_t>(NewVTSize.getKnownMinValue()))
       NewVTSize = NewVTSize.coefficientNextPowerOf2();
-    return NumVectorRegs*(NewVTSize/DestVT.getSizeInBits());
+    return NumVectorRegs * (NewVTSize / DestVT.getSizeInBits());
   }
 
   // Otherwise, promotion or legal types use the same number of registers as
@@ -1715,7 +1714,8 @@ void llvm::GetReturnInfo(CallingConv::ID CC, Type *ReturnType,
   SmallVector<EVT, 4> ValueVTs;
   ComputeValueVTs(TLI, DL, ReturnType, ValueVTs);
   unsigned NumValues = ValueVTs.size();
-  if (NumValues == 0) return;
+  if (NumValues == 0)
+    return;
 
   for (unsigned j = 0, f = NumValues; j != f; ++j) {
     EVT VT = ValueVTs[j];
@@ -1829,73 +1829,140 @@ int TargetLoweringBase::InstructionOpcodeToISD(unsigned Opcode) const {
 #include "llvm/IR/Instruction.def"
   };
   switch (static_cast<InstructionOpcodes>(Opcode)) {
-  case Ret:            return 0;
-  case Br:             return 0;
-  case Switch:         return 0;
-  case IndirectBr:     return 0;
-  case Invoke:         return 0;
-  case CallBr:         return 0;
-  case Resume:         return 0;
-  case Unreachable:    return 0;
-  case CleanupRet:     return 0;
-  case CatchRet:       return 0;
-  case CatchPad:       return 0;
-  case CatchSwitch:    return 0;
-  case CleanupPad:     return 0;
-  case FNeg:           return ISD::FNEG;
-  case Add:            return ISD::ADD;
-  case FAdd:           return ISD::FADD;
-  case Sub:            return ISD::SUB;
-  case FSub:           return ISD::FSUB;
-  case Mul:            return ISD::MUL;
-  case FMul:           return ISD::FMUL;
-  case UDiv:           return ISD::UDIV;
-  case SDiv:           return ISD::SDIV;
-  case FDiv:           return ISD::FDIV;
-  case URem:           return ISD::UREM;
-  case SRem:           return ISD::SREM;
-  case FRem:           return ISD::FREM;
-  case Shl:            return ISD::SHL;
-  case LShr:           return ISD::SRL;
-  case AShr:           return ISD::SRA;
-  case And:            return ISD::AND;
-  case Or:             return ISD::OR;
-  case Xor:            return ISD::XOR;
-  case Alloca:         return 0;
-  case Load:           return ISD::LOAD;
-  case Store:          return ISD::STORE;
-  case GetElementPtr:  return 0;
-  case Fence:          return 0;
-  case AtomicCmpXchg:  return 0;
-  case AtomicRMW:      return 0;
-  case Trunc:          return ISD::TRUNCATE;
-  case ZExt:           return ISD::ZERO_EXTEND;
-  case SExt:           return ISD::SIGN_EXTEND;
-  case FPToUI:         return ISD::FP_TO_UINT;
-  case FPToSI:         return ISD::FP_TO_SINT;
-  case UIToFP:         return ISD::UINT_TO_FP;
-  case SIToFP:         return ISD::SINT_TO_FP;
-  case FPTrunc:        return ISD::FP_ROUND;
-  case FPExt:          return ISD::FP_EXTEND;
-  case PtrToInt:       return ISD::BITCAST;
-  case IntToPtr:       return ISD::BITCAST;
-  case BitCast:        return ISD::BITCAST;
-  case AddrSpaceCast:  return ISD::ADDRSPACECAST;
-  case ICmp:           return ISD::SETCC;
-  case FCmp:           return ISD::SETCC;
-  case PHI:            return 0;
-  case Call:           return 0;
-  case Select:         return ISD::SELECT;
-  case UserOp1:        return 0;
-  case UserOp2:        return 0;
-  case VAArg:          return 0;
-  case ExtractElement: return ISD::EXTRACT_VECTOR_ELT;
-  case InsertElement:  return ISD::INSERT_VECTOR_ELT;
-  case ShuffleVector:  return ISD::VECTOR_SHUFFLE;
-  case ExtractValue:   return ISD::MERGE_VALUES;
-  case InsertValue:    return ISD::MERGE_VALUES;
-  case LandingPad:     return 0;
-  case Freeze:         return ISD::FREEZE;
+  case Ret:
+    return 0;
+  case Br:
+    return 0;
+  case Switch:
+    return 0;
+  case IndirectBr:
+    return 0;
+  case Invoke:
+    return 0;
+  case CallBr:
+    return 0;
+  case Resume:
+    return 0;
+  case Unreachable:
+    return 0;
+  case CleanupRet:
+    return 0;
+  case CatchRet:
+    return 0;
+  case CatchPad:
+    return 0;
+  case CatchSwitch:
+    return 0;
+  case CleanupPad:
+    return 0;
+  case FNeg:
+    return ISD::FNEG;
+  case Add:
+    return ISD::ADD;
+  case FAdd:
+    return ISD::FADD;
+  case Sub:
+    return ISD::SUB;
+  case FSub:
+    return ISD::FSUB;
+  case Mul:
+    return ISD::MUL;
+  case FMul:
+    return ISD::FMUL;
+  case UDiv:
+    return ISD::UDIV;
+  case SDiv:
+    return ISD::SDIV;
+  case FDiv:
+    return ISD::FDIV;
+  case URem:
+    return ISD::UREM;
+  case SRem:
+    return ISD::SREM;
+  case FRem:
+    return ISD::FREM;
+  case Shl:
+    return ISD::SHL;
+  case LShr:
+    return ISD::SRL;
+  case AShr:
+    return ISD::SRA;
+  case And:
+    return ISD::AND;
+  case Or:
+    return ISD::OR;
+  case Xor:
+    return ISD::XOR;
+  case Alloca:
+    return 0;
+  case Load:
+    return ISD::LOAD;
+  case Store:
+    return ISD::STORE;
+  case GetElementPtr:
+    return 0;
+  case Fence:
+    return 0;
+  case AtomicCmpXchg:
+    return 0;
+  case AtomicRMW:
+    return 0;
+  case Trunc:
+    return ISD::TRUNCATE;
+  case ZExt:
+    return ISD::ZERO_EXTEND;
+  case SExt:
+    return ISD::SIGN_EXTEND;
+  case FPToUI:
+    return ISD::FP_TO_UINT;
+  case FPToSI:
+    return ISD::FP_TO_SINT;
+  case UIToFP:
+    return ISD::UINT_TO_FP;
+  case SIToFP:
+    return ISD::SINT_TO_FP;
+  case FPTrunc:
+    return ISD::FP_ROUND;
+  case FPExt:
+    return ISD::FP_EXTEND;
+  case PtrToInt:
+    return ISD::BITCAST;
+  case IntToPtr:
+    return ISD::BITCAST;
+  case BitCast:
+    return ISD::BITCAST;
+  case AddrSpaceCast:
+    return ISD::ADDRSPACECAST;
+  case ICmp:
+    return ISD::SETCC;
+  case FCmp:
+    return ISD::SETCC;
+  case PHI:
+    return 0;
+  case Call:
+    return 0;
+  case Select:
+    return ISD::SELECT;
+  case UserOp1:
+    return 0;
+  case UserOp2:
+    return 0;
+  case VAArg:
+    return 0;
+  case ExtractElement:
+    return ISD::EXTRACT_VECTOR_ELT;
+  case InsertElement:
+    return ISD::INSERT_VECTOR_ELT;
+  case ShuffleVector:
+    return ISD::VECTOR_SHUFFLE;
+  case ExtractValue:
+    return ISD::MERGE_VALUES;
+  case InsertValue:
+    return ISD::MERGE_VALUES;
+  case LandingPad:
+    return 0;
+  case Freeze:
+    return ISD::FREEZE;
   }
 
   llvm_unreachable("Unknown instruction type encountered!");
@@ -1914,15 +1981,14 @@ TargetLoweringBase::getDefaultSafeStackPointerLocation(IRBuilderBase &IRB,
   Type *StackPtrTy = Type::getInt8PtrTy(M->getContext());
 
   if (!UnsafeStackPtr) {
-    auto TLSModel = UseTLS ?
-        GlobalValue::InitialExecTLSModel :
-        GlobalValue::NotThreadLocal;
+    auto TLSModel =
+        UseTLS ? GlobalValue::InitialExecTLSModel : GlobalValue::NotThreadLocal;
     // The global variable is not defined yet, define it ourselves.
     // We use the initial-exec TLS model because we do not support the
     // variable living anywhere other than in the main executable.
-    UnsafeStackPtr = new GlobalVariable(
-        *M, StackPtrTy, false, GlobalValue::ExternalLinkage, nullptr,
-        UnsafeStackPtrVar, nullptr, TLSModel);
+    UnsafeStackPtr =
+        new GlobalVariable(*M, StackPtrTy, false, GlobalValue::ExternalLinkage,
+                           nullptr, UnsafeStackPtrVar, nullptr, TLSModel);
   } else {
     // The variable exists, check its type and attributes.
     if (UnsafeStackPtr->getValueType() != StackPtrTy)
@@ -1956,12 +2022,13 @@ TargetLoweringBase::getSafeStackPointerLocation(IRBuilderBase &IRB) const {
 /// by AM is legal for this target, for a load/store of the specified type.
 bool TargetLoweringBase::isLegalAddressingMode(const DataLayout &DL,
                                                const AddrMode &AM, Type *Ty,
-                                               unsigned AS, Instruction *I) const {
+                                               unsigned AS,
+                                               Instruction *I) const {
   // The default implementation of this implements a conservative RISCy, r+r and
   // r+i addr mode.
 
   // Allows a sign-extended 16-bit immediate field.
-  if (AM.BaseOffs <= -(1LL << 16) || AM.BaseOffs >= (1LL << 16)-1)
+  if (AM.BaseOffs <= -(1LL << 16) || AM.BaseOffs >= (1LL << 16) - 1)
     return false;
 
   // No global is ever allowed as a base.
@@ -1970,15 +2037,15 @@ bool TargetLoweringBase::isLegalAddressingMode(const DataLayout &DL,
 
   // Only support r+r,
   switch (AM.Scale) {
-  case 0:  // "r+i" or just "i", depending on HasBaseReg.
+  case 0: // "r+i" or just "i", depending on HasBaseReg.
     break;
   case 1:
-    if (AM.HasBaseReg && AM.BaseOffs)  // "r+r+i" is not allowed.
+    if (AM.HasBaseReg && AM.BaseOffs) // "r+r+i" is not allowed.
       return false;
     // Otherwise we have r+r or r+i.
     break;
   case 2:
-    if (AM.HasBaseReg || AM.BaseOffs)  // 2*r+r  or  2*r+i is not allowed.
+    if (AM.HasBaseReg || AM.BaseOffs) // 2*r+r  or  2*r+i is not allowed.
       return false;
     // Allow 2*r as r+r.
     break;
diff --git a/llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp b/llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp
index 55fb522554fad25..d08780828494a55 100644
--- a/llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp
+++ b/llvm/lib/CodeGen/TargetLoweringObjectFileImpl.cpp
@@ -80,7 +80,7 @@ static void GetObjCImageInfo(Module &M, unsigned &Version, unsigned &Flags,
   SmallVector<Module::ModuleFlagEntry, 8> ModuleFlags;
   M.getModuleFlagsMetadata(ModuleFlags);
 
-  for (const auto &MFE: ModuleFlags) {
+  for (const auto &MFE : ModuleFlags) {
     // Ignore flags with 'Require' behaviour.
     if (MFE.Behavior == Module::Require)
       continue;
@@ -97,8 +97,8 @@ static void GetObjCImageInfo(Module &M, unsigned &Version, unsigned &Flags,
     } else if (Key == "Objective-C Image Info Section") {
       Section = cast<MDString>(MFE.Val)->getString();
     }
-    // Backend generates L_OBJC_IMAGE_INFO from Swift ABI version + major + minor +
-    // "Objective-C Garbage Collection".
+    // Backend generates L_OBJC_IMAGE_INFO from Swift ABI version + major +
+    // minor + "Objective-C Garbage Collection".
     else if (Key == "Swift ABI Version") {
       Flags |= (mdconst::extract<ConstantInt>(MFE.Val)->getZExtValue()) << 8;
     } else if (Key == "Swift Major Version") {
@@ -136,11 +136,10 @@ void TargetLoweringObjectFileELF::Initialize(MCContext &Ctx,
   case Triple::ppc:
   case Triple::ppcle:
   case Triple::x86:
-    PersonalityEncoding = isPositionIndependent()
-                              ? dwarf::DW_EH_PE_indirect |
-                                    dwarf::DW_EH_PE_pcrel |
-                                    dwarf::DW_EH_PE_sdata4
-                              : dwarf::DW_EH_PE_absptr;
+    PersonalityEncoding = isPositionIndependent() ? dwarf::DW_EH_PE_indirect |
+                                                        dwarf::DW_EH_PE_pcrel |
+                                                        dwarf::DW_EH_PE_sdata4
+                                                  : dwarf::DW_EH_PE_absptr;
     LSDAEncoding = isPositionIndependent()
                        ? dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata4
                        : dwarf::DW_EH_PE_absptr;
@@ -152,22 +151,24 @@ void TargetLoweringObjectFileELF::Initialize(MCContext &Ctx,
   case Triple::x86_64:
     if (isPositionIndependent()) {
       PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        ((CM == CodeModel::Small || CM == CodeModel::Medium)
-         ? dwarf::DW_EH_PE_sdata4 : dwarf::DW_EH_PE_sdata8);
+                            ((CM == CodeModel::Small || CM == CodeModel::Medium)
+                                 ? dwarf::DW_EH_PE_sdata4
+                                 : dwarf::DW_EH_PE_sdata8);
       LSDAEncoding = dwarf::DW_EH_PE_pcrel |
-        (CM == CodeModel::Small
-         ? dwarf::DW_EH_PE_sdata4 : dwarf::DW_EH_PE_sdata8);
+                     (CM == CodeModel::Small ? dwarf::DW_EH_PE_sdata4
+                                             : dwarf::DW_EH_PE_sdata8);
       TTypeEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        ((CM == CodeModel::Small || CM == CodeModel::Medium)
-         ? dwarf::DW_EH_PE_sdata4 : dwarf::DW_EH_PE_sdata8);
+                      ((CM == CodeModel::Small || CM == CodeModel::Medium)
+                           ? dwarf::DW_EH_PE_sdata4
+                           : dwarf::DW_EH_PE_sdata8);
     } else {
-      PersonalityEncoding =
-        (CM == CodeModel::Small || CM == CodeModel::Medium)
-        ? dwarf::DW_EH_PE_udata4 : dwarf::DW_EH_PE_absptr;
-      LSDAEncoding = (CM == CodeModel::Small)
-        ? dwarf::DW_EH_PE_udata4 : dwarf::DW_EH_PE_absptr;
-      TTypeEncoding = (CM == CodeModel::Small)
-        ? dwarf::DW_EH_PE_udata4 : dwarf::DW_EH_PE_absptr;
+      PersonalityEncoding = (CM == CodeModel::Small || CM == CodeModel::Medium)
+                                ? dwarf::DW_EH_PE_udata4
+                                : dwarf::DW_EH_PE_absptr;
+      LSDAEncoding = (CM == CodeModel::Small) ? dwarf::DW_EH_PE_udata4
+                                              : dwarf::DW_EH_PE_absptr;
+      TTypeEncoding = (CM == CodeModel::Small) ? dwarf::DW_EH_PE_udata4
+                                               : dwarf::DW_EH_PE_absptr;
     }
     break;
   case Triple::hexagon:
@@ -226,19 +227,19 @@ void TargetLoweringObjectFileELF::Initialize(MCContext &Ctx,
   case Triple::ppc64:
   case Triple::ppc64le:
     PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-      dwarf::DW_EH_PE_udata8;
+                          dwarf::DW_EH_PE_udata8;
     LSDAEncoding = dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_udata8;
     TTypeEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-      dwarf::DW_EH_PE_udata8;
+                    dwarf::DW_EH_PE_udata8;
     break;
   case Triple::sparcel:
   case Triple::sparc:
     if (isPositionIndependent()) {
       LSDAEncoding = dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata4;
       PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                            dwarf::DW_EH_PE_sdata4;
       TTypeEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                      dwarf::DW_EH_PE_sdata4;
     } else {
       LSDAEncoding = dwarf::DW_EH_PE_absptr;
       PersonalityEncoding = dwarf::DW_EH_PE_absptr;
@@ -259,9 +260,9 @@ void TargetLoweringObjectFileELF::Initialize(MCContext &Ctx,
     LSDAEncoding = dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata4;
     if (isPositionIndependent()) {
       PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                            dwarf::DW_EH_PE_sdata4;
       TTypeEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                      dwarf::DW_EH_PE_sdata4;
     } else {
       PersonalityEncoding = dwarf::DW_EH_PE_absptr;
       TTypeEncoding = dwarf::DW_EH_PE_absptr;
@@ -272,10 +273,10 @@ void TargetLoweringObjectFileELF::Initialize(MCContext &Ctx,
     // values will be in range.
     if (isPositionIndependent()) {
       PersonalityEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                            dwarf::DW_EH_PE_sdata4;
       LSDAEncoding = dwarf::DW_EH_PE_pcrel | dwarf::DW_EH_PE_sdata4;
       TTypeEncoding = dwarf::DW_EH_PE_indirect | dwarf::DW_EH_PE_pcrel |
-        dwarf::DW_EH_PE_sdata4;
+                      dwarf::DW_EH_PE_sdata4;
     } else {
       PersonalityEncoding = dwarf::DW_EH_PE_absptr;
       LSDAEncoding = dwarf::DW_EH_PE_absptr;
@@ -323,7 +324,8 @@ void TargetLoweringObjectFileELF::emitModuleMetadata(MCStreamer &Streamer,
     }
   }
 
-  if (NamedMDNode *DependentLibraries = M.getNamedMetadata("llvm.dependent-libraries")) {
+  if (NamedMDNode *DependentLibraries =
+          M.getNamedMetadata("llvm.dependent-libraries")) {
     auto *S = C.getELFSection(".deplibs", ELF::SHT_LLVM_DEPENDENT_LIBRARIES,
                               ELF::SHF_MERGE | ELF::SHF_STRINGS, 1);
 
@@ -451,9 +453,9 @@ const MCExpr *TargetLoweringObjectFileELF::getTTypeGlobalReference(
       StubSym = MachineModuleInfoImpl::StubValueTy(Sym, !GV->hasLocalLinkage());
     }
 
-    return TargetLoweringObjectFile::
-      getTTypeReference(MCSymbolRefExpr::create(SSym, getContext()),
-                        Encoding & ~DW_EH_PE_indirect, Streamer);
+    return TargetLoweringObjectFile::getTTypeReference(
+        MCSymbolRefExpr::create(SSym, getContext()),
+        Encoding & ~DW_EH_PE_indirect, Streamer);
   }
 
   return TargetLoweringObjectFile::getTTypeGlobalReference(GV, Encoding, TM,
@@ -475,27 +477,23 @@ static SectionKind getELFKindForNamedSection(StringRef Name, SectionKind K) {
       Name == ".llvmbc" || Name == ".llvmcmd")
     return SectionKind::getMetadata();
 
-  if (Name.empty() || Name[0] != '.') return K;
+  if (Name.empty() || Name[0] != '.')
+    return K;
 
   // Default implementation based on some magic section names.
-  if (Name == ".bss" ||
-      Name.startswith(".bss.") ||
+  if (Name == ".bss" || Name.startswith(".bss.") ||
       Name.startswith(".gnu.linkonce.b.") ||
-      Name.startswith(".llvm.linkonce.b.") ||
-      Name == ".sbss" ||
-      Name.startswith(".sbss.") ||
-      Name.startswith(".gnu.linkonce.sb.") ||
+      Name.startswith(".llvm.linkonce.b.") || Name == ".sbss" ||
+      Name.startswith(".sbss.") || Name.startswith(".gnu.linkonce.sb.") ||
       Name.startswith(".llvm.linkonce.sb."))
     return SectionKind::getBSS();
 
-  if (Name == ".tdata" ||
-      Name.startswith(".tdata.") ||
+  if (Name == ".tdata" || Name.startswith(".tdata.") ||
       Name.startswith(".gnu.linkonce.td.") ||
       Name.startswith(".llvm.linkonce.td."))
     return SectionKind::getThreadData();
 
-  if (Name == ".tbss" ||
-      Name.startswith(".tbss.") ||
+  if (Name == ".tbss" || Name.startswith(".tbss.") ||
       Name.startswith(".gnu.linkonce.tb.") ||
       Name.startswith(".llvm.linkonce.tb."))
     return SectionKind::getThreadBSS();
@@ -666,7 +664,7 @@ getELFSectionNameForGlobal(const GlobalObject *GO, SectionKind Kind,
 
   if (UniqueSectionName) {
     Name.push_back('.');
-    TM.getNameWithPrefix(Name, GO, Mang, /*MayAlwaysUsePrivate*/true);
+    TM.getNameWithPrefix(Name, GO, Mang, /*MayAlwaysUsePrivate*/ true);
   } else if (HasPrefix)
     // For distinguishing between .text.${text-section-prefix}. (with trailing
     // dot) and .text.${function-name}
@@ -684,7 +682,7 @@ class LoweringDiagnosticInfo : public DiagnosticInfo {
       : DiagnosticInfo(DK_Lowering, Severity), Msg(DiagMsg) {}
   void print(DiagnosticPrinter &DP) const override { DP << Msg; }
 };
-}
+} // namespace
 
 /// Calculate an appropriate unique ID for a section, and update Flags,
 /// EntrySize and NextUniqueID where appropriate.
@@ -763,10 +761,12 @@ calcUniqueIDUpdateFlagsAndSize(const GlobalObject *GO, StringRef SectionName,
   return NextUniqueID++;
 }
 
-static MCSection *selectExplicitSectionGlobal(
-    const GlobalObject *GO, SectionKind Kind, const TargetMachine &TM,
-    MCContext &Ctx, Mangler &Mang, unsigned &NextUniqueID,
-    bool Retain, bool ForceUnique) {
+static MCSection *selectExplicitSectionGlobal(const GlobalObject *GO,
+                                              SectionKind Kind,
+                                              const TargetMachine &TM,
+                                              MCContext &Ctx, Mangler &Mang,
+                                              unsigned &NextUniqueID,
+                                              bool Retain, bool ForceUnique) {
   StringRef SectionName = GO->getSection();
 
   // Check if '#pragma clang section' name is applicable.
@@ -779,7 +779,8 @@ static MCSection *selectExplicitSectionGlobal(
       SectionName = Attrs.getAttribute("bss-section").getValueAsString();
     } else if (Attrs.hasAttribute("rodata-section") && Kind.isReadOnly()) {
       SectionName = Attrs.getAttribute("rodata-section").getValueAsString();
-    } else if (Attrs.hasAttribute("relro-section") && Kind.isReadOnlyWithRel()) {
+    } else if (Attrs.hasAttribute("relro-section") &&
+               Kind.isReadOnlyWithRel()) {
       SectionName = Attrs.getAttribute("relro-section").getValueAsString();
     } else if (Attrs.hasAttribute("data-section") && Kind.isData()) {
       SectionName = Attrs.getAttribute("data-section").getValueAsString();
@@ -840,7 +841,7 @@ MCSection *TargetLoweringObjectFileELF::getExplicitSectionGlobal(
     const GlobalObject *GO, SectionKind Kind, const TargetMachine &TM) const {
   return selectExplicitSectionGlobal(GO, Kind, TM, getContext(), getMangler(),
                                      NextUniqueID, Used.count(GO),
-                                     /* ForceUnique = */false);
+                                     /* ForceUnique = */ false);
 }
 
 static MCSectionELF *selectELFSectionForGlobal(
@@ -906,9 +907,9 @@ static MCSection *selectELFSectionForGlobal(
     }
   }
 
-  MCSectionELF *Section = selectELFSectionForGlobal(
-      Ctx, GO, Kind, Mang, TM, EmitUniqueSection, Flags,
-      NextUniqueID, LinkedToSym);
+  MCSectionELF *Section =
+      selectELFSectionForGlobal(Ctx, GO, Kind, Mang, TM, EmitUniqueSection,
+                                Flags, NextUniqueID, LinkedToSym);
   assert(Section->getLinkedToSymbol() == LinkedToSym);
   return Section;
 }
@@ -939,9 +940,9 @@ MCSection *TargetLoweringObjectFileELF::getUniqueSectionForFunction(
   // If the function's section names is pre-determined via pragma or a
   // section attribute, call selectExplicitSectionGlobal.
   if (F.hasSection() || F.hasFnAttribute("implicit-section-name"))
-    return selectExplicitSectionGlobal(
-        &F, Kind, TM, getContext(), getMangler(), NextUniqueID,
-        Used.count(&F), /* ForceUnique = */true);
+    return selectExplicitSectionGlobal(&F, Kind, TM, getContext(), getMangler(),
+                                       NextUniqueID, Used.count(&F),
+                                       /* ForceUnique = */ true);
   else
     return selectELFSectionForGlobal(
         getContext(), &F, Kind, getMangler(), TM, Used.count(&F),
@@ -991,11 +992,11 @@ MCSection *TargetLoweringObjectFileELF::getSectionForLSDA(
 
   // Append the function name as the suffix like GCC, assuming
   // -funique-section-names applies to .gcc_except_table sections.
-  return getContext().getELFSection(
-      (TM.getUniqueSectionNames() ? LSDA->getName() + "." + F.getName()
-                                  : LSDA->getName()),
-      LSDA->getType(), Flags, 0, Group, IsComdat, MCSection::NonUniqueID,
-      LinkedToSym);
+  return getContext().getELFSection((TM.getUniqueSectionNames()
+                                         ? LSDA->getName() + "." + F.getName()
+                                         : LSDA->getName()),
+                                    LSDA->getType(), Flags, 0, Group, IsComdat,
+                                    MCSection::NonUniqueID, LinkedToSym);
 }
 
 bool TargetLoweringObjectFileELF::shouldPutJumpTableInFunctionSection(
@@ -1158,8 +1159,7 @@ MCSection *TargetLoweringObjectFileELF::getSectionForCommandLines() const {
                                     ELF::SHF_MERGE | ELF::SHF_STRINGS, 1);
 }
 
-void
-TargetLoweringObjectFileELF::InitializeELF(bool UseInitArray_) {
+void TargetLoweringObjectFileELF::InitializeELF(bool UseInitArray_) {
   UseInitArray = UseInitArray_;
   MCContext &Ctx = getContext();
   if (!UseInitArray) {
@@ -1256,8 +1256,8 @@ void TargetLoweringObjectFileMachO::emitModuleMetadata(MCStreamer &Streamer,
   MCSectionMachO *S = getContext().getMachOSection(
       Segment, Section, TAA, StubSize, SectionKind::getData());
   Streamer.switchSection(S);
-  Streamer.emitLabel(getContext().
-                     getOrCreateSymbol(StringRef("L_OBJC_IMAGE_INFO")));
+  Streamer.emitLabel(
+      getContext().getOrCreateSymbol(StringRef("L_OBJC_IMAGE_INFO")));
   Streamer.emitInt32(VersionVal);
   Streamer.emitInt32(ImageInfoFlags);
   Streamer.addBlankLine();
@@ -1284,7 +1284,8 @@ MCSection *TargetLoweringObjectFileMachO::getExplicitSectionGlobal(
       SectionName = Attrs.getAttribute("bss-section").getValueAsString();
     } else if (Attrs.hasAttribute("rodata-section") && Kind.isReadOnly()) {
       SectionName = Attrs.getAttribute("rodata-section").getValueAsString();
-    } else if (Attrs.hasAttribute("relro-section") && Kind.isReadOnlyWithRel()) {
+    } else if (Attrs.hasAttribute("relro-section") &&
+               Kind.isReadOnlyWithRel()) {
       SectionName = Attrs.getAttribute("relro-section").getValueAsString();
     } else if (Attrs.hasAttribute("data-section") && Kind.isData()) {
       SectionName = Attrs.getAttribute("data-section").getValueAsString();
@@ -1338,8 +1339,10 @@ MCSection *TargetLoweringObjectFileMachO::SelectSectionForGlobal(
   checkMachOComdat(GO);
 
   // Handle thread local data.
-  if (Kind.isThreadBSS()) return TLSBSSSection;
-  if (Kind.isThreadData()) return TLSDataSection;
+  if (Kind.isThreadBSS())
+    return TLSBSSSection;
+  if (Kind.isThreadData())
+    return TLSDataSection;
 
   if (Kind.isText())
     return GO->isWeakForLinker() ? TextCoalSection : TextSection;
@@ -1417,7 +1420,7 @@ MCSection *TargetLoweringObjectFileMachO::getSectionForConstant(
     return EightByteConstantSection;
   if (Kind.isMergeableConst16())
     return SixteenByteConstantSection;
-  return ReadOnlySection;  // .const
+  return ReadOnlySection; // .const
 }
 
 MCSection *TargetLoweringObjectFileMachO::getSectionForCommandLines() const {
@@ -1432,7 +1435,7 @@ const MCExpr *TargetLoweringObjectFileMachO::getTTypeGlobalReference(
 
   if (Encoding & DW_EH_PE_indirect) {
     MachineModuleInfoMachO &MachOMMI =
-      MMI->getObjFileInfo<MachineModuleInfoMachO>();
+        MMI->getObjFileInfo<MachineModuleInfoMachO>();
 
     MCSymbol *SSym = getSymbolWithGlobalValueBase(GV, "$non_lazy_ptr", TM);
 
@@ -1444,9 +1447,9 @@ const MCExpr *TargetLoweringObjectFileMachO::getTTypeGlobalReference(
       StubSym = MachineModuleInfoImpl::StubValueTy(Sym, !GV->hasLocalLinkage());
     }
 
-    return TargetLoweringObjectFile::
-      getTTypeReference(MCSymbolRefExpr::create(SSym, getContext()),
-                        Encoding & ~DW_EH_PE_indirect, Streamer);
+    return TargetLoweringObjectFile::getTTypeReference(
+        MCSymbolRefExpr::create(SSym, getContext()),
+        Encoding & ~DW_EH_PE_indirect, Streamer);
   }
 
   return TargetLoweringObjectFile::getTTypeGlobalReference(GV, Encoding, TM,
@@ -1458,7 +1461,7 @@ MCSymbol *TargetLoweringObjectFileMachO::getCFIPersonalitySymbol(
     MachineModuleInfo *MMI) const {
   // The mach-o version of this method defaults to returning a stub reference.
   MachineModuleInfoMachO &MachOMMI =
-    MMI->getObjFileInfo<MachineModuleInfoMachO>();
+      MMI->getObjFileInfo<MachineModuleInfoMachO>();
 
   MCSymbol *SSym = getSymbolWithGlobalValueBase(GV, "$non_lazy_ptr", TM);
 
@@ -1514,7 +1517,7 @@ const MCExpr *TargetLoweringObjectFileMachO::getIndirectSymViaGOTPCRel(
   // Then the linker will notice the constant in the table and will look at the
   // content of the symbol.
   MachineModuleInfoMachO &MachOMMI =
-    MMI->getObjFileInfo<MachineModuleInfoMachO>();
+      MMI->getObjFileInfo<MachineModuleInfoMachO>();
   MCContext &Ctx = getContext();
 
   // The offset must consider the original displacement from the base symbol
@@ -1538,15 +1541,15 @@ const MCExpr *TargetLoweringObjectFileMachO::getIndirectSymViaGOTPCRel(
                                                  !GV->hasLocalLinkage());
 
   const MCExpr *BSymExpr =
-    MCSymbolRefExpr::create(BaseSym, MCSymbolRefExpr::VK_None, Ctx);
+      MCSymbolRefExpr::create(BaseSym, MCSymbolRefExpr::VK_None, Ctx);
   const MCExpr *LHS =
-    MCSymbolRefExpr::create(Stub, MCSymbolRefExpr::VK_None, Ctx);
+      MCSymbolRefExpr::create(Stub, MCSymbolRefExpr::VK_None, Ctx);
 
   if (!Offset)
     return MCBinaryExpr::createSub(LHS, BSymExpr, Ctx);
 
-  const MCExpr *RHS =
-    MCBinaryExpr::createAdd(BSymExpr, MCConstantExpr::create(Offset, Ctx), Ctx);
+  const MCExpr *RHS = MCBinaryExpr::createAdd(
+      BSymExpr, MCConstantExpr::create(Offset, Ctx), Ctx);
   return MCBinaryExpr::createSub(LHS, RHS, Ctx);
 }
 
@@ -1579,42 +1582,30 @@ void TargetLoweringObjectFileMachO::getNameWithPrefix(
 //                                  COFF
 //===----------------------------------------------------------------------===//
 
-static unsigned
-getCOFFSectionFlags(SectionKind K, const TargetMachine &TM) {
+static unsigned getCOFFSectionFlags(SectionKind K, const TargetMachine &TM) {
   unsigned Flags = 0;
   bool isThumb = TM.getTargetTriple().getArch() == Triple::thumb;
 
   if (K.isMetadata())
-    Flags |=
-      COFF::IMAGE_SCN_MEM_DISCARDABLE;
+    Flags |= COFF::IMAGE_SCN_MEM_DISCARDABLE;
   else if (K.isExclude())
-    Flags |=
-      COFF::IMAGE_SCN_LNK_REMOVE | COFF::IMAGE_SCN_MEM_DISCARDABLE;
+    Flags |= COFF::IMAGE_SCN_LNK_REMOVE | COFF::IMAGE_SCN_MEM_DISCARDABLE;
   else if (K.isText())
     Flags |=
-      COFF::IMAGE_SCN_MEM_EXECUTE |
-      COFF::IMAGE_SCN_MEM_READ |
-      COFF::IMAGE_SCN_CNT_CODE |
-      (isThumb ? COFF::IMAGE_SCN_MEM_16BIT : (COFF::SectionCharacteristics)0);
+        COFF::IMAGE_SCN_MEM_EXECUTE | COFF::IMAGE_SCN_MEM_READ |
+        COFF::IMAGE_SCN_CNT_CODE |
+        (isThumb ? COFF::IMAGE_SCN_MEM_16BIT : (COFF::SectionCharacteristics)0);
   else if (K.isBSS())
-    Flags |=
-      COFF::IMAGE_SCN_CNT_UNINITIALIZED_DATA |
-      COFF::IMAGE_SCN_MEM_READ |
-      COFF::IMAGE_SCN_MEM_WRITE;
+    Flags |= COFF::IMAGE_SCN_CNT_UNINITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+             COFF::IMAGE_SCN_MEM_WRITE;
   else if (K.isThreadLocal())
-    Flags |=
-      COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-      COFF::IMAGE_SCN_MEM_READ |
-      COFF::IMAGE_SCN_MEM_WRITE;
+    Flags |= COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+             COFF::IMAGE_SCN_MEM_WRITE;
   else if (K.isReadOnly() || K.isReadOnlyWithRel())
-    Flags |=
-      COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-      COFF::IMAGE_SCN_MEM_READ;
+    Flags |= COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ;
   else if (K.isWriteable())
-    Flags |=
-      COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-      COFF::IMAGE_SCN_MEM_READ |
-      COFF::IMAGE_SCN_MEM_WRITE;
+    Flags |= COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+             COFF::IMAGE_SCN_MEM_WRITE;
 
   return Flags;
 }
@@ -1747,7 +1738,8 @@ MCSection *TargetLoweringObjectFileCOFF::SelectSectionForGlobal(
                                          COMDATSymName, Selection, UniqueID);
     } else {
       SmallString<256> TmpData;
-      getMangler().getNameWithPrefix(TmpData, GO, /*CannotUsePrivateLabel=*/true);
+      getMangler().getNameWithPrefix(TmpData, GO,
+                                     /*CannotUsePrivateLabel=*/true);
       return getContext().getCOFFSection(Name, Characteristics, Kind, TmpData,
                                          Selection, UniqueID);
     }
@@ -1820,7 +1812,7 @@ bool TargetLoweringObjectFileCOFF::shouldPutJumpTableInFunctionSection(
     }
   }
   return TargetLoweringObjectFile::shouldPutJumpTableInFunctionSection(
-    UsesLabelDifference, F);
+      UsesLabelDifference, F);
 }
 
 void TargetLoweringObjectFileCOFF::emitModuleMetadata(MCStreamer &Streamer,
@@ -1848,8 +1840,8 @@ void TargetLoweringObjectFileCOFF::emitModuleMetadata(MCStreamer &Streamer,
   emitCGProfileMetadata(Streamer, M);
 }
 
-void TargetLoweringObjectFileCOFF::emitLinkerDirectives(
-    MCStreamer &Streamer, Module &M) const {
+void TargetLoweringObjectFileCOFF::emitLinkerDirectives(MCStreamer &Streamer,
+                                                        Module &M) const {
   if (NamedMDNode *LinkerOptions = M.getNamedMetadata("llvm.linker.options")) {
     // Emit the linker options to the linker .drectve section.  According to the
     // spec, this section is a space-separated string containing flags for
@@ -1915,22 +1907,24 @@ void TargetLoweringObjectFileCOFF::Initialize(MCContext &Ctx,
   this->TM = &TM;
   const Triple &T = TM.getTargetTriple();
   if (T.isWindowsMSVCEnvironment() || T.isWindowsItaniumEnvironment()) {
-    StaticCtorSection =
-        Ctx.getCOFFSection(".CRT$XCU", COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-                                           COFF::IMAGE_SCN_MEM_READ,
-                           SectionKind::getReadOnly());
-    StaticDtorSection =
-        Ctx.getCOFFSection(".CRT$XTX", COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-                                           COFF::IMAGE_SCN_MEM_READ,
-                           SectionKind::getReadOnly());
+    StaticCtorSection = Ctx.getCOFFSection(
+        ".CRT$XCU",
+        COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ,
+        SectionKind::getReadOnly());
+    StaticDtorSection = Ctx.getCOFFSection(
+        ".CRT$XTX",
+        COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ,
+        SectionKind::getReadOnly());
   } else {
     StaticCtorSection = Ctx.getCOFFSection(
-        ".ctors", COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-                      COFF::IMAGE_SCN_MEM_READ | COFF::IMAGE_SCN_MEM_WRITE,
+        ".ctors",
+        COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+            COFF::IMAGE_SCN_MEM_WRITE,
         SectionKind::getData());
     StaticDtorSection = Ctx.getCOFFSection(
-        ".dtors", COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-                      COFF::IMAGE_SCN_MEM_READ | COFF::IMAGE_SCN_MEM_WRITE,
+        ".dtors",
+        COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+            COFF::IMAGE_SCN_MEM_WRITE,
         SectionKind::getData());
   }
 }
@@ -1979,9 +1973,10 @@ static MCSectionCOFF *getCOFFStaticStructorSection(MCContext &Ctx,
     raw_string_ostream(Name) << format(".%05u", 65535 - Priority);
 
   return Ctx.getAssociativeCOFFSection(
-      Ctx.getCOFFSection(Name, COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
-                                   COFF::IMAGE_SCN_MEM_READ |
-                                   COFF::IMAGE_SCN_MEM_WRITE,
+      Ctx.getCOFFSection(Name,
+                         COFF::IMAGE_SCN_CNT_INITIALIZED_DATA |
+                             COFF::IMAGE_SCN_MEM_READ |
+                             COFF::IMAGE_SCN_MEM_WRITE,
                          SectionKind::getData()),
       KeySym, 0);
 }
@@ -2015,9 +2010,9 @@ const MCExpr *TargetLoweringObjectFileCOFF::lowerRelativeReference(
 
   // Both ptrtoint instructions must wrap global objects:
   // - Only global variables are eligible for image relative relocations.
-  // - The subtrahend refers to the special symbol __ImageBase, a GlobalVariable.
-  // We expect __ImageBase to be a global variable without a section, externally
-  // defined.
+  // - The subtrahend refers to the special symbol __ImageBase, a
+  // GlobalVariable. We expect __ImageBase to be a global variable without a
+  // section, externally defined.
   //
   // It should look something like this: @__ImageBase = external constant i8
   if (!isa<GlobalObject>(LHS) || !isa<GlobalVariable>(RHS) ||
@@ -2026,9 +2021,8 @@ const MCExpr *TargetLoweringObjectFileCOFF::lowerRelativeReference(
       cast<GlobalVariable>(RHS)->hasInitializer() || RHS->hasSection())
     return nullptr;
 
-  return MCSymbolRefExpr::create(TM.getSymbol(LHS),
-                                 MCSymbolRefExpr::VK_COFF_IMGREL32,
-                                 getContext());
+  return MCSymbolRefExpr::create(
+      TM.getSymbol(LHS), MCSymbolRefExpr::VK_COFF_IMGREL32, getContext());
 }
 
 static std::string APIntToHexString(const APInt &AI) {
@@ -2120,7 +2114,9 @@ static const Comdat *getWasmComdat(const GlobalValue *GV) {
 
   if (C->getSelectionKind() != Comdat::Any)
     report_fatal_error("WebAssembly COMDATs only support "
-                       "SelectionKind::Any, '" + C->getName() + "' cannot be "
+                       "SelectionKind::Any, '" +
+                       C->getName() +
+                       "' cannot be "
                        "lowered.");
 
   return C;
@@ -2258,10 +2254,10 @@ void TargetLoweringObjectFileWasm::InitializeWasm() {
 
 MCSection *TargetLoweringObjectFileWasm::getStaticCtorSection(
     unsigned Priority, const MCSymbol *KeySym) const {
-  return Priority == UINT16_MAX ?
-         StaticCtorSection :
-         getContext().getWasmSection(".init_array." + utostr(Priority),
-                                     SectionKind::getData());
+  return Priority == UINT16_MAX
+             ? StaticCtorSection
+             : getContext().getWasmSection(".init_array." + utostr(Priority),
+                                           SectionKind::getData());
 }
 
 MCSection *TargetLoweringObjectFileWasm::getStaticDtorSection(
@@ -2491,7 +2487,7 @@ MCSection *TargetLoweringObjectFileXCOFF::SelectSectionForGlobal(
 
 MCSection *TargetLoweringObjectFileXCOFF::getSectionForJumpTable(
     const Function &F, const TargetMachine &TM) const {
-  assert (!F.getComdat() && "Comdat not supported on XCOFF.");
+  assert(!F.getComdat() && "Comdat not supported on XCOFF.");
 
   if (!TM.getFunctionSections())
     return ReadOnlySection;
@@ -2552,12 +2548,12 @@ void TargetLoweringObjectFileXCOFF::Initialize(MCContext &Ctx,
 }
 
 MCSection *TargetLoweringObjectFileXCOFF::getStaticCtorSection(
-	unsigned Priority, const MCSymbol *KeySym) const {
+    unsigned Priority, const MCSymbol *KeySym) const {
   report_fatal_error("no static constructor section on AIX");
 }
 
 MCSection *TargetLoweringObjectFileXCOFF::getStaticDtorSection(
-	unsigned Priority, const MCSymbol *KeySym) const {
+    unsigned Priority, const MCSymbol *KeySym) const {
   report_fatal_error("no static destructor section on AIX");
 }
 
diff --git a/llvm/lib/CodeGen/TargetPassConfig.cpp b/llvm/lib/CodeGen/TargetPassConfig.cpp
index 87ac68c834a867e..33e3c5d8add00f7 100644
--- a/llvm/lib/CodeGen/TargetPassConfig.cpp
+++ b/llvm/lib/CodeGen/TargetPassConfig.cpp
@@ -58,49 +58,60 @@ static cl::opt<bool>
     EnableIPRA("enable-ipra", cl::init(false), cl::Hidden,
                cl::desc("Enable interprocedural register allocation "
                         "to reduce load/store at procedure calls."));
-static cl::opt<bool> DisablePostRASched("disable-post-ra", cl::Hidden,
-    cl::desc("Disable Post Regalloc Scheduler"));
+static cl::opt<bool>
+    DisablePostRASched("disable-post-ra", cl::Hidden,
+                       cl::desc("Disable Post Regalloc Scheduler"));
 static cl::opt<bool> DisableBranchFold("disable-branch-fold", cl::Hidden,
-    cl::desc("Disable branch folding"));
+                                       cl::desc("Disable branch folding"));
 static cl::opt<bool> DisableTailDuplicate("disable-tail-duplicate", cl::Hidden,
-    cl::desc("Disable tail duplication"));
-static cl::opt<bool> DisableEarlyTailDup("disable-early-taildup", cl::Hidden,
+                                          cl::desc("Disable tail duplication"));
+static cl::opt<bool> DisableEarlyTailDup(
+    "disable-early-taildup", cl::Hidden,
     cl::desc("Disable pre-register allocation tail duplication"));
-static cl::opt<bool> DisableBlockPlacement("disable-block-placement",
-    cl::Hidden, cl::desc("Disable probability-driven block placement"));
-static cl::opt<bool> EnableBlockPlacementStats("enable-block-placement-stats",
-    cl::Hidden, cl::desc("Collect probability-driven block placement stats"));
+static cl::opt<bool> DisableBlockPlacement(
+    "disable-block-placement", cl::Hidden,
+    cl::desc("Disable probability-driven block placement"));
+static cl::opt<bool> EnableBlockPlacementStats(
+    "enable-block-placement-stats", cl::Hidden,
+    cl::desc("Collect probability-driven block placement stats"));
 static cl::opt<bool> DisableSSC("disable-ssc", cl::Hidden,
-    cl::desc("Disable Stack Slot Coloring"));
-static cl::opt<bool> DisableMachineDCE("disable-machine-dce", cl::Hidden,
-    cl::desc("Disable Machine Dead Code Elimination"));
-static cl::opt<bool> DisableEarlyIfConversion("disable-early-ifcvt", cl::Hidden,
-    cl::desc("Disable Early If-conversion"));
+                                cl::desc("Disable Stack Slot Coloring"));
+static cl::opt<bool>
+    DisableMachineDCE("disable-machine-dce", cl::Hidden,
+                      cl::desc("Disable Machine Dead Code Elimination"));
+static cl::opt<bool>
+    DisableEarlyIfConversion("disable-early-ifcvt", cl::Hidden,
+                             cl::desc("Disable Early If-conversion"));
 static cl::opt<bool> DisableMachineLICM("disable-machine-licm", cl::Hidden,
-    cl::desc("Disable Machine LICM"));
-static cl::opt<bool> DisableMachineCSE("disable-machine-cse", cl::Hidden,
+                                        cl::desc("Disable Machine LICM"));
+static cl::opt<bool> DisableMachineCSE(
+    "disable-machine-cse", cl::Hidden,
     cl::desc("Disable Machine Common Subexpression Elimination"));
 static cl::opt<cl::boolOrDefault> OptimizeRegAlloc(
     "optimize-regalloc", cl::Hidden,
     cl::desc("Enable optimized register allocation compilation path."));
 static cl::opt<bool> DisablePostRAMachineLICM("disable-postra-machine-licm",
-    cl::Hidden,
-    cl::desc("Disable Machine LICM"));
+                                              cl::Hidden,
+                                              cl::desc("Disable Machine LICM"));
 static cl::opt<bool> DisableMachineSink("disable-machine-sink", cl::Hidden,
-    cl::desc("Disable Machine Sinking"));
-static cl::opt<bool> DisablePostRAMachineSink("disable-postra-machine-sink",
-    cl::Hidden,
-    cl::desc("Disable PostRA Machine Sinking"));
-static cl::opt<bool> DisableLSR("disable-lsr", cl::Hidden,
-    cl::desc("Disable Loop Strength Reduction Pass"));
-static cl::opt<bool> DisableConstantHoisting("disable-constant-hoisting",
-    cl::Hidden, cl::desc("Disable ConstantHoisting"));
+                                        cl::desc("Disable Machine Sinking"));
+static cl::opt<bool>
+    DisablePostRAMachineSink("disable-postra-machine-sink", cl::Hidden,
+                             cl::desc("Disable PostRA Machine Sinking"));
+static cl::opt<bool>
+    DisableLSR("disable-lsr", cl::Hidden,
+               cl::desc("Disable Loop Strength Reduction Pass"));
+static cl::opt<bool>
+    DisableConstantHoisting("disable-constant-hoisting", cl::Hidden,
+                            cl::desc("Disable ConstantHoisting"));
 static cl::opt<bool> DisableCGP("disable-cgp", cl::Hidden,
-    cl::desc("Disable Codegen Prepare"));
+                                cl::desc("Disable Codegen Prepare"));
 static cl::opt<bool> DisableCopyProp("disable-copyprop", cl::Hidden,
-    cl::desc("Disable Copy Propagation pass"));
-static cl::opt<bool> DisablePartialLibcallInlining("disable-partial-libcall-inlining",
-    cl::Hidden, cl::desc("Disable Partial Libcall Inlining"));
+                                     cl::desc("Disable Copy Propagation pass"));
+static cl::opt<bool>
+    DisablePartialLibcallInlining("disable-partial-libcall-inlining",
+                                  cl::Hidden,
+                                  cl::desc("Disable Partial Libcall Inlining"));
 static cl::opt<bool> DisableAtExitBasedGlobalDtorLowering(
     "disable-atexit-based-global-dtor-lowering", cl::Hidden,
     cl::desc("For MachO, disable atexit()-based global destructor lowering"));
@@ -109,14 +120,16 @@ static cl::opt<bool> EnableImplicitNullChecks(
     cl::desc("Fold null checks into faulting memory operations"),
     cl::init(false), cl::Hidden);
 static cl::opt<bool> DisableMergeICmps("disable-mergeicmps",
-    cl::desc("Disable MergeICmps Pass"),
-    cl::init(false), cl::Hidden);
-static cl::opt<bool> PrintLSR("print-lsr-output", cl::Hidden,
-    cl::desc("Print LLVM IR produced by the loop-reduce pass"));
-static cl::opt<bool> PrintISelInput("print-isel-input", cl::Hidden,
-    cl::desc("Print LLVM IR input to isel pass"));
+                                       cl::desc("Disable MergeICmps Pass"),
+                                       cl::init(false), cl::Hidden);
+static cl::opt<bool>
+    PrintLSR("print-lsr-output", cl::Hidden,
+             cl::desc("Print LLVM IR produced by the loop-reduce pass"));
+static cl::opt<bool>
+    PrintISelInput("print-isel-input", cl::Hidden,
+                   cl::desc("Print LLVM IR input to isel pass"));
 static cl::opt<bool> PrintGCInfo("print-gc", cl::Hidden,
-    cl::desc("Dump garbage collector data"));
+                                 cl::desc("Dump garbage collector data"));
 static cl::opt<cl::boolOrDefault>
     VerifyMachineCode("verify-machineinstrs", cl::Hidden,
                       cl::desc("Verify generated machine code"));
@@ -150,8 +163,8 @@ static cl::opt<bool> DisableCFIFixup("disable-cfi-fixup", cl::Hidden,
 // FastISel is enabled by default with -fast, and we wish to be
 // able to enable or disable fast-isel independently from -O0.
 static cl::opt<cl::boolOrDefault>
-EnableFastISelOption("fast-isel", cl::Hidden,
-  cl::desc("Enable the \"fast\" instruction selector"));
+    EnableFastISelOption("fast-isel", cl::Hidden,
+                         cl::desc("Enable the \"fast\" instruction selector"));
 
 static cl::opt<cl::boolOrDefault> EnableGlobalISelOption(
     "global-isel", cl::Hidden,
@@ -203,7 +216,8 @@ static cl::opt<bool> MISchedPostRA(
         "Run MachineScheduler post regalloc (independent of preRA sched)"));
 
 // Experimental option to run live interval analysis early.
-static cl::opt<bool> EarlyLiveIntervals("early-live-intervals", cl::Hidden,
+static cl::opt<bool> EarlyLiveIntervals(
+    "early-live-intervals", cl::Hidden,
     cl::desc("Run live interval analysis earlier in the pipeline"));
 
 /// Option names for limiting the codegen pipeline.
@@ -389,7 +403,7 @@ class PassConfigImpl {
   // user interface. For example, a target may disable a standard pass by
   // default by substituting a pass ID of zero, and the user may still enable
   // that standard pass with an explicit command line option.
-  DenseMap<AnalysisID,IdentifyingPassPtr> TargetPasses;
+  DenseMap<AnalysisID, IdentifyingPassPtr> TargetPasses;
 
   /// Store the pairs of <AnalysisID, AnalysisID> of which the second pass
   /// is inserted after each instance of the first one.
@@ -399,9 +413,7 @@ class PassConfigImpl {
 } // end namespace llvm
 
 // Out of line virtual method.
-TargetPassConfig::~TargetPassConfig() {
-  delete Impl;
-}
+TargetPassConfig::~TargetPassConfig() { delete Impl; }
 
 static const PassInfo *getPassInfo(StringRef PassName) {
   if (PassName.empty())
@@ -435,19 +447,19 @@ getPassNameAndInstanceNum(StringRef PassName) {
 void TargetPassConfig::setStartStopPasses() {
   StringRef StartBeforeName;
   std::tie(StartBeforeName, StartBeforeInstanceNum) =
-    getPassNameAndInstanceNum(StartBeforeOpt);
+      getPassNameAndInstanceNum(StartBeforeOpt);
 
   StringRef StartAfterName;
   std::tie(StartAfterName, StartAfterInstanceNum) =
-    getPassNameAndInstanceNum(StartAfterOpt);
+      getPassNameAndInstanceNum(StartAfterOpt);
 
   StringRef StopBeforeName;
-  std::tie(StopBeforeName, StopBeforeInstanceNum)
-    = getPassNameAndInstanceNum(StopBeforeOpt);
+  std::tie(StopBeforeName, StopBeforeInstanceNum) =
+      getPassNameAndInstanceNum(StopBeforeOpt);
 
   StringRef StopAfterName;
-  std::tie(StopAfterName, StopAfterInstanceNum)
-    = getPassNameAndInstanceNum(StopAfterOpt);
+  std::tie(StopAfterName, StopAfterInstanceNum) =
+      getPassNameAndInstanceNum(StopAfterOpt);
 
   StartBefore = getPassIDFromName(StartBeforeName);
   StartAfter = getPassIDFromName(StartAfterName);
@@ -654,8 +666,7 @@ TargetPassConfig *LLVMTargetMachine::createPassConfig(PassManagerBase &PM) {
   return new TargetPassConfig(*this, PM);
 }
 
-TargetPassConfig::TargetPassConfig()
-  : ImmutablePass(ID) {
+TargetPassConfig::TargetPassConfig() : ImmutablePass(ID) {
   report_fatal_error("Trying to construct TargetPassConfig without a target "
                      "machine. Scheduling a CodeGen pass without a target "
                      "triple set?");
@@ -702,8 +713,8 @@ void TargetPassConfig::substitutePass(AnalysisID StandardID,
 }
 
 IdentifyingPassPtr TargetPassConfig::getPassSubstitution(AnalysisID ID) const {
-  DenseMap<AnalysisID, IdentifyingPassPtr>::const_iterator
-    I = Impl->TargetPasses.find(ID);
+  DenseMap<AnalysisID, IdentifyingPassPtr>::const_iterator I =
+      Impl->TargetPasses.find(ID);
   if (I == Impl->TargetPasses.end())
     return ID;
   return I->second;
@@ -712,8 +723,7 @@ IdentifyingPassPtr TargetPassConfig::getPassSubstitution(AnalysisID ID) const {
 bool TargetPassConfig::isPassSubstitutedOrOverridden(AnalysisID ID) const {
   IdentifyingPassPtr TargetID = getPassSubstitution(ID);
   IdentifyingPassPtr FinalPtr = overridePass(ID, TargetID);
-  return !FinalPtr.isValid() || FinalPtr.isInstance() ||
-      FinalPtr.getID() != ID;
+  return !FinalPtr.isValid() || FinalPtr.isInstance() || FinalPtr.getID() != ID;
 }
 
 /// Add a pass to the PassManager if that pass is supposed to be run.  If the
@@ -860,8 +870,8 @@ void TargetPassConfig::addIRPasses() {
       addPass(createCanonicalizeFreezeInLoopsPass());
       addPass(createLoopStrengthReducePass());
       if (PrintLSR)
-        addPass(createPrintFunctionPass(dbgs(),
-                                        "\n\n*** Code after LSR ***\n"));
+        addPass(
+            createPrintFunctionPass(dbgs(), "\n\n*** Code after LSR ***\n"));
     }
 
     // The MergeICmpsPass tries to create memcmp calls by grouping sequences of
@@ -1183,7 +1193,7 @@ void TargetPassConfig::addMachinePasses() {
   // Prolog/Epilog inserter needs a TargetMachine to instantiate. But only
   // do so if it hasn't been disabled, substituted, or overridden.
   if (!isPassSubstitutedOrOverridden(&PrologEpilogCodeInserterID))
-      addPass(createPrologEpilogInserterPass());
+    addPass(createPrologEpilogInserterPass());
 
   /// Add passes that optimize machine instructions after register allocation.
   if (getOptLevel() != CodeGenOpt::None)
@@ -1344,9 +1354,12 @@ void TargetPassConfig::addMachineSSAOptimization() {
 
 bool TargetPassConfig::getOptimizeRegAlloc() const {
   switch (OptimizeRegAlloc) {
-  case cl::BOU_UNSET: return getOptLevel() != CodeGenOpt::None;
-  case cl::BOU_TRUE:  return true;
-  case cl::BOU_FALSE: return false;
+  case cl::BOU_UNSET:
+    return getOptLevel() != CodeGenOpt::None;
+  case cl::BOU_TRUE:
+    return true;
+  case cl::BOU_FALSE:
+    return false;
   }
   llvm_unreachable("Invalid optimize-regalloc state");
 }
@@ -1356,9 +1369,8 @@ bool TargetPassConfig::getOptimizeRegAlloc() const {
 static llvm::once_flag InitializeDefaultRegisterAllocatorFlag;
 
 static RegisterRegAlloc
-defaultRegAlloc("default",
-                "pick register allocator based on -O option",
-                useDefaultRegisterAllocator);
+    defaultRegAlloc("default", "pick register allocator based on -O option",
+                    useDefaultRegisterAllocator);
 
 static void initializeDefaultRegisterAllocatorOnce() {
   if (!RegisterRegAlloc::getDefault())
@@ -1408,9 +1420,12 @@ bool TargetPassConfig::isCustomizedRegAlloc() {
 }
 
 bool TargetPassConfig::addRegAssignAndRewriteFast() {
-  if (RegAlloc != (RegisterRegAlloc::FunctionPassCtor)&useDefaultRegisterAllocator &&
-      RegAlloc != (RegisterRegAlloc::FunctionPassCtor)&createFastRegisterAllocator)
-    report_fatal_error("Must use fast (default) register allocator for unoptimized regalloc.");
+  if (RegAlloc !=
+          (RegisterRegAlloc::FunctionPassCtor)&useDefaultRegisterAllocator &&
+      RegAlloc !=
+          (RegisterRegAlloc::FunctionPassCtor)&createFastRegisterAllocator)
+    report_fatal_error(
+        "Must use fast (default) register allocator for unoptimized regalloc.");
 
   addPass(createRegAllocPass(false));
 
@@ -1568,9 +1583,7 @@ bool TargetPassConfig::reportDiagnosticWhenGlobalISelFallback() const {
   return TM->Options.GlobalISelAbort == GlobalISelAbortMode::DisableWithDiag;
 }
 
-bool TargetPassConfig::isGISelCSEEnabled() const {
-  return true;
-}
+bool TargetPassConfig::isGISelCSEEnabled() const { return true; }
 
 std::unique_ptr<CSEConfigBase> TargetPassConfig::getCSEConfig() const {
   return std::make_unique<CSEConfigBase>();
diff --git a/llvm/lib/CodeGen/TargetRegisterInfo.cpp b/llvm/lib/CodeGen/TargetRegisterInfo.cpp
index 1bb35f40facfd0f..72eeb16c27985e7 100644
--- a/llvm/lib/CodeGen/TargetRegisterInfo.cpp
+++ b/llvm/lib/CodeGen/TargetRegisterInfo.cpp
@@ -50,20 +50,16 @@ static cl::opt<unsigned>
                               "high compile time cost in global splitting."),
                      cl::init(5000));
 
-TargetRegisterInfo::TargetRegisterInfo(const TargetRegisterInfoDesc *ID,
-                             regclass_iterator RCB, regclass_iterator RCE,
-                             const char *const *SRINames,
-                             const LaneBitmask *SRILaneMasks,
-                             LaneBitmask SRICoveringLanes,
-                             const RegClassInfo *const RCIs,
-                             const MVT::SimpleValueType *const RCVTLists,
-                             unsigned Mode)
-  : InfoDesc(ID), SubRegIndexNames(SRINames),
-    SubRegIndexLaneMasks(SRILaneMasks),
-    RegClassBegin(RCB), RegClassEnd(RCE),
-    CoveringLanes(SRICoveringLanes),
-    RCInfos(RCIs), RCVTLists(RCVTLists), HwMode(Mode) {
-}
+TargetRegisterInfo::TargetRegisterInfo(
+    const TargetRegisterInfoDesc *ID, regclass_iterator RCB,
+    regclass_iterator RCE, const char *const *SRINames,
+    const LaneBitmask *SRILaneMasks, LaneBitmask SRICoveringLanes,
+    const RegClassInfo *const RCIs, const MVT::SimpleValueType *const RCVTLists,
+    unsigned Mode)
+    : InfoDesc(ID), SubRegIndexNames(SRINames),
+      SubRegIndexLaneMasks(SRILaneMasks), RegClassBegin(RCB), RegClassEnd(RCE),
+      CoveringLanes(SRICoveringLanes), RCInfos(RCIs), RCVTLists(RCVTLists),
+      HwMode(Mode) {}
 
 TargetRegisterInfo::~TargetRegisterInfo() = default;
 
@@ -84,8 +80,8 @@ void TargetRegisterInfo::markSuperRegs(BitVector &RegisterSet,
     RegisterSet.set(SR);
 }
 
-bool TargetRegisterInfo::checkAllSuperRegsMarked(const BitVector &RegisterSet,
-    ArrayRef<MCPhysReg> Exceptions) const {
+bool TargetRegisterInfo::checkAllSuperRegsMarked(
+    const BitVector &RegisterSet, ArrayRef<MCPhysReg> Exceptions) const {
   // Check that all super registers of reserved regs are reserved as well.
   BitVector Checked(getNumRegs());
   for (unsigned Reg : RegisterSet.set_bits()) {
@@ -109,8 +105,8 @@ bool TargetRegisterInfo::checkAllSuperRegsMarked(const BitVector &RegisterSet,
 
 namespace llvm {
 
-Printable printReg(Register Reg, const TargetRegisterInfo *TRI,
-                   unsigned SubIdx, const MachineRegisterInfo *MRI) {
+Printable printReg(Register Reg, const TargetRegisterInfo *TRI, unsigned SubIdx,
+                   const MachineRegisterInfo *MRI) {
   return Printable([Reg, TRI, SubIdx, MRI](raw_ostream &OS) {
     if (!Reg)
       OS << "$noreg";
@@ -216,8 +212,8 @@ TargetRegisterInfo::getMinimalPhysRegClass(MCRegister reg, MVT VT) const {
 
  // Pick the most specific sub register class of the right type that contains
   // this physreg.
-  const TargetRegisterClass* BestRC = nullptr;
-  for (const TargetRegisterClass* RC : regclasses()) {
+  const TargetRegisterClass *BestRC = nullptr;
+  for (const TargetRegisterClass *RC : regclasses()) {
     if ((VT == MVT::Other || isTypeLegalForClass(*RC, VT)) &&
         RC->contains(reg) && (!BestRC || BestRC->hasSubClass(RC)))
       BestRC = RC;
@@ -247,15 +243,17 @@ TargetRegisterInfo::getMinimalPhysRegClassLLT(MCRegister reg, LLT Ty) const {
 /// getAllocatableSetForRC - Toggle the bits that represent allocatable
 /// registers for the specific register class.
 static void getAllocatableSetForRC(const MachineFunction &MF,
-                                   const TargetRegisterClass *RC, BitVector &R){
+                                   const TargetRegisterClass *RC,
+                                   BitVector &R) {
   assert(RC->isAllocatable() && "invalid for nonallocatable sets");
   ArrayRef<MCPhysReg> Order = RC->getRawAllocationOrder(MF);
   for (MCPhysReg PR : Order)
     R.set(PR);
 }
 
-BitVector TargetRegisterInfo::getAllocatableSet(const MachineFunction &MF,
-                                          const TargetRegisterClass *RC) const {
+BitVector
+TargetRegisterInfo::getAllocatableSet(const MachineFunction &MF,
+                                      const TargetRegisterClass *RC) const {
   BitVector Allocatable(getNumRegs());
   if (RC) {
     // A register class with no allocatable subclass returns an empty set.
@@ -276,10 +274,9 @@ BitVector TargetRegisterInfo::getAllocatableSet(const MachineFunction &MF,
   return Allocatable;
 }
 
-static inline
-const TargetRegisterClass *firstCommonClass(const uint32_t *A,
-                                            const uint32_t *B,
-                                            const TargetRegisterInfo *TRI) {
+static inline const TargetRegisterClass *
+firstCommonClass(const uint32_t *A, const uint32_t *B,
+                 const TargetRegisterInfo *TRI) {
   for (unsigned I = 0, E = TRI->getNumRegClasses(); I < E; I += 32)
     if (unsigned Common = *A++ & *B++)
       return TRI->getRegClass(I + llvm::countr_zero(Common));
@@ -316,10 +313,10 @@ TargetRegisterInfo::getMatchingSuperRegClass(const TargetRegisterClass *A,
   return nullptr;
 }
 
-const TargetRegisterClass *TargetRegisterInfo::
-getCommonSuperRegClass(const TargetRegisterClass *RCA, unsigned SubA,
-                       const TargetRegisterClass *RCB, unsigned SubB,
-                       unsigned &PreA, unsigned &PreB) const {
+const TargetRegisterClass *TargetRegisterInfo::getCommonSuperRegClass(
+    const TargetRegisterClass *RCA, unsigned SubA,
+    const TargetRegisterClass *RCB, unsigned SubB, unsigned &PreA,
+    unsigned &PreB) const {
   assert(RCA && SubA && RCB && SubB && "Invalid arguments");
 
   // Search all pairs of sub-register indices that project into RCA and RCB
@@ -352,7 +349,7 @@ getCommonSuperRegClass(const TargetRegisterClass *RCA, unsigned SubA,
     for (SuperRegClassIterator IB(RCB, this, true); IB.isValid(); ++IB) {
       // Check if a common super-register class exists for this index pair.
       const TargetRegisterClass *RC =
-        firstCommonClass(IA.getMask(), IB.getMask(), this);
+          firstCommonClass(IA.getMask(), IB.getMask(), this);
       if (!RC || getRegSizeInBits(*RC) < MinSize)
         continue;
 
@@ -463,8 +460,8 @@ bool TargetRegisterInfo::getRegAllocationHints(
   return false;
 }
 
-bool TargetRegisterInfo::isCalleeSavedPhysReg(
-    MCRegister PhysReg, const MachineFunction &MF) const {
+bool TargetRegisterInfo::isCalleeSavedPhysReg(MCRegister PhysReg,
+                                              const MachineFunction &MF) const {
   if (PhysReg == 0)
     return false;
   const uint32_t *callerPreservedRegs =
@@ -492,7 +489,7 @@ bool TargetRegisterInfo::shouldRealignStack(const MachineFunction &MF) const {
 
 bool TargetRegisterInfo::regmaskSubsetEqual(const uint32_t *mask0,
                                             const uint32_t *mask1) const {
-  unsigned N = (getNumRegs()+31) / 32;
+  unsigned N = (getNumRegs() + 31) / 32;
   for (unsigned I = 0; I < N; ++I)
     if ((mask0[I] & mask1[I]) != mask0[I])
       return false;
diff --git a/llvm/lib/CodeGen/TargetSchedule.cpp b/llvm/lib/CodeGen/TargetSchedule.cpp
index 3cedb38de2ad8d9..254793ae80e1c23 100644
--- a/llvm/lib/CodeGen/TargetSchedule.cpp
+++ b/llvm/lib/CodeGen/TargetSchedule.cpp
@@ -30,11 +30,13 @@
 
 using namespace llvm;
 
-static cl::opt<bool> EnableSchedModel("schedmodel", cl::Hidden, cl::init(true),
-  cl::desc("Use TargetSchedModel for latency lookup"));
+static cl::opt<bool>
+    EnableSchedModel("schedmodel", cl::Hidden, cl::init(true),
+                     cl::desc("Use TargetSchedModel for latency lookup"));
 
-static cl::opt<bool> EnableSchedItins("scheditins", cl::Hidden, cl::init(true),
-  cl::desc("Use InstrItineraryData for latency lookup"));
+static cl::opt<bool>
+    EnableSchedItins("scheditins", cl::Hidden, cl::init(true),
+                     cl::desc("Use InstrItineraryData for latency lookup"));
 
 static cl::opt<bool> ForceEnableIntervals(
     "sched-model-force-enable-intervals", cl::Hidden, cl::init(false),
@@ -71,7 +73,7 @@ void TargetSchedModel::init(const TargetSubtargetInfo *TSInfo) {
 
 /// Returns true only if the instruction is specified as single-issue.
 bool TargetSchedModel::mustBeginGroup(const MachineInstr *MI,
-                                     const MCSchedClassDesc *SC) const {
+                                      const MCSchedClassDesc *SC) const {
   if (hasInstrSchedModel()) {
     if (!SC)
       SC = resolveSchedClass(MI);
@@ -82,7 +84,7 @@ bool TargetSchedModel::mustBeginGroup(const MachineInstr *MI,
 }
 
 bool TargetSchedModel::mustEndGroup(const MachineInstr *MI,
-                                     const MCSchedClassDesc *SC) const {
+                                    const MCSchedClassDesc *SC) const {
   if (hasInstrSchedModel()) {
     if (!SC)
       SC = resolveSchedClass(MI);
@@ -111,14 +113,12 @@ unsigned TargetSchedModel::getNumMicroOps(const MachineInstr *MI,
 // effectively means infinite latency. Since users of the TargetSchedule API
 // don't know how to handle this, we convert it to a very large latency that is
 // easy to distinguish when debugging the DAG but won't induce overflow.
-static unsigned capLatency(int Cycles) {
-  return Cycles >= 0 ? Cycles : 1000;
-}
+static unsigned capLatency(int Cycles) { return Cycles >= 0 ? Cycles : 1000; }
 
 /// Return the MCSchedClassDesc for this instruction. Some SchedClasses require
 /// evaluation of predicates that depend on instruction operands or flags.
-const MCSchedClassDesc *TargetSchedModel::
-resolveSchedClass(const MachineInstr *MI) const {
+const MCSchedClassDesc *
+TargetSchedModel::resolveSchedClass(const MachineInstr *MI) const {
   // Get the definition's scheduling class descriptor from this machine model.
   unsigned SchedClass = MI->getDesc().getSchedClass();
   const MCSchedClassDesc *SCDesc = SchedModel.getSchedClassDesc(SchedClass);
@@ -169,9 +169,10 @@ static unsigned findUseIdx(const MachineInstr *MI, unsigned UseOperIdx) {
 }
 
 // Top-level API for clients that know the operand indices.
-unsigned TargetSchedModel::computeOperandLatency(
-  const MachineInstr *DefMI, unsigned DefOperIdx,
-  const MachineInstr *UseMI, unsigned UseOperIdx) const {
+unsigned TargetSchedModel::computeOperandLatency(const MachineInstr *DefMI,
+                                                 unsigned DefOperIdx,
+                                                 const MachineInstr *UseMI,
+                                                 unsigned UseOperIdx) const {
 
   if (!hasInstrSchedModel() && !hasInstrItineraries())
     return TII->defaultDefLatency(SchedModel, *DefMI);
@@ -181,8 +182,7 @@ unsigned TargetSchedModel::computeOperandLatency(
     if (UseMI) {
       OperLatency = TII->getOperandLatency(&InstrItins, *DefMI, DefOperIdx,
                                            *UseMI, UseOperIdx);
-    }
-    else {
+    } else {
       unsigned DefClass = DefMI->getDesc().getSchedClass();
       OperLatency = InstrItins.getOperandCycle(DefClass, DefOperIdx);
     }
@@ -207,7 +207,7 @@ unsigned TargetSchedModel::computeOperandLatency(
   if (DefIdx < SCDesc->NumWriteLatencyEntries) {
     // Lookup the definition's write latency in SubtargetInfo.
     const MCWriteLatencyEntry *WLEntry =
-      STI->getWriteLatencyEntry(SCDesc, DefIdx);
+        STI->getWriteLatencyEntry(SCDesc, DefIdx);
     unsigned WriteID = WLEntry->WriteResourceID;
     unsigned Latency = capLatency(WLEntry->Cycles);
     if (!UseMI)
@@ -274,9 +274,10 @@ TargetSchedModel::computeInstrLatency(const MachineInstr *MI,
   return TII->defaultDefLatency(SchedModel, *MI);
 }
 
-unsigned TargetSchedModel::
-computeOutputLatency(const MachineInstr *DefMI, unsigned DefOperIdx,
-                     const MachineInstr *DepMI) const {
+unsigned
+TargetSchedModel::computeOutputLatency(const MachineInstr *DefMI,
+                                       unsigned DefOperIdx,
+                                       const MachineInstr *DepMI) const {
   if (!SchedModel.isOutOfOrder())
     return 1;
 
@@ -300,7 +301,8 @@ computeOutputLatency(const MachineInstr *DefMI, unsigned DefOperIdx,
     const MCSchedClassDesc *SCDesc = resolveSchedClass(DefMI);
     if (SCDesc->isValid()) {
       for (const MCWriteProcResEntry *PRI = STI->getWriteProcResBegin(SCDesc),
-             *PRE = STI->getWriteProcResEnd(SCDesc); PRI != PRE; ++PRI) {
+                                     *PRE = STI->getWriteProcResEnd(SCDesc);
+           PRI != PRE; ++PRI) {
         if (!SchedModel.getProcResource(PRI->ProcResourceIdx)->BufferSize)
           return 1;
       }
@@ -323,8 +325,7 @@ TargetSchedModel::computeReciprocalThroughput(const MachineInstr *MI) const {
   return 0.0;
 }
 
-double
-TargetSchedModel::computeReciprocalThroughput(unsigned Opcode) const {
+double TargetSchedModel::computeReciprocalThroughput(unsigned Opcode) const {
   unsigned SchedClass = TII->get(Opcode).getSchedClass();
   if (hasInstrItineraries())
     return MCSchedModel::getReciprocalThroughput(SchedClass,
@@ -338,8 +339,7 @@ TargetSchedModel::computeReciprocalThroughput(unsigned Opcode) const {
   return 0.0;
 }
 
-double
-TargetSchedModel::computeReciprocalThroughput(const MCInst &MI) const {
+double TargetSchedModel::computeReciprocalThroughput(const MCInst &MI) const {
   if (hasInstrSchedModel())
     return SchedModel.getReciprocalThroughput(*STI, *TII, MI);
   return computeReciprocalThroughput(MI.getOpcode());
diff --git a/llvm/lib/CodeGen/TargetSubtargetInfo.cpp b/llvm/lib/CodeGen/TargetSubtargetInfo.cpp
index ba2c8dda7de5979..a3851a902da5969 100644
--- a/llvm/lib/CodeGen/TargetSubtargetInfo.cpp
+++ b/llvm/lib/CodeGen/TargetSubtargetInfo.cpp
@@ -24,17 +24,11 @@ TargetSubtargetInfo::TargetSubtargetInfo(
 
 TargetSubtargetInfo::~TargetSubtargetInfo() = default;
 
-bool TargetSubtargetInfo::enableAtomicExpand() const {
-  return true;
-}
+bool TargetSubtargetInfo::enableAtomicExpand() const { return true; }
 
-bool TargetSubtargetInfo::enableIndirectBrExpand() const {
-  return false;
-}
+bool TargetSubtargetInfo::enableIndirectBrExpand() const { return false; }
 
-bool TargetSubtargetInfo::enableMachineScheduler() const {
-  return false;
-}
+bool TargetSubtargetInfo::enableMachineScheduler() const { return false; }
 
 bool TargetSubtargetInfo::enableJoinGlobalCopies() const {
   return enableMachineScheduler();
@@ -53,8 +47,6 @@ bool TargetSubtargetInfo::enablePostRAMachineScheduler() const {
   return enableMachineScheduler() && enablePostRAScheduler();
 }
 
-bool TargetSubtargetInfo::useAA() const {
-  return false;
-}
+bool TargetSubtargetInfo::useAA() const { return false; }
 
-void TargetSubtargetInfo::mirFileLoaded(MachineFunction &MF) const { }
+void TargetSubtargetInfo::mirFileLoaded(MachineFunction &MF) const {}
diff --git a/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp b/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
index c3ea76bf8cea6d4..938f0d70a5f5d35 100644
--- a/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
+++ b/llvm/lib/CodeGen/TwoAddressInstructionPass.cpp
@@ -65,17 +65,17 @@ using namespace llvm;
 #define DEBUG_TYPE "twoaddressinstruction"
 
 STATISTIC(NumTwoAddressInstrs, "Number of two-address instructions");
-STATISTIC(NumCommuted        , "Number of instructions commuted to coalesce");
-STATISTIC(NumAggrCommuted    , "Number of instructions aggressively commuted");
+STATISTIC(NumCommuted, "Number of instructions commuted to coalesce");
+STATISTIC(NumAggrCommuted, "Number of instructions aggressively commuted");
 STATISTIC(NumConvertedTo3Addr, "Number of instructions promoted to 3-address");
-STATISTIC(NumReSchedUps,       "Number of instructions re-scheduled up");
-STATISTIC(NumReSchedDowns,     "Number of instructions re-scheduled down");
+STATISTIC(NumReSchedUps, "Number of instructions re-scheduled up");
+STATISTIC(NumReSchedDowns, "Number of instructions re-scheduled down");
 
 // Temporary flag to disable rescheduling.
-static cl::opt<bool>
-EnableRescheduling("twoaddr-reschedule",
-                   cl::desc("Coalesce copies by rescheduling (default=true)"),
-                   cl::init(true), cl::Hidden);
+static cl::opt<bool> EnableRescheduling(
+    "twoaddr-reschedule",
+    cl::desc("Coalesce copies by rescheduling (default=true)"), cl::init(true),
+    cl::Hidden);
 
 // Limit the number of dataflow edges to traverse when evaluating the benefit
 // of commuting operands.
@@ -101,10 +101,10 @@ class TwoAddressInstructionPass : public MachineFunctionPass {
   MachineBasicBlock *MBB = nullptr;
 
   // Keep track of the distance of an MI from the start of the current basic block.
-  DenseMap<MachineInstr*, unsigned> DistanceMap;
+  DenseMap<MachineInstr *, unsigned> DistanceMap;
 
   // Set of already processed instructions in the current block.
-  SmallPtrSet<MachineInstr*, 8> Processed;
+  SmallPtrSet<MachineInstr *, 8> Processed;
 
   // A map from virtual registers to physical registers which are likely targets
   // to be coalesced to due to copies from physical registers to virtual
@@ -125,8 +125,8 @@ class TwoAddressInstructionPass : public MachineFunctionPass {
   bool isProfitableToCommute(Register RegA, Register RegB, Register RegC,
                              MachineInstr *MI, unsigned Dist);
 
-  bool commuteInstruction(MachineInstr *MI, unsigned DstIdx,
-                          unsigned RegBIdx, unsigned RegCIdx, unsigned Dist);
+  bool commuteInstruction(MachineInstr *MI, unsigned DstIdx, unsigned RegBIdx,
+                          unsigned RegCIdx, unsigned Dist);
 
   bool isProfitableToConv3Addr(Register RegA, Register RegB);
 
@@ -143,13 +143,11 @@ class TwoAddressInstructionPass : public MachineFunctionPass {
 
   bool tryInstructionTransform(MachineBasicBlock::iterator &mi,
                                MachineBasicBlock::iterator &nmi,
-                               unsigned SrcIdx, unsigned DstIdx,
-                               unsigned &Dist, bool shouldOnlyCommute);
+                               unsigned SrcIdx, unsigned DstIdx, unsigned &Dist,
+                               bool shouldOnlyCommute);
 
-  bool tryInstructionCommute(MachineInstr *MI,
-                             unsigned DstOpIdx,
-                             unsigned BaseOpIdx,
-                             bool BaseOpKilled,
+  bool tryInstructionCommute(MachineInstr *MI, unsigned DstOpIdx,
+                             unsigned BaseOpIdx, bool BaseOpKilled,
                              unsigned Dist);
   void scanUses(Register DstReg);
 
@@ -158,9 +156,9 @@ class TwoAddressInstructionPass : public MachineFunctionPass {
   using TiedPairList = SmallVector<std::pair<unsigned, unsigned>, 4>;
   using TiedOperandMap = SmallDenseMap<unsigned, TiedPairList>;
 
-  bool collectTiedOperands(MachineInstr *MI, TiedOperandMap&);
-  void processTiedPairs(MachineInstr *MI, TiedPairList&, unsigned &Dist);
-  void eliminateRegSequence(MachineBasicBlock::iterator&);
+  bool collectTiedOperands(MachineInstr *MI, TiedOperandMap &);
+  void processTiedPairs(MachineInstr *MI, TiedPairList &, unsigned &Dist);
+  void eliminateRegSequence(MachineBasicBlock::iterator &);
   bool processStatepoint(MachineInstr *MI, TiedOperandMap &TiedOperands);
 
 public:
@@ -183,7 +181,7 @@ class TwoAddressInstructionPass : public MachineFunctionPass {
   }
 
   /// Pass entry point.
-  bool runOnMachineFunction(MachineFunction&) override;
+  bool runOnMachineFunction(MachineFunction &) override;
 };
 
 } // end anonymous namespace
@@ -193,10 +191,10 @@ char TwoAddressInstructionPass::ID = 0;
 char &llvm::TwoAddressInstructionPassID = TwoAddressInstructionPass::ID;
 
 INITIALIZE_PASS_BEGIN(TwoAddressInstructionPass, DEBUG_TYPE,
-                "Two-Address instruction pass", false, false)
+                      "Two-Address instruction pass", false, false)
 INITIALIZE_PASS_DEPENDENCY(AAResultsWrapperPass)
 INITIALIZE_PASS_END(TwoAddressInstructionPass, DEBUG_TYPE,
-                "Two-Address instruction pass", false, false)
+                    "Two-Address instruction pass", false, false)
 
 /// Return the MachineInstr* if it is the single def of the Reg in current BB.
 static MachineInstr *getSingleDef(Register Reg, MachineBasicBlock *BB,
@@ -248,7 +246,7 @@ bool TwoAddressInstructionPass::noUseAfterLastDef(Register Reg, unsigned Dist,
     MachineInstr *MI = MO.getParent();
     if (MI->getParent() != MBB || MI->isDebugValue())
       continue;
-    DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(MI);
+    DenseMap<MachineInstr *, unsigned>::iterator DI = DistanceMap.find(MI);
     if (DI == DistanceMap.end())
       continue;
     if (MO.isUse() && DI->second < LastUse)
@@ -728,12 +726,12 @@ void TwoAddressInstructionPass::scanUses(Register DstReg) {
   bool IsCopy = false;
   Register NewReg;
   Register Reg = DstReg;
-  while (MachineInstr *UseMI = findOnlyInterestingUse(Reg, MBB, MRI, TII,IsCopy,
-                                                      NewReg, IsDstPhys, LIS)) {
+  while (MachineInstr *UseMI = findOnlyInterestingUse(
+             Reg, MBB, MRI, TII, IsCopy, NewReg, IsDstPhys, LIS)) {
     if (IsCopy && !Processed.insert(UseMI).second)
       break;
 
-    DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(UseMI);
+    DenseMap<MachineInstr *, unsigned>::iterator DI = DistanceMap.find(UseMI);
     if (DI != DistanceMap.end())
       // Earlier in the same MBB. Reached via a back edge.
       break;
@@ -754,7 +752,8 @@ void TwoAddressInstructionPass::scanUses(Register DstReg) {
       unsigned FromReg = VirtRegPairs.pop_back_val();
       bool isNew = DstRegMap.insert(std::make_pair(FromReg, ToReg)).second;
       if (!isNew)
-        assert(DstRegMap[FromReg] == ToReg &&"Can't map to two dst registers!");
+        assert(DstRegMap[FromReg] == ToReg &&
+               "Can't map to two dst registers!");
       ToReg = FromReg;
     }
     bool isNew = DstRegMap.insert(std::make_pair(DstReg, ToReg)).second;
@@ -810,7 +809,7 @@ bool TwoAddressInstructionPass::rescheduleMIBelowKill(
     return false;
 
   MachineInstr *MI = &*mi;
-  DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(MI);
+  DenseMap<MachineInstr *, unsigned>::iterator DI = DistanceMap.find(MI);
   if (DI == DistanceMap.end())
     // Must be created from unfolded load. Don't waste time trying this.
     return false;
@@ -891,7 +890,7 @@ bool TwoAddressInstructionPass::rescheduleMIBelowKill(
     // Debug or pseudo instructions cannot be counted against the limit.
     if (OtherMI.isDebugOrPseudoInstr())
       continue;
-    if (NumVisited > 10)  // FIXME: Arbitrary limit to reduce compile time cost.
+    if (NumVisited > 10) // FIXME: Arbitrary limit to reduce compile time cost.
       return false;
     ++NumVisited;
     if (OtherMI.hasUnmodeledSideEffects() || OtherMI.isCall() ||
@@ -975,9 +974,9 @@ bool TwoAddressInstructionPass::isDefTooClose(Register Reg, unsigned Dist,
       continue;
     if (&DefMI == MI)
       return true; // MI is defining something KillMI uses
-    DenseMap<MachineInstr*, unsigned>::iterator DDI = DistanceMap.find(&DefMI);
+    DenseMap<MachineInstr *, unsigned>::iterator DDI = DistanceMap.find(&DefMI);
     if (DDI == DistanceMap.end())
-      return true;  // Below MI
+      return true; // Below MI
     unsigned DefDist = DDI->second;
     assert(Dist > DefDist && "Visited def already?");
     if (TII->getInstrLatency(InstrItins, DefMI) > (Dist - DefDist))
@@ -998,7 +997,7 @@ bool TwoAddressInstructionPass::rescheduleKillAboveMI(
     return false;
 
   MachineInstr *MI = &*mi;
-  DenseMap<MachineInstr*, unsigned>::iterator DI = DistanceMap.find(MI);
+  DenseMap<MachineInstr *, unsigned>::iterator DI = DistanceMap.find(MI);
   if (DI == DistanceMap.end())
     // Must be created from unfolded load. Don't waste time trying this.
     return false;
@@ -1064,7 +1063,7 @@ bool TwoAddressInstructionPass::rescheduleKillAboveMI(
     // Debug or pseudo instructions cannot be counted against the limit.
     if (OtherMI.isDebugOrPseudoInstr())
       continue;
-    if (NumVisited > 10)  // FIXME: Arbitrary limit to reduce compile time cost.
+    if (NumVisited > 10) // FIXME: Arbitrary limit to reduce compile time cost.
       return false;
     ++NumVisited;
     if (OtherMI.hasUnmodeledSideEffects() || OtherMI.isCall() ||
@@ -1179,8 +1178,8 @@ bool TwoAddressInstructionPass::tryInstructionCommute(MachineInstr *MI,
     }
 
     // If it's profitable to commute, try to do so.
-    if (DoCommute && commuteInstruction(MI, DstOpIdx, BaseOpIdx, OtherOpIdx,
-                                        Dist)) {
+    if (DoCommute &&
+        commuteInstruction(MI, DstOpIdx, BaseOpIdx, OtherOpIdx, Dist)) {
       MadeChange = true;
       ++NumCommuted;
       if (AggressiveCommute)
@@ -1207,11 +1206,9 @@ bool TwoAddressInstructionPass::tryInstructionCommute(MachineInstr *MI,
 /// (either because they were untied, or because mi was rescheduled, and will
 /// be visited again later). If the shouldOnlyCommute flag is true, only
 /// instruction commutation is attempted.
-bool TwoAddressInstructionPass::
-tryInstructionTransform(MachineBasicBlock::iterator &mi,
-                        MachineBasicBlock::iterator &nmi,
-                        unsigned SrcIdx, unsigned DstIdx,
-                        unsigned &Dist, bool shouldOnlyCommute) {
+bool TwoAddressInstructionPass::tryInstructionTransform(
+    MachineBasicBlock::iterator &mi, MachineBasicBlock::iterator &nmi,
+    unsigned SrcIdx, unsigned DstIdx, unsigned &Dist, bool shouldOnlyCommute) {
   if (OptLevel == CodeGenOpt::None)
     return false;
 
@@ -1290,17 +1287,15 @@ tryInstructionTransform(MachineBasicBlock::iterator &mi,
     // Determine if a load can be unfolded.
     unsigned LoadRegIndex;
     unsigned NewOpc =
-      TII->getOpcodeAfterMemoryUnfold(MI.getOpcode(),
-                                      /*UnfoldLoad=*/true,
-                                      /*UnfoldStore=*/false,
-                                      &LoadRegIndex);
+        TII->getOpcodeAfterMemoryUnfold(MI.getOpcode(),
+                                        /*UnfoldLoad=*/true,
+                                        /*UnfoldStore=*/false, &LoadRegIndex);
     if (NewOpc != 0) {
       const MCInstrDesc &UnfoldMCID = TII->get(NewOpc);
       if (UnfoldMCID.getNumDefs() == 1) {
         // Unfold the load.
         LLVM_DEBUG(dbgs() << "2addr:   UNFOLDING: " << MI);
-        const TargetRegisterClass *RC =
-          TRI->getAllocatableClass(
+        const TargetRegisterClass *RC = TRI->getAllocatableClass(
             TII->getRegClass(UnfoldMCID, LoadRegIndex, TRI, *MF));
         Register Reg = MRI->createVirtualRegister(RC);
         SmallVector<MachineInstr *, 2> NewMIs;
@@ -1329,8 +1324,8 @@ tryInstructionTransform(MachineBasicBlock::iterator &mi,
         unsigned NewDstIdx = NewMIs[1]->findRegisterDefOperandIdx(regA);
         unsigned NewSrcIdx = NewMIs[1]->findRegisterUseOperandIdx(regB);
         MachineBasicBlock::iterator NewMI = NewMIs[1];
-        bool TransformResult =
-          tryInstructionTransform(NewMI, mi, NewSrcIdx, NewDstIdx, Dist, true);
+        bool TransformResult = tryInstructionTransform(NewMI, mi, NewSrcIdx,
+                                                       NewDstIdx, Dist, true);
         (void)TransformResult;
         assert(!TransformResult &&
                "tryInstructionTransform() should return false.");
@@ -1406,8 +1401,8 @@ tryInstructionTransform(MachineBasicBlock::iterator &mi,
 // Collect tied operands of MI that need to be handled.
 // Rewrite trivial cases immediately.
 // Return true if any tied operands were found, including the trivial ones.
-bool TwoAddressInstructionPass::
-collectTiedOperands(MachineInstr *MI, TiedOperandMap &TiedOperands) {
+bool TwoAddressInstructionPass::collectTiedOperands(
+    MachineInstr *MI, TiedOperandMap &TiedOperands) {
   bool AnyOps = false;
   unsigned NumOps = MI->getNumOperands();
 
@@ -1445,10 +1440,9 @@ collectTiedOperands(MachineInstr *MI, TiedOperandMap &TiedOperands) {
 
 // Process a list of tied MI operands that all use the same source register.
 // The tied pairs are of the form (SrcIdx, DstIdx).
-void
-TwoAddressInstructionPass::processTiedPairs(MachineInstr *MI,
-                                            TiedPairList &TiedPairs,
-                                            unsigned &Dist) {
+void TwoAddressInstructionPass::processTiedPairs(MachineInstr *MI,
+                                                 TiedPairList &TiedPairs,
+                                                 unsigned &Dist) {
   bool IsEarlyClobber = llvm::any_of(TiedPairs, [MI](auto const &TP) {
     return MI->getOperand(TP.second).isEarlyClobber();
   });
@@ -1487,8 +1481,7 @@ TwoAddressInstructionPass::processTiedPairs(MachineInstr *MI,
     // (a = b + a for example) because our transformation will not
     // work. This should never occur because we are in SSA form.
     for (unsigned i = 0; i != MI->getNumOperands(); ++i)
-      assert(i == DstIdx ||
-             !MI->getOperand(i).isReg() ||
+      assert(i == DstIdx || !MI->getOperand(i).isReg() ||
              MI->getOperand(i).getReg() != RegA);
 #endif
 
@@ -1507,8 +1500,9 @@ TwoAddressInstructionPass::processTiedPairs(MachineInstr *MI,
         // The superreg class will not be used to constrain the subreg class.
         RC = nullptr;
       } else {
-        assert(TRI->getMatchingSuperReg(RegA, SubRegB, MRI->getRegClass(RegB))
-               && "tied subregister must be a truncation");
+        assert(
+            TRI->getMatchingSuperReg(RegA, SubRegB, MRI->getRegClass(RegB)) &&
+            "tied subregister must be a truncation");
       }
     }
 
@@ -1749,8 +1743,8 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &Func) {
   MRI->leaveSSA();
 
   // This pass will rewrite the tied-def to meet the RegConstraint.
-  MF->getProperties()
-      .set(MachineFunctionProperties::Property::TiedOpsRewritten);
+  MF->getProperties().set(
+      MachineFunctionProperties::Property::TiedOpsRewritten);
 
   TiedOperandMap TiedOperands;
   for (MachineBasicBlock &MBBI : *MF) {
@@ -1761,7 +1755,7 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &Func) {
     DstRegMap.clear();
     Processed.clear();
     for (MachineBasicBlock::iterator mi = MBB->begin(), me = MBB->end();
-         mi != me; ) {
+         mi != me;) {
       MachineBasicBlock::iterator nmi = std::next(mi);
       // Skip debug instructions.
       if (mi->isDebugInstr()) {
@@ -1794,8 +1788,8 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &Func) {
       // transformations that may either eliminate the tied operands or
       // improve the opportunities for coalescing away the register copy.
       if (TiedOperands.size() == 1) {
-        SmallVectorImpl<std::pair<unsigned, unsigned>> &TiedPairs
-          = TiedOperands.begin()->second;
+        SmallVectorImpl<std::pair<unsigned, unsigned>> &TiedPairs =
+            TiedOperands.begin()->second;
         if (TiedPairs.size() == 1) {
           unsigned SrcIdx = TiedPairs[0].first;
           unsigned DstIdx = TiedPairs[0].second;
@@ -1890,8 +1884,8 @@ bool TwoAddressInstructionPass::runOnMachineFunction(MachineFunction &Func) {
 ///
 ///   undef %dst:ssub0 = COPY %v1
 ///   %dst:ssub1 = COPY %v2
-void TwoAddressInstructionPass::
-eliminateRegSequence(MachineBasicBlock::iterator &MBBI) {
+void TwoAddressInstructionPass::eliminateRegSequence(
+    MachineBasicBlock::iterator &MBBI) {
   MachineInstr &MI = *MBBI;
   Register DstReg = MI.getOperand(0).getReg();
 
@@ -1906,7 +1900,7 @@ eliminateRegSequence(MachineBasicBlock::iterator &MBBI) {
   for (unsigned i = 1, e = MI.getNumOperands(); i < e; i += 2) {
     MachineOperand &UseMO = MI.getOperand(i);
     Register SrcReg = UseMO.getReg();
-    unsigned SubIdx = MI.getOperand(i+1).getImm();
+    unsigned SubIdx = MI.getOperand(i + 1).getImm();
     // Nothing needs to be inserted for undef operands.
     if (UseMO.isUndef())
       continue;
diff --git a/llvm/lib/CodeGen/TypePromotion.cpp b/llvm/lib/CodeGen/TypePromotion.cpp
index 426292345a1478a..81953c1af3d8715 100644
--- a/llvm/lib/CodeGen/TypePromotion.cpp
+++ b/llvm/lib/CodeGen/TypePromotion.cpp
@@ -172,8 +172,8 @@ class TypePromotionImpl {
   bool TryToPromote(Value *V, unsigned PromotedWidth, const LoopInfo &LI);
 
 public:
-  bool run(Function &F, const TargetMachine *TM,
-           const TargetTransformInfo &TTI, const LoopInfo &LI);
+  bool run(Function &F, const TargetMachine *TM, const TargetTransformInfo &TTI,
+           const LoopInfo &LI);
 };
 
 class TypePromotionLegacy : public FunctionPass {
@@ -771,7 +771,7 @@ bool TypePromotionImpl::isLegalToPromote(Value *V) {
 }
 
 bool TypePromotionImpl::TryToPromote(Value *V, unsigned PromotedWidth,
-                                 const LoopInfo &LI) {
+                                     const LoopInfo &LI) {
   Type *OrigTy = V->getType();
   TypeSize = OrigTy->getPrimitiveSizeInBits().getFixedValue();
   SafeToPromote.clear();
diff --git a/llvm/lib/CodeGen/UnreachableBlockElim.cpp b/llvm/lib/CodeGen/UnreachableBlockElim.cpp
index f17450d264ba07c..e8c1fe1e68212ac 100644
--- a/llvm/lib/CodeGen/UnreachableBlockElim.cpp
+++ b/llvm/lib/CodeGen/UnreachableBlockElim.cpp
@@ -52,7 +52,7 @@ class UnreachableBlockElimLegacyPass : public FunctionPass {
     AU.addPreserved<DominatorTreeWrapperPass>();
   }
 };
-}
+} // namespace
 char UnreachableBlockElimLegacyPass::ID = 0;
 INITIALIZE_PASS(UnreachableBlockElimLegacyPass, "unreachableblockelim",
                 "Remove unreachable blocks from the CFG", false, false)
@@ -72,19 +72,19 @@ PreservedAnalyses UnreachableBlockElimPass::run(Function &F,
 }
 
 namespace {
-  class UnreachableMachineBlockElim : public MachineFunctionPass {
-    bool runOnMachineFunction(MachineFunction &F) override;
-    void getAnalysisUsage(AnalysisUsage &AU) const override;
-
-  public:
-    static char ID; // Pass identification, replacement for typeid
-    UnreachableMachineBlockElim() : MachineFunctionPass(ID) {}
-  };
-}
+class UnreachableMachineBlockElim : public MachineFunctionPass {
+  bool runOnMachineFunction(MachineFunction &F) override;
+  void getAnalysisUsage(AnalysisUsage &AU) const override;
+
+public:
+  static char ID; // Pass identification, replacement for typeid
+  UnreachableMachineBlockElim() : MachineFunctionPass(ID) {}
+};
+} // namespace
 char UnreachableMachineBlockElim::ID = 0;
 
 INITIALIZE_PASS(UnreachableMachineBlockElim, "unreachable-mbb-elimination",
-  "Remove unreachable machine basic blocks", false, false)
+                "Remove unreachable machine basic blocks", false, false)
 
 char &llvm::UnreachableMachineBlockElimID = UnreachableMachineBlockElim::ID;
 
@@ -95,7 +95,7 @@ void UnreachableMachineBlockElim::getAnalysisUsage(AnalysisUsage &AU) const {
 }
 
 bool UnreachableMachineBlockElim::runOnMachineFunction(MachineFunction &F) {
-  df_iterator_default_set<MachineBasicBlock*> Reachable;
+  df_iterator_default_set<MachineBasicBlock *> Reachable;
   bool ModifiedPHI = false;
 
   MachineDominatorTree *MDT = getAnalysisIfAvailable<MachineDominatorTree>();
@@ -103,22 +103,24 @@ bool UnreachableMachineBlockElim::runOnMachineFunction(MachineFunction &F) {
 
   // Mark all reachable blocks.
   for (MachineBasicBlock *BB : depth_first_ext(&F, Reachable))
-    (void)BB/* Mark all reachable blocks */;
+    (void)BB /* Mark all reachable blocks */;
 
   // Loop over all dead blocks, remembering them and deleting all instructions
   // in them.
-  std::vector<MachineBasicBlock*> DeadBlocks;
+  std::vector<MachineBasicBlock *> DeadBlocks;
   for (MachineBasicBlock &BB : F) {
     // Test for deadness.
     if (!Reachable.count(&BB)) {
       DeadBlocks.push_back(&BB);
 
       // Update dominator and loop info.
-      if (MLI) MLI->removeBlock(&BB);
-      if (MDT && MDT->getNode(&BB)) MDT->eraseNode(&BB);
+      if (MLI)
+        MLI->removeBlock(&BB);
+      if (MDT && MDT->getNode(&BB))
+        MDT->eraseNode(&BB);
 
       while (BB.succ_begin() != BB.succ_end()) {
-        MachineBasicBlock* succ = *BB.succ_begin();
+        MachineBasicBlock *succ = *BB.succ_begin();
 
         for (MachineInstr &Phi : succ->phis()) {
           for (unsigned i = Phi.getNumOperands() - 1; i >= 2; i -= 2) {
@@ -148,8 +150,7 @@ bool UnreachableMachineBlockElim::runOnMachineFunction(MachineFunction &F) {
   // Cleanup PHI nodes.
   for (MachineBasicBlock &BB : F) {
     // Prune unneeded PHI entries.
-    SmallPtrSet<MachineBasicBlock*, 8> preds(BB.pred_begin(),
-                                             BB.pred_end());
+    SmallPtrSet<MachineBasicBlock *, 8> preds(BB.pred_begin(), BB.pred_end());
     for (MachineInstr &Phi : make_early_inc_range(BB.phis())) {
       for (unsigned i = Phi.getNumOperands() - 1; i >= 2; i -= 2) {
         if (!preds.count(Phi.getOperand(i).getMBB())) {
diff --git a/llvm/lib/CodeGen/ValueTypes.cpp b/llvm/lib/CodeGen/ValueTypes.cpp
index 2d16ff2dfb2fbf5..eccb43b6c20b490 100644
--- a/llvm/lib/CodeGen/ValueTypes.cpp
+++ b/llvm/lib/CodeGen/ValueTypes.cpp
@@ -162,18 +162,30 @@ std::string EVT::getEVTString() const {
     if (isFloatingPoint())
       return "f" + utostr(getSizeInBits());
     llvm_unreachable("Invalid EVT!");
-  case MVT::bf16:      return "bf16";
-  case MVT::ppcf128:   return "ppcf128";
-  case MVT::isVoid:    return "isVoid";
-  case MVT::Other:     return "ch";
-  case MVT::Glue:      return "glue";
-  case MVT::x86mmx:    return "x86mmx";
-  case MVT::x86amx:    return "x86amx";
-  case MVT::i64x8:     return "i64x8";
-  case MVT::Metadata:  return "Metadata";
-  case MVT::Untyped:   return "Untyped";
-  case MVT::funcref:   return "funcref";
-  case MVT::externref: return "externref";
+  case MVT::bf16:
+    return "bf16";
+  case MVT::ppcf128:
+    return "ppcf128";
+  case MVT::isVoid:
+    return "isVoid";
+  case MVT::Other:
+    return "ch";
+  case MVT::Glue:
+    return "glue";
+  case MVT::x86mmx:
+    return "x86mmx";
+  case MVT::x86amx:
+    return "x86amx";
+  case MVT::i64x8:
+    return "i64x8";
+  case MVT::Metadata:
+    return "Metadata";
+  case MVT::Untyped:
+    return "Untyped";
+  case MVT::funcref:
+    return "funcref";
+  case MVT::externref:
+    return "externref";
   case MVT::aarch64svcount:
     return "aarch64svcount";
   case MVT::spirvbuiltin:
@@ -570,22 +582,29 @@ Type *EVT::getTypeForEVT(LLVMContext &Context) const {
 /// Return the value type corresponding to the specified type.  This returns all
 /// pointers as MVT::iPTR.  If HandleUnknown is true, unknown types are returned
 /// as Other, otherwise they are invalid.
-MVT MVT::getVT(Type *Ty, bool HandleUnknown){
+MVT MVT::getVT(Type *Ty, bool HandleUnknown) {
   assert(Ty != nullptr && "Invalid type");
   switch (Ty->getTypeID()) {
   default:
-    if (HandleUnknown) return MVT(MVT::Other);
+    if (HandleUnknown)
+      return MVT(MVT::Other);
     llvm_unreachable("Unknown type!");
   case Type::VoidTyID:
     return MVT::isVoid;
   case Type::IntegerTyID:
     return getIntegerVT(cast<IntegerType>(Ty)->getBitWidth());
-  case Type::HalfTyID:      return MVT(MVT::f16);
-  case Type::BFloatTyID:    return MVT(MVT::bf16);
-  case Type::FloatTyID:     return MVT(MVT::f32);
-  case Type::DoubleTyID:    return MVT(MVT::f64);
-  case Type::X86_FP80TyID:  return MVT(MVT::f80);
-  case Type::X86_MMXTyID:   return MVT(MVT::x86mmx);
+  case Type::HalfTyID:
+    return MVT(MVT::f16);
+  case Type::BFloatTyID:
+    return MVT(MVT::bf16);
+  case Type::FloatTyID:
+    return MVT(MVT::f32);
+  case Type::DoubleTyID:
+    return MVT(MVT::f64);
+  case Type::X86_FP80TyID:
+    return MVT(MVT::f80);
+  case Type::X86_MMXTyID:
+    return MVT(MVT::x86mmx);
   case Type::TargetExtTyID: {
     TargetExtType *TargetExtTy = cast<TargetExtType>(Ty);
     if (TargetExtTy->getName() == "aarch64.svcount")
@@ -596,16 +615,19 @@ MVT MVT::getVT(Type *Ty, bool HandleUnknown){
       return MVT(MVT::Other);
     llvm_unreachable("Unknown target ext type!");
   }
-  case Type::X86_AMXTyID:   return MVT(MVT::x86amx);
-  case Type::FP128TyID:     return MVT(MVT::f128);
-  case Type::PPC_FP128TyID: return MVT(MVT::ppcf128);
-  case Type::PointerTyID:   return MVT(MVT::iPTR);
+  case Type::X86_AMXTyID:
+    return MVT(MVT::x86amx);
+  case Type::FP128TyID:
+    return MVT(MVT::f128);
+  case Type::PPC_FP128TyID:
+    return MVT(MVT::ppcf128);
+  case Type::PointerTyID:
+    return MVT(MVT::iPTR);
   case Type::FixedVectorTyID:
   case Type::ScalableVectorTyID: {
     VectorType *VTy = cast<VectorType>(Ty);
-    return getVectorVT(
-      getVT(VTy->getElementType(), /*HandleUnknown=*/ false),
-            VTy->getElementCount());
+    return getVectorVT(getVT(VTy->getElementType(), /*HandleUnknown=*/false),
+                       VTy->getElementCount());
   }
   }
 }
@@ -613,7 +635,7 @@ MVT MVT::getVT(Type *Ty, bool HandleUnknown){
 /// getEVT - Return the value type corresponding to the specified type.  This
 /// returns all pointers as MVT::iPTR.  If HandleUnknown is true, unknown types
 /// are returned as Other, otherwise they are invalid.
-EVT EVT::getEVT(Type *Ty, bool HandleUnknown){
+EVT EVT::getEVT(Type *Ty, bool HandleUnknown) {
   switch (Ty->getTypeID()) {
   default:
     return MVT::getVT(Ty, HandleUnknown);
@@ -623,7 +645,7 @@ EVT EVT::getEVT(Type *Ty, bool HandleUnknown){
   case Type::ScalableVectorTyID: {
     VectorType *VTy = cast<VectorType>(Ty);
     return getVectorVT(Ty->getContext(),
-                       getEVT(VTy->getElementType(), /*HandleUnknown=*/ false),
+                       getEVT(VTy->getElementType(), /*HandleUnknown=*/false),
                        VTy->getElementCount());
   }
   }
@@ -642,4 +664,3 @@ void MVT::print(raw_ostream &OS) const {
   else
     OS << EVT(*this).getEVTString();
 }
-
diff --git a/llvm/lib/CodeGen/VirtRegMap.cpp b/llvm/lib/CodeGen/VirtRegMap.cpp
index c1c6ce227334a4b..6a8c0801661faa7 100644
--- a/llvm/lib/CodeGen/VirtRegMap.cpp
+++ b/llvm/lib/CodeGen/VirtRegMap.cpp
@@ -50,7 +50,7 @@ using namespace llvm;
 #define DEBUG_TYPE "regalloc"
 
 STATISTIC(NumSpillSlots, "Number of spill slots allocated");
-STATISTIC(NumIdCopies,   "Number of identity moves eliminated after rewriting");
+STATISTIC(NumIdCopies, "Number of identity moves eliminated after rewriting");
 
 //===----------------------------------------------------------------------===//
 //  VirtRegMap implementation
@@ -128,7 +128,7 @@ int VirtRegMap::assignVirt2StackSlot(Register virtReg) {
   assert(virtReg.isVirtual());
   assert(Virt2StackSlotMap[virtReg.id()] == NO_STACK_SLOT &&
          "attempt to assign stack slot to already spilled register");
-  const TargetRegisterClass* RC = MF->getRegInfo().getRegClass(virtReg);
+  const TargetRegisterClass *RC = MF->getRegInfo().getRegClass(virtReg);
   return Virt2StackSlotMap[virtReg.id()] = createSpillSlot(RC);
 }
 
@@ -136,13 +136,12 @@ void VirtRegMap::assignVirt2StackSlot(Register virtReg, int SS) {
   assert(virtReg.isVirtual());
   assert(Virt2StackSlotMap[virtReg.id()] == NO_STACK_SLOT &&
          "attempt to assign stack slot to already spilled register");
-  assert((SS >= 0 ||
-          (SS >= MF->getFrameInfo().getObjectIndexBegin())) &&
+  assert((SS >= 0 || (SS >= MF->getFrameInfo().getObjectIndexBegin())) &&
          "illegal fixed frame index");
   Virt2StackSlotMap[virtReg.id()] = SS;
 }
 
-void VirtRegMap::print(raw_ostream &OS, const Module*) const {
+void VirtRegMap::print(raw_ostream &OS, const Module *) const {
   OS << "********** REGISTER MAP **********\n";
   for (unsigned i = 0, e = MRI->getNumVirtRegs(); i != e; ++i) {
     Register Reg = Register::index2VirtReg(i);
@@ -164,9 +163,7 @@ void VirtRegMap::print(raw_ostream &OS, const Module*) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void VirtRegMap::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void VirtRegMap::dump() const { print(dbgs()); }
 #endif
 
 //===----------------------------------------------------------------------===//
@@ -202,18 +199,17 @@ class VirtRegRewriter : public MachineFunctionPass {
 
 public:
   static char ID;
-  VirtRegRewriter(bool ClearVirtRegs_ = true) :
-    MachineFunctionPass(ID),
-    ClearVirtRegs(ClearVirtRegs_) {}
+  VirtRegRewriter(bool ClearVirtRegs_ = true)
+      : MachineFunctionPass(ID), ClearVirtRegs(ClearVirtRegs_) {}
 
   void getAnalysisUsage(AnalysisUsage &AU) const override;
 
-  bool runOnMachineFunction(MachineFunction&) override;
+  bool runOnMachineFunction(MachineFunction &) override;
 
   MachineFunctionProperties getSetProperties() const override {
     if (ClearVirtRegs) {
       return MachineFunctionProperties().set(
-        MachineFunctionProperties::Property::NoVRegs);
+          MachineFunctionProperties::Property::NoVRegs);
     }
 
     return MachineFunctionProperties();
@@ -283,7 +279,8 @@ bool VirtRegRewriter::runOnMachineFunction(MachineFunction &fn) {
     DebugVars->emitDebugValues(VRM);
 
     // All machine operands and other references to virtual registers have been
-    // replaced. Remove the virtual registers and release all the transient data.
+    // replaced. Remove the virtual registers and release all the transient
+    // data.
     VRM->clearAllVirt();
     MRI->clearVirtRegs();
   }
@@ -450,8 +447,9 @@ void VirtRegRewriter::expandCopyBundle(MachineInstr &MI) const {
 
     // Only do this when the complete bundle is made out of COPYs and KILLs.
     MachineBasicBlock &MBB = *MI.getParent();
-    for (MachineBasicBlock::reverse_instr_iterator I =
-         std::next(MI.getReverseIterator()), E = MBB.instr_rend();
+    for (MachineBasicBlock::reverse_instr_iterator
+             I = std::next(MI.getReverseIterator()),
+             E = MBB.instr_rend();
          I != E && I->isBundledWithSucc(); ++I) {
       if (!I->isCopy() && !I->isKill())
         return;
@@ -474,7 +472,7 @@ void VirtRegRewriter::expandCopyBundle(MachineInstr &MI) const {
     // the source registers, try to schedule the instructions to avoid any
     // clobbering.
     for (int E = MIs.size(), PrevE = E; E > 1; PrevE = E) {
-      for (int I = E; I--; )
+      for (int I = E; I--;)
         if (!anyRegsAlias(MIs[I], ArrayRef(MIs).take_front(E), TRI)) {
           if (I + 1 != E)
             std::swap(MIs[I], MIs[E - 1]);
diff --git a/llvm/lib/CodeGen/WinEHPrepare.cpp b/llvm/lib/CodeGen/WinEHPrepare.cpp
index 11597b11989364a..343e539602a29c1 100644
--- a/llvm/lib/CodeGen/WinEHPrepare.cpp
+++ b/llvm/lib/CodeGen/WinEHPrepare.cpp
@@ -42,8 +42,7 @@ using namespace llvm;
 
 static cl::opt<bool> DisableDemotion(
     "disable-demotion", cl::Hidden,
-    cl::desc(
-        "Clone multicolor basic blocks but do not demote cross scopes"),
+    cl::desc("Clone multicolor basic blocks but do not demote cross scopes"),
     cl::init(false));
 
 static cl::opt<bool> DisableCleanups(
@@ -103,8 +102,8 @@ class WinEHPrepare : public FunctionPass {
 } // end anonymous namespace
 
 char WinEHPrepare::ID = 0;
-INITIALIZE_PASS(WinEHPrepare, DEBUG_TYPE, "Prepare Windows exceptions",
-                false, false)
+INITIALIZE_PASS(WinEHPrepare, DEBUG_TYPE, "Prepare Windows exceptions", false,
+                false)
 
 FunctionPass *llvm::createWinEHPass(bool DemoteCatchSwitchPHIOnly) {
   return new WinEHPrepare(DemoteCatchSwitchPHIOnly);
@@ -377,10 +376,9 @@ static void calculateCXXStateNumbers(WinEHFuncInfo &FuncInfo,
     int TryLow = addUnwindMapEntry(FuncInfo, ParentState, nullptr);
     FuncInfo.EHPadStateMap[CatchSwitch] = TryLow;
     for (const BasicBlock *PredBlock : predecessors(BB))
-      if ((PredBlock = getEHPadFromPredecessor(PredBlock,
-                                               CatchSwitch->getParentPad())))
-        calculateCXXStateNumbers(FuncInfo, PredBlock->getFirstNonPHI(),
-                                 TryLow);
+      if ((PredBlock =
+               getEHPadFromPredecessor(PredBlock, CatchSwitch->getParentPad())))
+        calculateCXXStateNumbers(FuncInfo, PredBlock->getFirstNonPHI(), TryLow);
     int CatchLow = addUnwindMapEntry(FuncInfo, ParentState, nullptr);
 
     // catchpads are separate funclets in C++ EH due to the way rethrow works.
@@ -510,8 +508,8 @@ static void calculateSEHStateNumbers(WinEHFuncInfo &FuncInfo,
     LLVM_DEBUG(dbgs() << "Assigning state #" << TryState << " to BB "
                       << CatchPadBB->getName() << '\n');
     for (const BasicBlock *PredBlock : predecessors(BB))
-      if ((PredBlock = getEHPadFromPredecessor(PredBlock,
-                                               CatchSwitch->getParentPad())))
+      if ((PredBlock =
+               getEHPadFromPredecessor(PredBlock, CatchSwitch->getParentPad())))
         calculateSEHStateNumbers(FuncInfo, PredBlock->getFirstNonPHI(),
                                  TryState);
 
@@ -897,8 +895,8 @@ void WinEHPrepare::cloneCommonBlocks(Function &F) {
 
       DEBUG_WITH_TYPE("winehprepare-coloring",
                       dbgs() << "  Cloning block \'" << BB->getName()
-                              << "\' for funclet \'" << FuncletPadBB->getName()
-                              << "\'.\n");
+                             << "\' for funclet \'" << FuncletPadBB->getName()
+                             << "\'.\n");
 
       // Create a new basic block and copy instructions into it!
       BasicBlock *CBB =
@@ -931,8 +929,8 @@ void WinEHPrepare::cloneCommonBlocks(Function &F) {
 
       DEBUG_WITH_TYPE("winehprepare-coloring",
                       dbgs() << "  Assigned color \'" << FuncletPadBB->getName()
-                              << "\' to block \'" << NewBlock->getName()
-                              << "\'.\n");
+                             << "\' to block \'" << NewBlock->getName()
+                             << "\'.\n");
 
       llvm::erase_value(BlocksInFunclet, OldBlock);
       ColorVector &OldColors = BlockColors[OldBlock];
@@ -940,8 +938,8 @@ void WinEHPrepare::cloneCommonBlocks(Function &F) {
 
       DEBUG_WITH_TYPE("winehprepare-coloring",
                       dbgs() << "  Removed color \'" << FuncletPadBB->getName()
-                              << "\' from block \'" << OldBlock->getName()
-                              << "\'.\n");
+                             << "\' from block \'" << OldBlock->getName()
+                             << "\'.\n");
     }
 
     // Loop over all of the instructions in this funclet, fixing up operand
@@ -1388,9 +1386,9 @@ void WinEHFuncInfo::addIPToStateRange(const InvokeInst *II,
   LabelToStateMap[InvokeBegin] = std::make_pair(InvokeStateMap[II], InvokeEnd);
 }
 
-void WinEHFuncInfo::addIPToStateRange(int State, MCSymbol* InvokeBegin,
-    MCSymbol* InvokeEnd) {
-    LabelToStateMap[InvokeBegin] = std::make_pair(State, InvokeEnd);
+void WinEHFuncInfo::addIPToStateRange(int State, MCSymbol *InvokeBegin,
+                                      MCSymbol *InvokeEnd) {
+  LabelToStateMap[InvokeBegin] = std::make_pair(State, InvokeEnd);
 }
 
 WinEHFuncInfo::WinEHFuncInfo() = default;
diff --git a/llvm/lib/DWARFLinker/CMakeLists.txt b/llvm/lib/DWARFLinker/CMakeLists.txt
index f720c5e844b36e0..9b97491957448b6 100644
--- a/llvm/lib/DWARFLinker/CMakeLists.txt
+++ b/llvm/lib/DWARFLinker/CMakeLists.txt
@@ -1,23 +1,12 @@
-add_llvm_component_library(LLVMDWARFLinker
-  DWARFLinkerCompileUnit.cpp
-  DWARFLinkerDeclContext.cpp
-  DWARFLinker.cpp
-  DWARFStreamer.cpp
+add_llvm_component_library(LLVMDWARFLinker
+  DWARFLinkerCompileUnit.cpp
+  DWARFLinkerDeclContext.cpp
+  DWARFLinker.cpp
+  DWARFStreamer.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWARFLinker
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWARFLinker
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  AsmPrinter
-  BinaryFormat
-  CodeGen
-  CodeGenTypes
-  DebugInfoDWARF
-  MC
-  Object
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS
+  AsmPrinter
+  BinaryFormat
+  CodeGen
+  CodeGenTypes
+  DebugInfoDWARF
+  MC
+  Object
+  Support
+  TargetParser
+  )
diff --git a/llvm/lib/DWARFLinker/DWARFLinker.cpp b/llvm/lib/DWARFLinker/DWARFLinker.cpp
index 35e5cefc3877060..b190ab3ede349d9 100644
--- a/llvm/lib/DWARFLinker/DWARFLinker.cpp
+++ b/llvm/lib/DWARFLinker/DWARFLinker.cpp
@@ -1329,9 +1329,9 @@ unsigned DWARFLinker::DIECloner::cloneAddressAttribute(
   //     independently by the linker).
   //   - If address relocated in an inline_subprogram that happens at the
   //     beginning of its inlining function.
-  //  To avoid above cases and to not apply relocation twice (in applyValidRelocs
-  //  and here), read address attribute from InputDIE and apply Info.PCOffset
-  //  here.
+  //  To avoid above cases and to not apply relocation twice (in
+  //  applyValidRelocs and here), read address attribute from InputDIE and apply
+  //  Info.PCOffset here.
 
   std::optional<DWARFFormValue> AddrAttribute = InputDIE.find(AttrSpec.Attr);
   if (!AddrAttribute)
@@ -1968,8 +1968,7 @@ static void patchAddrBase(DIE &Die, DIEInteger Offset) {
 }
 
 void DWARFLinker::DIECloner::emitDebugAddrSection(
-    CompileUnit &Unit,
-    const uint16_t DwarfVersion) const {
+    CompileUnit &Unit, const uint16_t DwarfVersion) const {
 
   if (LLVM_UNLIKELY(Linker.Options.Update))
     return;
@@ -3033,7 +3032,6 @@ Error DWARFLinker::cloneModuleUnit(LinkContext &Context, RefModuleUnit &Unit,
 void DWARFLinker::verifyInput(const DWARFFile &File) {
   assert(File.Dwarf);
 
-
   std::string Buffer;
   raw_string_ostream OS(Buffer);
   DIDumpOptions DumpOpts;
diff --git a/llvm/lib/DWARFLinkerParallel/CMakeLists.txt b/llvm/lib/DWARFLinkerParallel/CMakeLists.txt
index d321ecf8d5ce847..46931d255796d6f 100644
--- a/llvm/lib/DWARFLinkerParallel/CMakeLists.txt
+++ b/llvm/lib/DWARFLinkerParallel/CMakeLists.txt
@@ -1,28 +1,14 @@
-add_llvm_component_library(LLVMDWARFLinkerParallel
-  DependencyTracker.cpp
-  DIEAttributeCloner.cpp
-  DWARFEmitterImpl.cpp
-  DWARFFile.cpp
-  DWARFLinker.cpp
-  DWARFLinkerCompileUnit.cpp
-  DWARFLinkerImpl.cpp
-  DWARFLinkerUnit.cpp
-  OutputSections.cpp
-  StringPool.cpp
+add_llvm_component_library(LLVMDWARFLinkerParallel
+  DependencyTracker.cpp
+  DIEAttributeCloner.cpp
+  DWARFEmitterImpl.cpp
+  DWARFFile.cpp
+  DWARFLinker.cpp
+  DWARFLinkerCompileUnit.cpp
+  DWARFLinkerImpl.cpp
+  DWARFLinkerUnit.cpp
+  OutputSections.cpp
+  StringPool.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWARFLinkerParallel
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWARFLinkerParallel
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  AsmPrinter
-  BinaryFormat
-  CodeGen
-  DebugInfoDWARF
-  MC
-  Object
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS
+  AsmPrinter
+  BinaryFormat
+  CodeGen
+  DebugInfoDWARF
+  MC
+  Object
+  Support
+  TargetParser
+  )
diff --git a/llvm/lib/DWP/CMakeLists.txt b/llvm/lib/DWP/CMakeLists.txt
index 777de1978dae376..1eadd4583bbad4c 100644
--- a/llvm/lib/DWP/CMakeLists.txt
+++ b/llvm/lib/DWP/CMakeLists.txt
@@ -1,16 +1,10 @@
-add_llvm_component_library(LLVMDWP
-  DWP.cpp
-  DWPError.cpp
+add_llvm_component_library(LLVMDWP
+  DWP.cpp
+  DWPError.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWP
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DWP
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  DebugInfoDWARF
-  MC
-  Object
-  Support
-  )
+  LINK_COMPONENTS
+  DebugInfoDWARF
+  MC
+  Object
+  Support
+  )
diff --git a/llvm/lib/DebugInfo/BTF/CMakeLists.txt b/llvm/lib/DebugInfo/BTF/CMakeLists.txt
index 689470c8f23e8d1..1b6496452ec3bbc 100644
--- a/llvm/lib/DebugInfo/BTF/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/BTF/CMakeLists.txt
@@ -1,12 +1,7 @@
-add_llvm_component_library(LLVMDebugInfoBTF
-  BTFParser.cpp
-  BTFContext.cpp
-  ADDITIONAL_HEADER_DIRS
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/BTF"
+add_llvm_component_library(LLVMDebugInfoBTF
+  BTFParser.cpp
+  BTFContext.cpp
+  ADDITIONAL_HEADER_DIRS
+  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/BTF"
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS
+  intrinsics_gen
 
-  LINK_COMPONENTS
-  Support
-  )
+  LINK_COMPONENTS
+  Support
+  )
diff --git a/llvm/lib/DebugInfo/CMakeLists.txt b/llvm/lib/DebugInfo/CMakeLists.txt
index 0d72630f09ab662..250c94cc41d3607 100644
--- a/llvm/lib/DebugInfo/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/CMakeLists.txt
@@ -1,8 +1,3 @@
-add_subdirectory(DWARF)
-add_subdirectory(GSYM)
-add_subdirectory(LogicalView)
-add_subdirectory(MSF)
-add_subdirectory(CodeView)
-add_subdirectory(PDB)
-add_subdirectory(Symbolize)
-add_subdirectory(BTF)
+add_subdirectory(DWARF)
+add_subdirectory(GSYM)
+add_subdirectory(LogicalView)
+add_subdirectory(MSF)
+add_subdirectory(CodeView)
+add_subdirectory(PDB)
+add_subdirectory(Symbolize)
+add_subdirectory(BTF)
diff --git a/llvm/lib/DebugInfo/CodeView/AppendingTypeTableBuilder.cpp b/llvm/lib/DebugInfo/CodeView/AppendingTypeTableBuilder.cpp
index dc32e83369272ad..b992239f683eba6 100644
--- a/llvm/lib/DebugInfo/CodeView/AppendingTypeTableBuilder.cpp
+++ b/llvm/lib/DebugInfo/CodeView/AppendingTypeTableBuilder.cpp
@@ -42,7 +42,7 @@ std::optional<TypeIndex> AppendingTypeTableBuilder::getNext(TypeIndex Prev) {
   return Prev;
 }
 
-CVType AppendingTypeTableBuilder::getType(TypeIndex Index){
+CVType AppendingTypeTableBuilder::getType(TypeIndex Index) {
   return CVType(SeenRecords[Index.toArrayIndex()]);
 }
 
diff --git a/llvm/lib/DebugInfo/CodeView/CMakeLists.txt b/llvm/lib/DebugInfo/CodeView/CMakeLists.txt
index 48cbad0f6751a0d..b8b263eaf6b5505 100644
--- a/llvm/lib/DebugInfo/CodeView/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/CodeView/CMakeLists.txt
@@ -1,48 +1,23 @@
-add_llvm_component_library(LLVMDebugInfoCodeView
-  AppendingTypeTableBuilder.cpp
-  CodeViewError.cpp
-  CodeViewRecordIO.cpp
-  ContinuationRecordBuilder.cpp
-  CVSymbolVisitor.cpp
-  CVTypeVisitor.cpp
-  DebugChecksumsSubsection.cpp
-  DebugCrossExSubsection.cpp
-  DebugCrossImpSubsection.cpp
-  DebugFrameDataSubsection.cpp
-  DebugInlineeLinesSubsection.cpp
-  DebugLinesSubsection.cpp
-  DebugStringTableSubsection.cpp
-  DebugSubsection.cpp
-  DebugSubsectionRecord.cpp
-  DebugSubsectionVisitor.cpp
-  DebugSymbolRVASubsection.cpp
-  DebugSymbolsSubsection.cpp
-  EnumTables.cpp
-  Formatters.cpp
-  GlobalTypeTableBuilder.cpp
-  LazyRandomTypeCollection.cpp
-  Line.cpp
-  MergingTypeTableBuilder.cpp
-  RecordName.cpp
-  RecordSerialization.cpp
-  SimpleTypeSerializer.cpp
-  StringsAndChecksums.cpp
-  SymbolDumper.cpp
-  SymbolRecordHelpers.cpp
-  SymbolRecordMapping.cpp
-  SymbolSerializer.cpp
-  TypeDumpVisitor.cpp
-  TypeIndex.cpp
-  TypeIndexDiscovery.cpp
-  TypeHashing.cpp
-  TypeRecordHelpers.cpp
-  TypeRecordMapping.cpp
-  TypeStreamMerger.cpp
-  TypeTableCollection.cpp
+add_llvm_component_library(LLVMDebugInfoCodeView
+  AppendingTypeTableBuilder.cpp
+  CodeViewError.cpp
+  CodeViewRecordIO.cpp
+  ContinuationRecordBuilder.cpp
+  CVSymbolVisitor.cpp
+  CVTypeVisitor.cpp
+  DebugChecksumsSubsection.cpp
+  DebugCrossExSubsection.cpp
+  DebugCrossImpSubsection.cpp
+  DebugFrameDataSubsection.cpp
+  DebugInlineeLinesSubsection.cpp
+  DebugLinesSubsection.cpp
+  DebugStringTableSubsection.cpp
+  DebugSubsection.cpp
+  DebugSubsectionRecord.cpp
+  DebugSubsectionVisitor.cpp
+  DebugSymbolRVASubsection.cpp
+  DebugSymbolsSubsection.cpp
+  EnumTables.cpp
+  Formatters.cpp
+  GlobalTypeTableBuilder.cpp
+  LazyRandomTypeCollection.cpp
+  Line.cpp
+  MergingTypeTableBuilder.cpp
+  RecordName.cpp
+  RecordSerialization.cpp
+  SimpleTypeSerializer.cpp
+  StringsAndChecksums.cpp
+  SymbolDumper.cpp
+  SymbolRecordHelpers.cpp
+  SymbolRecordMapping.cpp
+  SymbolSerializer.cpp
+  TypeDumpVisitor.cpp
+  TypeIndex.cpp
+  TypeIndexDiscovery.cpp
+  TypeHashing.cpp
+  TypeRecordHelpers.cpp
+  TypeRecordMapping.cpp
+  TypeStreamMerger.cpp
+  TypeTableCollection.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/CodeView
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/CodeView
 
-  LINK_COMPONENTS
-  Support
-  )
+  LINK_COMPONENTS
+  Support
+  )
diff --git a/llvm/lib/DebugInfo/CodeView/CVTypeVisitor.cpp b/llvm/lib/DebugInfo/CodeView/CVTypeVisitor.cpp
index 689c643a7006c40..a322b64cb2a3067 100644
--- a/llvm/lib/DebugInfo/CodeView/CVTypeVisitor.cpp
+++ b/llvm/lib/DebugInfo/CodeView/CVTypeVisitor.cpp
@@ -20,7 +20,6 @@
 using namespace llvm;
 using namespace llvm::codeview;
 
-
 template <typename T>
 static Error visitKnownRecord(CVType &Record, TypeVisitorCallbacks &Callbacks) {
   TypeRecordKind RK = static_cast<TypeRecordKind>(Record.kind());
@@ -214,7 +213,7 @@ struct VisitHelper {
   TypeVisitorCallbackPipeline Pipeline;
   CVTypeVisitor Visitor;
 };
-}
+} // namespace
 
 Error llvm::codeview::visitTypeRecord(CVType &Record, TypeIndex Index,
                                       TypeVisitorCallbacks &Callbacks,
diff --git a/llvm/lib/DebugInfo/CodeView/ContinuationRecordBuilder.cpp b/llvm/lib/DebugInfo/CodeView/ContinuationRecordBuilder.cpp
index 0adbdb596218765..af2f87ec39578a7 100644
--- a/llvm/lib/DebugInfo/CodeView/ContinuationRecordBuilder.cpp
+++ b/llvm/lib/DebugInfo/CodeView/ContinuationRecordBuilder.cpp
@@ -105,7 +105,7 @@ void ContinuationRecordBuilder::writeMemberType(RecordType &Record) {
     // the previous member.  Save off the length of the member we just wrote so
     // that we can validate it.
     uint32_t MemberLength = SegmentWriter.getOffset() - OriginalOffset;
-    (void) MemberLength;
+    (void)MemberLength;
     insertSegmentEnd(OriginalOffset);
     // Since this member now becomes a new top-level record, it should have
     // gotten a RecordPrefix injected, and that RecordPrefix + the member we
@@ -135,7 +135,7 @@ void ContinuationRecordBuilder::insertSegmentEnd(uint32_t Offset) {
 
   uint32_t NewSegmentBegin = Offset + ContinuationLength;
   uint32_t SegmentLength = NewSegmentBegin - SegmentOffsets.back();
-  (void) SegmentLength;
+  (void)SegmentLength;
 
   assert(SegmentLength % 4 == 0);
   assert(SegmentLength <= MaxRecordLength);
diff --git a/llvm/lib/DebugInfo/CodeView/DebugChecksumsSubsection.cpp b/llvm/lib/DebugInfo/CodeView/DebugChecksumsSubsection.cpp
index 30aa60b065bf0c6..2f1edf24c11d2ce 100644
--- a/llvm/lib/DebugInfo/CodeView/DebugChecksumsSubsection.cpp
+++ b/llvm/lib/DebugInfo/CodeView/DebugChecksumsSubsection.cpp
@@ -31,8 +31,8 @@ struct FileChecksumEntryHeader {
                               // Checksum bytes follow.
 };
 
-Error VarStreamArrayExtractor<FileChecksumEntry>::
-operator()(BinaryStreamRef Stream, uint32_t &Len, FileChecksumEntry &Item) {
+Error VarStreamArrayExtractor<FileChecksumEntry>::operator()(
+    BinaryStreamRef Stream, uint32_t &Len, FileChecksumEntry &Item) {
   BinaryStreamReader Reader(Stream);
 
   const FileChecksumEntryHeader *Header;
diff --git a/llvm/lib/DebugInfo/CodeView/DebugCrossImpSubsection.cpp b/llvm/lib/DebugInfo/CodeView/DebugCrossImpSubsection.cpp
index e5f82f9c37f474f..dd3d47aefcf0f8f 100644
--- a/llvm/lib/DebugInfo/CodeView/DebugCrossImpSubsection.cpp
+++ b/llvm/lib/DebugInfo/CodeView/DebugCrossImpSubsection.cpp
@@ -22,9 +22,9 @@
 using namespace llvm;
 using namespace llvm::codeview;
 
-Error VarStreamArrayExtractor<CrossModuleImportItem>::
-operator()(BinaryStreamRef Stream, uint32_t &Len,
-           codeview::CrossModuleImportItem &Item) {
+Error VarStreamArrayExtractor<CrossModuleImportItem>::operator()(
+    BinaryStreamRef Stream, uint32_t &Len,
+    codeview::CrossModuleImportItem &Item) {
   BinaryStreamReader Reader(Stream);
   if (Reader.bytesRemaining() < sizeof(CrossModuleImport))
     return make_error<CodeViewError>(
diff --git a/llvm/lib/DebugInfo/CodeView/DebugInlineeLinesSubsection.cpp b/llvm/lib/DebugInfo/CodeView/DebugInlineeLinesSubsection.cpp
index 620c9e53ad954f5..8d9e9eb0aa306ed 100644
--- a/llvm/lib/DebugInfo/CodeView/DebugInlineeLinesSubsection.cpp
+++ b/llvm/lib/DebugInfo/CodeView/DebugInlineeLinesSubsection.cpp
@@ -21,8 +21,8 @@
 using namespace llvm;
 using namespace llvm::codeview;
 
-Error VarStreamArrayExtractor<InlineeSourceLine>::
-operator()(BinaryStreamRef Stream, uint32_t &Len, InlineeSourceLine &Item) {
+Error VarStreamArrayExtractor<InlineeSourceLine>::operator()(
+    BinaryStreamRef Stream, uint32_t &Len, InlineeSourceLine &Item) {
   BinaryStreamReader Reader(Stream);
 
   if (auto EC = Reader.readObject(Item.Header))
diff --git a/llvm/lib/DebugInfo/CodeView/Formatters.cpp b/llvm/lib/DebugInfo/CodeView/Formatters.cpp
index 8c21764600b609a..ed7f74854a91a5c 100644
--- a/llvm/lib/DebugInfo/CodeView/Formatters.cpp
+++ b/llvm/lib/DebugInfo/CodeView/Formatters.cpp
@@ -45,13 +45,13 @@ void GuidAdapter::format(raw_ostream &Stream, StringRef Style) {
     support::ubig64_t Data4;
   };
   const MSGuid *G = reinterpret_cast<const MSGuid *>(Item.data());
-  Stream
-      << '{' << format_hex_no_prefix(G->Data1, 8, /*Upper=*/true)
-      << '-' << format_hex_no_prefix(G->Data2, 4, /*Upper=*/true)
-      << '-' << format_hex_no_prefix(G->Data3, 4, /*Upper=*/true)
-      << '-' << format_hex_no_prefix(G->Data4 >> 48, 4, /*Upper=*/true) << '-'
-      << format_hex_no_prefix(G->Data4 & ((1ULL << 48) - 1), 12, /*Upper=*/true)
-      << '}';
+  Stream << '{' << format_hex_no_prefix(G->Data1, 8, /*Upper=*/true) << '-'
+         << format_hex_no_prefix(G->Data2, 4, /*Upper=*/true) << '-'
+         << format_hex_no_prefix(G->Data3, 4, /*Upper=*/true) << '-'
+         << format_hex_no_prefix(G->Data4 >> 48, 4, /*Upper=*/true) << '-'
+         << format_hex_no_prefix(G->Data4 & ((1ULL << 48) - 1), 12,
+                                 /*Upper=*/true)
+         << '}';
 }
 
 raw_ostream &llvm::codeview::operator<<(raw_ostream &OS, const GUID &Guid) {
diff --git a/llvm/lib/DebugInfo/CodeView/LazyRandomTypeCollection.cpp b/llvm/lib/DebugInfo/CodeView/LazyRandomTypeCollection.cpp
index 460f95d96a29e59..f0ecdb024dc13c1 100644
--- a/llvm/lib/DebugInfo/CodeView/LazyRandomTypeCollection.cpp
+++ b/llvm/lib/DebugInfo/CodeView/LazyRandomTypeCollection.cpp
@@ -44,8 +44,7 @@ LazyRandomTypeCollection::LazyRandomTypeCollection(
 
 LazyRandomTypeCollection::LazyRandomTypeCollection(ArrayRef<uint8_t> Data,
                                                    uint32_t RecordCountHint)
-    : LazyRandomTypeCollection(RecordCountHint) {
-}
+    : LazyRandomTypeCollection(RecordCountHint) {}
 
 LazyRandomTypeCollection::LazyRandomTypeCollection(StringRef Data,
                                                    uint32_t RecordCountHint)
diff --git a/llvm/lib/DebugInfo/CodeView/RecordName.cpp b/llvm/lib/DebugInfo/CodeView/RecordName.cpp
index 5fbbc4a5d497741..64567fa3401f9de 100644
--- a/llvm/lib/DebugInfo/CodeView/RecordName.cpp
+++ b/llvm/lib/DebugInfo/CodeView/RecordName.cpp
@@ -241,8 +241,7 @@ Error TypeNameComputer::visitKnownRecord(CVType &CVR, LabelRecord &R) {
   return Error::success();
 }
 
-Error TypeNameComputer::visitKnownRecord(CVType &CVR,
-                                         PrecompRecord &Precomp) {
+Error TypeNameComputer::visitKnownRecord(CVType &CVR, PrecompRecord &Precomp) {
   return Error::success();
 }
 
diff --git a/llvm/lib/DebugInfo/CodeView/SymbolDumper.cpp b/llvm/lib/DebugInfo/CodeView/SymbolDumper.cpp
index c86fb244f6887b9..c9e5ca1457d4fb6 100644
--- a/llvm/lib/DebugInfo/CodeView/SymbolDumper.cpp
+++ b/llvm/lib/DebugInfo/CodeView/SymbolDumper.cpp
@@ -61,7 +61,7 @@ class CVSymbolDumperImpl : public SymbolVisitorCallbacks {
   bool PrintRecordBytes;
   bool InFunctionScope;
 };
-}
+} // namespace
 
 static StringRef getSymbolKindName(SymbolKind Kind) {
   switch (Kind) {
@@ -258,7 +258,8 @@ Error CVSymbolDumperImpl::visitKnownRecord(CVSymbol &CVR,
 
 Error CVSymbolDumperImpl::visitKnownRecord(CVSymbol &CVR,
                                            Compile3Sym &Compile3) {
-  W.printEnum("Language", uint8_t(Compile3.getLanguage()), getSourceLanguageNames());
+  W.printEnum("Language", uint8_t(Compile3.getLanguage()),
+              getSourceLanguageNames());
   W.printFlags("Flags", uint32_t(Compile3.getFlags()),
                getCompileSym3FlagNames());
   W.printEnum("Machine", unsigned(Compile3.Machine), getCPUTypeNames());
diff --git a/llvm/lib/DebugInfo/CodeView/SymbolRecordMapping.cpp b/llvm/lib/DebugInfo/CodeView/SymbolRecordMapping.cpp
index b5e366b965a955b..cc2ef274b74c1b4 100644
--- a/llvm/lib/DebugInfo/CodeView/SymbolRecordMapping.cpp
+++ b/llvm/lib/DebugInfo/CodeView/SymbolRecordMapping.cpp
@@ -23,7 +23,7 @@ struct MapGap {
     return Error::success();
   }
 };
-}
+} // namespace
 
 static Error mapLocalVariableAddrRange(CodeViewRecordIO &IO,
                                        LocalVariableAddrRange &Range) {
@@ -512,18 +512,26 @@ RegisterId codeview::decodeFramePtrReg(EncodedFramePtrReg EncodedReg,
   case CPUType::PentiumPro:
   case CPUType::Pentium3:
     switch (EncodedReg) {
-    case EncodedFramePtrReg::None:     return RegisterId::NONE;
-    case EncodedFramePtrReg::StackPtr: return RegisterId::VFRAME;
-    case EncodedFramePtrReg::FramePtr: return RegisterId::EBP;
-    case EncodedFramePtrReg::BasePtr:  return RegisterId::EBX;
+    case EncodedFramePtrReg::None:
+      return RegisterId::NONE;
+    case EncodedFramePtrReg::StackPtr:
+      return RegisterId::VFRAME;
+    case EncodedFramePtrReg::FramePtr:
+      return RegisterId::EBP;
+    case EncodedFramePtrReg::BasePtr:
+      return RegisterId::EBX;
     }
     llvm_unreachable("bad encoding");
   case CPUType::X64:
     switch (EncodedReg) {
-    case EncodedFramePtrReg::None:     return RegisterId::NONE;
-    case EncodedFramePtrReg::StackPtr: return RegisterId::RSP;
-    case EncodedFramePtrReg::FramePtr: return RegisterId::RBP;
-    case EncodedFramePtrReg::BasePtr:  return RegisterId::R13;
+    case EncodedFramePtrReg::None:
+      return RegisterId::NONE;
+    case EncodedFramePtrReg::StackPtr:
+      return RegisterId::RSP;
+    case EncodedFramePtrReg::FramePtr:
+      return RegisterId::RBP;
+    case EncodedFramePtrReg::BasePtr:
+      return RegisterId::R13;
     }
     llvm_unreachable("bad encoding");
   }
diff --git a/llvm/lib/DebugInfo/CodeView/TypeDumpVisitor.cpp b/llvm/lib/DebugInfo/CodeView/TypeDumpVisitor.cpp
index df7e42df1afcd21..4be53edb37da24a 100644
--- a/llvm/lib/DebugInfo/CodeView/TypeDumpVisitor.cpp
+++ b/llvm/lib/DebugInfo/CodeView/TypeDumpVisitor.cpp
@@ -45,8 +45,10 @@ static const EnumEntry<uint16_t> ClassOptionNames[] = {
 };
 
 static const EnumEntry<uint8_t> MemberAccessNames[] = {
-    ENUM_ENTRY(MemberAccess, None), ENUM_ENTRY(MemberAccess, Private),
-    ENUM_ENTRY(MemberAccess, Protected), ENUM_ENTRY(MemberAccess, Public),
+    ENUM_ENTRY(MemberAccess, None),
+    ENUM_ENTRY(MemberAccess, Private),
+    ENUM_ENTRY(MemberAccess, Protected),
+    ENUM_ENTRY(MemberAccess, Public),
 };
 
 static const EnumEntry<uint16_t> MethodOptionNames[] = {
@@ -104,7 +106,8 @@ static const EnumEntry<uint16_t> PtrMemberRepNames[] = {
 };
 
 static const EnumEntry<uint16_t> TypeModifierNames[] = {
-    ENUM_ENTRY(ModifierOptions, Const), ENUM_ENTRY(ModifierOptions, Volatile),
+    ENUM_ENTRY(ModifierOptions, Const),
+    ENUM_ENTRY(ModifierOptions, Volatile),
     ENUM_ENTRY(ModifierOptions, Unaligned),
 };
 
@@ -142,7 +145,8 @@ static const EnumEntry<uint8_t> FunctionOptionEnum[] = {
 };
 
 static const EnumEntry<uint16_t> LabelTypeEnum[] = {
-    ENUM_ENTRY(LabelType, Near), ENUM_ENTRY(LabelType, Far),
+    ENUM_ENTRY(LabelType, Near),
+    ENUM_ENTRY(LabelType, Far),
 };
 
 #undef ENUM_ENTRY
@@ -553,8 +557,7 @@ Error TypeDumpVisitor::visitKnownRecord(CVType &CVR, LabelRecord &LR) {
   return Error::success();
 }
 
-Error TypeDumpVisitor::visitKnownRecord(CVType &CVR,
-                                        PrecompRecord &Precomp) {
+Error TypeDumpVisitor::visitKnownRecord(CVType &CVR, PrecompRecord &Precomp) {
   W->printHex("StartIndex", Precomp.getStartTypeIndex());
   W->printHex("Count", Precomp.getTypesCount());
   W->printHex("Signature", Precomp.getSignature());
diff --git a/llvm/lib/DebugInfo/CodeView/TypeIndexDiscovery.cpp b/llvm/lib/DebugInfo/CodeView/TypeIndexDiscovery.cpp
index e903a37a8c8e081..0dbc4648f31da38 100644
--- a/llvm/lib/DebugInfo/CodeView/TypeIndexDiscovery.cpp
+++ b/llvm/lib/DebugInfo/CodeView/TypeIndexDiscovery.cpp
@@ -7,8 +7,8 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/DebugInfo/CodeView/TypeIndexDiscovery.h"
-#include "llvm/DebugInfo/CodeView/TypeRecord.h"
 #include "llvm/ADT/ArrayRef.h"
+#include "llvm/DebugInfo/CodeView/TypeRecord.h"
 #include "llvm/Support/Endian.h"
 
 using namespace llvm;
diff --git a/llvm/lib/DebugInfo/CodeView/TypeRecordMapping.cpp b/llvm/lib/DebugInfo/CodeView/TypeRecordMapping.cpp
index 0bc65f8d0359a36..ca2433898d58831 100644
--- a/llvm/lib/DebugInfo/CodeView/TypeRecordMapping.cpp
+++ b/llvm/lib/DebugInfo/CodeView/TypeRecordMapping.cpp
@@ -738,8 +738,7 @@ Error TypeRecordMapping::visitKnownMember(CVMemberRecord &CVR,
   return Error::success();
 }
 
-Error TypeRecordMapping::visitKnownRecord(CVType &CVR,
-                                          PrecompRecord &Precomp) {
+Error TypeRecordMapping::visitKnownRecord(CVType &CVR, PrecompRecord &Precomp) {
   error(IO.mapInteger(Precomp.StartTypeIndex, "StartIndex"));
   error(IO.mapInteger(Precomp.TypesCount, "Count"));
   error(IO.mapInteger(Precomp.Signature, "Signature"));
diff --git a/llvm/lib/DebugInfo/CodeView/TypeStreamMerger.cpp b/llvm/lib/DebugInfo/CodeView/TypeStreamMerger.cpp
index b9aaa3146c9cc4a..045bc512edbdb39 100644
--- a/llvm/lib/DebugInfo/CodeView/TypeStreamMerger.cpp
+++ b/llvm/lib/DebugInfo/CodeView/TypeStreamMerger.cpp
@@ -483,8 +483,8 @@ Expected<bool> TypeStreamMerger::shouldRemapType(const CVType &Type) {
   // reasons, to avoid re-parsing the Types stream.
   if (Type.kind() == LF_ENDPRECOMP) {
     EndPrecompRecord EP;
-    if (auto EC = TypeDeserializer::deserializeAs(const_cast<CVType &>(Type),
-                                                  EP))
+    if (auto EC =
+            TypeDeserializer::deserializeAs(const_cast<CVType &>(Type), EP))
       return joinErrors(std::move(EC), errorCorruptRecord());
     // Only one record of this kind can appear in a OBJ.
     if (PCHInfo)
diff --git a/llvm/lib/DebugInfo/DWARF/CMakeLists.txt b/llvm/lib/DebugInfo/DWARF/CMakeLists.txt
index 0ca08b092b26c9a..f10c3ab686cd043 100644
--- a/llvm/lib/DebugInfo/DWARF/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/DWARF/CMakeLists.txt
@@ -1,41 +1,41 @@
 add_llvm_component_library(LLVMDebugInfoDWARF
-  DWARFAbbreviationDeclaration.cpp
-  DWARFAddressRange.cpp
-  DWARFAcceleratorTable.cpp
-  DWARFCompileUnit.cpp
-  DWARFContext.cpp
-  DWARFDataExtractor.cpp
-  DWARFDebugAbbrev.cpp
-  DWARFDebugAddr.cpp
-  DWARFDebugArangeSet.cpp
-  DWARFDebugAranges.cpp
-  DWARFDebugFrame.cpp
-  DWARFTypePrinter.cpp
-  DWARFDebugInfoEntry.cpp
-  DWARFDebugLine.cpp
-  DWARFDebugLoc.cpp
-  DWARFDebugMacro.cpp
-  DWARFDebugPubTable.cpp
-  DWARFDebugRangeList.cpp
-  DWARFDebugRnglists.cpp
-  DWARFDie.cpp
-  DWARFExpression.cpp
-  DWARFFormValue.cpp
-  DWARFGdbIndex.cpp
-  DWARFListTable.cpp
-  DWARFLocationExpression.cpp
-  DWARFTypeUnit.cpp
-  DWARFUnitIndex.cpp
-  DWARFUnit.cpp
-  DWARFVerifier.cpp
+  DWARFAbbreviationDeclaration.cpp
+  DWARFAcceleratorTable.cpp
+  DWARFAddressRange.cpp
+  DWARFCompileUnit.cpp
+  DWARFContext.cpp
+  DWARFDataExtractor.cpp
+  DWARFDebugAbbrev.cpp
+  DWARFDebugAddr.cpp
+  DWARFDebugArangeSet.cpp
+  DWARFDebugAranges.cpp
+  DWARFDebugFrame.cpp
+  DWARFDebugInfoEntry.cpp
+  DWARFDebugLine.cpp
+  DWARFDebugLoc.cpp
+  DWARFDebugMacro.cpp
+  DWARFDebugPubTable.cpp
+  DWARFDebugRangeList.cpp
+  DWARFDebugRnglists.cpp
+  DWARFDie.cpp
+  DWARFExpression.cpp
+  DWARFFormValue.cpp
+  DWARFGdbIndex.cpp
+  DWARFListTable.cpp
+  DWARFLocationExpression.cpp
+  DWARFTypePrinter.cpp
+  DWARFTypeUnit.cpp
+  DWARFUnit.cpp
+  DWARFUnitIndex.cpp
+  DWARFVerifier.cpp
 
   ADDITIONAL_HEADER_DIRS
   ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/DWARF
   ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo
 
   LINK_COMPONENTS
   BinaryFormat
   Object
   Support
   TargetParser
   )
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFAbbreviationDeclaration.cpp b/llvm/lib/DebugInfo/DWARF/DWARFAbbreviationDeclaration.cpp
index ecdbd004efadb49..19a4fab845b752f 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFAbbreviationDeclaration.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFAbbreviationDeclaration.cpp
@@ -30,9 +30,7 @@ void DWARFAbbreviationDeclaration::clear() {
   FixedAttributeSize.reset();
 }
 
-DWARFAbbreviationDeclaration::DWARFAbbreviationDeclaration() {
-  clear();
-}
+DWARFAbbreviationDeclaration::DWARFAbbreviationDeclaration() { clear(); }
 
 llvm::Expected<DWARFAbbreviationDeclaration::ExtractState>
 DWARFAbbreviationDeclaration::extract(DataExtractor Data, uint64_t *OffsetPtr) {
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFAcceleratorTable.cpp b/llvm/lib/DebugInfo/DWARF/DWARFAcceleratorTable.cpp
index 7d8289ed420abb9..17fc660837069e2 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFAcceleratorTable.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFAcceleratorTable.cpp
@@ -248,8 +248,8 @@ LLVM_DUMP_METHOD void AppleAcceleratorTable::dump(raw_ostream &OS) const {
     }
 
     for (unsigned HashIdx = Index; HashIdx < Hdr.HashCount; ++HashIdx) {
-      uint64_t HashOffset = HashesBase + HashIdx*4;
-      uint64_t OffsetsOffset = OffsetsBase + HashIdx*4;
+      uint64_t HashOffset = HashesBase + HashIdx * 4;
+      uint64_t OffsetsOffset = OffsetsBase + HashIdx * 4;
       uint32_t Hash = AccelSection.getU32(&HashOffset);
 
       if (Hash % Hdr.BucketCount != Bucket)
@@ -443,7 +443,7 @@ void DWARFDebugNames::Header::dump(ScopedPrinter &W) const {
 }
 
 Error DWARFDebugNames::Header::extract(const DWARFDataExtractor &AS,
-                                             uint64_t *Offset) {
+                                       uint64_t *Offset) {
   auto HeaderError = [Offset = *Offset](Error E) {
     return createStringError(errc::illegal_byte_sequence,
                              "parsing .debug_names header at 0x%" PRIx64 ": %s",
@@ -738,8 +738,9 @@ bool DWARFDebugNames::NameIndex::dumpEntry(ScopedPrinter &W,
   uint64_t EntryId = *Offset;
   auto EntryOr = getEntry(Offset);
   if (!EntryOr) {
-    handleAllErrors(EntryOr.takeError(), [](const SentinelError &) {},
-                    [&W](const ErrorInfoBase &EI) { EI.log(W.startLine()); });
+    handleAllErrors(
+        EntryOr.takeError(), [](const SentinelError &) {},
+        [&W](const ErrorInfoBase &EI) { EI.log(W.startLine()); });
     return false;
   }
 
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFContext.cpp b/llvm/lib/DebugInfo/DWARF/DWARFContext.cpp
index a45ed0e56553d4e..c8ece1ef70ffa74 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFContext.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFContext.cpp
@@ -47,8 +47,8 @@
 #include "llvm/Support/DataExtractor.h"
 #include "llvm/Support/Error.h"
 #include "llvm/Support/Format.h"
-#include "llvm/Support/LEB128.h"
 #include "llvm/Support/FormatVariadic.h"
+#include "llvm/Support/LEB128.h"
 #include "llvm/Support/MemoryBuffer.h"
 #include "llvm/Support/Path.h"
 #include "llvm/Support/raw_ostream.h"
@@ -70,7 +70,6 @@ using DWARFLineTable = DWARFDebugLine::LineTable;
 using FileLineInfoKind = DILineInfoSpecifier::FileLineInfoKind;
 using FunctionNameKind = DILineInfoSpecifier::FunctionNameKind;
 
-
 void fixupIndexV4(DWARFContext &C, DWARFUnitIndex &Index) {
   using EntryType = DWARFUnitIndex::Entry::SectionContribution;
   using EntryMap = DenseMap<uint32_t, EntryType>;
@@ -199,7 +198,6 @@ static T &getAccelTable(std::unique_ptr<T> &Cache, const DWARFObject &Obj,
   return *Cache;
 }
 
-
 std::unique_ptr<DWARFDebugMacro>
 DWARFContext::DWARFContextState::parseMacroOrMacinfo(MacroSecType SectionType) {
   auto Macro = std::make_unique<DWARFDebugMacro>();
@@ -278,9 +276,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
   std::string DWPName;
 
 public:
-  ThreadUnsafeDWARFContextState(DWARFContext &DC, std::string &DWP) :
-      DWARFContext::DWARFContextState(DC),
-      DWPName(std::move(DWP)) {}
+  ThreadUnsafeDWARFContextState(DWARFContext &DC, std::string &DWP)
+      : DWARFContext::DWARFContextState(DC), DWPName(std::move(DWP)) {}
 
   DWARFUnitVector &getNormalUnits() override {
     if (NormalUnits.empty()) {
@@ -324,8 +321,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     if (CUIndex)
       return *CUIndex;
 
-    DataExtractor Data(D.getDWARFObj().getCUIndexSection(),
-                       D.isLittleEndian(), 0);
+    DataExtractor Data(D.getDWARFObj().getCUIndexSection(), D.isLittleEndian(),
+                       0);
     CUIndex = std::make_unique<DWARFUnitIndex>(DW_SECT_INFO);
     if (CUIndex->parse(Data))
       fixupIndex(D, *CUIndex);
@@ -335,8 +332,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     if (TUIndex)
       return *TUIndex;
 
-    DataExtractor Data(D.getDWARFObj().getTUIndexSection(),
-                       D.isLittleEndian(), 0);
+    DataExtractor Data(D.getDWARFObj().getTUIndexSection(), D.isLittleEndian(),
+                       0);
     TUIndex = std::make_unique<DWARFUnitIndex>(DW_SECT_EXT_TYPES);
     bool isParseSuccessful = TUIndex->parse(Data);
     // If we are parsing TU-index and for .debug_types section we don't need
@@ -360,8 +357,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     if (Abbrev)
       return Abbrev.get();
 
-    DataExtractor Data(D.getDWARFObj().getAbbrevSection(),
-                       D.isLittleEndian(), 0);
+    DataExtractor Data(D.getDWARFObj().getAbbrevSection(), D.isLittleEndian(),
+                       0);
     Abbrev = std::make_unique<DWARFDebugAbbrev>(Data);
     return Abbrev.get();
   }
@@ -390,8 +387,9 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     return Aranges.get();
   }
 
-  Expected<const DWARFDebugLine::LineTable *>
-  getLineTableForUnit(DWARFUnit *U, function_ref<void(Error)> RecoverableErrorHandler) override {
+  Expected<const DWARFDebugLine::LineTable *> getLineTableForUnit(
+      DWARFUnit *U,
+      function_ref<void(Error)> RecoverableErrorHandler) override {
     if (!Line)
       Line.reset(new DWARFDebugLine);
 
@@ -417,7 +415,6 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
                             U->isLittleEndian(), U->getAddressByteSize());
     return Line->getOrParseLineTable(Data, stmtOffset, U->getContext(), U,
                                      RecoverableErrorHandler);
-
   }
 
   void clearLineTableForUnit(DWARFUnit *U) override {
@@ -442,20 +439,19 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     const DWARFObject &DObj = D.getDWARFObj();
     const DWARFSection &DS = DObj.getFrameSection();
 
-    // There's a "bug" in the DWARFv3 standard with respect to the target address
-    // size within debug frame sections. While DWARF is supposed to be independent
-    // of its container, FDEs have fields with size being "target address size",
-    // which isn't specified in DWARF in general. It's only specified for CUs, but
-    // .eh_frame can appear without a .debug_info section. Follow the example of
-    // other tools (libdwarf) and extract this from the container (ObjectFile
-    // provides this information). This problem is fixed in DWARFv4
-    // See this dwarf-discuss discussion for more details:
+    // There's a "bug" in the DWARFv3 standard with respect to the target
+    // address size within debug frame sections. While DWARF is supposed to be
+    // independent of its container, FDEs have fields with size being "target
+    // address size", which isn't specified in DWARF in general. It's only
+    // specified for CUs, but .eh_frame can appear without a .debug_info
+    // section. Follow the example of other tools (libdwarf) and extract this
+    // from the container (ObjectFile provides this information). This problem
+    // is fixed in DWARFv4. See this dwarf-discuss discussion for more details:
     // http://lists.dwarfstd.org/htdig.cgi/dwarf-discuss-dwarfstd.org/2011-December/001173.html
     DWARFDataExtractor Data(DObj, DS, D.isLittleEndian(),
                             DObj.getAddressSize());
-    auto DF =
-        std::make_unique<DWARFDebugFrame>(D.getArch(), /*IsEH=*/false,
-                                          DS.Address);
+    auto DF = std::make_unique<DWARFDebugFrame>(D.getArch(), /*IsEH=*/false,
+                                                DS.Address);
     if (Error E = DF->parse(Data))
       return std::move(E);
 
@@ -471,9 +467,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     const DWARFSection &DS = DObj.getEHFrameSection();
     DWARFDataExtractor Data(DObj, DS, D.isLittleEndian(),
                             DObj.getAddressSize());
-    auto DF =
-        std::make_unique<DWARFDebugFrame>(D.getArch(), /*IsEH=*/true,
-                                          DS.Address);
+    auto DF = std::make_unique<DWARFDebugFrame>(D.getArch(), /*IsEH=*/true,
+                                                DS.Address);
     if (Error E = DF->parse(Data))
       return std::move(E);
     EHFrame.swap(DF);
@@ -509,20 +504,17 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     const DWARFObject &DObj = D.getDWARFObj();
     return getAccelTable(AppleNames, DObj, DObj.getAppleNamesSection(),
                          DObj.getStrSection(), D.isLittleEndian());
-
   }
   const AppleAcceleratorTable &getAppleTypes() override {
     const DWARFObject &DObj = D.getDWARFObj();
     return getAccelTable(AppleTypes, DObj, DObj.getAppleTypesSection(),
                          DObj.getStrSection(), D.isLittleEndian());
-
   }
   const AppleAcceleratorTable &getAppleNamespaces() override {
     const DWARFObject &DObj = D.getDWARFObj();
     return getAccelTable(AppleNamespaces, DObj,
-                         DObj.getAppleNamespacesSection(),
-                         DObj.getStrSection(), D.isLittleEndian());
-
+                         DObj.getAppleNamespacesSection(), DObj.getStrSection(),
+                         D.isLittleEndian());
   }
   const AppleAcceleratorTable &getAppleObjC() override {
     const DWARFObject &DObj = D.getDWARFObj();
@@ -530,8 +522,7 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
                          DObj.getStrSection(), D.isLittleEndian());
   }
 
-  std::shared_ptr<DWARFContext>
-  getDWOContext(StringRef AbsolutePath) override {
+  std::shared_ptr<DWARFContext> getDWOContext(StringRef AbsolutePath) override {
     if (auto S = DWP.lock()) {
       DWARFContext *Ctxt = S->Context.get();
       return std::shared_ptr<DWARFContext>(std::move(S), Ctxt);
@@ -575,8 +566,8 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
 
     auto S = std::make_shared<DWOFile>();
     S->File = std::move(Obj.get());
-    S->Context = DWARFContext::create(*S->File.getBinary(),
-                                      DWARFContext::ProcessDebugRelocations::Ignore);
+    S->Context = DWARFContext::create(
+        *S->File.getBinary(), DWARFContext::ProcessDebugRelocations::Ignore);
     *Entry = S;
     auto *Ctxt = S->Context.get();
     return std::shared_ptr<DWARFContext>(std::move(S), Ctxt);
@@ -585,7 +576,7 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
   const DenseMap<uint64_t, DWARFTypeUnit *> &getNormalTypeUnitMap() {
     if (!NormalTypeUnits) {
       NormalTypeUnits.emplace();
-      for (const auto &U :D.normal_units()) {
+      for (const auto &U : D.normal_units()) {
         if (DWARFTypeUnit *TU = dyn_cast<DWARFTypeUnit>(U.get()))
           (*NormalTypeUnits)[TU->getTypeHash()] = TU;
       }
@@ -596,7 +587,7 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
   const DenseMap<uint64_t, DWARFTypeUnit *> &getDWOTypeUnitMap() {
     if (!DWOTypeUnits) {
       DWOTypeUnits.emplace();
-      for (const auto &U :D.dwo_units()) {
+      for (const auto &U : D.dwo_units()) {
         if (DWARFTypeUnit *TU = dyn_cast<DWARFTypeUnit>(U.get()))
           (*DWOTypeUnits)[TU->getTypeHash()] = TU;
       }
@@ -611,16 +602,14 @@ class ThreadUnsafeDWARFContextState : public DWARFContext::DWARFContextState {
     else
       return getNormalTypeUnitMap();
   }
-
-
 };
 
 class ThreadSafeState : public ThreadUnsafeDWARFContextState {
   std::recursive_mutex Mutex;
 
 public:
-  ThreadSafeState(DWARFContext &DC, std::string &DWP) :
-      ThreadUnsafeDWARFContextState(DC, DWP) {}
+  ThreadSafeState(DWARFContext &DC, std::string &DWP)
+      : ThreadUnsafeDWARFContextState(DC, DWP) {}
 
   DWARFUnitVector &getNormalUnits() override {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
@@ -659,10 +648,12 @@ class ThreadSafeState : public ThreadUnsafeDWARFContextState {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
     return ThreadUnsafeDWARFContextState::getDebugAranges();
   }
-  Expected<const DWARFDebugLine::LineTable *>
-  getLineTableForUnit(DWARFUnit *U, function_ref<void(Error)> RecoverableErrorHandler) override {
+  Expected<const DWARFDebugLine::LineTable *> getLineTableForUnit(
+      DWARFUnit *U,
+      function_ref<void(Error)> RecoverableErrorHandler) override {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
-    return ThreadUnsafeDWARFContextState::getLineTableForUnit(U, RecoverableErrorHandler);
+    return ThreadUnsafeDWARFContextState::getLineTableForUnit(
+        U, RecoverableErrorHandler);
   }
   void clearLineTableForUnit(DWARFUnit *U) override {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
@@ -712,8 +703,7 @@ class ThreadSafeState : public ThreadUnsafeDWARFContextState {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
     return ThreadUnsafeDWARFContextState::getAppleObjC();
   }
-  std::shared_ptr<DWARFContext>
-  getDWOContext(StringRef AbsolutePath) override {
+  std::shared_ptr<DWARFContext> getDWOContext(StringRef AbsolutePath) override {
     std::unique_lock<std::recursive_mutex> LockGuard(Mutex);
     return ThreadUnsafeDWARFContextState::getDWOContext(AbsolutePath);
   }
@@ -724,21 +714,18 @@ class ThreadSafeState : public ThreadUnsafeDWARFContextState {
   }
 };
 
-
-
 DWARFContext::DWARFContext(std::unique_ptr<const DWARFObject> DObj,
                            std::string DWPName,
                            std::function<void(Error)> RecoverableErrorHandler,
                            std::function<void(Error)> WarningHandler,
                            bool ThreadSafe)
-    : DIContext(CK_DWARF),
-      RecoverableErrorHandler(RecoverableErrorHandler),
+    : DIContext(CK_DWARF), RecoverableErrorHandler(RecoverableErrorHandler),
       WarningHandler(WarningHandler), DObj(std::move(DObj)) {
-        if (ThreadSafe)
-          State.reset(new ThreadUnsafeDWARFContextState(*this, DWPName));
-        else
-          State.reset(new ThreadSafeState(*this, DWPName));
-      }
+  if (ThreadSafe)
+    State.reset(new ThreadSafeState(*this, DWPName));
+  else
+    State.reset(new ThreadUnsafeDWARFContextState(*this, DWPName));
+}
 
 DWARFContext::~DWARFContext() = default;
 
@@ -755,7 +742,7 @@ static void dumpUUID(raw_ostream &OS, const ObjectFile &Obj) {
         return;
       }
       OS << "UUID: ";
-      memcpy(&UUID, LC.Ptr+sizeof(LC.C), sizeof(UUID));
+      memcpy(&UUID, LC.Ptr + sizeof(LC.C), sizeof(UUID));
       OS.write_uuid(UUID);
       Triple T = MachO->getArchTriple();
       OS << " (" << T.getArchName() << ')';
@@ -928,7 +915,6 @@ static void dumpRnglistsSection(
   }
 }
 
-
 static void dumpLoclistsSection(raw_ostream &OS, DIDumpOptions DumpOpts,
                                 DWARFDataExtractor Data, const DWARFObject &Obj,
                                 std::optional<uint64_t> DumpOffset) {
@@ -1210,8 +1196,8 @@ void DWARFContext::dump(
 
   if (shouldDump(Explicit, ".debug_addr", DIDT_ID_DebugAddr,
                  DObj->getAddrSection().Data)) {
-    DWARFDataExtractor AddrData(*DObj, DObj->getAddrSection(),
-                                   isLittleEndian(), 0);
+    DWARFDataExtractor AddrData(*DObj, DObj->getAddrSection(), isLittleEndian(),
+                                0);
     dumpAddrSection(OS, AddrData, DumpOpts, getMaxVersion(), getCUAddrSize());
   }
 
@@ -1324,8 +1310,7 @@ DWARFTypeUnit *DWARFContext::getTypeUnitForHash(uint16_t Version, uint64_t Hash,
   DWARFUnitVector &DWOUnits = State->getDWOUnits();
   if (const auto &TUI = getTUIndex()) {
     if (const auto *R = TUI.getFromHash(Hash))
-      return dyn_cast_or_null<DWARFTypeUnit>(
-          DWOUnits.getUnitForIndexEntry(*R));
+      return dyn_cast_or_null<DWARFTypeUnit>(DWOUnits.getUnitForIndexEntry(*R));
     return nullptr;
   }
   const DenseMap<uint64_t, DWARFTypeUnit *> &Map = State->getTypeUnitMap(IsDWO);
@@ -1390,17 +1375,11 @@ bool DWARFContext::verify(raw_ostream &OS, DIDumpOptions DumpOpts) {
   return Success;
 }
 
-const DWARFUnitIndex &DWARFContext::getCUIndex() {
-  return State->getCUIndex();
-}
+const DWARFUnitIndex &DWARFContext::getCUIndex() { return State->getCUIndex(); }
 
-const DWARFUnitIndex &DWARFContext::getTUIndex() {
-  return State->getTUIndex();
-}
+const DWARFUnitIndex &DWARFContext::getTUIndex() { return State->getTUIndex(); }
 
-DWARFGdbIndex &DWARFContext::getGdbIndex() {
-  return State->getGdbIndex();
-}
+DWARFGdbIndex &DWARFContext::getGdbIndex() { return State->getGdbIndex(); }
 
 const DWARFDebugAbbrev *DWARFContext::getDebugAbbrev() {
   return State->getDebugAbbrev();
@@ -1442,7 +1421,6 @@ const DWARFDebugMacro *DWARFContext::getDebugMacinfoDWO() {
   return State->getDebugMacinfoDWO();
 }
 
-
 const DWARFDebugNames &DWARFContext::getDebugNames() {
   return State->getDebugNames();
 }
@@ -2099,7 +2077,7 @@ class DWARFObjInMemory final : public DWARFObject {
         consumeError(NameOrErr.takeError());
 
       ++SectionAmountMap[Name];
-      SectionNames.push_back({ Name, true });
+      SectionNames.push_back({Name, true});
 
       // Skip BSS and Virtual sections, they aren't interesting.
       if (Section.isBSS() || Section.isVirtual())
@@ -2334,17 +2312,19 @@ class DWARFObjInMemory final : public DWARFObject {
 
   StringRef getAbbrevSection() const override { return AbbrevSection; }
   const DWARFSection &getLocSection() const override { return LocSection; }
-  const DWARFSection &getLoclistsSection() const override { return LoclistsSection; }
-  StringRef getArangesSection() const override { return ArangesSection; }
-  const DWARFSection &getFrameSection() const override {
-    return FrameSection;
+  const DWARFSection &getLoclistsSection() const override {
+    return LoclistsSection;
   }
+  StringRef getArangesSection() const override { return ArangesSection; }
+  const DWARFSection &getFrameSection() const override { return FrameSection; }
   const DWARFSection &getEHFrameSection() const override {
     return EHFrameSection;
   }
   const DWARFSection &getLineSection() const override { return LineSection; }
   StringRef getStrSection() const override { return StrSection; }
-  const DWARFSection &getRangesSection() const override { return RangesSection; }
+  const DWARFSection &getRangesSection() const override {
+    return RangesSection;
+  }
   const DWARFSection &getRnglistsSection() const override {
     return RnglistsSection;
   }
@@ -2352,8 +2332,12 @@ class DWARFObjInMemory final : public DWARFObject {
   StringRef getMacroDWOSection() const override { return MacroDWOSection; }
   StringRef getMacinfoSection() const override { return MacinfoSection; }
   StringRef getMacinfoDWOSection() const override { return MacinfoDWOSection; }
-  const DWARFSection &getPubnamesSection() const override { return PubnamesSection; }
-  const DWARFSection &getPubtypesSection() const override { return PubtypesSection; }
+  const DWARFSection &getPubnamesSection() const override {
+    return PubnamesSection;
+  }
+  const DWARFSection &getPubtypesSection() const override {
+    return PubtypesSection;
+  }
   const DWARFSection &getGnuPubnamesSection() const override {
     return GnuPubnamesSection;
   }
@@ -2372,9 +2356,7 @@ class DWARFObjInMemory final : public DWARFObject {
   const DWARFSection &getAppleObjCSection() const override {
     return AppleObjCSection;
   }
-  const DWARFSection &getNamesSection() const override {
-    return NamesSection;
-  }
+  const DWARFSection &getNamesSection() const override { return NamesSection; }
 
   StringRef getFileName() const override { return FileName; }
   uint8_t getAddressSize() const override { return AddressSize; }
@@ -2391,28 +2373,22 @@ class DWARFObjInMemory final : public DWARFObject {
 };
 } // namespace
 
-std::unique_ptr<DWARFContext>
-DWARFContext::create(const object::ObjectFile &Obj,
-                     ProcessDebugRelocations RelocAction,
-                     const LoadedObjectInfo *L, std::string DWPName,
-                     std::function<void(Error)> RecoverableErrorHandler,
-                     std::function<void(Error)> WarningHandler,
-                     bool ThreadSafe) {
+std::unique_ptr<DWARFContext> DWARFContext::create(
+    const object::ObjectFile &Obj, ProcessDebugRelocations RelocAction,
+    const LoadedObjectInfo *L, std::string DWPName,
+    std::function<void(Error)> RecoverableErrorHandler,
+    std::function<void(Error)> WarningHandler, bool ThreadSafe) {
   auto DObj = std::make_unique<DWARFObjInMemory>(
       Obj, L, RecoverableErrorHandler, WarningHandler, RelocAction);
-  return std::make_unique<DWARFContext>(std::move(DObj),
-                                        std::move(DWPName),
-                                        RecoverableErrorHandler,
-                                        WarningHandler,
+  return std::make_unique<DWARFContext>(std::move(DObj), std::move(DWPName),
+                                        RecoverableErrorHandler, WarningHandler,
                                         ThreadSafe);
 }
 
-std::unique_ptr<DWARFContext>
-DWARFContext::create(const StringMap<std::unique_ptr<MemoryBuffer>> &Sections,
-                     uint8_t AddrSize, bool isLittleEndian,
-                     std::function<void(Error)> RecoverableErrorHandler,
-                     std::function<void(Error)> WarningHandler,
-                     bool ThreadSafe) {
+std::unique_ptr<DWARFContext> DWARFContext::create(
+    const StringMap<std::unique_ptr<MemoryBuffer>> &Sections, uint8_t AddrSize,
+    bool isLittleEndian, std::function<void(Error)> RecoverableErrorHandler,
+    std::function<void(Error)> WarningHandler, bool ThreadSafe) {
   auto DObj =
       std::make_unique<DWARFObjInMemory>(Sections, AddrSize, isLittleEndian);
   return std::make_unique<DWARFContext>(
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugAbbrev.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugAbbrev.cpp
index 3014e61f566a909..7180a3dd645ec1b 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugAbbrev.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugAbbrev.cpp
@@ -15,9 +15,7 @@
 
 using namespace llvm;
 
-DWARFAbbreviationDeclarationSet::DWARFAbbreviationDeclarationSet() {
-  clear();
-}
+DWARFAbbreviationDeclarationSet::DWARFAbbreviationDeclarationSet() { clear(); }
 
 void DWARFAbbreviationDeclarationSet::clear() {
   Offset = 0;
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugAddr.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugAddr.cpp
index 98eaf1a095d9f40..8a2348b8690c242 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugAddr.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugAddr.cpp
@@ -120,8 +120,7 @@ Error DWARFDebugAddrTable::extractPreStandard(const DWARFDataExtractor &Data,
 }
 
 Error DWARFDebugAddrTable::extract(const DWARFDataExtractor &Data,
-                                   uint64_t *OffsetPtr,
-                                   uint16_t CUVersion,
+                                   uint64_t *OffsetPtr, uint16_t CUVersion,
                                    uint8_t CUAddrSize,
                                    std::function<void(Error)> WarnCallback) {
   if (CUVersion > 0 && CUVersion < 5)
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugArangeSet.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugArangeSet.cpp
index c60c9d9d7227290..36926aff2ad40af 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugArangeSet.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugArangeSet.cpp
@@ -136,7 +136,8 @@ Error DWARFDebugArangeSet::extract(DWARFDataExtractor data,
   uint64_t end_offset = Offset + full_length;
   while (*offset_ptr < end_offset) {
     uint64_t EntryOffset = *offset_ptr;
-    arangeDescriptor.Address = data.getUnsigned(offset_ptr, HeaderData.AddrSize);
+    arangeDescriptor.Address =
+        data.getUnsigned(offset_ptr, HeaderData.AddrSize);
     arangeDescriptor.Length = data.getUnsigned(offset_ptr, HeaderData.AddrSize);
 
     // Each set of tuples is terminated by a 0 for the address and 0
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugAranges.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugAranges.cpp
index 49ee27db6d54200..a57fc4658c4315d 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugAranges.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugAranges.cpp
@@ -88,8 +88,8 @@ void DWARFDebugAranges::appendRange(uint64_t CUOffset, uint64_t LowPC,
 }
 
 void DWARFDebugAranges::construct() {
-  std::multiset<uint64_t> ValidCUs;  // Maintain the set of CUs describing
-                                     // a current address range.
+  std::multiset<uint64_t> ValidCUs; // Maintain the set of CUs describing
+                                    // a current address range.
   llvm::sort(Endpoints);
   uint64_t PrevAddress = -1ULL;
   for (const auto &E : Endpoints) {
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp
index aae1668c1639c4b..44f571782f54019 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugFrame.cpp
@@ -870,7 +870,7 @@ void CFIProgram::printOperand(raw_ostream &OS, DIDumpOptions DumpOpts,
     if (!OpcodeName.empty())
       OS << " " << OpcodeName;
     else
-      OS << format(" Opcode %x",  Opcode);
+      OS << format(" Opcode %x", Opcode);
     break;
   }
   case OT_None:
@@ -888,19 +888,19 @@ void CFIProgram::printOperand(raw_ostream &OS, DIDumpOptions DumpOpts,
     if (CodeAlignmentFactor)
       OS << format(" %" PRId64, Operand * CodeAlignmentFactor);
     else
-      OS << format(" %" PRId64 "*code_alignment_factor" , Operand);
+      OS << format(" %" PRId64 "*code_alignment_factor", Operand);
     break;
   case OT_SignedFactDataOffset:
     if (DataAlignmentFactor)
       OS << format(" %" PRId64, int64_t(Operand) * DataAlignmentFactor);
     else
-      OS << format(" %" PRId64 "*data_alignment_factor" , int64_t(Operand));
+      OS << format(" %" PRId64 "*data_alignment_factor", int64_t(Operand));
     break;
   case OT_UnsignedFactDataOffset:
     if (DataAlignmentFactor)
       OS << format(" %" PRId64, Operand * DataAlignmentFactor);
     else
-      OS << format(" %" PRId64 "*data_alignment_factor" , Operand);
+      OS << format(" %" PRId64 "*data_alignment_factor", Operand);
     break;
   case OT_Register:
     OS << ' ';
@@ -1017,8 +1017,8 @@ void FDE::dump(raw_ostream &OS, DIDumpOptions DumpOpts) const {
   OS << "\n";
 }
 
-DWARFDebugFrame::DWARFDebugFrame(Triple::ArchType Arch,
-    bool IsEH, uint64_t EHFrameAddress)
+DWARFDebugFrame::DWARFDebugFrame(Triple::ArchType Arch, bool IsEH,
+                                 uint64_t EHFrameAddress)
     : Arch(Arch), IsEH(IsEH), EHFrameAddress(EHFrameAddress) {}
 
 DWARFDebugFrame::~DWARFDebugFrame() = default;
@@ -1028,7 +1028,8 @@ static void LLVM_ATTRIBUTE_UNUSED dumpDataAux(DataExtractor Data,
   errs() << "DUMP: ";
   for (int i = 0; i < Length; ++i) {
     uint8_t c = Data.getU8(&Offset);
-    errs().write_hex(c); errs() << " ";
+    errs().write_hex(c);
+    errs() << " ";
   }
   errs() << "\n";
 }
@@ -1075,8 +1076,8 @@ Error DWARFDebugFrame::parse(DWARFDataExtractor Data) {
       uint8_t Version = Data.getU8(&Offset);
       const char *Augmentation = Data.getCStr(&Offset);
       StringRef AugmentationString(Augmentation ? Augmentation : "");
-      uint8_t AddressSize = Version < 4 ? Data.getAddressSize() :
-                                          Data.getU8(&Offset);
+      uint8_t AddressSize =
+          Version < 4 ? Data.getAddressSize() : Data.getU8(&Offset);
       Data.setAddressSize(AddressSize);
       uint8_t SegmentDescriptorSize = Version < 4 ? 0 : Data.getU8(&Offset);
       uint64_t CodeAlignmentFactor = Data.getULEB128(&Offset);
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
index 6f2afe5d50e9c81..0cb09cbc61601b4 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugLine.cpp
@@ -158,19 +158,18 @@ void DWARFDebugLine::Prologue::dump(raw_ostream &OS,
     uint32_t FileBase = getVersion() >= 5 ? 0 : 1;
     for (uint32_t I = 0; I != FileNames.size(); ++I) {
       const FileNameEntry &FileEntry = FileNames[I];
-      OS <<   format("file_names[%3u]:\n", I + FileBase);
-      OS <<          "           name: ";
+      OS << format("file_names[%3u]:\n", I + FileBase);
+      OS << "           name: ";
       FileEntry.Name.dump(OS, DumpOptions);
-      OS << '\n'
-         <<   format("      dir_index: %" PRIu64 "\n", FileEntry.DirIdx);
+      OS << '\n' << format("      dir_index: %" PRIu64 "\n", FileEntry.DirIdx);
       if (ContentTypes.HasMD5)
-        OS <<        "   md5_checksum: " << FileEntry.Checksum.digest() << '\n';
+        OS << "   md5_checksum: " << FileEntry.Checksum.digest() << '\n';
       if (ContentTypes.HasModTime)
         OS << format("       mod_time: 0x%8.8" PRIx64 "\n", FileEntry.ModTime);
       if (ContentTypes.HasLength)
         OS << format("         length: 0x%8.8" PRIx64 "\n", FileEntry.Length);
       if (ContentTypes.HasSource) {
-        OS <<        "         source: ";
+        OS << "         source: ";
         FileEntry.Source.dump(OS, DumpOptions);
         OS << '\n';
       }
@@ -583,9 +582,10 @@ Expected<const DWARFDebugLine::LineTable *> DWARFDebugLine::getOrParseLineTable(
     DWARFDataExtractor &DebugLineData, uint64_t Offset, const DWARFContext &Ctx,
     const DWARFUnit *U, function_ref<void(Error)> RecoverableErrorHandler) {
   if (!DebugLineData.isValidOffset(Offset))
-    return createStringError(errc::invalid_argument, "offset 0x%8.8" PRIx64
-                       " is not a valid debug line section offset",
-                       Offset);
+    return createStringError(errc::invalid_argument,
+                             "offset 0x%8.8" PRIx64
+                             " is not a valid debug line section offset",
+                             Offset);
 
   std::pair<LineTableIter, bool> Pos =
       LineTableMap.insert(LineTableMapTy::value_type(Offset, LineTable()));
@@ -945,7 +945,8 @@ Error DWARFDebugLine::LineTable::parse(
 
           if (Cursor && Verbose) {
             *OS << " (";
-            DWARFFormValue::dumpAddress(*OS, OpcodeAddressSize, State.Row.Address.Address);
+            DWARFFormValue::dumpAddress(*OS, OpcodeAddressSize,
+                                        State.Row.Address.Address);
             *OS << ')';
           }
         }
@@ -1138,8 +1139,7 @@ Error DWARFDebugLine::LineTable::parse(
         // DW_LNS_advance_pc. Such assemblers, however, can use
         // DW_LNS_fixed_advance_pc instead, sacrificing compression.
         {
-          uint16_t PCOffset =
-              TableData.getRelocatedValue(Cursor, 2);
+          uint16_t PCOffset = TableData.getRelocatedValue(Cursor, 2);
           if (Cursor) {
             State.Row.Address.Address += PCOffset;
             State.Row.OpIndex = 0;
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugLoc.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugLoc.cpp
index 00c2823cee0af99..2b769bf7b7d7691 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugLoc.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugLoc.cpp
@@ -387,7 +387,7 @@ void DWARFDebugLoclists::dumpRawEntry(const DWARFLocationEntry &Entry,
 void DWARFDebugLoclists::dumpRange(uint64_t StartOffset, uint64_t Size,
                                    raw_ostream &OS, const DWARFObject &Obj,
                                    DIDumpOptions DumpOpts) {
-  if (!Data.isValidOffsetForDataOfSize(StartOffset, Size))  {
+  if (!Data.isValidOffsetForDataOfSize(StartOffset, Size)) {
     OS << "Invalid dump range\n";
     return;
   }
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugPubTable.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugPubTable.cpp
index 5031acdb54efcc1..d6e105fc418a86f 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugPubTable.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugPubTable.cpp
@@ -7,9 +7,9 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/DebugInfo/DWARF/DWARFDebugPubTable.h"
-#include "llvm/DebugInfo/DWARF/DWARFDataExtractor.h"
 #include "llvm/ADT/StringRef.h"
 #include "llvm/BinaryFormat/Dwarf.h"
+#include "llvm/DebugInfo/DWARF/DWARFDataExtractor.h"
 #include "llvm/Support/DataExtractor.h"
 #include "llvm/Support/Errc.h"
 #include "llvm/Support/Format.h"
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDebugRangeList.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDebugRangeList.cpp
index db01719bed599c1..2171de6b5e44a37 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDebugRangeList.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDebugRangeList.cpp
@@ -33,7 +33,8 @@ Error DWARFDebugRangeList::extract(const DWARFDataExtractor &data,
   clear();
   if (!data.isValidOffset(*offset_ptr))
     return createStringError(errc::invalid_argument,
-                       "invalid range list offset 0x%" PRIx64, *offset_ptr);
+                             "invalid range list offset 0x%" PRIx64,
+                             *offset_ptr);
 
   AddressSize = data.getAddressSize();
   if (Error SizeErr = DWARFContext::checkAddressSizeSupported(
@@ -54,8 +55,8 @@ Error DWARFDebugRangeList::extract(const DWARFDataExtractor &data,
     if (*offset_ptr != prev_offset + 2 * AddressSize) {
       clear();
       return createStringError(errc::invalid_argument,
-                         "invalid range list entry at offset 0x%" PRIx64,
-                         prev_offset);
+                               "invalid range list entry at offset 0x%" PRIx64,
+                               prev_offset);
     }
     if (Entry.isEndOfListEntry())
       break;
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFDie.cpp b/llvm/lib/DebugInfo/DWARF/DWARFDie.cpp
index 7af7ed8be7b4aff..39337b2b85c4bde 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFDie.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFDie.cpp
@@ -702,7 +702,7 @@ DWARFDie::attribute_iterator &DWARFDie::attribute_iterator::operator++() {
 }
 
 bool DWARFAttribute::mayHaveLocationList(dwarf::Attribute Attr) {
-  switch(Attr) {
+  switch (Attr) {
   case DW_AT_location:
   case DW_AT_string_length:
   case DW_AT_return_addr:
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFExpression.cpp b/llvm/lib/DebugInfo/DWARF/DWARFExpression.cpp
index 87a4fc78ceb19c7..5810a43aa7d13f5 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFExpression.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFExpression.cpp
@@ -210,8 +210,8 @@ bool DWARFExpression::Operation::extract(DataExtractor Data,
         Operands[Operand] = Data.getULEB128(&Offset);
         break;
       case 3: // global as uint32
-         Operands[Operand] = Data.getU32(&Offset);
-         break;
+        Operands[Operand] = Data.getU32(&Offset);
+        break;
       default:
         return false; // Unknown Wasm location
       }
@@ -249,8 +249,7 @@ static void prettyPrintBaseTypeRef(DWARFUnit *U, raw_ostream &OS,
     if (auto Name = dwarf::toString(Die.find(dwarf::DW_AT_name)))
       OS << " \"" << *Name << "\"";
   } else {
-    OS << format(" <invalid base_type ref: 0x%" PRIx64 ">",
-                 Operands[Operand]);
+    OS << format(" <invalid base_type ref: 0x%" PRIx64 ">", Operands[Operand]);
   }
 }
 
@@ -339,7 +338,8 @@ bool DWARFExpression::Operation::print(raw_ostream &OS, DIDumpOptions DumpOpts,
       case 4:
         OS << format(" 0x%" PRIx64, Operands[Operand]);
         break;
-      default: assert(false);
+      default:
+        assert(false);
       }
     } else if (Size == Operation::SizeBlock) {
       uint64_t Offset = Operands[Operand];
@@ -348,8 +348,7 @@ bool DWARFExpression::Operation::print(raw_ostream &OS, DIDumpOptions DumpOpts,
     } else {
       if (Signed)
         OS << format(" %+" PRId64, (int64_t)Operands[Operand]);
-      else if (Opcode != DW_OP_entry_value &&
-               Opcode != DW_OP_GNU_entry_value)
+      else if (Opcode != DW_OP_entry_value && Opcode != DW_OP_GNU_entry_value)
         OS << format(" 0x%" PRIx64, Operands[Operand]);
     }
   }
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFFormValue.cpp b/llvm/lib/DebugInfo/DWARF/DWARFFormValue.cpp
index 29949ee021456df..a3b5799e95afe54 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFFormValue.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFFormValue.cpp
@@ -616,19 +616,19 @@ Expected<const char *> DWARFFormValue::getAsCString() const {
   // Prefer the Unit's string extractor, because for .dwo it will point to
   // .debug_str.dwo, while the Context's extractor always uses .debug_str.
   bool IsDebugLineString = Form == DW_FORM_line_strp;
-  DataExtractor StrData =
-      IsDebugLineString ? C->getLineStringExtractor()
-                        : U ? U->getStringExtractor() : C->getStringExtractor();
+  DataExtractor StrData = IsDebugLineString ? C->getLineStringExtractor()
+                          : U               ? U->getStringExtractor()
+                                            : C->getStringExtractor();
   if (const char *Str = StrData.getCStr(&Offset))
     return Str;
   std::string Msg = FormEncodingString(Form).str();
   if (Index)
-    Msg += (" uses index " + Twine(*Index) + ", but the referenced string").str();
+    Msg +=
+        (" uses index " + Twine(*Index) + ", but the referenced string").str();
   Msg += (" offset " + Twine(Offset) + " is beyond " +
           (IsDebugLineString ? ".debug_line_str" : ".debug_str") + " bounds")
              .str();
-  return make_error<StringError>(Msg,
-      inconvertibleErrorCode());
+  return make_error<StringError>(Msg, inconvertibleErrorCode());
 }
 
 std::optional<uint64_t> DWARFFormValue::getAsAddress() const {
@@ -683,7 +683,7 @@ DWARFFormValue::getAsRelativeReference() const {
   case DW_FORM_ref_udata:
     if (!U)
       return std::nullopt;
-    return UnitOffset{const_cast<DWARFUnit*>(U), Value.uval};
+    return UnitOffset{const_cast<DWARFUnit *>(U), Value.uval};
   case DW_FORM_ref_addr:
   case DW_FORM_ref_sig8:
   case DW_FORM_GNU_ref_alt:
@@ -758,8 +758,9 @@ DWARFFormValue::getAsFile(DILineInfoSpecifier::FileLineInfoKind Kind) const {
   return std::nullopt;
 }
 
-bool llvm::dwarf::doesFormBelongToClass(dwarf::Form Form, DWARFFormValue::FormClass FC,
-                           uint16_t DwarfVersion) {
+bool llvm::dwarf::doesFormBelongToClass(dwarf::Form Form,
+                                        DWARFFormValue::FormClass FC,
+                                        uint16_t DwarfVersion) {
   // First, check DWARF5 form classes.
   if (Form < ArrayRef(DWARF5FormClasses).size() &&
       DWARF5FormClasses[Form] == FC)
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFListTable.cpp b/llvm/lib/DebugInfo/DWARF/DWARFListTable.cpp
index b73dda3ff9ceaa3..6f9a0ffdfc469c3 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFListTable.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFListTable.cpp
@@ -32,17 +32,18 @@ Error DWARFListTableHeader::extract(DWARFDataExtractor Data,
       HeaderData.Length + dwarf::getUnitLengthFieldByteSize(Format);
   if (FullLength < getHeaderSize(Format))
     return createStringError(errc::invalid_argument,
-                       "%s table at offset 0x%" PRIx64
-                       " has too small length (0x%" PRIx64
-                       ") to contain a complete header",
-                       SectionName.data(), HeaderOffset, FullLength);
+                             "%s table at offset 0x%" PRIx64
+                             " has too small length (0x%" PRIx64
+                             ") to contain a complete header",
+                             SectionName.data(), HeaderOffset, FullLength);
   assert(FullLength == length() && "Inconsistent calculation of length.");
   uint64_t End = HeaderOffset + FullLength;
   if (!Data.isValidOffsetForDataOfSize(HeaderOffset, FullLength))
-    return createStringError(errc::invalid_argument,
-                       "section is not large enough to contain a %s table "
-                       "of length 0x%" PRIx64 " at offset 0x%" PRIx64,
-                       SectionName.data(), FullLength, HeaderOffset);
+    return createStringError(
+        errc::invalid_argument,
+        "section is not large enough to contain a %s table "
+        "of length 0x%" PRIx64 " at offset 0x%" PRIx64,
+        SectionName.data(), FullLength, HeaderOffset);
 
   HeaderData.Version = Data.getU16(OffsetPtr);
   HeaderData.AddrSize = Data.getU8(OffsetPtr);
@@ -52,21 +53,24 @@ Error DWARFListTableHeader::extract(DWARFDataExtractor Data,
   // Perform basic validation of the remaining header fields.
   if (HeaderData.Version != 5)
     return createStringError(errc::invalid_argument,
-                       "unrecognised %s table version %" PRIu16
-                       " in table at offset 0x%" PRIx64,
-                       SectionName.data(), HeaderData.Version, HeaderOffset);
+                             "unrecognised %s table version %" PRIu16
+                             " in table at offset 0x%" PRIx64,
+                             SectionName.data(), HeaderData.Version,
+                             HeaderOffset);
   if (Error SizeErr = DWARFContext::checkAddressSizeSupported(
           HeaderData.AddrSize, errc::not_supported,
           "%s table at offset 0x%" PRIx64, SectionName.data(), HeaderOffset))
     return SizeErr;
   if (HeaderData.SegSize != 0)
     return createStringError(errc::not_supported,
-                       "%s table at offset 0x%" PRIx64
-                       " has unsupported segment selector size %" PRIu8,
-                       SectionName.data(), HeaderOffset, HeaderData.SegSize);
+                             "%s table at offset 0x%" PRIx64
+                             " has unsupported segment selector size %" PRIu8,
+                             SectionName.data(), HeaderOffset,
+                             HeaderData.SegSize);
   if (End < HeaderOffset + getHeaderSize(Format) +
                 HeaderData.OffsetEntryCount * OffsetByteSize)
-    return createStringError(errc::invalid_argument,
+    return createStringError(
+        errc::invalid_argument,
         "%s table at offset 0x%" PRIx64 " has more offset entries (%" PRIu32
         ") than there is space for",
         SectionName.data(), HeaderOffset, HeaderData.OffsetEntryCount);
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp b/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
index 387345a4ac2d601..a0fb86de6dfcfde 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFUnit.cpp
@@ -44,10 +44,9 @@ void DWARFUnitVector::addUnitsForSection(DWARFContext &C,
                                          DWARFSectionKind SectionKind) {
   const DWARFObject &D = C.getDWARFObj();
   addUnitsImpl(C, D, Section, C.getDebugAbbrev(), &D.getRangesSection(),
-               &D.getLocSection(), D.getStrSection(),
-               D.getStrOffsetsSection(), &D.getAddrSection(),
-               D.getLineSection(), D.isLittleEndian(), false, false,
-               SectionKind);
+               &D.getLocSection(), D.getStrSection(), D.getStrOffsetsSection(),
+               &D.getAddrSection(), D.getLineSection(), D.isLittleEndian(),
+               false, false, SectionKind);
 }
 
 void DWARFUnitVector::addUnitsForDWOSection(DWARFContext &C,
@@ -55,11 +54,11 @@ void DWARFUnitVector::addUnitsForDWOSection(DWARFContext &C,
                                             DWARFSectionKind SectionKind,
                                             bool Lazy) {
   const DWARFObject &D = C.getDWARFObj();
-  addUnitsImpl(C, D, DWOSection, C.getDebugAbbrevDWO(), &D.getRangesDWOSection(),
-               &D.getLocDWOSection(), D.getStrDWOSection(),
-               D.getStrOffsetsDWOSection(), &D.getAddrSection(),
-               D.getLineDWOSection(), C.isLittleEndian(), true, Lazy,
-               SectionKind);
+  addUnitsImpl(C, D, DWOSection, C.getDebugAbbrevDWO(),
+               &D.getRangesDWOSection(), &D.getLocDWOSection(),
+               D.getStrDWOSection(), D.getStrOffsetsDWOSection(),
+               &D.getAddrSection(), D.getLineDWOSection(), C.isLittleEndian(),
+               true, Lazy, SectionKind);
 }
 
 void DWARFUnitVector::addUnitsImpl(
@@ -100,12 +99,12 @@ void DWARFUnitVector::addUnitsImpl(
       std::unique_ptr<DWARFUnit> U;
       if (Header.isTypeUnit())
         U = std::make_unique<DWARFTypeUnit>(Context, InfoSection, Header, DA,
-                                             RS, LocSection, SS, SOS, AOS, LS,
-                                             LE, IsDWO, *this);
+                                            RS, LocSection, SS, SOS, AOS, LS,
+                                            LE, IsDWO, *this);
       else
-        U = std::make_unique<DWARFCompileUnit>(Context, InfoSection, Header,
-                                                DA, RS, LocSection, SS, SOS,
-                                                AOS, LS, LE, IsDWO, *this);
+        U = std::make_unique<DWARFCompileUnit>(Context, InfoSection, Header, DA,
+                                               RS, LocSection, SS, SOS, AOS, LS,
+                                               LE, IsDWO, *this);
       return U;
     };
   }
@@ -484,8 +483,7 @@ void DWARFUnit::extractDIEsIfNeeded(bool CUDieOnly) {
 }
 
 Error DWARFUnit::tryExtractDIEsIfNeeded(bool CUDieOnly) {
-  if ((CUDieOnly && !DieArray.empty()) ||
-      DieArray.size() > 1)
+  if ((CUDieOnly && !DieArray.empty()) || DieArray.size() > 1)
     return Error::success(); // Already parsed.
 
   bool HasCUDie = !DieArray.empty();
@@ -856,9 +854,8 @@ DWARFDie DWARFUnit::getVariableForAddress(uint64_t Address) {
   return R->second.second;
 }
 
-void
-DWARFUnit::getInlinedChainForAddress(uint64_t Address,
-                                     SmallVectorImpl<DWARFDie> &InlinedChain) {
+void DWARFUnit::getInlinedChainForAddress(
+    uint64_t Address, SmallVectorImpl<DWARFDie> &InlinedChain) {
   assert(InlinedChain.empty());
   // Try to look for subprogram DIEs in the DWO file.
   parseDWO();
@@ -874,7 +871,7 @@ DWARFUnit::getInlinedChainForAddress(uint64_t Address,
     }
     if (SubroutineDIE.getTag() == DW_TAG_inlined_subroutine)
       InlinedChain.push_back(SubroutineDIE);
-    SubroutineDIE  = SubroutineDIE.getParent();
+    SubroutineDIE = SubroutineDIE.getParent();
   }
 }
 
@@ -1075,7 +1072,8 @@ StrOffsetsContributionDescriptor::validateContributionSize(
   if (ValidationSize >= Size)
     if (DA.isValidOffsetForDataOfSize((uint32_t)Base, ValidationSize))
       return *this;
-  return createStringError(errc::invalid_argument, "length exceeds section size");
+  return createStringError(errc::invalid_argument,
+                           "length exceeds section size");
 }
 
 // Look for a DWARF64-formatted contribution to the string offsets table
@@ -1083,10 +1081,13 @@ StrOffsetsContributionDescriptor::validateContributionSize(
 static Expected<StrOffsetsContributionDescriptor>
 parseDWARF64StringOffsetsTableHeader(DWARFDataExtractor &DA, uint64_t Offset) {
   if (!DA.isValidOffsetForDataOfSize(Offset, 16))
-    return createStringError(errc::invalid_argument, "section offset exceeds section size");
+    return createStringError(errc::invalid_argument,
+                             "section offset exceeds section size");
 
   if (DA.getU32(&Offset) != dwarf::DW_LENGTH_DWARF64)
-    return createStringError(errc::invalid_argument, "32 bit contribution referenced from a 64 bit unit");
+    return createStringError(
+        errc::invalid_argument,
+        "32 bit contribution referenced from a 64 bit unit");
 
   uint64_t Size = DA.getU64(&Offset);
   uint8_t Version = DA.getU16(&Offset);
@@ -1101,7 +1102,8 @@ parseDWARF64StringOffsetsTableHeader(DWARFDataExtractor &DA, uint64_t Offset) {
 static Expected<StrOffsetsContributionDescriptor>
 parseDWARF32StringOffsetsTableHeader(DWARFDataExtractor &DA, uint64_t Offset) {
   if (!DA.isValidOffsetForDataOfSize(Offset, 8))
-    return createStringError(errc::invalid_argument, "section offset exceeds section size");
+    return createStringError(errc::invalid_argument,
+                             "section offset exceeds section size");
 
   uint32_t ContributionSize = DA.getU32(&Offset);
   if (ContributionSize >= dwarf::DW_LENGTH_lo_reserved)
@@ -1123,7 +1125,8 @@ parseDWARFStringOffsetsTableHeader(DWARFDataExtractor &DA,
   switch (Format) {
   case dwarf::DwarfFormat::DWARF64: {
     if (Offset < 16)
-      return createStringError(errc::invalid_argument, "insufficient space for 64 bit header prefix");
+      return createStringError(errc::invalid_argument,
+                               "insufficient space for 64 bit header prefix");
     auto DescOrError = parseDWARF64StringOffsetsTableHeader(DA, Offset - 16);
     if (!DescOrError)
       return DescOrError.takeError();
@@ -1132,7 +1135,8 @@ parseDWARFStringOffsetsTableHeader(DWARFDataExtractor &DA,
   }
   case dwarf::DwarfFormat::DWARF32: {
     if (Offset < 8)
-      return createStringError(errc::invalid_argument, "insufficient space for 32 bit header prefix");
+      return createStringError(errc::invalid_argument,
+                               "insufficient space for 32 bit header prefix");
     auto DescOrError = parseDWARF32StringOffsetsTableHeader(DA, Offset - 8);
     if (!DescOrError)
       return DescOrError.takeError();
@@ -1170,7 +1174,8 @@ DWARFUnit::determineStringOffsetsTableContributionDWO(DWARFDataExtractor &DA) {
       return std::nullopt;
     Offset += Header.getFormat() == dwarf::DwarfFormat::DWARF32 ? 8 : 16;
     // Look for a valid contribution at the given offset.
-    auto DescOrError = parseDWARFStringOffsetsTableHeader(DA, Header.getFormat(), Offset);
+    auto DescOrError =
+        parseDWARFStringOffsetsTableHeader(DA, Header.getFormat(), Offset);
     if (!DescOrError)
       return DescOrError.takeError();
     return *DescOrError;
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFUnitIndex.cpp b/llvm/lib/DebugInfo/DWARF/DWARFUnitIndex.cpp
index a4487e2dc21be11..b3eb8fdf67c356c 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFUnitIndex.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFUnitIndex.cpp
@@ -47,17 +47,17 @@ uint32_t llvm::serializeSectionKind(DWARFSectionKind Kind,
   }
   assert(IndexVersion == 2);
   switch (Kind) {
-#define CASE(S,T) \
-  case DW_SECT_##S: \
+#define CASE(S, T)                                                             \
+  case DW_SECT_##S:                                                            \
     return static_cast<uint32_t>(DWARFSectionKindV2::DW_SECT_##T)
-  CASE(INFO, INFO);
-  CASE(EXT_TYPES, TYPES);
-  CASE(ABBREV, ABBREV);
-  CASE(LINE, LINE);
-  CASE(EXT_LOC, LOC);
-  CASE(STR_OFFSETS, STR_OFFSETS);
-  CASE(EXT_MACINFO, MACINFO);
-  CASE(MACRO, MACRO);
+    CASE(INFO, INFO);
+    CASE(EXT_TYPES, TYPES);
+    CASE(ABBREV, ABBREV);
+    CASE(LINE, LINE);
+    CASE(EXT_LOC, LOC);
+    CASE(STR_OFFSETS, STR_OFFSETS);
+    CASE(EXT_MACINFO, MACINFO);
+    CASE(MACRO, MACRO);
 #undef CASE
   default:
     // All other section kinds have no corresponding values in v2 indexes.
@@ -68,22 +68,21 @@ uint32_t llvm::serializeSectionKind(DWARFSectionKind Kind,
 DWARFSectionKind llvm::deserializeSectionKind(uint32_t Value,
                                               unsigned IndexVersion) {
   if (IndexVersion == 5)
-    return isKnownV5SectionID(Value)
-               ? static_cast<DWARFSectionKind>(Value)
-               : DW_SECT_EXT_unknown;
+    return isKnownV5SectionID(Value) ? static_cast<DWARFSectionKind>(Value)
+                                     : DW_SECT_EXT_unknown;
   assert(IndexVersion == 2);
   switch (static_cast<DWARFSectionKindV2>(Value)) {
-#define CASE(S,T) \
-  case DWARFSectionKindV2::DW_SECT_##S: \
+#define CASE(S, T)                                                             \
+  case DWARFSectionKindV2::DW_SECT_##S:                                        \
     return DW_SECT_##T
-  CASE(INFO, INFO);
-  CASE(TYPES, EXT_TYPES);
-  CASE(ABBREV, ABBREV);
-  CASE(LINE, LINE);
-  CASE(LOC, EXT_LOC);
-  CASE(STR_OFFSETS, STR_OFFSETS);
-  CASE(MACINFO, EXT_MACINFO);
-  CASE(MACRO, MACRO);
+    CASE(INFO, INFO);
+    CASE(TYPES, EXT_TYPES);
+    CASE(ABBREV, ABBREV);
+    CASE(LINE, LINE);
+    CASE(LOC, EXT_LOC);
+    CASE(STR_OFFSETS, STR_OFFSETS);
+    CASE(MACINFO, EXT_MACINFO);
+    CASE(MACRO, MACRO);
 #undef CASE
   }
   return DW_SECT_EXT_unknown;
@@ -113,7 +112,8 @@ bool DWARFUnitIndex::Header::parse(DataExtractor IndexData,
 }
 
 void DWARFUnitIndex::Header::dump(raw_ostream &OS) const {
-  OS << format("version = %u, units = %u, slots = %u\n\n", Version, NumUnits, NumBuckets);
+  OS << format("version = %u, units = %u, slots = %u\n\n", Version, NumUnits,
+               NumBuckets);
 }
 
 bool DWARFUnitIndex::parse(DataExtractor IndexData) {
diff --git a/llvm/lib/DebugInfo/DWARF/DWARFVerifier.cpp b/llvm/lib/DebugInfo/DWARF/DWARFVerifier.cpp
index 43ed60d7f9775d9..7601535715f70dd 100644
--- a/llvm/lib/DebugInfo/DWARF/DWARFVerifier.cpp
+++ b/llvm/lib/DebugInfo/DWARF/DWARFVerifier.cpp
@@ -142,11 +142,13 @@ bool DWARFVerifier::verifyUnitHeader(const DWARFDataExtractor DebugInfoData,
   if (Version >= 5) {
     UnitType = DebugInfoData.getU8(Offset);
     AddrSize = DebugInfoData.getU8(Offset);
-    AbbrOffset = isUnitDWARF64 ? DebugInfoData.getU64(Offset) : DebugInfoData.getU32(Offset);
+    AbbrOffset = isUnitDWARF64 ? DebugInfoData.getU64(Offset)
+                               : DebugInfoData.getU32(Offset);
     ValidType = dwarf::isUnitType(UnitType);
   } else {
     UnitType = 0;
-    AbbrOffset = isUnitDWARF64 ? DebugInfoData.getU64(Offset) : DebugInfoData.getU32(Offset);
+    AbbrOffset = isUnitDWARF64 ? DebugInfoData.getU64(Offset)
+                               : DebugInfoData.getU32(Offset);
     AddrSize = DebugInfoData.getU8(Offset);
   }
 
@@ -353,7 +355,7 @@ unsigned DWARFVerifier::verifyUnits(const DWARFUnitVector &Units) {
   unsigned Index = 1;
   for (const auto &Unit : Units) {
     OS << "Verifying unit: " << Index << " / " << Units.getNumUnits();
-    if (const char* Name = Unit->getUnitDIE(true).getShortName())
+    if (const char *Name = Unit->getUnitDIE(true).getShortName())
       OS << ", \"" << Name << '\"';
     OS << '\n';
     OS.flush();
@@ -467,14 +469,12 @@ bool DWARFVerifier::handleDebugInfo() {
   unsigned NumErrors = 0;
 
   OS << "Verifying .debug_info Unit Header Chain...\n";
-  DObj.forEachInfoSections([&](const DWARFSection &S) {
-    NumErrors += verifyUnitSection(S);
-  });
+  DObj.forEachInfoSections(
+      [&](const DWARFSection &S) { NumErrors += verifyUnitSection(S); });
 
   OS << "Verifying .debug_types Unit Header Chain...\n";
-  DObj.forEachTypesSections([&](const DWARFSection &S) {
-    NumErrors += verifyUnitSection(S);
-  });
+  DObj.forEachTypesSections(
+      [&](const DWARFSection &S) { NumErrors += verifyUnitSection(S); });
 
   OS << "Verifying non-dwo Units...\n";
   NumErrors += verifyUnits(DCtx.getNormalUnitsVector());
@@ -815,8 +815,7 @@ unsigned DWARFVerifier::verifyDebugInfoReferences(
     return DWARFDie();
   };
   unsigned NumErrors = 0;
-  for (const std::pair<const uint64_t, std::set<uint64_t>> &Pair :
-       References) {
+  for (const std::pair<const uint64_t, std::set<uint64_t>> &Pair : References) {
     if (GetDIEForOffset(Pair.first))
       continue;
     ++NumErrors;
@@ -1062,12 +1061,12 @@ unsigned DWARFVerifier::verifyAppleAccelTable(const DWARFSection *AccelSection,
           if (!Name)
             Name = "<NULL>";
 
-          error() << format(
-              "%s Bucket[%d] Hash[%d] = 0x%08x "
-              "Str[%u] = 0x%08" PRIx64 " DIE[%d] = 0x%08" PRIx64 " "
-              "is not a valid DIE offset for \"%s\".\n",
-              SectionName, BucketIdx, HashIdx, Hash, StringCount, StrpOffset,
-              HashDataIdx, Offset, Name);
+          error() << format("%s Bucket[%d] Hash[%d] = 0x%08x "
+                            "Str[%u] = 0x%08" PRIx64 " DIE[%d] = 0x%08" PRIx64
+                            " "
+                            "is not a valid DIE offset for \"%s\".\n",
+                            SectionName, BucketIdx, HashIdx, Hash, StringCount,
+                            StrpOffset, HashDataIdx, Offset, Name);
 
           ++NumErrors;
           continue;
@@ -1459,22 +1458,22 @@ unsigned DWARFVerifier::verifyNameIndexEntries(
       ++NumErrors;
     }
   }
-  handleAllErrors(EntryOr.takeError(),
-                  [&](const DWARFDebugNames::SentinelError &) {
-                    if (NumEntries > 0)
-                      return;
-                    error() << formatv("Name Index @ {0:x}: Name {1} ({2}) is "
-                                       "not associated with any entries.\n",
-                                       NI.getUnitOffset(), NTE.getIndex(), Str);
-                    ++NumErrors;
-                  },
-                  [&](const ErrorInfoBase &Info) {
-                    error()
-                        << formatv("Name Index @ {0:x}: Name {1} ({2}): {3}\n",
-                                   NI.getUnitOffset(), NTE.getIndex(), Str,
-                                   Info.message());
-                    ++NumErrors;
-                  });
+  handleAllErrors(
+      EntryOr.takeError(),
+      [&](const DWARFDebugNames::SentinelError &) {
+        if (NumEntries > 0)
+          return;
+        error() << formatv("Name Index @ {0:x}: Name {1} ({2}) is "
+                           "not associated with any entries.\n",
+                           NI.getUnitOffset(), NTE.getIndex(), Str);
+        ++NumErrors;
+      },
+      [&](const ErrorInfoBase &Info) {
+        error() << formatv("Name Index @ {0:x}: Name {1} ({2}): {3}\n",
+                           NI.getUnitOffset(), NTE.getIndex(), Str,
+                           Info.message());
+        ++NumErrors;
+      });
   return NumErrors;
 }
 
@@ -1755,7 +1754,8 @@ bool DWARFVerifier::verifyDebugStrOffsets(
           SectionName, StartOffset, Length, OffsetByteSize, Remainder);
       Success = false;
     }
-    for (uint64_t Index = 0; C && C.tell() + OffsetByteSize <= NextUnit; ++Index) {
+    for (uint64_t Index = 0; C && C.tell() + OffsetByteSize <= NextUnit;
+         ++Index) {
       uint64_t OffOff = C.tell();
       uint64_t StrOff = DA.getAddress(C);
       // check StrOff refers to the start of a string
@@ -1764,7 +1764,8 @@ bool DWARFVerifier::verifyDebugStrOffsets(
       if (StrData.size() <= StrOff) {
         error() << formatv(
             "{0}: contribution {1:X}: index {2:X}: invalid string "
-            "offset *{3:X} == {4:X}, is beyond the bounds of the string section of length {5:X}\n",
+            "offset *{3:X} == {4:X}, is beyond the bounds of the string "
+            "section of length {5:X}\n",
             SectionName, StartOffset, Index, OffOff, StrOff, StrData.size());
         continue;
       }
diff --git a/llvm/lib/DebugInfo/GSYM/CMakeLists.txt b/llvm/lib/DebugInfo/GSYM/CMakeLists.txt
index 0c09c2bf998c462..9ba600c2ab3c3c0 100644
--- a/llvm/lib/DebugInfo/GSYM/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/GSYM/CMakeLists.txt
@@ -1,27 +1,13 @@
-add_llvm_component_library(LLVMDebugInfoGSYM
-  DwarfTransformer.cpp
-  Header.cpp
-  FileWriter.cpp
-  FunctionInfo.cpp
-  GsymCreator.cpp
-  GsymReader.cpp
-  InlineInfo.cpp
-  LineTable.cpp
-  LookupResult.cpp
-  ObjectFileTransformer.cpp
-  ExtractRanges.cpp
+add_llvm_component_library(
+    LLVMDebugInfoGSYM DwarfTransformer.cpp Header.cpp FileWriter
+        .cpp FunctionInfo.cpp GsymCreator.cpp GsymReader.cpp InlineInfo
+        .cpp LineTable.cpp LookupResult.cpp ObjectFileTransformer
+        .cpp ExtractRanges.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/GSYM
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo
+            ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR} /
+    llvm / DebugInfo / GSYM ${LLVM_MAIN_INCLUDE_DIR} / llvm /
+    DebugInfo
 
-  DEPENDS
-  LLVMMC
+        DEPENDS LLVMMC
 
-  LINK_COMPONENTS
-  MC
-  Object
-  Support
-  TargetParser
-  DebugInfoDWARF
-  )
+            LINK_COMPONENTS MC Object Support TargetParser DebugInfoDWARF)
diff --git a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
index e38347f15e3ae8b..9e5af990f2ccee2 100644
--- a/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
+++ b/llvm/lib/DebugInfo/GSYM/DwarfTransformer.cpp
@@ -83,7 +83,6 @@ struct llvm::gsym::CUInfo {
   }
 };
 
-
 static DWARFDie GetParentDeclContextDIE(DWARFDie &Die) {
   if (DWARFDie SpecDie =
           Die.getAttributeValueAsReferencedDie(dwarf::DW_AT_specification)) {
@@ -171,7 +170,7 @@ getQualifiedNameIndex(DWARFDie &Die, uint64_t Language, GsymCreator &Gsym) {
         // templates
         if (ParentName.front() == '<' && ParentName.back() == '>')
           Name = "{" + ParentName.substr(1, ParentName.size() - 2).str() + "}" +
-                "::" + Name;
+                 "::" + Name;
         else
           Name = ParentName.str() + "::" + Name;
       }
@@ -300,7 +299,6 @@ static void convertFunctionLineTable(raw_ostream *Log, CUInfo &CUI,
   const object::SectionedAddress SecAddress{
       StartAddress, object::SectionedAddress::UndefSection};
 
-
   if (!CUI.LineTable->lookupAddressRange(SecAddress, RangeSize, RowVector)) {
     // If we have a DW_TAG_subprogram but no line entries, fall back to using
    // the DW_AT_decl_file and DW_AT_decl_line if we have both attributes.
@@ -334,8 +332,9 @@ static void convertFunctionLineTable(raw_ostream *Log, CUInfo &CUI,
       if (RowAddress < FI.Range.start()) {
         if (Log) {
           *Log << "error: DIE has a start address whose LowPC is between the "
-                  "line table Row[" << RowIndex << "] with address "
-               << HEX64(RowAddress) << " and the next one.\n";
+                  "line table Row["
+               << RowIndex << "] with address " << HEX64(RowAddress)
+               << " and the next one.\n";
           Die.dump(*Log, 0, DIDumpOptions::getForSingleDIE());
         }
         RowAddress = FI.Range.start();
@@ -373,7 +372,7 @@ static void convertFunctionLineTable(raw_ostream *Log, CUInfo &CUI,
     // Skip multiple line entries for the same file and line.
     auto LastLE = FI.OptLineTable->last();
     if (LastLE && LastLE->File == FileIdx && LastLE->Line == Row.Line)
-        continue;
+      continue;
     // Only push a row if it isn't an end sequence. End sequence markers are
     // included for the last address in a function or the last contiguous
     // address in a sequence.
@@ -484,7 +483,7 @@ void DwarfTransformer::handleDie(raw_ostream *OS, CUInfo &CUI, DWARFDie Die) {
         if (FI.Inline->Children.empty()) {
           if (WarnIfEmpty && OS && !Gsym.isQuiet()) {
             *OS << "warning: DIE contains inline function information that has "
-                  "no valid ranges, removing inline information:\n";
+                   "no valid ranges, removing inline information:\n";
             Die.dump(*OS, 0, DIDumpOptions::getForSingleDIE());
           }
           FI.Inline = std::nullopt;
@@ -554,7 +553,7 @@ Error DwarfTransformer::convert(uint32_t NumThreads, raw_ostream *OS) {
         pool.async([this, CUI, &LogMutex, OS, Die]() mutable {
           std::string ThreadLogStorage;
           raw_string_ostream ThreadOS(ThreadLogStorage);
-          handleDie(OS ? &ThreadOS: nullptr, CUI, Die);
+          handleDie(OS ? &ThreadOS : nullptr, CUI, Die);
           ThreadOS.flush();
           if (OS && !ThreadLogStorage.empty()) {
             // Print ThreadLogStorage lines into an actual stream under a lock
@@ -587,14 +586,14 @@ llvm::Error DwarfTransformer::verify(StringRef GsymPath, raw_ostream &Log) {
   for (uint32_t I = 0; I < NumAddrs; ++I) {
     auto FuncAddr = Gsym->getAddress(I);
     if (!FuncAddr)
-        return createStringError(std::errc::invalid_argument,
-                                  "failed to extract address[%i]", I);
+      return createStringError(std::errc::invalid_argument,
+                               "failed to extract address[%i]", I);
 
     auto FI = Gsym->getFunctionInfo(*FuncAddr);
     if (!FI)
-      return createStringError(std::errc::invalid_argument,
-                            "failed to extract function info for address 0x%"
-                            PRIu64, *FuncAddr);
+      return createStringError(
+          std::errc::invalid_argument,
+          "failed to extract function info for address 0x%" PRIu64, *FuncAddr);
 
     for (auto Addr = *FuncAddr; Addr < *FuncAddr + FI->size(); ++Addr) {
       const object::SectionedAddress SectAddr{
@@ -603,12 +602,10 @@ llvm::Error DwarfTransformer::verify(StringRef GsymPath, raw_ostream &Log) {
       if (!LR)
         return LR.takeError();
 
-      auto DwarfInlineInfos =
-          DICtx.getInliningInfoForAddress(SectAddr, DLIS);
+      auto DwarfInlineInfos = DICtx.getInliningInfoForAddress(SectAddr, DLIS);
       uint32_t NumDwarfInlineInfos = DwarfInlineInfos.getNumberOfFrames();
       if (NumDwarfInlineInfos == 0) {
-        DwarfInlineInfos.addFrame(
-            DICtx.getLineInfoForAddress(SectAddr, DLIS));
+        DwarfInlineInfos.addFrame(DICtx.getLineInfoForAddress(SectAddr, DLIS));
       }
 
       // Check for 1 entry that has no file and line info
@@ -629,19 +626,17 @@ llvm::Error DwarfTransformer::verify(StringRef GsymPath, raw_ostream &Log) {
               << dii.FileName << ':' << dii.Line << '\n';
         }
         Log << "    " << LR->Locations.size() << " GSYM frames:\n";
-        for (size_t Idx = 0, count = LR->Locations.size();
-              Idx < count; ++Idx) {
+        for (size_t Idx = 0, count = LR->Locations.size(); Idx < count; ++Idx) {
           const auto &gii = LR->Locations[Idx];
-          Log << "    [" << Idx << "]: " << gii.Name << " @ " << gii.Dir
-              << '/' << gii.Base << ':' << gii.Line << '\n';
+          Log << "    [" << Idx << "]: " << gii.Name << " @ " << gii.Dir << '/'
+              << gii.Base << ':' << gii.Line << '\n';
         }
         DwarfInlineInfos = DICtx.getInliningInfoForAddress(SectAddr, DLIS);
         Gsym->dump(Log, *FI);
         continue;
       }
 
-      for (size_t Idx = 0, count = LR->Locations.size(); Idx < count;
-            ++Idx) {
+      for (size_t Idx = 0, count = LR->Locations.size(); Idx < count; ++Idx) {
         const auto &gii = LR->Locations[Idx];
         if (Idx < NumDwarfInlineInfos) {
           const auto &dii = DwarfInlineInfos.getFrame(Idx);
diff --git a/llvm/lib/DebugInfo/GSYM/FileWriter.cpp b/llvm/lib/DebugInfo/GSYM/FileWriter.cpp
index b725f3ac74f5a89..3e7c19834c0886e 100644
--- a/llvm/lib/DebugInfo/GSYM/FileWriter.cpp
+++ b/llvm/lib/DebugInfo/GSYM/FileWriter.cpp
@@ -51,21 +51,16 @@ void FileWriter::writeU64(uint64_t U) {
 
 void FileWriter::fixup32(uint32_t U, uint64_t Offset) {
   const uint32_t Swapped = support::endian::byte_swap(U, ByteOrder);
-  OS.pwrite(reinterpret_cast<const char *>(&Swapped), sizeof(Swapped),
-            Offset);
+  OS.pwrite(reinterpret_cast<const char *>(&Swapped), sizeof(Swapped), Offset);
 }
 
 void FileWriter::writeData(llvm::ArrayRef<uint8_t> Data) {
   OS.write(reinterpret_cast<const char *>(Data.data()), Data.size());
 }
 
-void FileWriter::writeNullTerminated(llvm::StringRef Str) {
-  OS << Str << '\0';
-}
+void FileWriter::writeNullTerminated(llvm::StringRef Str) { OS << Str << '\0'; }
 
-uint64_t FileWriter::tell() {
-  return OS.tell();
-}
+uint64_t FileWriter::tell() { return OS.tell(); }
 
 void FileWriter::alignTo(size_t Align) {
   off_t Offset = OS.tell();
diff --git a/llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp b/llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp
index 145a43d3b381bbf..4e72571ac5f12bc 100644
--- a/llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp
+++ b/llvm/lib/DebugInfo/GSYM/FunctionInfo.cpp
@@ -9,8 +9,8 @@
 #include "llvm/DebugInfo/GSYM/FunctionInfo.h"
 #include "llvm/DebugInfo/GSYM/FileWriter.h"
 #include "llvm/DebugInfo/GSYM/GsymReader.h"
-#include "llvm/DebugInfo/GSYM/LineTable.h"
 #include "llvm/DebugInfo/GSYM/InlineInfo.h"
+#include "llvm/DebugInfo/GSYM/LineTable.h"
 #include "llvm/Support/DataExtractor.h"
 #include <optional>
 
@@ -26,7 +26,8 @@ enum InfoType : uint32_t {
 };
 
 raw_ostream &llvm::gsym::operator<<(raw_ostream &OS, const FunctionInfo &FI) {
-  OS << FI.Range << ": " << "Name=" << HEX32(FI.Name) << '\n';
+  OS << FI.Range << ": "
+     << "Name=" << HEX32(FI.Name) << '\n';
   if (FI.OptLineTable)
     OS << FI.OptLineTable << '\n';
   if (FI.Inline)
@@ -40,56 +41,61 @@ llvm::Expected<FunctionInfo> FunctionInfo::decode(DataExtractor &Data,
   uint64_t Offset = 0;
   if (!Data.isValidOffsetForDataOfSize(Offset, 4))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing FunctionInfo Size", Offset);
+                             "0x%8.8" PRIx64 ": missing FunctionInfo Size",
+                             Offset);
   FI.Range = {BaseAddr, BaseAddr + Data.getU32(&Offset)};
   if (!Data.isValidOffsetForDataOfSize(Offset, 4))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing FunctionInfo Name", Offset);
+                             "0x%8.8" PRIx64 ": missing FunctionInfo Name",
+                             Offset);
   FI.Name = Data.getU32(&Offset);
   if (FI.Name == 0)
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": invalid FunctionInfo Name value 0x%8.8x",
-        Offset - 4, FI.Name);
+                             "0x%8.8" PRIx64
+                             ": invalid FunctionInfo Name value 0x%8.8x",
+                             Offset - 4, FI.Name);
   bool Done = false;
   while (!Done) {
     if (!Data.isValidOffsetForDataOfSize(Offset, 4))
-      return createStringError(std::errc::io_error,
+      return createStringError(
+          std::errc::io_error,
           "0x%8.8" PRIx64 ": missing FunctionInfo InfoType value", Offset);
     const uint32_t IT = Data.getU32(&Offset);
     if (!Data.isValidOffsetForDataOfSize(Offset, 4))
-      return createStringError(std::errc::io_error,
+      return createStringError(
+          std::errc::io_error,
           "0x%8.8" PRIx64 ": missing FunctionInfo InfoType length", Offset);
     const uint32_t InfoLength = Data.getU32(&Offset);
     if (!Data.isValidOffsetForDataOfSize(Offset, InfoLength))
       return createStringError(std::errc::io_error,
-          "0x%8.8" PRIx64 ": missing FunctionInfo data for InfoType %u",
-          Offset, IT);
+                               "0x%8.8" PRIx64
+                               ": missing FunctionInfo data for InfoType %u",
+                               Offset, IT);
     DataExtractor InfoData(Data.getData().substr(Offset, InfoLength),
-                           Data.isLittleEndian(),
-                           Data.getAddressSize());
+                           Data.isLittleEndian(), Data.getAddressSize());
     switch (IT) {
-      case InfoType::EndOfList:
-        Done = true;
-        break;
+    case InfoType::EndOfList:
+      Done = true;
+      break;
 
-      case InfoType::LineTableInfo:
-        if (Expected<LineTable> LT = LineTable::decode(InfoData, BaseAddr))
-          FI.OptLineTable = std::move(LT.get());
-        else
-          return LT.takeError();
-        break;
+    case InfoType::LineTableInfo:
+      if (Expected<LineTable> LT = LineTable::decode(InfoData, BaseAddr))
+        FI.OptLineTable = std::move(LT.get());
+      else
+        return LT.takeError();
+      break;
 
-      case InfoType::InlineInfo:
-        if (Expected<InlineInfo> II = InlineInfo::decode(InfoData, BaseAddr))
-          FI.Inline = std::move(II.get());
-        else
-          return II.takeError();
-        break;
+    case InfoType::InlineInfo:
+      if (Expected<InlineInfo> II = InlineInfo::decode(InfoData, BaseAddr))
+        FI.Inline = std::move(II.get());
+      else
+        return II.takeError();
+      break;
 
-      default:
-        return createStringError(std::errc::io_error,
-                                 "0x%8.8" PRIx64 ": unsupported InfoType %u",
-                                 Offset-8, IT);
+    default:
+      return createStringError(std::errc::io_error,
+                               "0x%8.8" PRIx64 ": unsupported InfoType %u",
+                               Offset - 8, IT);
     }
     Offset += InfoLength;
   }
@@ -114,7 +120,7 @@ uint64_t FunctionInfo::cacheEncoding() {
 llvm::Expected<uint64_t> FunctionInfo::encode(FileWriter &Out) const {
   if (!isValid())
     return createStringError(std::errc::invalid_argument,
-        "attempted to encode invalid FunctionInfo object");
+                             "attempted to encode invalid FunctionInfo object");
   // Align FunctionInfo data to a 4 byte alignment.
   Out.alignTo(4);
   const uint64_t FuncInfoOffset = Out.tell();
@@ -146,8 +152,8 @@ llvm::Expected<uint64_t> FunctionInfo::encode(FileWriter &Out) const {
       return std::move(err);
     const auto Length = Out.tell() - StartOffset;
     if (Length > UINT32_MAX)
-        return createStringError(std::errc::invalid_argument,
-            "LineTable length is greater than UINT32_MAX");
+      return createStringError(std::errc::invalid_argument,
+                               "LineTable length is greater than UINT32_MAX");
     // Fixup the size of the LineTable data with the correct size.
     Out.fixup32(static_cast<uint32_t>(Length), StartOffset - 4);
   }
@@ -164,8 +170,8 @@ llvm::Expected<uint64_t> FunctionInfo::encode(FileWriter &Out) const {
       return std::move(err);
     const auto Length = Out.tell() - StartOffset;
     if (Length > UINT32_MAX)
-        return createStringError(std::errc::invalid_argument,
-            "InlineInfo length is greater than UINT32_MAX");
+      return createStringError(std::errc::invalid_argument,
+                               "InlineInfo length is greater than UINT32_MAX");
     // Fixup the size of the InlineInfo data with the correct size.
     Out.fixup32(static_cast<uint32_t>(Length), StartOffset - 4);
   }
@@ -176,7 +182,6 @@ llvm::Expected<uint64_t> FunctionInfo::encode(FileWriter &Out) const {
   return FuncInfoOffset;
 }
 
-
 llvm::Expected<LookupResult> FunctionInfo::lookup(DataExtractor &Data,
                                                   const GsymReader &GR,
                                                   uint64_t FuncAddr,
@@ -191,18 +196,19 @@ llvm::Expected<LookupResult> FunctionInfo::lookup(DataExtractor &Data,
   // "decode".
   if (!Data.isValidOffset(Offset))
     return createStringError(std::errc::io_error,
-                              "FunctionInfo data is truncated");
+                             "FunctionInfo data is truncated");
   // This function will be called with the result of a binary search of the
   // address table, we must still make sure the address does not fall into a
  // gap between functions or after the last function.
   if (LR.FuncRange.size() > 0 && !LR.FuncRange.contains(Addr))
     return createStringError(std::errc::io_error,
-        "address 0x%" PRIx64 " is not in GSYM", Addr);
+                             "address 0x%" PRIx64 " is not in GSYM", Addr);
 
   if (NameOffset == 0)
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": invalid FunctionInfo Name value 0x00000000",
-        Offset - 4);
+                             "0x%8.8" PRIx64
+                             ": invalid FunctionInfo Name value 0x00000000",
+                             Offset - 4);
   LR.FuncName = GR.getString(NameOffset);
   bool Done = false;
   std::optional<LineEntry> LineEntry;
@@ -220,25 +226,25 @@ llvm::Expected<LookupResult> FunctionInfo::lookup(DataExtractor &Data,
     DataExtractor InfoData(InfoBytes, Data.isLittleEndian(),
                            Data.getAddressSize());
     switch (IT) {
-      case InfoType::EndOfList:
-        Done = true;
-        break;
+    case InfoType::EndOfList:
+      Done = true;
+      break;
 
-      case InfoType::LineTableInfo:
-        if (auto ExpectedLE = LineTable::lookup(InfoData, FuncAddr, Addr))
-          LineEntry = ExpectedLE.get();
-        else
-          return ExpectedLE.takeError();
-        break;
+    case InfoType::LineTableInfo:
+      if (auto ExpectedLE = LineTable::lookup(InfoData, FuncAddr, Addr))
+        LineEntry = ExpectedLE.get();
+      else
+        return ExpectedLE.takeError();
+      break;
 
-      case InfoType::InlineInfo:
-        // We will parse the inline info after our line table, but only if
-        // we have a line entry.
-        InlineInfoData = InfoData;
-        break;
+    case InfoType::InlineInfo:
+      // We will parse the inline info after our line table, but only if
+      // we have a line entry.
+      InlineInfoData = InfoData;
+      break;
 
-      default:
-        break;
+    default:
+      break;
     }
     Offset += InfoLength;
   }
@@ -256,8 +262,8 @@ llvm::Expected<LookupResult> FunctionInfo::lookup(DataExtractor &Data,
   std::optional<FileEntry> LineEntryFile = GR.getFile(LineEntry->File);
   if (!LineEntryFile)
     return createStringError(std::errc::invalid_argument,
-                              "failed to extract file[%" PRIu32 "]",
-                              LineEntry->File);
+                             "failed to extract file[%" PRIu32 "]",
+                             LineEntry->File);
 
   SourceLocation SrcLoc;
   SrcLoc.Name = LR.FuncName;
@@ -271,8 +277,8 @@ llvm::Expected<LookupResult> FunctionInfo::lookup(DataExtractor &Data,
     return LR;
   // We have inline information. Try to augment the lookup result with this
   // data.
-  llvm::Error Err = InlineInfo::lookup(GR, *InlineInfoData, FuncAddr, Addr,
-                                       LR.Locations);
+  llvm::Error Err =
+      InlineInfo::lookup(GR, *InlineInfoData, FuncAddr, Addr, LR.Locations);
   if (Err)
     return std::move(Err);
   return LR;
diff --git a/llvm/lib/DebugInfo/GSYM/GsymCreator.cpp b/llvm/lib/DebugInfo/GSYM/GsymCreator.cpp
index 6006b86aa3858f2..0cfe8db8dffe5a0 100644
--- a/llvm/lib/DebugInfo/GSYM/GsymCreator.cpp
+++ b/llvm/lib/DebugInfo/GSYM/GsymCreator.cpp
@@ -61,7 +61,6 @@ uint32_t GsymCreator::copyFile(const GsymCreator &SrcGC, uint32_t FileIdx) {
   return insertFileEntry(DstFE);
 }
 
-
 llvm::Error GsymCreator::save(StringRef Path,
                               llvm::support::endianness ByteOrder,
                               std::optional<uint64_t> SegmentSize) const {
@@ -229,7 +228,7 @@ llvm::Error GsymCreator::finalize(llvm::raw_ostream &OS) {
       std::vector<FunctionInfo> FinalizedFuncs;
       FinalizedFuncs.reserve(Funcs.size());
       FinalizedFuncs.emplace_back(std::move(Funcs.front()));
-      for (size_t Idx=1; Idx < NumBefore; ++Idx) {
+      for (size_t Idx = 1; Idx < NumBefore; ++Idx) {
         FunctionInfo &Prev = FinalizedFuncs.back();
         FunctionInfo &Curr = Funcs[Idx];
         // Empty ranges won't intersect, but we still need to
@@ -250,9 +249,9 @@ llvm::Error GsymCreator::finalize(llvm::raw_ostream &OS) {
                 if (!Quiet) {
                   OS << "warning: same address range contains "
                         "different debug "
-                    << "info. Removing:\n"
-                    << Prev << "\nIn favor of this one:\n"
-                    << Curr << "\n";
+                     << "info. Removing:\n"
+                     << Prev << "\nIn favor of this one:\n"
+                     << Curr << "\n";
                 }
               }
               // We want to swap the current entry with the previous since
@@ -263,13 +262,14 @@ llvm::Error GsymCreator::finalize(llvm::raw_ostream &OS) {
           } else {
             if (!Quiet) { // print warnings about overlaps
               OS << "warning: function ranges overlap:\n"
-                << Prev << "\n"
-                << Curr << "\n";
+                 << Prev << "\n"
+                 << Curr << "\n";
             }
             FinalizedFuncs.emplace_back(std::move(Curr));
           }
         } else {
-          if (Prev.Range.size() == 0 && Curr.Range.contains(Prev.Range.start())) {
+          if (Prev.Range.size() == 0 &&
+              Curr.Range.contains(Prev.Range.start())) {
             // Symbols on macOS don't have address ranges, so if the range
             // doesn't match and the size is zero, then we replace the empty
             // symbol function info with the current one.
@@ -282,18 +282,18 @@ llvm::Error GsymCreator::finalize(llvm::raw_ostream &OS) {
       std::swap(Funcs, FinalizedFuncs);
     }
     // If our last function info entry doesn't have a size and if we have valid
-    // text ranges, we should set the size of the last entry since any search for
-    // a high address might match our last entry. By fixing up this size, we can
-    // help ensure we don't cause lookups to always return the last symbol that
-    // has no size when doing lookups.
+    // text ranges, we should set the size of the last entry since any search
+    // for a high address might match our last entry. By fixing up this size, we
+    // can help ensure we don't cause lookups to always return the last symbol
+    // that has no size when doing lookups.
     if (!Funcs.empty() && Funcs.back().Range.size() == 0 && ValidTextRanges) {
-      if (auto Range =
-              ValidTextRanges->getRangeThatContains(Funcs.back().Range.start())) {
+      if (auto Range = ValidTextRanges->getRangeThatContains(
+              Funcs.back().Range.start())) {
         Funcs.back().Range = {Funcs.back().Range.start(), Range->end()};
       }
     }
     OS << "Pruned " << NumBefore - Funcs.size() << " functions, ended with "
-      << Funcs.size() << " total\n";
+       << Funcs.size() << " total\n";
   }
   return Error::success();
 }
@@ -394,10 +394,14 @@ std::optional<uint64_t> GsymCreator::getBaseAddress() const {
 
 uint64_t GsymCreator::getMaxAddressOffset() const {
   switch (getAddressOffsetSize()) {
-    case 1: return UINT8_MAX;
-    case 2: return UINT16_MAX;
-    case 4: return UINT32_MAX;
-    case 8: return UINT64_MAX;
+  case 1:
+    return UINT8_MAX;
+  case 2:
+    return UINT16_MAX;
+  case 4:
+    return UINT32_MAX;
+  case 8:
+    return UINT64_MAX;
   }
   llvm_unreachable("invalid address offset");
 }
@@ -439,11 +443,12 @@ uint64_t GsymCreator::calculateHeaderAndTableSize() const {
 void GsymCreator::fixupInlineInfo(const GsymCreator &SrcGC, InlineInfo &II) {
   II.Name = copyString(SrcGC, II.Name);
   II.CallFile = copyFile(SrcGC, II.CallFile);
-  for (auto &ChildII: II.Children)
+  for (auto &ChildII : II.Children)
     fixupInlineInfo(SrcGC, ChildII);
 }
 
-uint64_t GsymCreator::copyFunctionInfo(const GsymCreator &SrcGC, size_t FuncIdx) {
+uint64_t GsymCreator::copyFunctionInfo(const GsymCreator &SrcGC,
+                                       size_t FuncIdx) {
   // To copy a function info we need to copy any files and strings over into
   // this GsymCreator and then copy the function info and update the string
   // table offsets to match the new offsets.
@@ -460,7 +465,7 @@ uint64_t GsymCreator::copyFunctionInfo(const GsymCreator &SrcGC, size_t FuncIdx)
     // from SrcGC and must be converted to file indexes from this GsymCreator.
     LineTable &DstLT = DstFI.OptLineTable.value();
     const size_t NumLines = DstLT.size();
-    for (size_t I=0; I<NumLines; ++I) {
+    for (size_t I = 0; I < NumLines; ++I) {
       LineEntry &LE = DstLT.get(I);
       LE.File = copyFile(SrcGC, LE.File);
     }
@@ -539,10 +544,11 @@ GsymCreator::createSegment(uint64_t SegmentSize, size_t &FuncIdx) const {
     const uint64_t HeaderAndTableSize = GC->calculateHeaderAndTableSize();
     if (HeaderAndTableSize + SegmentFuncInfosSize >= SegmentSize) {
       if (SegmentFuncInfosSize == 0)
-        return createStringError(std::errc::invalid_argument,
-                                 "a segment size of %" PRIu64 " is to small to "
-                                 "fit any function infos, specify a larger value",
-                                 SegmentSize);
+        return createStringError(
+            std::errc::invalid_argument,
+            "a segment size of %" PRIu64 " is too small to "
+            "fit any function infos, specify a larger value",
+            SegmentSize);
 
       break;
     }
diff --git a/llvm/lib/DebugInfo/GSYM/GsymReader.cpp b/llvm/lib/DebugInfo/GSYM/GsymReader.cpp
index 6afaeea8f598e12..b5056aa4f72b076 100644
--- a/llvm/lib/DebugInfo/GSYM/GsymReader.cpp
+++ b/llvm/lib/DebugInfo/GSYM/GsymReader.cpp
@@ -23,11 +23,11 @@
 using namespace llvm;
 using namespace gsym;
 
-GsymReader::GsymReader(std::unique_ptr<MemoryBuffer> Buffer) :
-    MemBuffer(std::move(Buffer)),
-    Endian(support::endian::system_endianness()) {}
+GsymReader::GsymReader(std::unique_ptr<MemoryBuffer> Buffer)
+    : MemBuffer(std::move(Buffer)),
+      Endian(support::endian::system_endianness()) {}
 
-  GsymReader::GsymReader(GsymReader &&RHS) = default;
+GsymReader::GsymReader(GsymReader &&RHS) = default;
 
 GsymReader::~GsymReader() = default;
 
@@ -58,8 +58,7 @@ GsymReader::create(std::unique_ptr<MemoryBuffer> &MemBuffer) {
   return std::move(GR);
 }
 
-llvm::Error
-GsymReader::parse() {
+llvm::Error GsymReader::parse() {
   BinaryStreamReader FileData(MemBuffer->getBuffer(),
                               support::endian::system_endianness());
   // Check for the magic bytes. This file format is designed to be mmap'ed
@@ -71,17 +70,16 @@ GsymReader::parse() {
 
   const auto HostByteOrder = support::endian::system_endianness();
   switch (Hdr->Magic) {
-    case GSYM_MAGIC:
-      Endian = HostByteOrder;
-      break;
-    case GSYM_CIGAM:
-      // This is a GSYM file, but not native endianness.
-      Endian = sys::IsBigEndianHost ? support::little : support::big;
-      Swap.reset(new SwappedData);
-      break;
-    default:
-      return createStringError(std::errc::invalid_argument,
-                               "not a GSYM file");
+  case GSYM_MAGIC:
+    Endian = HostByteOrder;
+    break;
+  case GSYM_CIGAM:
+    // This is a GSYM file, but not native endianness.
+    Endian = sys::IsBigEndianHost ? support::little : support::big;
+    Swap.reset(new SwappedData);
+    break;
+  default:
+    return createStringError(std::errc::invalid_argument, "not a GSYM file");
   }
 
   bool DataIsLittleEndian = HostByteOrder != support::little;
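The switch above distinguishes a native-endian file (GSYM_MAGIC) from a byte-swapped one (GSYM_CIGAM: the same four bytes read on the other endianness). A standalone sketch of that classification; the constant values below follow the ASCII 'GSYM' encoding but should be treated as illustrative rather than authoritative:

```cpp
#include <cassert>
#include <cstdint>

// 'G','S','Y','M' read as a big-endian word, and its byte-swapped twin.
constexpr uint32_t GSYM_MAGIC = 0x4753594d;
constexpr uint32_t GSYM_CIGAM = 0x4d595347;

enum class FileEndian { Native, Swapped, Invalid };

// Classify the first 4 bytes of the file, read in host byte order. A match
// on GSYM_CIGAM means the file is valid GSYM but written on the opposite
// endianness, so the reader must byte-swap everything it decodes.
FileEndian classifyMagic(uint32_t FirstWord) {
  if (FirstWord == GSYM_MAGIC)
    return FileEndian::Native;
  if (FirstWord == GSYM_CIGAM)
    return FileEndian::Swapped;
  return FileEndian::Invalid;
}
```

This is why the native path can mmap and point ArrayRefs directly at the buffer, while the swapped path (the `else` branch below) must decode into local storage first.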
@@ -108,64 +106,63 @@ GsymReader::parse() {
 
     // Read the address offsets.
     if (FileData.padToAlignment(Hdr->AddrOffSize) ||
-        FileData.readArray(AddrOffsets,
-                           Hdr->NumAddresses * Hdr->AddrOffSize))
+        FileData.readArray(AddrOffsets, Hdr->NumAddresses * Hdr->AddrOffSize))
       return createStringError(std::errc::invalid_argument,
-                              "failed to read address table");
+                               "failed to read address table");
 
     // Read the address info offsets.
     if (FileData.padToAlignment(4) ||
         FileData.readArray(AddrInfoOffsets, Hdr->NumAddresses))
       return createStringError(std::errc::invalid_argument,
-                              "failed to read address info offsets table");
+                               "failed to read address info offsets table");
 
     // Read the file table.
     uint32_t NumFiles = 0;
     if (FileData.readInteger(NumFiles) || FileData.readArray(Files, NumFiles))
       return createStringError(std::errc::invalid_argument,
-                              "failed to read file table");
+                               "failed to read file table");
 
     // Get the string table.
     FileData.setOffset(Hdr->StrtabOffset);
     if (FileData.readFixedString(StrTab.Data, Hdr->StrtabSize))
       return createStringError(std::errc::invalid_argument,
-                              "failed to read string table");
-} else {
-  // This is the non native endianness case that is not common and not
-  // optimized for lookups. Here we decode the important tables into local
-  // storage and then set the ArrayRef objects to point to these swapped
-  // copies of the read only data so lookups can be as efficient as possible.
-  DataExtractor Data(MemBuffer->getBuffer(), DataIsLittleEndian, 4);
-
-  // Read the address offsets.
-  uint64_t Offset = alignTo(sizeof(Header), Hdr->AddrOffSize);
-  Swap->AddrOffsets.resize(Hdr->NumAddresses * Hdr->AddrOffSize);
-  switch (Hdr->AddrOffSize) {
+                               "failed to read string table");
+  } else {
+    // This is the non native endianness case that is not common and not
+    // optimized for lookups. Here we decode the important tables into local
+    // storage and then set the ArrayRef objects to point to these swapped
+    // copies of the read only data so lookups can be as efficient as possible.
+    DataExtractor Data(MemBuffer->getBuffer(), DataIsLittleEndian, 4);
+
+    // Read the address offsets.
+    uint64_t Offset = alignTo(sizeof(Header), Hdr->AddrOffSize);
+    Swap->AddrOffsets.resize(Hdr->NumAddresses * Hdr->AddrOffSize);
+    switch (Hdr->AddrOffSize) {
     case 1:
       if (!Data.getU8(&Offset, Swap->AddrOffsets.data(), Hdr->NumAddresses))
         return createStringError(std::errc::invalid_argument,
-                                  "failed to read address table");
+                                 "failed to read address table");
       break;
     case 2:
       if (!Data.getU16(&Offset,
-                        reinterpret_cast<uint16_t *>(Swap->AddrOffsets.data()),
-                        Hdr->NumAddresses))
+                       reinterpret_cast<uint16_t *>(Swap->AddrOffsets.data()),
+                       Hdr->NumAddresses))
         return createStringError(std::errc::invalid_argument,
-                                  "failed to read address table");
+                                 "failed to read address table");
       break;
     case 4:
       if (!Data.getU32(&Offset,
-                        reinterpret_cast<uint32_t *>(Swap->AddrOffsets.data()),
-                        Hdr->NumAddresses))
+                       reinterpret_cast<uint32_t *>(Swap->AddrOffsets.data()),
+                       Hdr->NumAddresses))
         return createStringError(std::errc::invalid_argument,
-                                  "failed to read address table");
+                                 "failed to read address table");
       break;
     case 8:
       if (!Data.getU64(&Offset,
-                        reinterpret_cast<uint64_t *>(Swap->AddrOffsets.data()),
-                        Hdr->NumAddresses))
+                       reinterpret_cast<uint64_t *>(Swap->AddrOffsets.data()),
+                       Hdr->NumAddresses))
         return createStringError(std::errc::invalid_argument,
-                                  "failed to read address table");
+                                 "failed to read address table");
     }
     AddrOffsets = ArrayRef<uint8_t>(Swap->AddrOffsets);
 
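The repeated `switch (Hdr->AddrOffSize)` above exists because address offsets are stored at their minimal byte width (1, 2, 4, or 8) and widened to 64 bits on access. A self-contained helper showing the widening for a little-endian buffer (illustrative only, not the LLVM API, which templates on the integer type instead):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Decode entry Index from a packed table of AddrOffSize-byte little-endian
// offsets, widening to uint64_t.
uint64_t readAddrOffset(const uint8_t *Table, size_t Index,
                        unsigned AddrOffSize) {
  const uint8_t *P = Table + Index * AddrOffSize;
  uint64_t V = 0;
  for (unsigned I = 0; I < AddrOffSize; ++I)
    V |= static_cast<uint64_t>(P[I]) << (8 * I); // little-endian accumulate
  return V;
}
```

Storing offsets at minimal width keeps the address table dense, at the cost of a per-file width dispatch like the switches in this patch.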
@@ -181,21 +178,20 @@ GsymReader::parse() {
     const uint32_t NumFiles = Data.getU32(&Offset);
     if (NumFiles > 0) {
       Swap->Files.resize(NumFiles);
-      if (Data.getU32(&Offset, &Swap->Files[0].Dir, NumFiles*2))
+      if (Data.getU32(&Offset, &Swap->Files[0].Dir, NumFiles * 2))
         Files = ArrayRef<FileEntry>(Swap->Files);
       else
         return createStringError(std::errc::invalid_argument,
                                  "failed to read file table");
     }
     // Get the string table.
-    StrTab.Data = MemBuffer->getBuffer().substr(Hdr->StrtabOffset,
-                                                Hdr->StrtabSize);
+    StrTab.Data =
+        MemBuffer->getBuffer().substr(Hdr->StrtabOffset, Hdr->StrtabSize);
     if (StrTab.Data.empty())
       return createStringError(std::errc::invalid_argument,
                                "failed to read string table");
   }
   return Error::success();
-
 }
 
 const Header &GsymReader::getHeader() const {
@@ -208,10 +204,14 @@ const Header &GsymReader::getHeader() const {
 
 std::optional<uint64_t> GsymReader::getAddress(size_t Index) const {
   switch (Hdr->AddrOffSize) {
-  case 1: return addressForIndex<uint8_t>(Index);
-  case 2: return addressForIndex<uint16_t>(Index);
-  case 4: return addressForIndex<uint32_t>(Index);
-  case 8: return addressForIndex<uint64_t>(Index);
+  case 1:
+    return addressForIndex<uint8_t>(Index);
+  case 2:
+    return addressForIndex<uint16_t>(Index);
+  case 4:
+    return addressForIndex<uint32_t>(Index);
+  case 8:
+    return addressForIndex<uint64_t>(Index);
   }
   return std::nullopt;
 }
@@ -223,8 +223,7 @@ std::optional<uint64_t> GsymReader::getAddressInfoOffset(size_t Index) const {
   return std::nullopt;
 }
 
-Expected<uint64_t>
-GsymReader::getAddressIndex(const uint64_t Addr) const {
+Expected<uint64_t> GsymReader::getAddressIndex(const uint64_t Addr) const {
   if (Addr >= Hdr->BaseAddress) {
     const uint64_t AddrOffset = Addr - Hdr->BaseAddress;
     std::optional<uint64_t> AddrOffsetIndex;
@@ -251,7 +250,6 @@ GsymReader::getAddressIndex(const uint64_t Addr) const {
   }
   return createStringError(std::errc::invalid_argument,
                            "address 0x%" PRIx64 " is not in GSYM", Addr);
-
 }
 
 llvm::Expected<FunctionInfo> GsymReader::getFunctionInfo(uint64_t Addr) const {
@@ -268,7 +266,7 @@ llvm::Expected<FunctionInfo> GsymReader::getFunctionInfo(uint64_t Addr) const {
       if (ExpectedFI->Range.contains(Addr) || ExpectedFI->Range.size() == 0)
         return ExpectedFI;
       return createStringError(std::errc::invalid_argument,
-                                "address 0x%" PRIx64 " is not in GSYM", Addr);
+                               "address 0x%" PRIx64 " is not in GSYM", Addr);
     }
   }
   return createStringError(std::errc::invalid_argument,
@@ -300,22 +298,41 @@ void GsymReader::dump(raw_ostream &OS) {
   OS << "INDEX  OFFSET";
 
   switch (Hdr->AddrOffSize) {
-  case 1: OS << "8 "; break;
-  case 2: OS << "16"; break;
-  case 4: OS << "32"; break;
-  case 8: OS << "64"; break;
-  default: OS << "??"; break;
+  case 1:
+    OS << "8 ";
+    break;
+  case 2:
+    OS << "16";
+    break;
+  case 4:
+    OS << "32";
+    break;
+  case 8:
+    OS << "64";
+    break;
+  default:
+    OS << "??";
+    break;
   }
   OS << " (ADDRESS)\n";
   OS << "====== =============================== \n";
   for (uint32_t I = 0; I < Header.NumAddresses; ++I) {
     OS << format("[%4u] ", I);
     switch (Hdr->AddrOffSize) {
-    case 1: OS << HEX8(getAddrOffsets<uint8_t>()[I]); break;
-    case 2: OS << HEX16(getAddrOffsets<uint16_t>()[I]); break;
-    case 4: OS << HEX32(getAddrOffsets<uint32_t>()[I]); break;
-    case 8: OS << HEX32(getAddrOffsets<uint64_t>()[I]); break;
-    default: break;
+    case 1:
+      OS << HEX8(getAddrOffsets<uint8_t>()[I]);
+      break;
+    case 2:
+      OS << HEX16(getAddrOffsets<uint16_t>()[I]);
+      break;
+    case 4:
+      OS << HEX32(getAddrOffsets<uint32_t>()[I]);
+      break;
+    case 8:
+      OS << HEX32(getAddrOffsets<uint64_t>()[I]);
+      break;
+    default:
+      break;
     }
     OS << " (" << HEX64(*getAddress(I)) << ")\n";
   }
@@ -356,7 +373,7 @@ void GsymReader::dump(raw_ostream &OS, const FunctionInfo &FI) {
 
 void GsymReader::dump(raw_ostream &OS, const LineTable &LT) {
   OS << "LineTable:\n";
-  for (auto &LE: LT) {
+  for (auto &LE : LT) {
     OS << "  " << HEX64(LE.Addr) << ' ';
     if (LE.File)
       dump(OS, getFile(LE.File));
@@ -378,7 +395,7 @@ void GsymReader::dump(raw_ostream &OS, const InlineInfo &II, uint32_t Indent) {
     }
   }
   OS << '\n';
-  for (const auto &ChildII: II.Children)
+  for (const auto &ChildII : II.Children)
     dump(OS, ChildII, Indent + 2);
 }
 
diff --git a/llvm/lib/DebugInfo/GSYM/Header.cpp b/llvm/lib/DebugInfo/GSYM/Header.cpp
index 0b3fb9c498949af..6d0b1497ff99595 100644
--- a/llvm/lib/DebugInfo/GSYM/Header.cpp
+++ b/llvm/lib/DebugInfo/GSYM/Header.cpp
@@ -46,14 +46,17 @@ llvm::Error Header::checkForError() const {
     return createStringError(std::errc::invalid_argument,
                              "unsupported GSYM version %u", Version);
   switch (AddrOffSize) {
-    case 1: break;
-    case 2: break;
-    case 4: break;
-    case 8: break;
-    default:
-        return createStringError(std::errc::invalid_argument,
-                                 "invalid address offset size %u",
-                                 AddrOffSize);
+  case 1:
+    break;
+  case 2:
+    break;
+  case 4:
+    break;
+  case 8:
+    break;
+  default:
+    return createStringError(std::errc::invalid_argument,
+                             "invalid address offset size %u", AddrOffSize);
   }
   if (UUIDSize > GSYM_MAX_UUID_SIZE)
     return createStringError(std::errc::invalid_argument,
@@ -100,10 +103,10 @@ llvm::Error Header::encode(FileWriter &O) const {
 
 bool llvm::gsym::operator==(const Header &LHS, const Header &RHS) {
   return LHS.Magic == RHS.Magic && LHS.Version == RHS.Version &&
-      LHS.AddrOffSize == RHS.AddrOffSize && LHS.UUIDSize == RHS.UUIDSize &&
-      LHS.BaseAddress == RHS.BaseAddress &&
-      LHS.NumAddresses == RHS.NumAddresses &&
-      LHS.StrtabOffset == RHS.StrtabOffset &&
-      LHS.StrtabSize == RHS.StrtabSize &&
-      memcmp(LHS.UUID, RHS.UUID, LHS.UUIDSize) == 0;
+         LHS.AddrOffSize == RHS.AddrOffSize && LHS.UUIDSize == RHS.UUIDSize &&
+         LHS.BaseAddress == RHS.BaseAddress &&
+         LHS.NumAddresses == RHS.NumAddresses &&
+         LHS.StrtabOffset == RHS.StrtabOffset &&
+         LHS.StrtabSize == RHS.StrtabSize &&
+         memcmp(LHS.UUID, RHS.UUID, LHS.UUIDSize) == 0;
 }
diff --git a/llvm/lib/DebugInfo/GSYM/InlineInfo.cpp b/llvm/lib/DebugInfo/GSYM/InlineInfo.cpp
index ecfb21501eda926..e9e1b89db10feda 100644
--- a/llvm/lib/DebugInfo/GSYM/InlineInfo.cpp
+++ b/llvm/lib/DebugInfo/GSYM/InlineInfo.cpp
@@ -6,10 +6,10 @@
 //
 //===----------------------------------------------------------------------===//
 
+#include "llvm/DebugInfo/GSYM/InlineInfo.h"
 #include "llvm/DebugInfo/GSYM/FileEntry.h"
 #include "llvm/DebugInfo/GSYM/FileWriter.h"
 #include "llvm/DebugInfo/GSYM/GsymReader.h"
-#include "llvm/DebugInfo/GSYM/InlineInfo.h"
 #include "llvm/Support/DataExtractor.h"
 #include <algorithm>
 #include <inttypes.h>
@@ -17,7 +17,6 @@
 using namespace llvm;
 using namespace gsym;
 
-
 raw_ostream &llvm::gsym::operator<<(raw_ostream &OS, const InlineInfo &II) {
   if (!II.isValid())
     return OS;
@@ -37,7 +36,7 @@ raw_ostream &llvm::gsym::operator<<(raw_ostream &OS, const InlineInfo &II) {
 }
 
 static bool getInlineStackHelper(const InlineInfo &II, uint64_t Addr,
-    std::vector<const InlineInfo *> &InlineStack) {
+                                 std::vector<const InlineInfo *> &InlineStack) {
   if (II.Ranges.contains(Addr)) {
     // If this is the top level that represents the concrete function,
    // there will be no name and we should clear the inline stack. Otherwise
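The comment above describes the recursion: the nameless top-level node represents the concrete function and resets the stack, while each named node whose range contains the address is prepended, so the deepest inline frame ends up first. A simplified standalone version (the `Node` type and field names are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Node {
  uint32_t Name = 0;       // 0 = the concrete (top-level) function
  uint64_t Lo = 0, Hi = 0; // half-open address range [Lo, Hi)
  std::vector<Node> Children;
  bool contains(uint64_t A) const { return Lo <= A && A < Hi; }
};

// Build the inline stack for Addr: deepest inline frame at index 0.
static bool collect(const Node &N, uint64_t Addr,
                    std::vector<const Node *> &Stack) {
  if (!N.contains(Addr))
    return false;
  if (N.Name == 0)
    Stack.clear();                   // concrete function: restart the stack
  else
    Stack.insert(Stack.begin(), &N); // inline frame: prepend
  for (const Node &C : N.Children)
    collect(C, Addr, Stack);
  return !Stack.empty();
}
```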
@@ -80,7 +79,7 @@ static bool skip(DataExtractor &Data, uint64_t &Offset, bool SkippedRanges) {
       return false;
   }
   bool HasChildren = Data.getU8(&Offset) != 0;
-  Data.getU32(&Offset); // Skip Inline.Name.
+  Data.getU32(&Offset);     // Skip Inline.Name.
   Data.getULEB128(&Offset); // Skip Inline.CallFile.
   Data.getULEB128(&Offset); // Skip Inline.CallLine.
   if (HasChildren) {
@@ -181,26 +180,31 @@ static llvm::Expected<InlineInfo> decode(DataExtractor &Data, uint64_t &Offset,
                                          uint64_t BaseAddr) {
   InlineInfo Inline;
   if (!Data.isValidOffset(Offset))
-    return createStringError(std::errc::io_error,
+    return createStringError(
+        std::errc::io_error,
         "0x%8.8" PRIx64 ": missing InlineInfo address ranges data", Offset);
   decodeRanges(Inline.Ranges, Data, BaseAddr, Offset);
   if (Inline.Ranges.empty())
     return Inline;
   if (!Data.isValidOffsetForDataOfSize(Offset, 1))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing InlineInfo uint8_t indicating children",
-        Offset);
+                             "0x%8.8" PRIx64
+                             ": missing InlineInfo uint8_t indicating children",
+                             Offset);
   bool HasChildren = Data.getU8(&Offset) != 0;
   if (!Data.isValidOffsetForDataOfSize(Offset, 4))
-    return createStringError(std::errc::io_error,
+    return createStringError(
+        std::errc::io_error,
         "0x%8.8" PRIx64 ": missing InlineInfo uint32_t for name", Offset);
   Inline.Name = Data.getU32(&Offset);
   if (!Data.isValidOffset(Offset))
-    return createStringError(std::errc::io_error,
+    return createStringError(
+        std::errc::io_error,
         "0x%8.8" PRIx64 ": missing ULEB128 for InlineInfo call file", Offset);
   Inline.CallFile = (uint32_t)Data.getULEB128(&Offset);
   if (!Data.isValidOffset(Offset))
-    return createStringError(std::errc::io_error,
+    return createStringError(
+        std::errc::io_error,
         "0x%8.8" PRIx64 ": missing ULEB128 for InlineInfo call line", Offset);
   Inline.CallLine = (uint32_t)Data.getULEB128(&Offset);
   if (HasChildren) {
@@ -247,7 +251,7 @@ llvm::Error InlineInfo::encode(FileWriter &O, uint64_t BaseAddr) const {
     for (const auto &Child : Children) {
       // Make sure all child address ranges are contained in the parent address
       // ranges.
-      for (const auto &ChildRange: Child.Ranges) {
+      for (const auto &ChildRange : Child.Ranges) {
         if (!Ranges.contains(ChildRange))
           return createStringError(std::errc::invalid_argument,
                                    "child range not contained in parent");
diff --git a/llvm/lib/DebugInfo/GSYM/LineTable.cpp b/llvm/lib/DebugInfo/GSYM/LineTable.cpp
index a49a3ba9bf2ad1d..bcee6f141ea18ad 100644
--- a/llvm/lib/DebugInfo/GSYM/LineTable.cpp
+++ b/llvm/lib/DebugInfo/GSYM/LineTable.cpp
@@ -56,23 +56,27 @@ static llvm::Error parse(DataExtractor &Data, uint64_t BaseAddr,
   uint64_t Offset = 0;
   if (!Data.isValidOffset(Offset))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing LineTable MinDelta", Offset);
+                             "0x%8.8" PRIx64 ": missing LineTable MinDelta",
+                             Offset);
   int64_t MinDelta = Data.getSLEB128(&Offset);
   if (!Data.isValidOffset(Offset))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing LineTable MaxDelta", Offset);
+                             "0x%8.8" PRIx64 ": missing LineTable MaxDelta",
+                             Offset);
   int64_t MaxDelta = Data.getSLEB128(&Offset);
   int64_t LineRange = MaxDelta - MinDelta + 1;
   if (!Data.isValidOffset(Offset))
     return createStringError(std::errc::io_error,
-        "0x%8.8" PRIx64 ": missing LineTable FirstLine", Offset);
+                             "0x%8.8" PRIx64 ": missing LineTable FirstLine",
+                             Offset);
   const uint32_t FirstLine = (uint32_t)Data.getULEB128(&Offset);
   LineEntry Row(BaseAddr, 1, FirstLine);
   bool Done = false;
   while (!Done) {
     if (!Data.isValidOffset(Offset))
       return createStringError(std::errc::io_error,
-          "0x%8.8" PRIx64 ": EOF found before EndSequence", Offset);
+                               "0x%8.8" PRIx64 ": EOF found before EndSequence",
+                               Offset);
     uint8_t Op = Data.getU8(&Offset);
     switch (Op) {
     case EndSequence:
@@ -80,16 +84,16 @@ static llvm::Error parse(DataExtractor &Data, uint64_t BaseAddr,
       break;
     case SetFile:
       if (!Data.isValidOffset(Offset))
-        return createStringError(std::errc::io_error,
-            "0x%8.8" PRIx64 ": EOF found before SetFile value",
-            Offset);
+        return createStringError(
+            std::errc::io_error,
+            "0x%8.8" PRIx64 ": EOF found before SetFile value", Offset);
       Row.File = (uint32_t)Data.getULEB128(&Offset);
       break;
     case AdvancePC:
       if (!Data.isValidOffset(Offset))
-        return createStringError(std::errc::io_error,
-            "0x%8.8" PRIx64 ": EOF found before AdvancePC value",
-            Offset);
+        return createStringError(
+            std::errc::io_error,
+            "0x%8.8" PRIx64 ": EOF found before AdvancePC value", Offset);
       Row.Addr += Data.getULEB128(&Offset);
       // If the function callback returns false, we stop parsing.
       if (Callback(Row) == false)
@@ -97,23 +101,23 @@ static llvm::Error parse(DataExtractor &Data, uint64_t BaseAddr,
       break;
     case AdvanceLine:
       if (!Data.isValidOffset(Offset))
-        return createStringError(std::errc::io_error,
-            "0x%8.8" PRIx64 ": EOF found before AdvanceLine value",
-            Offset);
+        return createStringError(
+            std::errc::io_error,
+            "0x%8.8" PRIx64 ": EOF found before AdvanceLine value", Offset);
       Row.Line += Data.getSLEB128(&Offset);
       break;
     default: {
-        // A byte that contains both address and line increment.
-        uint8_t AdjustedOp = Op - FirstSpecial;
-        int64_t LineDelta = MinDelta + (AdjustedOp % LineRange);
-        uint64_t AddrDelta = (AdjustedOp / LineRange);
-        Row.Line += LineDelta;
-        Row.Addr += AddrDelta;
-        // If the function callback returns false, we stop parsing.
-        if (Callback(Row) == false)
-          return Error::success();
-        break;
-      }
+      // A byte that contains both address and line increment.
+      uint8_t AdjustedOp = Op - FirstSpecial;
+      int64_t LineDelta = MinDelta + (AdjustedOp % LineRange);
+      uint64_t AddrDelta = (AdjustedOp / LineRange);
+      Row.Line += LineDelta;
+      Row.Addr += AddrDelta;
+      // If the function callback returns false, we stop parsing.
+      if (Callback(Row) == false)
+        return Error::success();
+      break;
+    }
     }
   }
   return Error::success();
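The `default` case above packs both deltas into one opcode byte: `AdjustedOp = Op - FirstSpecial`, `LineDelta = MinDelta + (AdjustedOp % LineRange)`, `AddrDelta = AdjustedOp / LineRange`. A minimal round-trip sketch of that encoding; the `encodeSpecial` helper is hypothetical (the real encoder lives in `LineTable::encode`), and `FirstSpecial` is passed in rather than assumed:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>

struct Special {
  int64_t MinDelta, MaxDelta; // line-delta range covered by special opcodes
};

// Pack (AddrDelta, LineDelta) into one byte if it fits, else nullopt.
// Assumes small deltas; overflow checks are elided for brevity.
std::optional<uint8_t> encodeSpecial(const Special &S, uint64_t AddrDelta,
                                     int64_t LineDelta, uint8_t FirstSpecial) {
  int64_t LineRange = S.MaxDelta - S.MinDelta + 1;
  if (LineDelta < S.MinDelta || LineDelta > S.MaxDelta)
    return std::nullopt;
  int64_t Op = static_cast<int64_t>(AddrDelta) * LineRange +
               (LineDelta - S.MinDelta) + FirstSpecial;
  if (Op > 255)
    return std::nullopt;
  return static_cast<uint8_t>(Op);
}

// Mirror of the decode in the default case above.
void decodeSpecial(const Special &S, uint8_t Op, uint8_t FirstSpecial,
                   uint64_t &AddrDelta, int64_t &LineDelta) {
  uint8_t AdjustedOp = Op - FirstSpecial;
  int64_t LineRange = S.MaxDelta - S.MinDelta + 1;
  LineDelta = S.MinDelta + (AdjustedOp % LineRange);
  AddrDelta = AdjustedOp / LineRange;
}
```

The scheme mirrors DWARF line-program special opcodes: the common case of a small address advance plus a small line change costs a single byte.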
@@ -200,10 +204,11 @@ llvm::Error LineTable::encode(FileWriter &Out, uint64_t BaseAddr) const {
 
   for (const auto &Curr : Lines) {
     if (Curr.Addr < BaseAddr)
-      return createStringError(std::errc::invalid_argument,
-                               "LineEntry has address 0x%" PRIx64 " which is "
-                               "less than the function start address 0x%"
-                               PRIx64, Curr.Addr, BaseAddr);
+      return createStringError(
+          std::errc::invalid_argument,
+          "LineEntry has address 0x%" PRIx64 " which is "
+          "less than the function start address 0x%" PRIx64,
+          Curr.Addr, BaseAddr);
     if (Curr.Addr < Prev.Addr)
       return createStringError(std::errc::invalid_argument,
                                "LineEntry in LineTable not in ascending order");
@@ -263,20 +268,21 @@ llvm::Expected<LineTable> LineTable::decode(DataExtractor &Data,
 // We will need to determine if we need to cache the line table by calling
 // LineTable::parseAllEntries(...) or just call this function each time.
 // There is a CPU vs memory tradeoff we will need to determine.
-Expected<LineEntry> LineTable::lookup(DataExtractor &Data, uint64_t BaseAddr, uint64_t Addr) {
+Expected<LineEntry> LineTable::lookup(DataExtractor &Data, uint64_t BaseAddr,
+                                      uint64_t Addr) {
   LineEntry Result;
-  llvm::Error Err = parse(Data, BaseAddr,
-                          [Addr, &Result](const LineEntry &Row) -> bool {
-    if (Addr < Row.Addr)
-      return false; // Stop parsing, result contains the line table row!
-    Result = Row;
-    if (Addr == Row.Addr) {
-      // Stop parsing, this is the row we are looking for since the address
-      // matches.
-      return false;
-    }
-    return true; // Keep parsing till we find the right row.
-  });
+  llvm::Error Err =
+      parse(Data, BaseAddr, [Addr, &Result](const LineEntry &Row) -> bool {
+        if (Addr < Row.Addr)
+          return false; // Stop parsing, result contains the line table row!
+        Result = Row;
+        if (Addr == Row.Addr) {
+          // Stop parsing, this is the row we are looking for since the address
+          // matches.
+          return false;
+        }
+        return true; // Keep parsing till we find the right row.
+      });
   if (Err)
     return std::move(Err);
   if (Result.isValid())
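The reformatted lambda above implements a stop-early scan: keep the last row whose address is at or before the target, and stop as soon as a row passes it (or matches exactly). The same pattern on a plain vector (illustrative, not the LLVM API):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

struct Row {
  uint64_t Addr;
  uint32_t Line;
};

// Rows must be in ascending address order. Returns the row covering Addr,
// i.e. the last row with Row.Addr <= Addr, or nullopt if Addr precedes all.
std::optional<Row> lookupRow(const std::vector<Row> &Rows, uint64_t Addr) {
  std::optional<Row> Result;
  for (const Row &R : Rows) {
    if (Addr < R.Addr)
      break; // passed the target; Result already holds the best match
    Result = R;
    if (Addr == R.Addr)
      break; // exact hit, no later row can be better
  }
  return Result;
}
```

Because parsing stops at the first row past the target, an early-address lookup touches only a prefix of the encoded table, which is the CPU-vs-memory tradeoff the comment above weighs against caching all entries.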
diff --git a/llvm/lib/DebugInfo/GSYM/ObjectFileTransformer.cpp b/llvm/lib/DebugInfo/GSYM/ObjectFileTransformer.cpp
index a60b2d3860766e7..6e1aa705c0c975a 100644
--- a/llvm/lib/DebugInfo/GSYM/ObjectFileTransformer.cpp
+++ b/llvm/lib/DebugInfo/GSYM/ObjectFileTransformer.cpp
@@ -14,8 +14,8 @@
 #include "llvm/Support/DataExtractor.h"
 #include "llvm/Support/raw_ostream.h"
 
-#include "llvm/DebugInfo/GSYM/ObjectFileTransformer.h"
 #include "llvm/DebugInfo/GSYM/GsymCreator.h"
+#include "llvm/DebugInfo/GSYM/ObjectFileTransformer.h"
 
 using namespace llvm;
 using namespace gsym;
diff --git a/llvm/lib/DebugInfo/LogicalView/CMakeLists.txt b/llvm/lib/DebugInfo/LogicalView/CMakeLists.txt
index 38a174661b4f3fd..1c81a3a0c9d82a8 100644
--- a/llvm/lib/DebugInfo/LogicalView/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/LogicalView/CMakeLists.txt
@@ -1,52 +1,28 @@
-macro(add_lv_impl_folder group)
-  list(APPEND LV_IMPL_SOURCES ${ARGN})
-  source_group(${group} FILES ${ARGN})
-endmacro()
+macro(add_lv_impl_folder group) list(APPEND LV_IMPL_SOURCES ${
+    ARGN}) source_group(${group} FILES ${ARGN}) endmacro()
 
-add_lv_impl_folder(Core
-  Core/LVCompare.cpp
-  Core/LVElement.cpp
-  Core/LVLine.cpp
-  Core/LVLocation.cpp
-  Core/LVObject.cpp
-  Core/LVOptions.cpp
-  Core/LVRange.cpp
-  Core/LVReader.cpp
-  Core/LVScope.cpp
-  Core/LVSort.cpp
-  Core/LVSupport.cpp
-  Core/LVSymbol.cpp
-  Core/LVType.cpp
-  )
+    add_lv_impl_folder(Core Core / LVCompare.cpp Core / LVElement.cpp Core /
+                       LVLine.cpp Core / LVLocation.cpp Core /
+                       LVObject.cpp Core / LVOptions.cpp Core /
+                       LVRange.cpp Core / LVReader.cpp Core / LVScope.cpp Core /
+                       LVSort.cpp Core / LVSupport.cpp Core /
+                       LVSymbol.cpp Core / LVType.cpp)
 
-add_lv_impl_folder(Readers
-  LVReaderHandler.cpp
-  Readers/LVBinaryReader.cpp
-  Readers/LVCodeViewReader.cpp
-  Readers/LVCodeViewVisitor.cpp
-  Readers/LVELFReader.cpp
-  )
+        add_lv_impl_folder(Readers LVReaderHandler.cpp Readers /
+                           LVBinaryReader.cpp Readers /
+                           LVCodeViewReader.cpp Readers /
+                           LVCodeViewVisitor.cpp Readers / LVELFReader.cpp)
 
-list(APPEND LIBLV_ADDITIONAL_HEADER_DIRS
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView"
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView/Core"
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView/Readers"
-  )
+            list(APPEND LIBLV_ADDITIONAL_HEADER_DIRS
+                 "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView"
+                 "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView/Core"
+                 "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/LogicalView/Readers")
 
-add_llvm_component_library(LLVMDebugInfoLogicalView
-  ${LV_IMPL_SOURCES}
+                add_llvm_component_library(
+                    LLVMDebugInfoLogicalView ${LV_IMPL_SOURCES}
 
-  ADDITIONAL_HEADER_DIRS
-  ${LIBLV_ADDITIONAL_HEADER_DIRS}
+                    ADDITIONAL_HEADER_DIRS ${LIBLV_ADDITIONAL_HEADER_DIRS}
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Demangle
-  Object
-  MC
-  Support
-  TargetParser
-  DebugInfoDWARF
-  DebugInfoCodeView
-  DebugInfoPDB
-  )
+                    LINK_COMPONENTS BinaryFormat Demangle Object MC Support
+                        TargetParser DebugInfoDWARF DebugInfoCodeView
+                            DebugInfoPDB)
diff --git a/llvm/lib/DebugInfo/MSF/CMakeLists.txt b/llvm/lib/DebugInfo/MSF/CMakeLists.txt
index 20daa3c21e62f20..a18498ad5972236 100644
--- a/llvm/lib/DebugInfo/MSF/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/MSF/CMakeLists.txt
@@ -1,11 +1,6 @@
-add_llvm_component_library(LLVMDebugInfoMSF
-  MappedBlockStream.cpp
-  MSFBuilder.cpp
-  MSFCommon.cpp
-  MSFError.cpp
-  ADDITIONAL_HEADER_DIRS
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/MSF"
+add_llvm_component_library(
+    LLVMDebugInfoMSF MappedBlockStream.cpp MSFBuilder.cpp MSFCommon.cpp MSFError
+        .cpp ADDITIONAL_HEADER_DIRS
+    "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/MSF"
 
-  LINK_COMPONENTS
-  Support
-  )
+    LINK_COMPONENTS Support)
diff --git a/llvm/lib/DebugInfo/MSF/MappedBlockStream.cpp b/llvm/lib/DebugInfo/MSF/MappedBlockStream.cpp
index 94935d63452edc9..9224132cac9dee7 100644
--- a/llvm/lib/DebugInfo/MSF/MappedBlockStream.cpp
+++ b/llvm/lib/DebugInfo/MSF/MappedBlockStream.cpp
@@ -28,7 +28,7 @@ namespace {
 template <typename Base> class MappedBlockStreamImpl : public Base {
 public:
   template <typename... Args>
-  MappedBlockStreamImpl(Args &&... Params)
+  MappedBlockStreamImpl(Args &&...Params)
       : Base(std::forward<Args>(Params)...) {}
 };
 
diff --git a/llvm/lib/DebugInfo/PDB/CMakeLists.txt b/llvm/lib/DebugInfo/PDB/CMakeLists.txt
index b42fae41992e963..c22a1ce84a4befe 100644
--- a/llvm/lib/DebugInfo/PDB/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/PDB/CMakeLists.txt
@@ -1,156 +1,141 @@
-macro(add_pdb_impl_folder group)
-  list(APPEND PDB_IMPL_SOURCES ${ARGN})
-  source_group(${group} FILES ${ARGN})
-endmacro()
+macro(add_pdb_impl_folder group) list(APPEND PDB_IMPL_SOURCES ${
+    ARGN}) source_group(${group} FILES ${ARGN}) endmacro()
 
-if(LLVM_ENABLE_DIA_SDK)
-  include_directories(SYSTEM ${MSVC_DIA_SDK_DIR}/include)
-  set(LIBPDB_LINK_FOLDERS "${MSVC_DIA_SDK_DIR}\\lib")
+    if (LLVM_ENABLE_DIA_SDK) include_directories(
+        SYSTEM ${MSVC_DIA_SDK_DIR} / include) set(LIBPDB_LINK_FOLDERS
+                                                  "${MSVC_DIA_SDK_DIR}\\lib")
 
-  if ("$ENV{VSCMD_ARG_TGT_ARCH}" STREQUAL "arm64")
-    set(LIBPDB_LINK_FOLDERS "${LIBPDB_LINK_FOLDERS}\\arm64")
-  elseif ("$ENV{VSCMD_ARG_TGT_ARCH}" STREQUAL "arm")
-    set(LIBPDB_LINK_FOLDERS "${LIBPDB_LINK_FOLDERS}\\arm")
-  elseif (CMAKE_SIZEOF_VOID_P EQUAL 8)
-    set(LIBPDB_LINK_FOLDERS "${LIBPDB_LINK_FOLDERS}\\amd64")
-  endif()
-  file(TO_CMAKE_PATH "${LIBPDB_LINK_FOLDERS}\\diaguids.lib" LIBPDB_ADDITIONAL_LIBRARIES)
+        if ("$ENV{VSCMD_ARG_TGT_ARCH}" STREQUAL "arm64") set(
+            LIBPDB_LINK_FOLDERS
+            "${LIBPDB_LINK_FOLDERS}\\arm64") elseif("$ENV{VSCMD_ARG_TGT_"
+                                                    "ARCH}" STREQUAL "arm")
+            set(LIBPDB_LINK_FOLDERS "${LIBPDB_LINK_FOLDERS}\\arm") elseif(
+                CMAKE_SIZEOF_VOID_P
+                    EQUAL 8) set(LIBPDB_LINK_FOLDERS
+                                 "${LIBPDB_LINK_FOLDERS}\\amd64") endif()
+                file(TO_CMAKE_PATH "${LIBPDB_LINK_FOLDERS}\\diaguids."
+                                   "lib" LIBPDB_ADDITIONAL_LIBRARIES)
 
-  add_pdb_impl_folder(DIA
-    DIA/DIADataStream.cpp
-    DIA/DIAEnumDebugStreams.cpp
-    DIA/DIAEnumFrameData.cpp
-    DIA/DIAEnumInjectedSources.cpp
-    DIA/DIAEnumLineNumbers.cpp
-    DIA/DIAEnumSectionContribs.cpp
-    DIA/DIAEnumSourceFiles.cpp
-    DIA/DIAEnumSymbols.cpp
-    DIA/DIAEnumTables.cpp
-    DIA/DIAError.cpp
-    DIA/DIAFrameData.cpp
-    DIA/DIAInjectedSource.cpp
-    DIA/DIALineNumber.cpp
-    DIA/DIARawSymbol.cpp
-    DIA/DIASectionContrib.cpp
-    DIA/DIASession.cpp
-    DIA/DIASourceFile.cpp
-    DIA/DIATable.cpp
-    )
+                    add_pdb_impl_folder(
+                        DIA DIA / DIADataStream.cpp DIA /
+                        DIAEnumDebugStreams.cpp DIA / DIAEnumFrameData.cpp DIA /
+                        DIAEnumInjectedSources.cpp DIA /
+                        DIAEnumLineNumbers.cpp DIA /
+                        DIAEnumSectionContribs.cpp DIA /
+                        DIAEnumSourceFiles.cpp DIA / DIAEnumSymbols.cpp DIA /
+                        DIAEnumTables.cpp DIA / DIAError.cpp DIA /
+                        DIAFrameData.cpp DIA / DIAInjectedSource.cpp DIA /
+                        DIALineNumber.cpp DIA / DIARawSymbol.cpp DIA /
+                        DIASectionContrib.cpp DIA / DIASession.cpp DIA /
+                        DIASourceFile.cpp DIA / DIATable.cpp)
 
-    set(LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/PDB/DIA")
-endif()
+                        set(LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_"
+                                                          "DIR}/llvm/DebugInfo/"
+                                                          "PDB/DIA") endif()
 
-add_pdb_impl_folder(Native
-  Native/DbiModuleDescriptor.cpp
-  Native/DbiModuleDescriptorBuilder.cpp
-  Native/DbiModuleList.cpp
-  Native/DbiStream.cpp
-  Native/DbiStreamBuilder.cpp
-  Native/EnumTables.cpp
-  Native/FormatUtil.cpp
-  Native/GlobalsStream.cpp
-  Native/Hash.cpp
-  Native/HashTable.cpp
-  Native/InfoStream.cpp
-  Native/InfoStreamBuilder.cpp
-  Native/InjectedSourceStream.cpp
-  Native/InputFile.cpp
-  Native/LinePrinter.cpp
-  Native/ModuleDebugStream.cpp
-  Native/NativeCompilandSymbol.cpp
-  Native/NativeEnumGlobals.cpp
-  Native/NativeEnumInjectedSources.cpp
-  Native/NativeEnumLineNumbers.cpp
-  Native/NativeEnumModules.cpp
-  Native/NativeEnumTypes.cpp
-  Native/NativeEnumSymbols.cpp
-  Native/NativeExeSymbol.cpp
-  Native/NativeFunctionSymbol.cpp
-  Native/NativeInlineSiteSymbol.cpp
-  Native/NativeLineNumber.cpp
-  Native/NativePublicSymbol.cpp
-  Native/NativeRawSymbol.cpp
-  Native/NativeSourceFile.cpp
-  Native/NativeSymbolEnumerator.cpp
-  Native/NativeTypeArray.cpp
-  Native/NativeTypeBuiltin.cpp
-  Native/NativeTypeEnum.cpp
-  Native/NativeTypeFunctionSig.cpp
-  Native/NativeTypePointer.cpp
-  Native/NativeTypeTypedef.cpp
-  Native/NativeTypeUDT.cpp
-  Native/NativeTypeVTShape.cpp
-  Native/NamedStreamMap.cpp
-  Native/NativeSession.cpp
-  Native/PDBFile.cpp
-  Native/PDBFileBuilder.cpp
-  Native/PDBStringTable.cpp
-  Native/PDBStringTableBuilder.cpp
-  Native/PublicsStream.cpp
-  Native/GSIStreamBuilder.cpp
-  Native/RawError.cpp
-  Native/SymbolCache.cpp
-  Native/SymbolStream.cpp
-  Native/TpiHashing.cpp
-  Native/TpiStream.cpp
-  Native/TpiStreamBuilder.cpp
-  )
+add_pdb_impl_folder(Native
+  Native/DbiModuleDescriptor.cpp
+  Native/DbiModuleDescriptorBuilder.cpp
+  Native/DbiModuleList.cpp
+  Native/DbiStream.cpp
+  Native/DbiStreamBuilder.cpp
+  Native/EnumTables.cpp
+  Native/FormatUtil.cpp
+  Native/GlobalsStream.cpp
+  Native/Hash.cpp
+  Native/HashTable.cpp
+  Native/InfoStream.cpp
+  Native/InfoStreamBuilder.cpp
+  Native/InjectedSourceStream.cpp
+  Native/InputFile.cpp
+  Native/LinePrinter.cpp
+  Native/ModuleDebugStream.cpp
+  Native/NativeCompilandSymbol.cpp
+  Native/NativeEnumGlobals.cpp
+  Native/NativeEnumInjectedSources.cpp
+  Native/NativeEnumLineNumbers.cpp
+  Native/NativeEnumModules.cpp
+  Native/NativeEnumTypes.cpp
+  Native/NativeEnumSymbols.cpp
+  Native/NativeExeSymbol.cpp
+  Native/NativeFunctionSymbol.cpp
+  Native/NativeInlineSiteSymbol.cpp
+  Native/NativeLineNumber.cpp
+  Native/NativePublicSymbol.cpp
+  Native/NativeRawSymbol.cpp
+  Native/NativeSourceFile.cpp
+  Native/NativeSymbolEnumerator.cpp
+  Native/NativeTypeArray.cpp
+  Native/NativeTypeBuiltin.cpp
+  Native/NativeTypeEnum.cpp
+  Native/NativeTypeFunctionSig.cpp
+  Native/NativeTypePointer.cpp
+  Native/NativeTypeTypedef.cpp
+  Native/NativeTypeUDT.cpp
+  Native/NativeTypeVTShape.cpp
+  Native/NamedStreamMap.cpp
+  Native/NativeSession.cpp
+  Native/PDBFile.cpp
+  Native/PDBFileBuilder.cpp
+  Native/PDBStringTable.cpp
+  Native/PDBStringTableBuilder.cpp
+  Native/PublicsStream.cpp
+  Native/GSIStreamBuilder.cpp
+  Native/RawError.cpp
+  Native/SymbolCache.cpp
+  Native/SymbolStream.cpp
+  Native/TpiHashing.cpp
+  Native/TpiStream.cpp
+  Native/TpiStreamBuilder.cpp
+  )
 
-list(APPEND LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/PDB/Native")
-list(APPEND LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/PDB")
+list(APPEND LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/PDB/Native")
+list(APPEND LIBPDB_ADDITIONAL_HEADER_DIRS "${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/PDB")
 
-add_llvm_component_library(LLVMDebugInfoPDB
-  GenericError.cpp
-  IPDBSourceFile.cpp
-  PDB.cpp
-  PDBContext.cpp
-  PDBExtras.cpp
-  PDBInterfaceAnchors.cpp
-  PDBSymbol.cpp
-  PDBSymbolAnnotation.cpp
-  PDBSymbolBlock.cpp
-  PDBSymbolCompiland.cpp
-  PDBSymbolCompilandDetails.cpp
-  PDBSymbolCompilandEnv.cpp
-  PDBSymbolCustom.cpp
-  PDBSymbolData.cpp
-  PDBSymbolExe.cpp
-  PDBSymbolFunc.cpp
-  PDBSymbolFuncDebugEnd.cpp
-  PDBSymbolFuncDebugStart.cpp
-  PDBSymbolLabel.cpp
-  PDBSymbolPublicSymbol.cpp
-  PDBSymbolThunk.cpp
-  PDBSymbolTypeArray.cpp
-  PDBSymbolTypeBaseClass.cpp
-  PDBSymbolTypeBuiltin.cpp
-  PDBSymbolTypeCustom.cpp
-  PDBSymbolTypeDimension.cpp
-  PDBSymbolTypeEnum.cpp
-  PDBSymbolTypeFriend.cpp
-  PDBSymbolTypeFunctionArg.cpp
-  PDBSymbolTypeFunctionSig.cpp
-  PDBSymbolTypeManaged.cpp
-  PDBSymbolTypePointer.cpp
-  PDBSymbolTypeTypedef.cpp
-  PDBSymbolTypeUDT.cpp
-  PDBSymbolTypeVTable.cpp
-  PDBSymbolTypeVTableShape.cpp
-  PDBSymbolUnknown.cpp
-  PDBSymbolUsingNamespace.cpp
-  PDBSymDumper.cpp
-  UDTLayout.cpp
-  ${PDB_IMPL_SOURCES}
+add_llvm_component_library(LLVMDebugInfoPDB
+  GenericError.cpp
+  IPDBSourceFile.cpp
+  PDB.cpp
+  PDBContext.cpp
+  PDBExtras.cpp
+  PDBInterfaceAnchors.cpp
+  PDBSymbol.cpp
+  PDBSymbolAnnotation.cpp
+  PDBSymbolBlock.cpp
+  PDBSymbolCompiland.cpp
+  PDBSymbolCompilandDetails.cpp
+  PDBSymbolCompilandEnv.cpp
+  PDBSymbolCustom.cpp
+  PDBSymbolData.cpp
+  PDBSymbolExe.cpp
+  PDBSymbolFunc.cpp
+  PDBSymbolFuncDebugEnd.cpp
+  PDBSymbolFuncDebugStart.cpp
+  PDBSymbolLabel.cpp
+  PDBSymbolPublicSymbol.cpp
+  PDBSymbolThunk.cpp
+  PDBSymbolTypeArray.cpp
+  PDBSymbolTypeBaseClass.cpp
+  PDBSymbolTypeBuiltin.cpp
+  PDBSymbolTypeCustom.cpp
+  PDBSymbolTypeDimension.cpp
+  PDBSymbolTypeEnum.cpp
+  PDBSymbolTypeFriend.cpp
+  PDBSymbolTypeFunctionArg.cpp
+  PDBSymbolTypeFunctionSig.cpp
+  PDBSymbolTypeManaged.cpp
+  PDBSymbolTypePointer.cpp
+  PDBSymbolTypeTypedef.cpp
+  PDBSymbolTypeUDT.cpp
+  PDBSymbolTypeVTable.cpp
+  PDBSymbolTypeVTableShape.cpp
+  PDBSymbolUnknown.cpp
+  PDBSymbolUsingNamespace.cpp
+  PDBSymDumper.cpp
+  UDTLayout.cpp
+  ${PDB_IMPL_SOURCES}
 
-  ADDITIONAL_HEADER_DIRS
-  ${LIBPDB_ADDITIONAL_HEADER_DIRS}
+  ADDITIONAL_HEADER_DIRS
+  ${LIBPDB_ADDITIONAL_HEADER_DIRS}
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Object
-  Support
-  DebugInfoCodeView
-  DebugInfoMSF
-  )
+  LINK_COMPONENTS
+  BinaryFormat
+  Object
+  Support
+  DebugInfoCodeView
+  DebugInfoMSF
+  )
 
-target_link_libraries(LLVMDebugInfoPDB INTERFACE "${LIBPDB_ADDITIONAL_LIBRARIES}")
+target_link_libraries(LLVMDebugInfoPDB INTERFACE "${LIBPDB_ADDITIONAL_LIBRARIES}")
diff --git a/llvm/lib/DebugInfo/PDB/PDBContext.cpp b/llvm/lib/DebugInfo/PDB/PDBContext.cpp
index e600fb7385f1335..256dbc00a8f4499 100644
--- a/llvm/lib/DebugInfo/PDB/PDBContext.cpp
+++ b/llvm/lib/DebugInfo/PDB/PDBContext.cpp
@@ -30,7 +30,7 @@ PDBContext::PDBContext(const COFFObjectFile &Object,
     Session->setLoadAddress(ImageBase.get());
 }
 
-void PDBContext::dump(raw_ostream &OS, DIDumpOptions DumpOpts){}
+void PDBContext::dump(raw_ostream &OS, DIDumpOptions DumpOpts) {}
 
 DILineInfo PDBContext::getLineInfoForAddress(object::SectionedAddress Address,
                                              DILineInfoSpecifier Specifier) {
diff --git a/llvm/lib/DebugInfo/PDB/PDBExtras.cpp b/llvm/lib/DebugInfo/PDB/PDBExtras.cpp
index 2b318bf1c6488f5..62db9b4096af095 100644
--- a/llvm/lib/DebugInfo/PDB/PDBExtras.cpp
+++ b/llvm/lib/DebugInfo/PDB/PDBExtras.cpp
@@ -34,8 +34,8 @@ raw_ostream &llvm::pdb::operator<<(raw_ostream &OS,
     CASE_OUTPUT_ENUM_CLASS_NAME(PDB_VariantType, UInt16, OS)
     CASE_OUTPUT_ENUM_CLASS_NAME(PDB_VariantType, UInt32, OS)
     CASE_OUTPUT_ENUM_CLASS_NAME(PDB_VariantType, UInt64, OS)
-    default:
-      OS << "Unknown";
+  default:
+    OS << "Unknown";
   }
   return OS;
 }
@@ -72,30 +72,30 @@ raw_ostream &llvm::pdb::operator<<(raw_ostream &OS,
                                    const PDB_CallingConv &Conv) {
   OS << "__";
   switch (Conv) {
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearC      , "cdecl", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarC       , "cdecl", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearPascal , "pascal", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarPascal  , "pascal", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearFast   , "fastcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarFast    , "fastcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearC, "cdecl", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarC, "cdecl", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearPascal, "pascal", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarPascal, "pascal", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearFast, "fastcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarFast, "fastcall", OS)
     CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearStdCall, "stdcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarStdCall , "stdcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarStdCall, "stdcall", OS)
     CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearSysCall, "syscall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarSysCall , "syscall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ThisCall   , "thiscall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, MipsCall   , "mipscall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, Generic    , "genericcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, AlphaCall  , "alphacall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, PpcCall    , "ppccall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, SHCall     , "superhcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ArmCall    , "armcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, AM33Call   , "am33call", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, TriCall    , "tricall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, SH5Call    , "sh5call", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, M32RCall   , "m32rcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ClrCall    , "clrcall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, Inline     , "inlinecall", OS)
-    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearVector , "vectorcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, FarSysCall, "syscall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ThisCall, "thiscall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, MipsCall, "mipscall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, Generic, "genericcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, AlphaCall, "alphacall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, PpcCall, "ppccall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, SHCall, "superhcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ArmCall, "armcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, AM33Call, "am33call", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, TriCall, "tricall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, SH5Call, "sh5call", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, M32RCall, "m32rcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, ClrCall, "clrcall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, Inline, "inlinecall", OS)
+    CASE_OUTPUT_ENUM_CLASS_STR(PDB_CallingConv, NearVector, "vectorcall", OS)
   }
   return OS;
 }
@@ -354,44 +354,44 @@ raw_ostream &llvm::pdb::dumpPDBSourceCompression(raw_ostream &OS,
 
 raw_ostream &llvm::pdb::operator<<(raw_ostream &OS, const Variant &Value) {
   switch (Value.Type) {
-    case PDB_VariantType::Bool:
-      OS << (Value.Value.Bool ? "true" : "false");
-      break;
-    case PDB_VariantType::Double:
-      OS << Value.Value.Double;
-      break;
-    case PDB_VariantType::Int16:
-      OS << Value.Value.Int16;
-      break;
-    case PDB_VariantType::Int32:
-      OS << Value.Value.Int32;
-      break;
-    case PDB_VariantType::Int64:
-      OS << Value.Value.Int64;
-      break;
-    case PDB_VariantType::Int8:
-      OS << static_cast<int>(Value.Value.Int8);
-      break;
-    case PDB_VariantType::Single:
-      OS << Value.Value.Single;
-      break;
-    case PDB_VariantType::UInt16:
-      OS << Value.Value.UInt16;
-      break;
-    case PDB_VariantType::UInt32:
-      OS << Value.Value.UInt32;
-      break;
-    case PDB_VariantType::UInt64:
-      OS << Value.Value.UInt64;
-      break;
-    case PDB_VariantType::UInt8:
-      OS << static_cast<unsigned>(Value.Value.UInt8);
-      break;
-    case PDB_VariantType::String:
-      OS << Value.Value.String;
-      break;
-    default:
-      OS << Value.Type;
+  case PDB_VariantType::Bool:
+    OS << (Value.Value.Bool ? "true" : "false");
+    break;
+  case PDB_VariantType::Double:
+    OS << Value.Value.Double;
+    break;
+  case PDB_VariantType::Int16:
+    OS << Value.Value.Int16;
+    break;
+  case PDB_VariantType::Int32:
+    OS << Value.Value.Int32;
+    break;
+  case PDB_VariantType::Int64:
+    OS << Value.Value.Int64;
+    break;
+  case PDB_VariantType::Int8:
+    OS << static_cast<int>(Value.Value.Int8);
+    break;
+  case PDB_VariantType::Single:
+    OS << Value.Value.Single;
+    break;
+  case PDB_VariantType::UInt16:
+    OS << Value.Value.UInt16;
+    break;
+  case PDB_VariantType::UInt32:
+    OS << Value.Value.UInt32;
+    break;
+  case PDB_VariantType::UInt64:
+    OS << Value.Value.UInt64;
+    break;
+  case PDB_VariantType::UInt8:
+    OS << static_cast<unsigned>(Value.Value.UInt8);
+    break;
+  case PDB_VariantType::String:
+    OS << Value.Value.String;
+    break;
+  default:
+    OS << Value.Type;
   }
   return OS;
 }
diff --git a/llvm/lib/DebugInfo/PDB/PDBSymbolFunc.cpp b/llvm/lib/DebugInfo/PDB/PDBSymbolFunc.cpp
index 59d57e83fc10380..ab718e01e029c1a 100644
--- a/llvm/lib/DebugInfo/PDB/PDBSymbolFunc.cpp
+++ b/llvm/lib/DebugInfo/PDB/PDBSymbolFunc.cpp
@@ -77,7 +77,7 @@ class FunctionArgEnumerator : public IPDBEnumChildren<PDBSymbolData> {
   ArgListType Args;
   ArgListType::const_iterator CurIter;
 };
-}
+} // namespace
 
 std::unique_ptr<IPDBEnumChildren<PDBSymbolData>>
 PDBSymbolFunc::getArguments() const {
diff --git a/llvm/lib/DebugInfo/PDB/PDBSymbolTypeFunctionSig.cpp b/llvm/lib/DebugInfo/PDB/PDBSymbolTypeFunctionSig.cpp
index 1373615522eb249..0a6f762ae7a80d4 100644
--- a/llvm/lib/DebugInfo/PDB/PDBSymbolTypeFunctionSig.cpp
+++ b/llvm/lib/DebugInfo/PDB/PDBSymbolTypeFunctionSig.cpp
@@ -59,7 +59,7 @@ class FunctionArgEnumerator : public IPDBEnumSymbols {
   const IPDBSession &Session;
   std::unique_ptr<ArgEnumeratorType> Enumerator;
 };
-}
+} // namespace
 
 std::unique_ptr<IPDBEnumSymbols>
 PDBSymbolTypeFunctionSig::getArguments() const {
diff --git a/llvm/lib/DebugInfo/PDB/UDTLayout.cpp b/llvm/lib/DebugInfo/PDB/UDTLayout.cpp
index 679881054f9da96..d4a979467637aca 100644
--- a/llvm/lib/DebugInfo/PDB/UDTLayout.cpp
+++ b/llvm/lib/DebugInfo/PDB/UDTLayout.cpp
@@ -83,8 +83,7 @@ VBPtrLayoutItem::VBPtrLayoutItem(const UDTLayoutBase &Parent,
                                  std::unique_ptr<PDBSymbolTypeBuiltin> Sym,
                                  uint32_t Offset, uint32_t Size)
     : LayoutItemBase(&Parent, Sym.get(), "<vbptr>", Offset, Size, false),
-      Type(std::move(Sym)) {
-}
+      Type(std::move(Sym)) {}
 
 const PDBSymbolData &DataMemberLayoutItem::getDataMember() {
   return *cast<PDBSymbolData>(Symbol);
@@ -182,8 +181,7 @@ void UDTLayoutBase::initializeChildren(const PDBSymbol &Sym) {
         VirtualBaseSyms.push_back(std::move(Base));
       else
         Bases.push_back(std::move(Base));
-    }
-    else if (auto Data = unique_dyn_cast<PDBSymbolData>(Child)) {
+    } else if (auto Data = unique_dyn_cast<PDBSymbolData>(Child)) {
       if (Data->getDataKind() == PDB_DataKind::Member)
         Members.push_back(std::move(Data));
       else
@@ -209,7 +207,7 @@ void UDTLayoutBase::initializeChildren(const PDBSymbol &Sym) {
     uint32_t Offset = Base->getOffset();
     // Non-virtual bases never get elided.
     auto BL = std::make_unique<BaseClassLayout>(*this, Offset, false,
-                                                 std::move(Base));
+                                                std::move(Base));
 
     AllBases.push_back(BL.get());
     addChildToLayout(std::move(BL));
@@ -240,7 +238,7 @@ void UDTLayoutBase::initializeChildren(const PDBSymbol &Sym) {
     if (!hasVBPtrAtOffset(VBPO)) {
       if (auto VBP = VB->getRawSymbol().getVirtualBaseTableType()) {
         auto VBPL = std::make_unique<VBPtrLayoutItem>(*this, std::move(VBP),
-                                                       VBPO, VBP->getLength());
+                                                      VBPO, VBP->getLength());
         VBPtr = VBPL.get();
         addChildToLayout(std::move(VBPL));
       }
diff --git a/llvm/lib/DebugInfo/Symbolize/CMakeLists.txt b/llvm/lib/DebugInfo/Symbolize/CMakeLists.txt
index 29f62bf6156fc6d..a6042ccb09abf04 100644
--- a/llvm/lib/DebugInfo/Symbolize/CMakeLists.txt
+++ b/llvm/lib/DebugInfo/Symbolize/CMakeLists.txt
@@ -1,19 +1,10 @@
-add_llvm_component_library(LLVMSymbolize
-  DIPrinter.cpp
-  Markup.cpp
-  MarkupFilter.cpp
-  SymbolizableObjectFile.cpp
-  Symbolize.cpp
+add_llvm_component_library(LLVMSymbolize
+  DIPrinter.cpp
+  Markup.cpp
+  MarkupFilter.cpp
+  SymbolizableObjectFile.cpp
+  Symbolize.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/Symbolize
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/DebugInfo/Symbolize
 
-  LINK_COMPONENTS
-  DebugInfoDWARF
-  DebugInfoPDB
-  DebugInfoBTF
-  Object
-  Support
-  Demangle
-  TargetParser
-  )
+  LINK_COMPONENTS
+  DebugInfoDWARF
+  DebugInfoPDB
+  DebugInfoBTF
+  Object
+  Support
+  Demangle
+  TargetParser
+  )
diff --git a/llvm/lib/DebugInfo/Symbolize/SymbolizableObjectFile.cpp b/llvm/lib/DebugInfo/Symbolize/SymbolizableObjectFile.cpp
index 6b8068a531c05fa..fdde9f621b83fb9 100644
--- a/llvm/lib/DebugInfo/Symbolize/SymbolizableObjectFile.cpp
+++ b/llvm/lib/DebugInfo/Symbolize/SymbolizableObjectFile.cpp
@@ -99,9 +99,7 @@ struct OffsetNamePair {
   uint32_t Offset;
   StringRef Name;
 
-  bool operator<(const OffsetNamePair &R) const {
-    return Offset < R.Offset;
-  }
+  bool operator<(const OffsetNamePair &R) const { return Offset < R.Offset; }
 };
 
 } // end anonymous namespace
@@ -222,7 +220,8 @@ Error SymbolizableObjectFile::addSymbol(const SymbolRef &Symbol,
 // Return true if this is a 32-bit x86 PE COFF module.
 bool SymbolizableObjectFile::isWin32Module() const {
   auto *CoffObject = dyn_cast<COFFObjectFile>(Module);
-  return CoffObject && CoffObject->getMachine() == COFF::IMAGE_FILE_MACHINE_I386;
+  return CoffObject &&
+         CoffObject->getMachine() == COFF::IMAGE_FILE_MACHINE_I386;
 }
 
 uint64_t SymbolizableObjectFile::getModulePreferredBase() const {
diff --git a/llvm/lib/Debuginfod/CMakeLists.txt b/llvm/lib/Debuginfod/CMakeLists.txt
index b1329bd2d077e48..9ed9c73fe56ad92 100644
--- a/llvm/lib/Debuginfod/CMakeLists.txt
+++ b/llvm/lib/Debuginfod/CMakeLists.txt
@@ -1,36 +1,26 @@
-# Link LibCURL if the user wants it
+# Link LibCURL if the user wants it
 if (LLVM_ENABLE_CURL)
-  set(imported_libs CURL::libcurl)
-endif()
+  set(imported_libs CURL::libcurl)
+endif()
 
-# Link cpp-httplib if the user wants it
-if (LLVM_ENABLE_HTTPLIB)
-  set(imported_libs ${imported_libs} httplib::httplib)
-endif()
+# Link cpp-httplib if the user wants it
+if (LLVM_ENABLE_HTTPLIB)
+  set(imported_libs ${imported_libs} httplib::httplib)
+endif()
 
-# Make sure pthread is linked if this is a unix host
-if (CMAKE_HOST_UNIX)
-  set(imported_libs ${imported_libs} ${LLVM_PTHREAD_LIB})
-endif()
+# Make sure pthread is linked if this is a unix host
+if (CMAKE_HOST_UNIX)
+  set(imported_libs ${imported_libs} ${LLVM_PTHREAD_LIB})
+endif()
 
-# Note: This isn't a component, since that could potentially add a libcurl
-# dependency to libLLVM.
-add_llvm_library(LLVMDebuginfod
-  BuildIDFetcher.cpp
-  Debuginfod.cpp
-  HTTPClient.cpp
-  HTTPServer.cpp
+# Note: This isn't a component, since that could potentially add a libcurl
+# dependency to libLLVM.
+add_llvm_library(LLVMDebuginfod
+  BuildIDFetcher.cpp
+  Debuginfod.cpp
+  HTTPClient.cpp
+  HTTPServer.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Debuginfod
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Debuginfod
 
-  LINK_LIBS
-  ${imported_libs}
+  LINK_LIBS
+  ${imported_libs}
 
-  LINK_COMPONENTS
-  Support
-  Symbolize
-  DebugInfoDWARF
-  BinaryFormat
-  Object
-  )
+  LINK_COMPONENTS
+  Support
+  Symbolize
+  DebugInfoDWARF
+  BinaryFormat
+  Object
+  )
diff --git a/llvm/lib/Debuginfod/HTTPServer.cpp b/llvm/lib/Debuginfod/HTTPServer.cpp
index 1264353ce4b33a9..8aa10a74a3903ac 100644
--- a/llvm/lib/Debuginfod/HTTPServer.cpp
+++ b/llvm/lib/Debuginfod/HTTPServer.cpp
@@ -187,12 +187,8 @@ Expected<unsigned> HTTPServer::bind(const char *HostInterface) {
   return make_error<HTTPServerError>("no httplib");
 }
 
-Error HTTPServer::listen() {
-  return make_error<HTTPServerError>("no httplib");
-}
+Error HTTPServer::listen() { return make_error<HTTPServerError>("no httplib"); }
 
-void HTTPServer::stop() {
-  llvm_unreachable("no httplib");
-}
+void HTTPServer::stop() { llvm_unreachable("no httplib"); }
 
 #endif // LLVM_ENABLE_HTTPLIB
diff --git a/llvm/lib/Demangle/CMakeLists.txt b/llvm/lib/Demangle/CMakeLists.txt
index eb7d212a0244932..33364df78d93df5 100644
--- a/llvm/lib/Demangle/CMakeLists.txt
+++ b/llvm/lib/Demangle/CMakeLists.txt
@@ -1,12 +1,7 @@
-add_llvm_component_library(LLVMDemangle
-  Demangle.cpp
-  ItaniumDemangle.cpp
-  MicrosoftDemangle.cpp
-  MicrosoftDemangleNodes.cpp
-  RustDemangle.cpp
-  DLangDemangle.cpp
+add_llvm_component_library(LLVMDemangle
+  Demangle.cpp
+  ItaniumDemangle.cpp
+  MicrosoftDemangle.cpp
+  MicrosoftDemangleNodes.cpp
+  RustDemangle.cpp
+  DLangDemangle.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/Demangle"
+  ADDITIONAL_HEADER_DIRS
+  "${LLVM_MAIN_INCLUDE_DIR}/llvm/Demangle"
 
 )
diff --git a/llvm/lib/Demangle/ItaniumDemangle.cpp b/llvm/lib/Demangle/ItaniumDemangle.cpp
index e3f208f0adf8dcc..ed286bd50d2a240 100644
--- a/llvm/lib/Demangle/ItaniumDemangle.cpp
+++ b/llvm/lib/Demangle/ItaniumDemangle.cpp
@@ -10,8 +10,8 @@
 // file does not yet support:
 //   - C++ modules TS
 
-#include "llvm/Demangle/Demangle.h"
 #include "llvm/Demangle/ItaniumDemangle.h"
+#include "llvm/Demangle/Demangle.h"
 
 #include <cassert>
 #include <cctype>
@@ -65,13 +65,13 @@ struct DumpVisitor {
   unsigned Depth = 0;
   bool PendingNewline = false;
 
-  template<typename NodeT> static constexpr bool wantsNewline(const NodeT *) {
+  template <typename NodeT> static constexpr bool wantsNewline(const NodeT *) {
     return true;
   }
   static bool wantsNewline(NodeArray A) { return !A.empty(); }
   static constexpr bool wantsNewline(...) { return false; }
 
-  template<typename ...Ts> static bool anyWantNewline(Ts ...Vs) {
+  template <typename... Ts> static bool anyWantNewline(Ts... Vs) {
     for (bool B : {wantsNewline(Vs)...})
       if (B)
         return true;
@@ -133,17 +133,22 @@ struct DumpVisitor {
     }
   }
   void print(Qualifiers Qs) {
-    if (!Qs) return printStr("QualNone");
-    struct QualName { Qualifiers Q; const char *Name; } Names[] = {
-      {QualConst, "QualConst"},
-      {QualVolatile, "QualVolatile"},
-      {QualRestrict, "QualRestrict"},
+    if (!Qs)
+      return printStr("QualNone");
+    struct QualName {
+      Qualifiers Q;
+      const char *Name;
+    } Names[] = {
+        {QualConst, "QualConst"},
+        {QualVolatile, "QualVolatile"},
+        {QualRestrict, "QualRestrict"},
     };
     for (QualName Name : Names) {
       if (Qs & Name.Q) {
         printStr(Name.Name);
         Qs = Qualifiers(Qs & ~Name.Q);
-        if (Qs) printStr(" | ");
+        if (Qs)
+          printStr(" | ");
       }
     }
   }
@@ -225,13 +230,13 @@ struct DumpVisitor {
     PendingNewline = false;
   }
 
-  template<typename T> void printWithPendingNewline(T V) {
+  template <typename T> void printWithPendingNewline(T V) {
     print(V);
     if (wantsNewline(V))
       PendingNewline = true;
   }
 
-  template<typename T> void printWithComma(T V) {
+  template <typename T> void printWithComma(T V) {
     if (PendingNewline || wantsNewline(V)) {
       printStr(",");
       newLine();
@@ -245,16 +250,16 @@ struct DumpVisitor {
   struct CtorArgPrinter {
     DumpVisitor &Visitor;
 
-    template<typename T, typename ...Rest> void operator()(T V, Rest ...Vs) {
+    template <typename T, typename... Rest> void operator()(T V, Rest... Vs) {
       if (Visitor.anyWantNewline(V, Vs...))
         Visitor.newLine();
       Visitor.printWithPendingNewline(V);
-      int PrintInOrder[] = { (Visitor.printWithComma(Vs), 0)..., 0 };
+      int PrintInOrder[] = {(Visitor.printWithComma(Vs), 0)..., 0};
       (void)PrintInOrder;
     }
   };
 
-  template<typename NodeT> void operator()(const NodeT *Node) {
+  template <typename NodeT> void operator()(const NodeT *Node) {
     Depth += 2;
     fprintf(stderr, "%s(", itanium_demangle::NodeKind<NodeT>::name());
     Node->match(CtorArgPrinter{*this});
@@ -276,7 +281,7 @@ struct DumpVisitor {
     Depth -= 2;
   }
 };
-}
+} // namespace
 
 void itanium_demangle::Node::dump() const {
   DumpVisitor V;
@@ -288,7 +293,7 @@ void itanium_demangle::Node::dump() const {
 namespace {
 class BumpPointerAllocator {
   struct BlockMeta {
-    BlockMeta* Next;
+    BlockMeta *Next;
     size_t Current;
   };
 
@@ -296,29 +301,29 @@ class BumpPointerAllocator {
   static constexpr size_t UsableAllocSize = AllocSize - sizeof(BlockMeta);
 
   alignas(long double) char InitialBuffer[AllocSize];
-  BlockMeta* BlockList = nullptr;
+  BlockMeta *BlockList = nullptr;
 
   void grow() {
-    char* NewMeta = static_cast<char *>(std::malloc(AllocSize));
+    char *NewMeta = static_cast<char *>(std::malloc(AllocSize));
     if (NewMeta == nullptr)
       std::terminate();
     BlockList = new (NewMeta) BlockMeta{BlockList, 0};
   }
 
-  void* allocateMassive(size_t NBytes) {
+  void *allocateMassive(size_t NBytes) {
     NBytes += sizeof(BlockMeta);
-    BlockMeta* NewMeta = reinterpret_cast<BlockMeta*>(std::malloc(NBytes));
+    BlockMeta *NewMeta = reinterpret_cast<BlockMeta *>(std::malloc(NBytes));
     if (NewMeta == nullptr)
       std::terminate();
     BlockList->Next = new (NewMeta) BlockMeta{BlockList->Next, 0};
-    return static_cast<void*>(NewMeta + 1);
+    return static_cast<void *>(NewMeta + 1);
   }
 
 public:
   BumpPointerAllocator()
-      : BlockList(new (InitialBuffer) BlockMeta{nullptr, 0}) {}
+      : BlockList(new(InitialBuffer) BlockMeta{nullptr, 0}) {}
 
-  void* allocate(size_t N) {
+  void *allocate(size_t N) {
     N = (N + 15u) & ~15u;
     if (N + BlockList->Current >= UsableAllocSize) {
       if (N > UsableAllocSize)
@@ -326,15 +331,15 @@ class BumpPointerAllocator {
       grow();
     }
     BlockList->Current += N;
-    return static_cast<void*>(reinterpret_cast<char*>(BlockList + 1) +
-                              BlockList->Current - N);
+    return static_cast<void *>(reinterpret_cast<char *>(BlockList + 1) +
+                               BlockList->Current - N);
   }
 
   void reset() {
     while (BlockList) {
-      BlockMeta* Tmp = BlockList;
+      BlockMeta *Tmp = BlockList;
       BlockList = BlockList->Next;
-      if (reinterpret_cast<char*>(Tmp) != InitialBuffer)
+      if (reinterpret_cast<char *>(Tmp) != InitialBuffer)
         std::free(Tmp);
     }
     BlockList = new (InitialBuffer) BlockMeta{nullptr, 0};
@@ -349,16 +354,15 @@ class DefaultAllocator {
 public:
   void reset() { Alloc.reset(); }
 
-  template<typename T, typename ...Args> T *makeNode(Args &&...args) {
-    return new (Alloc.allocate(sizeof(T)))
-        T(std::forward<Args>(args)...);
+  template <typename T, typename... Args> T *makeNode(Args &&...args) {
+    return new (Alloc.allocate(sizeof(T))) T(std::forward<Args>(args)...);
   }
 
   void *allocateNodeArray(size_t sz) {
     return Alloc.allocate(sizeof(Node *) * sz);
   }
 };
-}  // unnamed namespace
+} // unnamed namespace
 
 //===----------------------------------------------------------------------===//
 // Code beyond this point should not be synchronized with libc++abi.
@@ -396,8 +400,8 @@ ItaniumPartialDemangler::ItaniumPartialDemangler(
   Other.Context = Other.RootNode = nullptr;
 }
 
-ItaniumPartialDemangler &ItaniumPartialDemangler::
-operator=(ItaniumPartialDemangler &&Other) {
+ItaniumPartialDemangler &
+ItaniumPartialDemangler::operator=(ItaniumPartialDemangler &&Other) {
   std::swap(RootNode, Other.RootNode);
   std::swap(Context, Other.Context);
   return *this;
@@ -458,7 +462,7 @@ char *ItaniumPartialDemangler::getFunctionDeclContextName(char *Buf,
 
   OutputBuffer OB(Buf, N);
 
- KeepGoingLocalFunction:
+KeepGoingLocalFunction:
   while (true) {
     if (Name->getKind() == Node::KAbiTagAttr) {
       Name = static_cast<const AbiTagAttr *>(Name)->Base;
@@ -518,8 +522,8 @@ char *ItaniumPartialDemangler::getFunctionParameters(char *Buf,
   return OB.getBuffer();
 }
 
-char *ItaniumPartialDemangler::getFunctionReturnType(
-    char *Buf, size_t *N) const {
+char *ItaniumPartialDemangler::getFunctionReturnType(char *Buf,
+                                                     size_t *N) const {
   if (!isFunction())
     return nullptr;
 
diff --git a/llvm/lib/Demangle/MicrosoftDemangle.cpp b/llvm/lib/Demangle/MicrosoftDemangle.cpp
index cd7ff40d63a4928..9549c7730cc91aa 100644
--- a/llvm/lib/Demangle/MicrosoftDemangle.cpp
+++ b/llvm/lib/Demangle/MicrosoftDemangle.cpp
@@ -1146,7 +1146,7 @@ static void writeHexDigit(char *Buffer, uint8_t Digit) {
 }
 
 static void outputHex(OutputBuffer &OB, unsigned C) {
-  assert (C != 0);
+  assert(C != 0);
 
   // It's easier to do the math if we can work from right to left, but we need
   // to print the numbers from left to right.  So render this into a temporary
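The truncated comment above describes `outputHex`'s strategy: hex digits are easiest to compute from the least-significant end, so they are written right-to-left into a scratch buffer and the valid suffix is emitted left-to-right. A hedged sketch of that idea (`toHex` and the uppercase digit set are illustrative choices here, not the demangler's actual helpers):

```cpp
#include <cassert>
#include <string>

// Illustrative right-to-left hex rendering: fill a scratch buffer from the
// end, then return only the suffix that was actually written.
static std::string toHex(unsigned C) {
  assert(C != 0); // mirrors the assert in the patch
  char Temp[16];
  char *Pos = Temp + sizeof(Temp); // fill from the right
  while (C != 0) {
    *--Pos = "0123456789ABCDEF"[C % 16];
    C /= 16;
  }
  return std::string(Pos, Temp + sizeof(Temp));
}
```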
diff --git a/llvm/lib/ExecutionEngine/CMakeLists.txt b/llvm/lib/ExecutionEngine/CMakeLists.txt
index af6be62dd525312..68b8b095d0ec3d5 100644
--- a/llvm/lib/ExecutionEngine/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/CMakeLists.txt
@@ -1,47 +1,47 @@


-add_llvm_component_library(LLVMExecutionEngine
-  ExecutionEngine.cpp
-  ExecutionEngineBindings.cpp
-  GDBRegistrationListener.cpp
-  SectionMemoryManager.cpp
-  TargetSelect.cpp
-
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine
-
-  DEPENDS
-  intrinsics_gen
-
-  LINK_COMPONENTS
-  Core
-  MC
-  Object
-  OrcTargetProcess
-  RuntimeDyld
-  Support
-  Target
-  TargetParser
-  )
-
-if(BUILD_SHARED_LIBS)
-  target_link_libraries(LLVMExecutionEngine PUBLIC LLVMRuntimeDyld)
-endif()
-
-add_subdirectory(Interpreter)
-add_subdirectory(JITLink)
-add_subdirectory(MCJIT)
-add_subdirectory(Orc)
-add_subdirectory(RuntimeDyld)
-
-if( LLVM_USE_OPROFILE )
-  add_subdirectory(OProfileJIT)
-endif( LLVM_USE_OPROFILE )
-
-if( LLVM_USE_INTEL_JITEVENTS )
-  add_subdirectory(IntelJITEvents)
-endif( LLVM_USE_INTEL_JITEVENTS )
-
-if( LLVM_USE_PERF )
-  add_subdirectory(PerfJITEvents)
-endif( LLVM_USE_PERF )
+add_llvm_component_library(LLVMExecutionEngine
+  ExecutionEngine.cpp
+  ExecutionEngineBindings.cpp
+  GDBRegistrationListener.cpp
+  SectionMemoryManager.cpp
+  TargetSelect.cpp
+
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine
+
+  DEPENDS
+  intrinsics_gen
+
+  LINK_COMPONENTS
+  Core
+  MC
+  Object
+  OrcTargetProcess
+  RuntimeDyld
+  Support
+  Target
+  TargetParser
+  )
+
+if(BUILD_SHARED_LIBS)
+  target_link_libraries(LLVMExecutionEngine PUBLIC LLVMRuntimeDyld)
+endif()
+
+add_subdirectory(Interpreter)
+add_subdirectory(JITLink)
+add_subdirectory(MCJIT)
+add_subdirectory(Orc)
+add_subdirectory(RuntimeDyld)
+
+if(LLVM_USE_OPROFILE)
+  add_subdirectory(OProfileJIT)
+endif()
+
+if(LLVM_USE_INTEL_JITEVENTS)
+  add_subdirectory(IntelJITEvents)
+endif()
+
+if(LLVM_USE_PERF)
+  add_subdirectory(PerfJITEvents)
+endif()
diff --git a/llvm/lib/ExecutionEngine/ExecutionEngine.cpp b/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
index 98d7dcb8ec12bd9..76cc9cfa60f9146 100644
--- a/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
+++ b/llvm/lib/ExecutionEngine/ExecutionEngine.cpp
@@ -45,7 +45,7 @@ using namespace llvm;
 #define DEBUG_TYPE "jit"
 
 STATISTIC(NumInitBytes, "Number of bytes of global vars initialized");
-STATISTIC(NumGlobals  , "Number of global vars initialized");
+STATISTIC(NumGlobals, "Number of global vars initialized");
 
 ExecutionEngine *(*ExecutionEngine::MCJITCtor)(
     std::unique_ptr<Module> M, std::string *ErrorStr,
@@ -53,16 +53,16 @@ ExecutionEngine *(*ExecutionEngine::MCJITCtor)(
     std::shared_ptr<LegacyJITSymbolResolver> Resolver,
     std::unique_ptr<TargetMachine> TM) = nullptr;
 
-ExecutionEngine *(*ExecutionEngine::InterpCtor)(std::unique_ptr<Module> M,
-                                                std::string *ErrorStr) =nullptr;
+ExecutionEngine *(*ExecutionEngine::InterpCtor)(
+    std::unique_ptr<Module> M, std::string *ErrorStr) = nullptr;
 
 void JITEventListener::anchor() {}
 
 void ObjectCache::anchor() {}
 
 void ExecutionEngine::Init(std::unique_ptr<Module> M) {
-  CompilingLazily         = false;
-  GVCompilationDisabled   = false;
+  CompilingLazily = false;
+  GVCompilationDisabled = false;
   SymbolSearchingDisabled = false;
 
   // IR module verification is enabled by default in debug builds, and disabled
@@ -87,27 +87,25 @@ ExecutionEngine::ExecutionEngine(DataLayout DL, std::unique_ptr<Module> M)
   Init(std::move(M));
 }
 
-ExecutionEngine::~ExecutionEngine() {
-  clearAllGlobalMappings();
-}
+ExecutionEngine::~ExecutionEngine() { clearAllGlobalMappings(); }
 
 namespace {
 /// Helper class which uses a value handle to automatically delete the
 /// memory block when the GlobalVariable is destroyed.
 class GVMemoryBlock final : public CallbackVH {
   GVMemoryBlock(const GlobalVariable *GV)
-    : CallbackVH(const_cast<GlobalVariable*>(GV)) {}
+      : CallbackVH(const_cast<GlobalVariable *>(GV)) {}
 
 public:
   /// Returns the address the GlobalVariable should be written into.  The
   /// GVMemoryBlock object prefixes that.
-  static char *Create(const GlobalVariable *GV, const DataLayout& TD) {
+  static char *Create(const GlobalVariable *GV, const DataLayout &TD) {
     Type *ElTy = GV->getValueType();
     size_t GVSize = (size_t)TD.getTypeAllocSize(ElTy);
     void *RawMemory = ::operator new(
         alignTo(sizeof(GVMemoryBlock), TD.getPreferredAlign(GV)) + GVSize);
-    new(RawMemory) GVMemoryBlock(GV);
-    return static_cast<char*>(RawMemory) + sizeof(GVMemoryBlock);
+    new (RawMemory) GVMemoryBlock(GV);
+    return static_cast<char *>(RawMemory) + sizeof(GVMemoryBlock);
   }
 
   void deleted() override {
@@ -118,7 +116,7 @@ class GVMemoryBlock final : public CallbackVH {
     ::operator delete(this);
   }
 };
-}  // anonymous namespace
+} // anonymous namespace
 
 char *ExecutionEngine::getMemoryForGV(const GlobalVariable *GV) {
   return GVMemoryBlock::Create(GV, getDataLayout());
@@ -128,8 +126,8 @@ void ExecutionEngine::addObjectFile(std::unique_ptr<object::ObjectFile> O) {
   llvm_unreachable("ExecutionEngine subclass doesn't implement addObjectFile.");
 }
 
-void
-ExecutionEngine::addObjectFile(object::OwningBinary<object::ObjectFile> O) {
+void ExecutionEngine::addObjectFile(
+    object::OwningBinary<object::ObjectFile> O) {
   llvm_unreachable("ExecutionEngine subclass doesn't implement addObjectFile.");
 }
 
@@ -159,9 +157,10 @@ Function *ExecutionEngine::FindFunctionNamed(StringRef FnName) {
   return nullptr;
 }
 
-GlobalVariable *ExecutionEngine::FindGlobalVariableNamed(StringRef Name, bool AllowInternal) {
+GlobalVariable *ExecutionEngine::FindGlobalVariableNamed(StringRef Name,
+                                                         bool AllowInternal) {
   for (unsigned i = 0, e = Modules.size(); i != e; ++i) {
-    GlobalVariable *GV = Modules[i]->getGlobalVariable(Name,AllowInternal);
+    GlobalVariable *GV = Modules[i]->getGlobalVariable(Name, AllowInternal);
     if (GV && !GV->isDeclaration())
       return GV;
   }
@@ -191,10 +190,9 @@ std::string ExecutionEngine::getMangledName(const GlobalValue *GV) {
   std::lock_guard<sys::Mutex> locked(lock);
   SmallString<128> FullName;
 
-  const DataLayout &DL =
-    GV->getParent()->getDataLayout().isDefault()
-      ? getDataLayout()
-      : GV->getParent()->getDataLayout();
+  const DataLayout &DL = GV->getParent()->getDataLayout().isDefault()
+                             ? getDataLayout()
+                             : GV->getParent()->getDataLayout();
 
   Mangler::getNameWithPrefix(FullName, GV->getName(), DL);
   return std::string(FullName.str());
@@ -202,7 +200,7 @@ std::string ExecutionEngine::getMangledName(const GlobalValue *GV) {
 
 void ExecutionEngine::addGlobalMapping(const GlobalValue *GV, void *Addr) {
   std::lock_guard<sys::Mutex> locked(lock);
-  addGlobalMapping(getMangledName(GV), (uint64_t) Addr);
+  addGlobalMapping(getMangledName(GV), (uint64_t)Addr);
 }
 
 void ExecutionEngine::addGlobalMapping(StringRef Name, uint64_t Addr) {
@@ -241,14 +239,13 @@ void ExecutionEngine::clearGlobalMappingsFromModule(Module *M) {
 uint64_t ExecutionEngine::updateGlobalMapping(const GlobalValue *GV,
                                               void *Addr) {
   std::lock_guard<sys::Mutex> locked(lock);
-  return updateGlobalMapping(getMangledName(GV), (uint64_t) Addr);
+  return updateGlobalMapping(getMangledName(GV), (uint64_t)Addr);
 }
 
 uint64_t ExecutionEngine::updateGlobalMapping(StringRef Name, uint64_t Addr) {
   std::lock_guard<sys::Mutex> locked(lock);
 
-  ExecutionEngineState::GlobalAddressMapTy &Map =
-    EEState.getGlobalAddressMap();
+  ExecutionEngineState::GlobalAddressMapTy &Map = EEState.getGlobalAddressMap();
 
   // Deleting from the mapping?
   if (!Addr)
@@ -275,16 +272,15 @@ uint64_t ExecutionEngine::getAddressToGlobalIfAvailable(StringRef S) {
   std::lock_guard<sys::Mutex> locked(lock);
   uint64_t Address = 0;
   ExecutionEngineState::GlobalAddressMapTy::iterator I =
-    EEState.getGlobalAddressMap().find(S);
+      EEState.getGlobalAddressMap().find(S);
   if (I != EEState.getGlobalAddressMap().end())
     Address = I->second;
   return Address;
 }
 
-
 void *ExecutionEngine::getPointerToGlobalIfAvailable(StringRef S) {
   std::lock_guard<sys::Mutex> locked(lock);
-  if (void* Address = (void *) getAddressToGlobalIfAvailable(S))
+  if (void *Address = (void *)getAddressToGlobalIfAvailable(S))
     return Address;
   return nullptr;
 }
@@ -300,8 +296,9 @@ const GlobalValue *ExecutionEngine::getGlobalValueAtAddress(void *Addr) {
   // If we haven't computed the reverse mapping yet, do so first.
   if (EEState.getGlobalAddressReverseMap().empty()) {
     for (ExecutionEngineState::GlobalAddressMapTy::iterator
-           I = EEState.getGlobalAddressMap().begin(),
-           E = EEState.getGlobalAddressMap().end(); I != E; ++I) {
+             I = EEState.getGlobalAddressMap().begin(),
+             E = EEState.getGlobalAddressMap().end();
+         I != E; ++I) {
       StringRef Name = I->first();
       uint64_t Addr = I->second;
       EEState.getGlobalAddressReverseMap().insert(
@@ -310,7 +307,7 @@ const GlobalValue *ExecutionEngine::getGlobalValueAtAddress(void *Addr) {
   }
 
   std::map<uint64_t, std::string>::iterator I =
-    EEState.getGlobalAddressReverseMap().find((uint64_t) Addr);
+      EEState.getGlobalAddressReverseMap().find((uint64_t)Addr);
 
   if (I != EEState.getGlobalAddressReverseMap().end()) {
     StringRef Name = I->second;
@@ -325,41 +322,42 @@ namespace {
 class ArgvArray {
   std::unique_ptr<char[]> Array;
   std::vector<std::unique_ptr<char[]>> Values;
+
 public:
   /// Turn a vector of strings into a nice argv style array of pointers to null
   /// terminated strings.
   void *reset(LLVMContext &C, ExecutionEngine *EE,
               const std::vector<std::string> &InputArgv);
 };
-}  // anonymous namespace
+} // anonymous namespace
 void *ArgvArray::reset(LLVMContext &C, ExecutionEngine *EE,
                        const std::vector<std::string> &InputArgv) {
-  Values.clear();  // Free the old contents.
+  Values.clear(); // Free the old contents.
   Values.reserve(InputArgv.size());
   unsigned PtrSize = EE->getDataLayout().getPointerSize();
-  Array = std::make_unique<char[]>((InputArgv.size()+1)*PtrSize);
+  Array = std::make_unique<char[]>((InputArgv.size() + 1) * PtrSize);
 
   LLVM_DEBUG(dbgs() << "JIT: ARGV = " << (void *)Array.get() << "\n");
   Type *SBytePtr = Type::getInt8PtrTy(C);
 
   for (unsigned i = 0; i != InputArgv.size(); ++i) {
-    unsigned Size = InputArgv[i].size()+1;
+    unsigned Size = InputArgv[i].size() + 1;
     auto Dest = std::make_unique<char[]>(Size);
     LLVM_DEBUG(dbgs() << "JIT: ARGV[" << i << "] = " << (void *)Dest.get()
                       << "\n");
 
     std::copy(InputArgv[i].begin(), InputArgv[i].end(), Dest.get());
-    Dest[Size-1] = 0;
+    Dest[Size - 1] = 0;
 
     // Endian safe: Array[i] = (PointerTy)Dest;
     EE->StoreValueToMemory(PTOGV(Dest.get()),
-                           (GenericValue*)(&Array[i*PtrSize]), SBytePtr);
+                           (GenericValue *)(&Array[i * PtrSize]), SBytePtr);
     Values.push_back(std::move(Dest));
   }
 
   // Null terminate it
   EE->StoreValueToMemory(PTOGV(nullptr),
-                         (GenericValue*)(&Array[InputArgv.size()*PtrSize]),
+                         (GenericValue *)(&Array[InputArgv.size() * PtrSize]),
                          SBytePtr);
   return Array.get();
 }
@@ -373,7 +371,8 @@ void ExecutionEngine::runStaticConstructorsDestructors(Module &module,
   // an old-style (llvmgcc3) static ctor with __main linked in and in use.  If
   // this is the case, don't execute any of the global ctors, __main will do
   // it.
-  if (!GV || GV->isDeclaration() || GV->hasLocalLinkage()) return;
+  if (!GV || GV->isDeclaration() || GV->hasLocalLinkage())
+    return;
 
   // Should be an array of '{ i32, void ()* }' structs.  The first value is
   // the init priority, which we ignore.
@@ -382,11 +381,12 @@ void ExecutionEngine::runStaticConstructorsDestructors(Module &module,
     return;
   for (unsigned i = 0, e = InitList->getNumOperands(); i != e; ++i) {
     ConstantStruct *CS = dyn_cast<ConstantStruct>(InitList->getOperand(i));
-    if (!CS) continue;
+    if (!CS)
+      continue;
 
     Constant *FP = CS->getOperand(1);
     if (FP->isNullValue())
-      continue;  // Found a sentinal value, ignore.
+      continue; // Found a sentinel value, ignore.
 
     // Strip off constant expression casts.
     if (ConstantExpr *CE = dyn_cast<ConstantExpr>(FP))
@@ -414,7 +414,7 @@ void ExecutionEngine::runStaticConstructorsDestructors(bool isDtors) {
 static bool isTargetNullPtr(ExecutionEngine *EE, void *Loc) {
   unsigned PtrSize = EE->getDataLayout().getPointerSize();
   for (unsigned i = 0; i < PtrSize; ++i)
-    if (*(i + (uint8_t*)Loc))
+    if (*(i + (uint8_t *)Loc))
       return false;
   return true;
 }
@@ -422,7 +422,7 @@ static bool isTargetNullPtr(ExecutionEngine *EE, void *Loc) {
 
 int ExecutionEngine::runFunctionAsMain(Function *Fn,
                                        const std::vector<std::string> &argv,
-                                       const char * const * envp) {
+                                       const char *const *envp) {
   std::vector<GenericValue> GVArgs;
   GenericValue GVArgc;
   GVArgc.IntVal = APInt(32, argv.size());
@@ -441,8 +441,7 @@ int ExecutionEngine::runFunctionAsMain(Function *Fn,
     report_fatal_error("Invalid type for second argument of main() supplied");
   if (NumArgs >= 1 && !FTy->getParamType(0)->isIntegerTy(32))
     report_fatal_error("Invalid type for first argument of main() supplied");
-  if (!FTy->getReturnType()->isIntegerTy() &&
-      !FTy->getReturnType()->isVoidTy())
+  if (!FTy->getReturnType()->isIntegerTy() && !FTy->getReturnType()->isVoidTy())
     report_fatal_error("Invalid return type of main() supplied");
 
   ArgvArray CArgv;
@@ -484,14 +483,14 @@ EngineBuilder::EngineBuilder(std::unique_ptr<Module> M)
 EngineBuilder::~EngineBuilder() = default;
 
 EngineBuilder &EngineBuilder::setMCJITMemoryManager(
-                                   std::unique_ptr<RTDyldMemoryManager> mcjmm) {
+    std::unique_ptr<RTDyldMemoryManager> mcjmm) {
   auto SharedMM = std::shared_ptr<RTDyldMemoryManager>(std::move(mcjmm));
   MemMgr = SharedMM;
   Resolver = SharedMM;
   return *this;
 }
 
-EngineBuilder&
+EngineBuilder &
 EngineBuilder::setMemoryManager(std::unique_ptr<MCJITMemoryManager> MM) {
   MemMgr = std::shared_ptr<MCJITMemoryManager>(std::move(MM));
   return *this;
@@ -563,11 +562,11 @@ ExecutionEngine *EngineBuilder::create(TargetMachine *TM) {
 }
 
 void *ExecutionEngine::getPointerToGlobal(const GlobalValue *GV) {
-  if (Function *F = const_cast<Function*>(dyn_cast<Function>(GV)))
+  if (Function *F = const_cast<Function *>(dyn_cast<Function>(GV)))
     return getPointerToFunction(F);
 
   std::lock_guard<sys::Mutex> locked(lock);
-  if (void* P = getPointerToGlobalIfAvailable(GV))
+  if (void *P = getPointerToGlobalIfAvailable(GV))
     return P;
 
   // Global variable might have been added since interpreter started.
@@ -599,36 +598,35 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
       break;
     case Type::StructTyID: {
       // if the whole struct is 'undef' just reserve memory for the value.
-      if(StructType *STy = dyn_cast<StructType>(C->getType())) {
+      if (StructType *STy = dyn_cast<StructType>(C->getType())) {
         unsigned int elemNum = STy->getNumElements();
         Result.AggregateVal.resize(elemNum);
         for (unsigned int i = 0; i < elemNum; ++i) {
           Type *ElemTy = STy->getElementType(i);
           if (ElemTy->isIntegerTy())
             Result.AggregateVal[i].IntVal =
-              APInt(ElemTy->getPrimitiveSizeInBits(), 0);
+                APInt(ElemTy->getPrimitiveSizeInBits(), 0);
           else if (ElemTy->isAggregateType()) {
-              const Constant *ElemUndef = UndefValue::get(ElemTy);
-              Result.AggregateVal[i] = getConstantValue(ElemUndef);
-            }
+            const Constant *ElemUndef = UndefValue::get(ElemTy);
+            Result.AggregateVal[i] = getConstantValue(ElemUndef);
           }
         }
       }
+    } break;
+    case Type::ScalableVectorTyID:
+      report_fatal_error(
+          "Scalable vector support not yet implemented in ExecutionEngine");
+    case Type::FixedVectorTyID:
+      // if the whole vector is 'undef' just reserve memory for the value.
+      auto *VTy = cast<FixedVectorType>(C->getType());
+      Type *ElemTy = VTy->getElementType();
+      unsigned int elemNum = VTy->getNumElements();
+      Result.AggregateVal.resize(elemNum);
+      if (ElemTy->isIntegerTy())
+        for (unsigned int i = 0; i < elemNum; ++i)
+          Result.AggregateVal[i].IntVal =
+              APInt(ElemTy->getPrimitiveSizeInBits(), 0);
       break;
-      case Type::ScalableVectorTyID:
-        report_fatal_error(
-            "Scalable vector support not yet implemented in ExecutionEngine");
-      case Type::FixedVectorTyID:
-        // if the whole vector is 'undef' just reserve memory for the value.
-        auto *VTy = cast<FixedVectorType>(C->getType());
-        Type *ElemTy = VTy->getElementType();
-        unsigned int elemNum = VTy->getNumElements();
-        Result.AggregateVal.resize(elemNum);
-        if (ElemTy->isIntegerTy())
-          for (unsigned int i = 0; i < elemNum; ++i)
-            Result.AggregateVal[i].IntVal =
-                APInt(ElemTy->getPrimitiveSizeInBits(), 0);
-        break;
     }
     return Result;
   }
@@ -643,7 +641,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
       APInt Offset(DL.getPointerSizeInBits(), 0);
       cast<GEPOperator>(CE)->accumulateConstantOffset(DL, Offset);
 
-      char* tmp = (char*) Result.PointerVal;
+      char *tmp = (char *)Result.PointerVal;
       Result = PTOGV(tmp + Offset.getSExtValue());
       return Result;
     }
@@ -671,7 +669,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
       GV.FloatVal = float(GV.DoubleVal);
       return GV;
     }
-    case Instruction::FPExt:{
+    case Instruction::FPExt: {
       // FIXME long double
       GenericValue GV = getConstantValue(Op0);
       GV.DoubleVal = double(GV.FloatVal);
@@ -685,8 +683,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
         GV.DoubleVal = GV.IntVal.roundToDouble();
       else if (CE->getType()->isX86_FP80Ty()) {
         APFloat apf = APFloat::getZero(APFloat::x87DoubleExtended());
-        (void)apf.convertFromAPInt(GV.IntVal,
-                                   false,
+        (void)apf.convertFromAPInt(GV.IntVal, false,
                                    APFloat::rmNearestTiesToEven);
         GV.IntVal = apf.bitcastToAPInt();
       }
@@ -700,8 +697,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
         GV.DoubleVal = GV.IntVal.signedRoundToDouble();
       else if (CE->getType()->isX86_FP80Ty()) {
         APFloat apf = APFloat::getZero(APFloat::x87DoubleExtended());
-        (void)apf.convertFromAPInt(GV.IntVal,
-                                   true,
+        (void)apf.convertFromAPInt(GV.IntVal, true,
                                    APFloat::rmNearestTiesToEven);
         GV.IntVal = apf.bitcastToAPInt();
       }
@@ -720,7 +716,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
         uint64_t v;
         bool ignored;
         (void)apf.convertToInteger(MutableArrayRef(v), BitWidth,
-                                   CE->getOpcode()==Instruction::FPToSI,
+                                   CE->getOpcode() == Instruction::FPToSI,
                                    APFloat::rmTowardZero, &ignored);
         GV.IntVal = v; // endian?
       }
@@ -745,27 +741,28 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
     }
     case Instruction::BitCast: {
       GenericValue GV = getConstantValue(Op0);
-      Type* DestTy = CE->getType();
+      Type *DestTy = CE->getType();
       switch (Op0->getType()->getTypeID()) {
-        default: llvm_unreachable("Invalid bitcast operand");
-        case Type::IntegerTyID:
-          assert(DestTy->isFloatingPointTy() && "invalid bitcast");
-          if (DestTy->isFloatTy())
-            GV.FloatVal = GV.IntVal.bitsToFloat();
-          else if (DestTy->isDoubleTy())
-            GV.DoubleVal = GV.IntVal.bitsToDouble();
-          break;
-        case Type::FloatTyID:
-          assert(DestTy->isIntegerTy(32) && "Invalid bitcast");
-          GV.IntVal = APInt::floatToBits(GV.FloatVal);
-          break;
-        case Type::DoubleTyID:
-          assert(DestTy->isIntegerTy(64) && "Invalid bitcast");
-          GV.IntVal = APInt::doubleToBits(GV.DoubleVal);
-          break;
-        case Type::PointerTyID:
-          assert(DestTy->isPointerTy() && "Invalid bitcast");
-          break; // getConstantValue(Op0)  above already converted it
+      default:
+        llvm_unreachable("Invalid bitcast operand");
+      case Type::IntegerTyID:
+        assert(DestTy->isFloatingPointTy() && "invalid bitcast");
+        if (DestTy->isFloatTy())
+          GV.FloatVal = GV.IntVal.bitsToFloat();
+        else if (DestTy->isDoubleTy())
+          GV.DoubleVal = GV.IntVal.bitsToDouble();
+        break;
+      case Type::FloatTyID:
+        assert(DestTy->isIntegerTy(32) && "Invalid bitcast");
+        GV.IntVal = APInt::floatToBits(GV.FloatVal);
+        break;
+      case Type::DoubleTyID:
+        assert(DestTy->isIntegerTy(64) && "Invalid bitcast");
+        GV.IntVal = APInt::doubleToBits(GV.DoubleVal);
+        break;
+      case Type::PointerTyID:
+        assert(DestTy->isPointerTy() && "Invalid bitcast");
+        break; // getConstantValue(Op0) above already converted it
       }
       return GV;
     }
@@ -786,85 +783,119 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
       GenericValue RHS = getConstantValue(CE->getOperand(1));
       GenericValue GV;
       switch (CE->getOperand(0)->getType()->getTypeID()) {
-      default: llvm_unreachable("Bad add type!");
+      default:
+        llvm_unreachable("Bad add type!");
       case Type::IntegerTyID:
         switch (CE->getOpcode()) {
-          default: llvm_unreachable("Invalid integer opcode");
-          case Instruction::Add: GV.IntVal = LHS.IntVal + RHS.IntVal; break;
-          case Instruction::Sub: GV.IntVal = LHS.IntVal - RHS.IntVal; break;
-          case Instruction::Mul: GV.IntVal = LHS.IntVal * RHS.IntVal; break;
-          case Instruction::UDiv:GV.IntVal = LHS.IntVal.udiv(RHS.IntVal); break;
-          case Instruction::SDiv:GV.IntVal = LHS.IntVal.sdiv(RHS.IntVal); break;
-          case Instruction::URem:GV.IntVal = LHS.IntVal.urem(RHS.IntVal); break;
-          case Instruction::SRem:GV.IntVal = LHS.IntVal.srem(RHS.IntVal); break;
-          case Instruction::And: GV.IntVal = LHS.IntVal & RHS.IntVal; break;
-          case Instruction::Or:  GV.IntVal = LHS.IntVal | RHS.IntVal; break;
-          case Instruction::Xor: GV.IntVal = LHS.IntVal ^ RHS.IntVal; break;
+        default:
+          llvm_unreachable("Invalid integer opcode");
+        case Instruction::Add:
+          GV.IntVal = LHS.IntVal + RHS.IntVal;
+          break;
+        case Instruction::Sub:
+          GV.IntVal = LHS.IntVal - RHS.IntVal;
+          break;
+        case Instruction::Mul:
+          GV.IntVal = LHS.IntVal * RHS.IntVal;
+          break;
+        case Instruction::UDiv:
+          GV.IntVal = LHS.IntVal.udiv(RHS.IntVal);
+          break;
+        case Instruction::SDiv:
+          GV.IntVal = LHS.IntVal.sdiv(RHS.IntVal);
+          break;
+        case Instruction::URem:
+          GV.IntVal = LHS.IntVal.urem(RHS.IntVal);
+          break;
+        case Instruction::SRem:
+          GV.IntVal = LHS.IntVal.srem(RHS.IntVal);
+          break;
+        case Instruction::And:
+          GV.IntVal = LHS.IntVal & RHS.IntVal;
+          break;
+        case Instruction::Or:
+          GV.IntVal = LHS.IntVal | RHS.IntVal;
+          break;
+        case Instruction::Xor:
+          GV.IntVal = LHS.IntVal ^ RHS.IntVal;
+          break;
         }
         break;
       case Type::FloatTyID:
         switch (CE->getOpcode()) {
-          default: llvm_unreachable("Invalid float opcode");
-          case Instruction::FAdd:
-            GV.FloatVal = LHS.FloatVal + RHS.FloatVal; break;
-          case Instruction::FSub:
-            GV.FloatVal = LHS.FloatVal - RHS.FloatVal; break;
-          case Instruction::FMul:
-            GV.FloatVal = LHS.FloatVal * RHS.FloatVal; break;
-          case Instruction::FDiv:
-            GV.FloatVal = LHS.FloatVal / RHS.FloatVal; break;
-          case Instruction::FRem:
-            GV.FloatVal = std::fmod(LHS.FloatVal,RHS.FloatVal); break;
+        default:
+          llvm_unreachable("Invalid float opcode");
+        case Instruction::FAdd:
+          GV.FloatVal = LHS.FloatVal + RHS.FloatVal;
+          break;
+        case Instruction::FSub:
+          GV.FloatVal = LHS.FloatVal - RHS.FloatVal;
+          break;
+        case Instruction::FMul:
+          GV.FloatVal = LHS.FloatVal * RHS.FloatVal;
+          break;
+        case Instruction::FDiv:
+          GV.FloatVal = LHS.FloatVal / RHS.FloatVal;
+          break;
+        case Instruction::FRem:
+          GV.FloatVal = std::fmod(LHS.FloatVal, RHS.FloatVal);
+          break;
         }
         break;
       case Type::DoubleTyID:
         switch (CE->getOpcode()) {
-          default: llvm_unreachable("Invalid double opcode");
-          case Instruction::FAdd:
-            GV.DoubleVal = LHS.DoubleVal + RHS.DoubleVal; break;
-          case Instruction::FSub:
-            GV.DoubleVal = LHS.DoubleVal - RHS.DoubleVal; break;
-          case Instruction::FMul:
-            GV.DoubleVal = LHS.DoubleVal * RHS.DoubleVal; break;
-          case Instruction::FDiv:
-            GV.DoubleVal = LHS.DoubleVal / RHS.DoubleVal; break;
-          case Instruction::FRem:
-            GV.DoubleVal = std::fmod(LHS.DoubleVal,RHS.DoubleVal); break;
+        default:
+          llvm_unreachable("Invalid double opcode");
+        case Instruction::FAdd:
+          GV.DoubleVal = LHS.DoubleVal + RHS.DoubleVal;
+          break;
+        case Instruction::FSub:
+          GV.DoubleVal = LHS.DoubleVal - RHS.DoubleVal;
+          break;
+        case Instruction::FMul:
+          GV.DoubleVal = LHS.DoubleVal * RHS.DoubleVal;
+          break;
+        case Instruction::FDiv:
+          GV.DoubleVal = LHS.DoubleVal / RHS.DoubleVal;
+          break;
+        case Instruction::FRem:
+          GV.DoubleVal = std::fmod(LHS.DoubleVal, RHS.DoubleVal);
+          break;
         }
         break;
       case Type::X86_FP80TyID:
       case Type::PPC_FP128TyID:
       case Type::FP128TyID: {
-        const fltSemantics &Sem = CE->getOperand(0)->getType()->getFltSemantics();
+        const fltSemantics &Sem =
+            CE->getOperand(0)->getType()->getFltSemantics();
         APFloat apfLHS = APFloat(Sem, LHS.IntVal);
         switch (CE->getOpcode()) {
-          default: llvm_unreachable("Invalid long double opcode");
-          case Instruction::FAdd:
-            apfLHS.add(APFloat(Sem, RHS.IntVal), APFloat::rmNearestTiesToEven);
-            GV.IntVal = apfLHS.bitcastToAPInt();
-            break;
-          case Instruction::FSub:
-            apfLHS.subtract(APFloat(Sem, RHS.IntVal),
-                            APFloat::rmNearestTiesToEven);
-            GV.IntVal = apfLHS.bitcastToAPInt();
-            break;
-          case Instruction::FMul:
-            apfLHS.multiply(APFloat(Sem, RHS.IntVal),
-                            APFloat::rmNearestTiesToEven);
-            GV.IntVal = apfLHS.bitcastToAPInt();
-            break;
-          case Instruction::FDiv:
-            apfLHS.divide(APFloat(Sem, RHS.IntVal),
+        default:
+          llvm_unreachable("Invalid long double opcode");
+        case Instruction::FAdd:
+          apfLHS.add(APFloat(Sem, RHS.IntVal), APFloat::rmNearestTiesToEven);
+          GV.IntVal = apfLHS.bitcastToAPInt();
+          break;
+        case Instruction::FSub:
+          apfLHS.subtract(APFloat(Sem, RHS.IntVal),
                           APFloat::rmNearestTiesToEven);
-            GV.IntVal = apfLHS.bitcastToAPInt();
-            break;
-          case Instruction::FRem:
-            apfLHS.mod(APFloat(Sem, RHS.IntVal));
-            GV.IntVal = apfLHS.bitcastToAPInt();
-            break;
-          }
+          GV.IntVal = apfLHS.bitcastToAPInt();
+          break;
+        case Instruction::FMul:
+          apfLHS.multiply(APFloat(Sem, RHS.IntVal),
+                          APFloat::rmNearestTiesToEven);
+          GV.IntVal = apfLHS.bitcastToAPInt();
+          break;
+        case Instruction::FDiv:
+          apfLHS.divide(APFloat(Sem, RHS.IntVal), APFloat::rmNearestTiesToEven);
+          GV.IntVal = apfLHS.bitcastToAPInt();
+          break;
+        case Instruction::FRem:
+          apfLHS.mod(APFloat(Sem, RHS.IntVal));
+          GV.IntVal = apfLHS.bitcastToAPInt();
+          break;
         }
-        break;
+      } break;
       }
       return GV;
     }
@@ -896,7 +927,7 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
   case Type::X86_FP80TyID:
   case Type::FP128TyID:
   case Type::PPC_FP128TyID:
-    Result.IntVal = cast <ConstantFP>(C)->getValueAPF().bitcastToAPInt();
+    Result.IntVal = cast<ConstantFP>(C)->getValueAPF().bitcastToAPInt();
     break;
   case Type::IntegerTyID:
     Result.IntVal = cast<ConstantInt>(C)->getValue();
@@ -908,9 +939,9 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
     if (isa<ConstantPointerNull>(C))
       Result.PointerVal = nullptr;
     else if (const Function *F = dyn_cast<Function>(C))
-      Result = PTOGV(getPointerToFunctionOrStub(const_cast<Function*>(F)));
+      Result = PTOGV(getPointerToFunctionOrStub(const_cast<Function *>(F)));
     else if (const GlobalVariable *GV = dyn_cast<GlobalVariable>(C))
-      Result = PTOGV(getOrEmitGlobalVariable(const_cast<GlobalVariable*>(GV)));
+      Result = PTOGV(getOrEmitGlobalVariable(const_cast<GlobalVariable *>(GV)));
     else
       llvm_unreachable("Unknown constant pointer type!");
     break;
@@ -919,25 +950,25 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
         "Scalable vector support not yet implemented in ExecutionEngine");
   case Type::FixedVectorTyID: {
     unsigned elemNum;
-    Type* ElemTy;
+    Type *ElemTy;
     const ConstantDataVector *CDV = dyn_cast<ConstantDataVector>(C);
     const ConstantVector *CV = dyn_cast<ConstantVector>(C);
     const ConstantAggregateZero *CAZ = dyn_cast<ConstantAggregateZero>(C);
 
     if (CDV) {
-        elemNum = CDV->getNumElements();
-        ElemTy = CDV->getElementType();
+      elemNum = CDV->getNumElements();
+      ElemTy = CDV->getElementType();
     } else if (CV || CAZ) {
       auto *VTy = cast<FixedVectorType>(C->getType());
       elemNum = VTy->getNumElements();
       ElemTy = VTy->getElementType();
     } else {
-        llvm_unreachable("Unknown constant vector type!");
+      llvm_unreachable("Unknown constant vector type!");
     }
 
     Result.AggregateVal.resize(elemNum);
     // Check if vector holds floats.
-    if(ElemTy->isFloatTy()) {
+    if (ElemTy->isFloatTy()) {
       if (CAZ) {
         GenericValue floatZero;
         floatZero.FloatVal = 0.f;
@@ -945,14 +976,16 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
                   floatZero);
         break;
       }
-      if(CV) {
+      if (CV) {
         for (unsigned i = 0; i < elemNum; ++i)
           if (!isa<UndefValue>(CV->getOperand(i)))
-            Result.AggregateVal[i].FloatVal = cast<ConstantFP>(
-              CV->getOperand(i))->getValueAPF().convertToFloat();
+            Result.AggregateVal[i].FloatVal =
+                cast<ConstantFP>(CV->getOperand(i))
+                    ->getValueAPF()
+                    .convertToFloat();
         break;
       }
-      if(CDV)
+      if (CDV)
         for (unsigned i = 0; i < elemNum; ++i)
           Result.AggregateVal[i].FloatVal = CDV->getElementAsFloat(i);
 
@@ -967,14 +1000,16 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
                   doubleZero);
         break;
       }
-      if(CV) {
+      if (CV) {
         for (unsigned i = 0; i < elemNum; ++i)
           if (!isa<UndefValue>(CV->getOperand(i)))
-            Result.AggregateVal[i].DoubleVal = cast<ConstantFP>(
-              CV->getOperand(i))->getValueAPF().convertToDouble();
+            Result.AggregateVal[i].DoubleVal =
+                cast<ConstantFP>(CV->getOperand(i))
+                    ->getValueAPF()
+                    .convertToDouble();
         break;
       }
-      if(CDV)
+      if (CDV)
         for (unsigned i = 0; i < elemNum; ++i)
           Result.AggregateVal[i].DoubleVal = CDV->getElementAsDouble(i);
 
@@ -989,22 +1024,22 @@ GenericValue ExecutionEngine::getConstantValue(const Constant *C) {
                   intZero);
         break;
       }
-      if(CV) {
+      if (CV) {
         for (unsigned i = 0; i < elemNum; ++i)
           if (!isa<UndefValue>(CV->getOperand(i)))
-            Result.AggregateVal[i].IntVal = cast<ConstantInt>(
-                                            CV->getOperand(i))->getValue();
-          else {
             Result.AggregateVal[i].IntVal =
-              APInt(CV->getOperand(i)->getType()->getPrimitiveSizeInBits(), 0);
+                cast<ConstantInt>(CV->getOperand(i))->getValue();
+          else {
+            Result.AggregateVal[i].IntVal = APInt(
+                CV->getOperand(i)->getType()->getPrimitiveSizeInBits(), 0);
           }
         break;
       }
-      if(CDV)
+      if (CDV)
         for (unsigned i = 0; i < elemNum; ++i)
-          Result.AggregateVal[i].IntVal = APInt(
-            CDV->getElementType()->getPrimitiveSizeInBits(),
-            CDV->getElementAsInteger(i));
+          Result.AggregateVal[i].IntVal =
+              APInt(CDV->getElementType()->getPrimitiveSizeInBits(),
+                    CDV->getElementAsInteger(i));
 
       break;
     }
@@ -1035,13 +1070,13 @@ void ExecutionEngine::StoreValueToMemory(const GenericValue &Val,
     dbgs() << "Cannot store value of type " << *Ty << "!\n";
     break;
   case Type::IntegerTyID:
-    StoreIntToMemory(Val.IntVal, (uint8_t*)Ptr, StoreBytes);
+    StoreIntToMemory(Val.IntVal, (uint8_t *)Ptr, StoreBytes);
     break;
   case Type::FloatTyID:
-    *((float*)Ptr) = Val.FloatVal;
+    *((float *)Ptr) = Val.FloatVal;
     break;
   case Type::DoubleTyID:
-    *((double*)Ptr) = Val.DoubleVal;
+    *((double *)Ptr) = Val.DoubleVal;
     break;
   case Type::X86_FP80TyID:
     memcpy(Ptr, Val.IntVal.getRawData(), 10);
@@ -1051,19 +1086,20 @@ void ExecutionEngine::StoreValueToMemory(const GenericValue &Val,
     if (StoreBytes != sizeof(PointerTy))
       memset(&(Ptr->PointerVal), 0, StoreBytes);
 
-    *((PointerTy*)Ptr) = Val.PointerVal;
+    *((PointerTy *)Ptr) = Val.PointerVal;
     break;
   case Type::FixedVectorTyID:
   case Type::ScalableVectorTyID:
     for (unsigned i = 0; i < Val.AggregateVal.size(); ++i) {
       if (cast<VectorType>(Ty)->getElementType()->isDoubleTy())
-        *(((double*)Ptr)+i) = Val.AggregateVal[i].DoubleVal;
+        *(((double *)Ptr) + i) = Val.AggregateVal[i].DoubleVal;
       if (cast<VectorType>(Ty)->getElementType()->isFloatTy())
-        *(((float*)Ptr)+i) = Val.AggregateVal[i].FloatVal;
+        *(((float *)Ptr) + i) = Val.AggregateVal[i].FloatVal;
       if (cast<VectorType>(Ty)->getElementType()->isIntegerTy()) {
-        unsigned numOfBytes =(Val.AggregateVal[i].IntVal.getBitWidth()+7)/8;
+        unsigned numOfBytes =
+            (Val.AggregateVal[i].IntVal.getBitWidth() + 7) / 8;
         StoreIntToMemory(Val.AggregateVal[i].IntVal,
-          (uint8_t*)Ptr + numOfBytes*i, numOfBytes);
+                         (uint8_t *)Ptr + numOfBytes * i, numOfBytes);
       }
     }
     break;
@@ -1071,14 +1107,13 @@ void ExecutionEngine::StoreValueToMemory(const GenericValue &Val,
 
   if (sys::IsLittleEndianHost != getDataLayout().isLittleEndian())
     // Host and target are different endian - reverse the stored bytes.
-    std::reverse((uint8_t*)Ptr, StoreBytes + (uint8_t*)Ptr);
+    std::reverse((uint8_t *)Ptr, StoreBytes + (uint8_t *)Ptr);
 }
 
 /// FIXME: document
 ///
 void ExecutionEngine::LoadValueFromMemory(GenericValue &Result,
-                                          GenericValue *Ptr,
-                                          Type *Ty) {
+                                          GenericValue *Ptr, Type *Ty) {
   if (auto *TETy = dyn_cast<TargetExtType>(Ty))
     Ty = TETy->getLayoutType();
 
@@ -1088,16 +1123,16 @@ void ExecutionEngine::LoadValueFromMemory(GenericValue &Result,
   case Type::IntegerTyID:
     // An APInt with all words initially zero.
     Result.IntVal = APInt(cast<IntegerType>(Ty)->getBitWidth(), 0);
-    LoadIntFromMemory(Result.IntVal, (uint8_t*)Ptr, LoadBytes);
+    LoadIntFromMemory(Result.IntVal, (uint8_t *)Ptr, LoadBytes);
     break;
   case Type::FloatTyID:
-    Result.FloatVal = *((float*)Ptr);
+    Result.FloatVal = *((float *)Ptr);
     break;
   case Type::DoubleTyID:
-    Result.DoubleVal = *((double*)Ptr);
+    Result.DoubleVal = *((double *)Ptr);
     break;
   case Type::PointerTyID:
-    Result.PointerVal = *((PointerTy*)Ptr);
+    Result.PointerVal = *((PointerTy *)Ptr);
     break;
   case Type::X86_FP80TyID: {
     // This is endian dependent, but it will only work on x86 anyway.
@@ -1117,12 +1152,12 @@ void ExecutionEngine::LoadValueFromMemory(GenericValue &Result,
     if (ElemT->isFloatTy()) {
       Result.AggregateVal.resize(numElems);
       for (unsigned i = 0; i < numElems; ++i)
-        Result.AggregateVal[i].FloatVal = *((float*)Ptr+i);
+        Result.AggregateVal[i].FloatVal = *((float *)Ptr + i);
     }
     if (ElemT->isDoubleTy()) {
       Result.AggregateVal.resize(numElems);
       for (unsigned i = 0; i < numElems; ++i)
-        Result.AggregateVal[i].DoubleVal = *((double*)Ptr+i);
+        Result.AggregateVal[i].DoubleVal = *((double *)Ptr + i);
     }
     if (ElemT->isIntegerTy()) {
       GenericValue intZero;
@@ -1131,9 +1166,10 @@ void ExecutionEngine::LoadValueFromMemory(GenericValue &Result,
       Result.AggregateVal.resize(numElems, intZero);
       for (unsigned i = 0; i < numElems; ++i)
         LoadIntFromMemory(Result.AggregateVal[i].IntVal,
-          (uint8_t*)Ptr+((elemBitWidth+7)/8)*i, (elemBitWidth+7)/8);
+                          (uint8_t *)Ptr + ((elemBitWidth + 7) / 8) * i,
+                          (elemBitWidth + 7) / 8);
     }
-  break;
+    break;
   }
   default:
     SmallString<256> Msg;
@@ -1153,7 +1189,7 @@ void ExecutionEngine::InitializeMemory(const Constant *Init, void *Addr) {
     unsigned ElementSize =
         getDataLayout().getTypeAllocSize(CP->getType()->getElementType());
     for (unsigned i = 0, e = CP->getNumOperands(); i != e; ++i)
-      InitializeMemory(CP->getOperand(i), (char*)Addr+i*ElementSize);
+      InitializeMemory(CP->getOperand(i), (char *)Addr + i * ElementSize);
     return;
   }
 
@@ -1166,7 +1202,7 @@ void ExecutionEngine::InitializeMemory(const Constant *Init, void *Addr) {
     unsigned ElementSize =
         getDataLayout().getTypeAllocSize(CPA->getType()->getElementType());
     for (unsigned i = 0, e = CPA->getNumOperands(); i != e; ++i)
-      InitializeMemory(CPA->getOperand(i), (char*)Addr+i*ElementSize);
+      InitializeMemory(CPA->getOperand(i), (char *)Addr + i * ElementSize);
     return;
   }
 
@@ -1174,12 +1210,13 @@ void ExecutionEngine::InitializeMemory(const Constant *Init, void *Addr) {
     const StructLayout *SL =
         getDataLayout().getStructLayout(cast<StructType>(CPS->getType()));
     for (unsigned i = 0, e = CPS->getNumOperands(); i != e; ++i)
-      InitializeMemory(CPS->getOperand(i), (char*)Addr+SL->getElementOffset(i));
+      InitializeMemory(CPS->getOperand(i),
+                       (char *)Addr + SL->getElementOffset(i));
     return;
   }
 
   if (const ConstantDataSequential *CDS =
-               dyn_cast<ConstantDataSequential>(Init)) {
+          dyn_cast<ConstantDataSequential>(Init)) {
     // CDS is already laid out in host memory order.
     StringRef Data = CDS->getRawDataValues();
     memcpy(Addr, Data.data(), Data.size());
@@ -1188,7 +1225,7 @@ void ExecutionEngine::InitializeMemory(const Constant *Init, void *Addr) {
 
   if (Init->getType()->isFirstClassType()) {
     GenericValue Val = getConstantValue(Init);
-    StoreValueToMemory(Val, (GenericValue*)Addr, Init->getType());
+    StoreValueToMemory(Val, (GenericValue *)Addr, Init->getType());
     return;
   }
 
@@ -1203,8 +1240,8 @@ void ExecutionEngine::emitGlobals() {
   // Loop over all of the global variables in the program, allocating the memory
   // to hold them.  If there is more than one module, do a prepass over globals
   // to figure out how the different modules should link together.
-  std::map<std::pair<std::string, Type*>,
-           const GlobalValue*> LinkedGlobalsMap;
+  std::map<std::pair<std::string, Type *>, const GlobalValue *>
+      LinkedGlobalsMap;
 
   if (Modules.size() != 1) {
     for (unsigned m = 0, e = Modules.size(); m != e; ++m) {
@@ -1212,7 +1249,8 @@ void ExecutionEngine::emitGlobals() {
       for (const auto &GV : M.globals()) {
         if (GV.hasLocalLinkage() || GV.isDeclaration() ||
             GV.hasAppendingLinkage() || !GV.hasName())
-          continue;// Ignore external globals and globals with internal linkage.
+          continue; // Ignore external globals and globals with internal
+                    // linkage.
 
         const GlobalValue *&GVEntry = LinkedGlobalsMap[std::make_pair(
             std::string(GV.getName()), GV.getType())];
@@ -1236,7 +1274,7 @@ void ExecutionEngine::emitGlobals() {
     }
   }
 
-  std::vector<const GlobalValue*> NonCanonicalGlobals;
+  std::vector<const GlobalValue *> NonCanonicalGlobals;
   for (unsigned m = 0, e = Modules.size(); m != e; ++m) {
     Module &M = *Modules[m];
     for (const auto &GV : M.globals()) {
@@ -1261,8 +1299,8 @@ void ExecutionEngine::emitGlobals() {
                 std::string(GV.getName())))
           addGlobalMapping(&GV, SymAddr);
         else {
-          report_fatal_error("Could not resolve external global address: "
-                            +GV.getName());
+          report_fatal_error("Could not resolve external global address: " +
+                             GV.getName());
         }
       }
     }
@@ -1286,7 +1324,7 @@ void ExecutionEngine::emitGlobals() {
         if (!LinkedGlobalsMap.empty()) {
           if (const GlobalValue *GVEntry = LinkedGlobalsMap[std::make_pair(
                   std::string(GV.getName()), GV.getType())])
-            if (GVEntry != &GV)  // Not the canonical variable.
+            if (GVEntry != &GV) // Not the canonical variable.
               continue;
         }
         emitGlobalVariable(&GV);
@@ -1306,7 +1344,8 @@ void ExecutionEngine::emitGlobalVariable(const GlobalVariable *GV) {
     GA = getMemoryForGV(GV);
 
     // If we failed to allocate memory for this global, return.
-    if (!GA) return;
+    if (!GA)
+      return;
 
     addGlobalMapping(GV, GA);
   }
diff --git a/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp b/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
index dc9a07e3f2123a8..4fceb45159a1bca 100644
--- a/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
+++ b/llvm/lib/ExecutionEngine/ExecutionEngineBindings.cpp
@@ -30,10 +30,8 @@ using namespace llvm;
 // Wrapping the C bindings types.
 DEFINE_SIMPLE_CONVERSION_FUNCTIONS(GenericValue, LLVMGenericValueRef)
 
-
 static LLVMTargetMachineRef wrap(const TargetMachine *P) {
-  return
-  reinterpret_cast<LLVMTargetMachineRef>(const_cast<TargetMachine*>(P));
+  return reinterpret_cast<LLVMTargetMachineRef>(const_cast<TargetMachine *>(P));
 }
 
 /*===-- Operations on generic values --------------------------------------===*/
@@ -102,13 +100,11 @@ void LLVMDisposeGenericValue(LLVMGenericValueRef GenVal) {
 /*===-- Operations on execution engines -----------------------------------===*/
 
 LLVMBool LLVMCreateExecutionEngineForModule(LLVMExecutionEngineRef *OutEE,
-                                            LLVMModuleRef M,
-                                            char **OutError) {
+                                            LLVMModuleRef M, char **OutError) {
   std::string Error;
   EngineBuilder builder(std::unique_ptr<Module>(unwrap(M)));
-  builder.setEngineKind(EngineKind::Either)
-         .setErrorStr(&Error);
-  if (ExecutionEngine *EE = builder.create()){
+  builder.setEngineKind(EngineKind::Either).setErrorStr(&Error);
+  if (ExecutionEngine *EE = builder.create()) {
     *OutEE = wrap(EE);
     return 0;
   }
@@ -117,12 +113,10 @@ LLVMBool LLVMCreateExecutionEngineForModule(LLVMExecutionEngineRef *OutEE,
 }
 
 LLVMBool LLVMCreateInterpreterForModule(LLVMExecutionEngineRef *OutInterp,
-                                        LLVMModuleRef M,
-                                        char **OutError) {
+                                        LLVMModuleRef M, char **OutError) {
   std::string Error;
   EngineBuilder builder(std::unique_ptr<Module>(unwrap(M)));
-  builder.setEngineKind(EngineKind::Interpreter)
-         .setErrorStr(&Error);
+  builder.setEngineKind(EngineKind::Interpreter).setErrorStr(&Error);
   if (ExecutionEngine *Interp = builder.create()) {
     *OutInterp = wrap(Interp);
     return 0;
@@ -132,14 +126,13 @@ LLVMBool LLVMCreateInterpreterForModule(LLVMExecutionEngineRef *OutInterp,
 }
 
 LLVMBool LLVMCreateJITCompilerForModule(LLVMExecutionEngineRef *OutJIT,
-                                        LLVMModuleRef M,
-                                        unsigned OptLevel,
+                                        LLVMModuleRef M, unsigned OptLevel,
                                         char **OutError) {
   std::string Error;
   EngineBuilder builder(std::unique_ptr<Module>(unwrap(M)));
   builder.setEngineKind(EngineKind::JIT)
-         .setErrorStr(&Error)
-         .setOptLevel((CodeGenOpt::Level)OptLevel);
+      .setErrorStr(&Error)
+      .setOptLevel((CodeGenOpt::Level)OptLevel);
   if (ExecutionEngine *JIT = builder.create()) {
     *OutJIT = wrap(JIT);
     return 0;
@@ -158,17 +151,18 @@ void LLVMInitializeMCJITCompilerOptions(LLVMMCJITCompilerOptions *PassedOptions,
          std::min(sizeof(options), SizeOfPassedOptions));
 }
 
-LLVMBool LLVMCreateMCJITCompilerForModule(
-    LLVMExecutionEngineRef *OutJIT, LLVMModuleRef M,
-    LLVMMCJITCompilerOptions *PassedOptions, size_t SizeOfPassedOptions,
-    char **OutError) {
+LLVMBool
+LLVMCreateMCJITCompilerForModule(LLVMExecutionEngineRef *OutJIT,
+                                 LLVMModuleRef M,
+                                 LLVMMCJITCompilerOptions *PassedOptions,
+                                 size_t SizeOfPassedOptions, char **OutError) {
   LLVMMCJITCompilerOptions options;
   // If the user passed a larger sized options struct, then they were compiled
   // against a newer LLVM. Tell them that something is wrong.
   if (SizeOfPassedOptions > sizeof(options)) {
     *OutError = strdup(
-      "Refusing to use options struct that is larger than my own; assuming "
-      "LLVM library mismatch.");
+        "Refusing to use options struct that is larger than my own; assuming "
+        "LLVM library mismatch.");
     return 1;
   }
 
@@ -196,15 +190,15 @@ LLVMBool LLVMCreateMCJITCompilerForModule(
   std::string Error;
   EngineBuilder builder(std::move(Mod));
   builder.setEngineKind(EngineKind::JIT)
-         .setErrorStr(&Error)
-         .setOptLevel((CodeGenOpt::Level)options.OptLevel)
-         .setTargetOptions(targetOptions);
+      .setErrorStr(&Error)
+      .setOptLevel((CodeGenOpt::Level)options.OptLevel)
+      .setTargetOptions(targetOptions);
   bool JIT;
   if (std::optional<CodeModel::Model> CM = unwrap(options.CodeModel, JIT))
     builder.setCodeModel(*CM);
   if (options.MCJMM)
     builder.setMCJITMemoryManager(
-      std::unique_ptr<RTDyldMemoryManager>(unwrap(options.MCJMM)));
+        std::unique_ptr<RTDyldMemoryManager>(unwrap(options.MCJMM)));
   if (ExecutionEngine *JIT = builder.create()) {
     *OutJIT = wrap(JIT);
     return 0;
@@ -228,8 +222,8 @@ void LLVMRunStaticDestructors(LLVMExecutionEngineRef EE) {
 }
 
 int LLVMRunFunctionAsMain(LLVMExecutionEngineRef EE, LLVMValueRef F,
-                          unsigned ArgC, const char * const *ArgV,
-                          const char * const *EnvP) {
+                          unsigned ArgC, const char *const *ArgV,
+                          const char *const *EnvP) {
   unwrap(EE)->finalizeObject();
 
   std::vector<std::string> ArgVec(ArgV, ArgV + ArgC);
@@ -254,7 +248,7 @@ LLVMGenericValueRef LLVMRunFunction(LLVMExecutionEngineRef EE, LLVMValueRef F,
 void LLVMFreeMachineCodeForFunction(LLVMExecutionEngineRef EE, LLVMValueRef F) {
 }
 
-void LLVMAddModule(LLVMExecutionEngineRef EE, LLVMModuleRef M){
+void LLVMAddModule(LLVMExecutionEngineRef EE, LLVMModuleRef M) {
   unwrap(EE)->addModule(std::unique_ptr<Module>(unwrap(M)));
 }
 
@@ -290,7 +284,7 @@ LLVMGetExecutionEngineTargetMachine(LLVMExecutionEngineRef EE) {
 }
 
 void LLVMAddGlobalMapping(LLVMExecutionEngineRef EE, LLVMValueRef Global,
-                          void* Addr) {
+                          void *Addr) {
   unwrap(EE)->addGlobalMapping(unwrap<GlobalValue>(Global), Addr);
 }
 
@@ -300,7 +294,8 @@ void *LLVMGetPointerToGlobal(LLVMExecutionEngineRef EE, LLVMValueRef Global) {
   return unwrap(EE)->getPointerToGlobal(unwrap<GlobalValue>(Global));
 }
 
-uint64_t LLVMGetGlobalValueAddress(LLVMExecutionEngineRef EE, const char *Name) {
+uint64_t LLVMGetGlobalValueAddress(LLVMExecutionEngineRef EE,
+                                   const char *Name) {
   return unwrap(EE)->getGlobalValueAddress(Name);
 }
 
@@ -333,7 +328,7 @@ struct SimpleBindingMMFunctions {
 
 class SimpleBindingMemoryManager : public RTDyldMemoryManager {
 public:
-  SimpleBindingMemoryManager(const SimpleBindingMMFunctions& Functions,
+  SimpleBindingMemoryManager(const SimpleBindingMMFunctions &Functions,
                              void *Opaque);
   ~SimpleBindingMemoryManager() override;
 
@@ -353,17 +348,14 @@ class SimpleBindingMemoryManager : public RTDyldMemoryManager {
 };
 
 SimpleBindingMemoryManager::SimpleBindingMemoryManager(
-  const SimpleBindingMMFunctions& Functions,
-  void *Opaque)
-  : Functions(Functions), Opaque(Opaque) {
+    const SimpleBindingMMFunctions &Functions, void *Opaque)
+    : Functions(Functions), Opaque(Opaque) {
   assert(Functions.AllocateCodeSection &&
          "No AllocateCodeSection function provided!");
   assert(Functions.AllocateDataSection &&
          "No AllocateDataSection function provided!");
-  assert(Functions.FinalizeMemory &&
-         "No FinalizeMemory function provided!");
-  assert(Functions.Destroy &&
-         "No Destroy function provided!");
+  assert(Functions.FinalizeMemory && "No FinalizeMemory function provided!");
+  assert(Functions.Destroy && "No Destroy function provided!");
 }
 
 SimpleBindingMemoryManager::~SimpleBindingMemoryManager() {
@@ -371,18 +363,19 @@ SimpleBindingMemoryManager::~SimpleBindingMemoryManager() {
 }
 
 uint8_t *SimpleBindingMemoryManager::allocateCodeSection(
-  uintptr_t Size, unsigned Alignment, unsigned SectionID,
-  StringRef SectionName) {
+    uintptr_t Size, unsigned Alignment, unsigned SectionID,
+    StringRef SectionName) {
   return Functions.AllocateCodeSection(Opaque, Size, Alignment, SectionID,
                                        SectionName.str().c_str());
 }
 
-uint8_t *SimpleBindingMemoryManager::allocateDataSection(
-  uintptr_t Size, unsigned Alignment, unsigned SectionID,
-  StringRef SectionName, bool isReadOnly) {
+uint8_t *SimpleBindingMemoryManager::allocateDataSection(uintptr_t Size,
+                                                         unsigned Alignment,
+                                                         unsigned SectionID,
+                                                         StringRef SectionName,
+                                                         bool isReadOnly) {
   return Functions.AllocateDataSection(Opaque, Size, Alignment, SectionID,
-                                       SectionName.str().c_str(),
-                                       isReadOnly);
+                                       SectionName.str().c_str(), isReadOnly);
 }
 
 bool SimpleBindingMemoryManager::finalizeMemory(std::string *ErrMsg) {
@@ -401,11 +394,11 @@ bool SimpleBindingMemoryManager::finalizeMemory(std::string *ErrMsg) {
 } // anonymous namespace
 
 LLVMMCJITMemoryManagerRef LLVMCreateSimpleMCJITMemoryManager(
-  void *Opaque,
-  LLVMMemoryManagerAllocateCodeSectionCallback AllocateCodeSection,
-  LLVMMemoryManagerAllocateDataSectionCallback AllocateDataSection,
-  LLVMMemoryManagerFinalizeMemoryCallback FinalizeMemory,
-  LLVMMemoryManagerDestroyCallback Destroy) {
+    void *Opaque,
+    LLVMMemoryManagerAllocateCodeSectionCallback AllocateCodeSection,
+    LLVMMemoryManagerAllocateDataSectionCallback AllocateDataSection,
+    LLVMMemoryManagerFinalizeMemoryCallback FinalizeMemory,
+    LLVMMemoryManagerDestroyCallback Destroy) {
 
   if (!AllocateCodeSection || !AllocateDataSection || !FinalizeMemory ||
       !Destroy)
@@ -425,24 +418,18 @@ void LLVMDisposeMCJITMemoryManager(LLVMMCJITMemoryManagerRef MM) {
 
 /*===-- JIT Event Listener functions -------------------------------------===*/
 
-
 #if !LLVM_USE_INTEL_JITEVENTS
-LLVMJITEventListenerRef LLVMCreateIntelJITEventListener(void)
-{
+LLVMJITEventListenerRef LLVMCreateIntelJITEventListener(void) {
   return nullptr;
 }
 #endif
 
 #if !LLVM_USE_OPROFILE
-LLVMJITEventListenerRef LLVMCreateOProfileJITEventListener(void)
-{
+LLVMJITEventListenerRef LLVMCreateOProfileJITEventListener(void) {
   return nullptr;
 }
 #endif
 
 #if !LLVM_USE_PERF
-LLVMJITEventListenerRef LLVMCreatePerfJITEventListener(void)
-{
-  return nullptr;
-}
+LLVMJITEventListenerRef LLVMCreatePerfJITEventListener(void) { return nullptr; }
 #endif
diff --git a/llvm/lib/ExecutionEngine/GDBRegistrationListener.cpp b/llvm/lib/ExecutionEngine/GDBRegistrationListener.cpp
index b5b76130c55eb91..6ffb485e3b92b52 100644
--- a/llvm/lib/ExecutionEngine/GDBRegistrationListener.cpp
+++ b/llvm/lib/ExecutionEngine/GDBRegistrationListener.cpp
@@ -22,35 +22,35 @@ using namespace llvm::object;
 // This must be kept in sync with gdb/gdb/jit.h .
 extern "C" {
 
-  typedef enum {
-    JIT_NOACTION = 0,
-    JIT_REGISTER_FN,
-    JIT_UNREGISTER_FN
-  } jit_actions_t;
-
-  struct jit_code_entry {
-    struct jit_code_entry *next_entry;
-    struct jit_code_entry *prev_entry;
-    const char *symfile_addr;
-    uint64_t symfile_size;
-  };
-
-  struct jit_descriptor {
-    uint32_t version;
-    // This should be jit_actions_t, but we want to be specific about the
-    // bit-width.
-    uint32_t action_flag;
-    struct jit_code_entry *relevant_entry;
-    struct jit_code_entry *first_entry;
-  };
-
-  // We put information about the JITed function in this global, which the
-  // debugger reads.  Make sure to specify the version statically, because the
-  // debugger checks the version before we can set it during runtime.
-  extern struct jit_descriptor __jit_debug_descriptor;
-
-  // Debuggers puts a breakpoint in this function.
-  extern "C" void __jit_debug_register_code();
+typedef enum {
+  JIT_NOACTION = 0,
+  JIT_REGISTER_FN,
+  JIT_UNREGISTER_FN
+} jit_actions_t;
+
+struct jit_code_entry {
+  struct jit_code_entry *next_entry;
+  struct jit_code_entry *prev_entry;
+  const char *symfile_addr;
+  uint64_t symfile_size;
+};
+
+struct jit_descriptor {
+  uint32_t version;
+  // This should be jit_actions_t, but we want to be specific about the
+  // bit-width.
+  uint32_t action_flag;
+  struct jit_code_entry *relevant_entry;
+  struct jit_code_entry *first_entry;
+};
+
+// We put information about the JITed function in this global, which the
+// debugger reads.  Make sure to specify the version statically, because the
+// debugger checks the version before we can set it during runtime.
+extern struct jit_descriptor __jit_debug_descriptor;
+
+// Debuggers puts a breakpoint in this function.
+extern "C" void __jit_debug_register_code();
 }
 
 namespace {
@@ -74,7 +74,7 @@ struct RegisteredObjectInfo {
 
   RegisteredObjectInfo(std::size_t Size, jit_code_entry *Entry,
                        OwningBinary<ObjectFile> Obj)
-    : Size(Size), Entry(Entry), Obj(std::move(Obj)) {}
+      : Size(Size), Entry(Entry), Obj(std::move(Obj)) {}
 
   std::size_t Size;
   jit_code_entry *Entry;
@@ -134,12 +134,12 @@ class GDBJITRegistrationListener : public JITEventListener {
 };
 
 /// Do the registration.
-void NotifyDebugger(jit_code_entry* JITCodeEntry) {
+void NotifyDebugger(jit_code_entry *JITCodeEntry) {
   __jit_debug_descriptor.action_flag = JIT_REGISTER_FN;
 
   // Insert this entry at the head of the list.
   JITCodeEntry->prev_entry = nullptr;
-  jit_code_entry* NextEntry = __jit_debug_descriptor.first_entry;
+  jit_code_entry *NextEntry = __jit_debug_descriptor.first_entry;
   JITCodeEntry->next_entry = NextEntry;
   if (NextEntry) {
     NextEntry->prev_entry = JITCodeEntry;
@@ -172,17 +172,18 @@ void GDBJITRegistrationListener::notifyObjectLoaded(
   if (!DebugObj.getBinary())
     return;
 
-  const char *Buffer = DebugObj.getBinary()->getMemoryBufferRef().getBufferStart();
-  size_t      Size = DebugObj.getBinary()->getMemoryBufferRef().getBufferSize();
+  const char *Buffer =
+      DebugObj.getBinary()->getMemoryBufferRef().getBufferStart();
+  size_t Size = DebugObj.getBinary()->getMemoryBufferRef().getBufferSize();
 
   std::lock_guard<llvm::sys::Mutex> locked(JITDebugLock);
   assert(!ObjectBufferMap.contains(K) &&
          "Second attempt to perform debug registration.");
-  jit_code_entry* JITCodeEntry = new jit_code_entry();
+  jit_code_entry *JITCodeEntry = new jit_code_entry();
 
   if (!JITCodeEntry) {
     llvm::report_fatal_error(
-      "Allocation failed when registering a JIT entry!\n");
+        "Allocation failed when registering a JIT entry!\n");
   } else {
     JITCodeEntry->symfile_addr = Buffer;
     JITCodeEntry->symfile_size = Size;
@@ -206,23 +207,22 @@ void GDBJITRegistrationListener::notifyFreeingObject(ObjectKey K) {
 void GDBJITRegistrationListener::deregisterObjectInternal(
     RegisteredObjectBufferMap::iterator I) {
 
-  jit_code_entry*& JITCodeEntry = I->second.Entry;
+  jit_code_entry *&JITCodeEntry = I->second.Entry;
 
   // Do the unregistration.
   {
     __jit_debug_descriptor.action_flag = JIT_UNREGISTER_FN;
 
     // Remove the jit_code_entry from the linked list.
-    jit_code_entry* PrevEntry = JITCodeEntry->prev_entry;
-    jit_code_entry* NextEntry = JITCodeEntry->next_entry;
+    jit_code_entry *PrevEntry = JITCodeEntry->prev_entry;
+    jit_code_entry *NextEntry = JITCodeEntry->next_entry;
 
     if (NextEntry) {
       NextEntry->prev_entry = PrevEntry;
     }
     if (PrevEntry) {
       PrevEntry->next_entry = NextEntry;
-    }
-    else {
+    } else {
       assert(__jit_debug_descriptor.first_entry == JITCodeEntry);
       __jit_debug_descriptor.first_entry = NextEntry;
     }
@@ -240,13 +240,12 @@ void GDBJITRegistrationListener::deregisterObjectInternal(
 
 namespace llvm {
 
-JITEventListener* JITEventListener::createGDBRegistrationListener() {
+JITEventListener *JITEventListener::createGDBRegistrationListener() {
   return &GDBJITRegistrationListener::instance();
 }
 
 } // namespace llvm
 
-LLVMJITEventListenerRef LLVMCreateGDBRegistrationListener(void)
-{
+LLVMJITEventListenerRef LLVMCreateGDBRegistrationListener(void) {
   return wrap(JITEventListener::createGDBRegistrationListener());
 }
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp b/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp
index 4afcf95e9e8e8d4..77618a004bf2c7c 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp
@@ -119,26 +119,25 @@ class IntelIttnotifyInfo {
 };
 
 class IntelJITEventListener : public JITEventListener {
-  typedef DenseMap<void*, unsigned int> MethodIDMap;
+  typedef DenseMap<void *, unsigned int> MethodIDMap;
 
   std::unique_ptr<IntelJITEventsWrapper> Wrapper;
   MethodIDMap MethodIDs;
 
   typedef SmallVector<const void *, 64> MethodAddressVector;
-  typedef DenseMap<const void *, MethodAddressVector>  ObjectMap;
+  typedef DenseMap<const void *, MethodAddressVector> ObjectMap;
 
-  ObjectMap  LoadedObjectMap;
+  ObjectMap LoadedObjectMap;
   std::map<ObjectKey, OwningBinary<ObjectFile>> DebugObjects;
 
   std::map<ObjectKey, std::unique_ptr<IntelIttnotifyInfo>> KeyToIttnotify;
 
 public:
-  IntelJITEventListener(IntelJITEventsWrapper* libraryWrapper) {
-      Wrapper.reset(libraryWrapper);
+  IntelJITEventListener(IntelJITEventsWrapper *libraryWrapper) {
+    Wrapper.reset(libraryWrapper);
   }
 
-  ~IntelJITEventListener() {
-  }
+  ~IntelJITEventListener() {}
 
   void notifyObjectLoaded(ObjectKey Key, const ObjectFile &Obj,
                           const RuntimeDyld::LoadedObjectInfo &L) override;
@@ -157,17 +156,15 @@ static LineNumberInfo DILineInfoToIntelJITFormat(uintptr_t StartAddress,
   return Result;
 }
 
-static iJIT_Method_Load FunctionDescToIntelJITFormat(
-    IntelJITEventsWrapper& Wrapper,
-    const char* FnName,
-    uintptr_t FnStart,
-    size_t FnSize) {
+static iJIT_Method_Load
+FunctionDescToIntelJITFormat(IntelJITEventsWrapper &Wrapper, const char *FnName,
+                             uintptr_t FnStart, size_t FnSize) {
   iJIT_Method_Load Result;
   memset(&Result, 0, sizeof(iJIT_Method_Load));
 
   Result.method_id = Wrapper.iJIT_GetNewMethodID();
-  Result.method_name = const_cast<char*>(FnName);
-  Result.method_load_address = reinterpret_cast<void*>(FnStart);
+  Result.method_name = const_cast<char *>(FnName);
+  Result.method_load_address = reinterpret_cast<void *>(FnStart);
   Result.method_size = FnSize;
 
   Result.class_id = 0;
@@ -380,7 +377,7 @@ void IntelJITEventListener::notifyFreeingObject(ObjectKey Key) {
   }
 }
 
-}  // anonymous namespace.
+} // anonymous namespace.
 
 namespace llvm {
 JITEventListener *JITEventListener::createIntelJITEventListener() {
@@ -388,14 +385,13 @@ JITEventListener *JITEventListener::createIntelJITEventListener() {
 }
 
 // for testing
-JITEventListener *JITEventListener::createIntelJITEventListener(
-                                      IntelJITEventsWrapper* TestImpl) {
+JITEventListener *
+JITEventListener::createIntelJITEventListener(IntelJITEventsWrapper *TestImpl) {
   return new IntelJITEventListener(TestImpl);
 }
 
 } // namespace llvm
 
-LLVMJITEventListenerRef LLVMCreateIntelJITEventListener(void)
-{
+LLVMJITEventListenerRef LLVMCreateIntelJITEventListener(void) {
   return wrap(JITEventListener::createIntelJITEventListener());
 }
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventsWrapper.h b/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventsWrapper.h
index 088b33b798a3242..3eb30c67fea73a0 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventsWrapper.h
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventsWrapper.h
@@ -31,9 +31,9 @@ typedef enum {
 class IntelJITEventsWrapper {
   // Function pointer types for testing implementation of Intel jitprofiling
   // library
-  typedef int (*NotifyEventPtr)(iJIT_JVM_EVENT, void*);
+  typedef int (*NotifyEventPtr)(iJIT_JVM_EVENT, void *);
   typedef int (*IttnotifyInfoPtr)(IttEventType, const char *, unsigned int);
-  typedef void (*RegisterCallbackExPtr)(void *, iJIT_ModeChangedEx );
+  typedef void (*RegisterCallbackExPtr)(void *, iJIT_ModeChangedEx);
   typedef iJIT_IsProfilingActiveFlags (*IsProfilingActivePtr)(void);
   typedef void (*FinalizeThreadPtr)(void);
   typedef void (*FinalizeProcessPtr)(void);
@@ -70,7 +70,7 @@ class IntelJITEventsWrapper {
 
   // Sends an event announcing that a function has been emitted
   //   return values are event-specific.  See Intel documentation for details.
-  int  iJIT_NotifyEvent(iJIT_JVM_EVENT EventType, void *EventSpecificData) {
+  int iJIT_NotifyEvent(iJIT_JVM_EVENT EventType, void *EventSpecificData) {
     if (!NotifyEventFunc)
       return -1;
     return NotifyEventFunc(EventType, EventSpecificData);
@@ -105,6 +105,6 @@ class IntelJITEventsWrapper {
   }
 };
 
-} //namespace llvm
+} // namespace llvm
 
-#endif //INTEL_JIT_EVENTS_WRAPPER_H
+#endif // INTEL_JIT_EVENTS_WRAPPER_H
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_config.h b/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_config.h
index 16ce672150cc25e..23f06036a7c8e34 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_config.h
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_config.h
@@ -20,41 +20,41 @@
 
 /** @cond exclude_from_documentation */
 #ifndef ITT_OS_WIN
-#  define ITT_OS_WIN   1
+#define ITT_OS_WIN 1
 #endif /* ITT_OS_WIN */
 
 #ifndef ITT_OS_LINUX
-#  define ITT_OS_LINUX 2
+#define ITT_OS_LINUX 2
 #endif /* ITT_OS_LINUX */
 
 #ifndef ITT_OS_MAC
-#  define ITT_OS_MAC   3
+#define ITT_OS_MAC 3
 #endif /* ITT_OS_MAC */
 
 #ifndef ITT_OS
-#  if defined WIN32 || defined _WIN32
-#    define ITT_OS ITT_OS_WIN
-#  elif defined( __APPLE__ ) && defined( __MACH__ )
-#    define ITT_OS ITT_OS_MAC
-#  else
-#    define ITT_OS ITT_OS_LINUX
-#  endif
+#if defined WIN32 || defined _WIN32
+#define ITT_OS ITT_OS_WIN
+#elif defined(__APPLE__) && defined(__MACH__)
+#define ITT_OS ITT_OS_MAC
+#else
+#define ITT_OS ITT_OS_LINUX
+#endif
 #endif /* ITT_OS */
 
 #ifndef ITT_PLATFORM_WIN
-#  define ITT_PLATFORM_WIN 1
+#define ITT_PLATFORM_WIN 1
 #endif /* ITT_PLATFORM_WIN */
 
 #ifndef ITT_PLATFORM_POSIX
-#  define ITT_PLATFORM_POSIX 2
+#define ITT_PLATFORM_POSIX 2
 #endif /* ITT_PLATFORM_POSIX */
 
 #ifndef ITT_PLATFORM
-#  if ITT_OS==ITT_OS_WIN
-#    define ITT_PLATFORM ITT_PLATFORM_WIN
-#  else
-#    define ITT_PLATFORM ITT_PLATFORM_POSIX
-#  endif /* _WIN32 */
+#if ITT_OS == ITT_OS_WIN
+#define ITT_PLATFORM ITT_PLATFORM_WIN
+#else
+#define ITT_PLATFORM ITT_PLATFORM_POSIX
+#endif /* _WIN32 */
 #endif /* ITT_PLATFORM */
 
 #if defined(_UNICODE) && !defined(UNICODE)
@@ -62,9 +62,9 @@
 #endif
 
 #include <stddef.h>
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 #include <tchar.h>
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 #include <stdint.h>
 #if defined(UNICODE) || defined(_UNICODE)
 #include <wchar.h>
@@ -72,382 +72,386 @@
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
 #ifndef CDECL
-#  if ITT_PLATFORM==ITT_PLATFORM_WIN
-#    define CDECL __cdecl
-#  else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#    if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-#      define CDECL /* not actual on x86_64 platform */
-#    else  /* _M_X64 || _M_AMD64 || __x86_64__ */
-#      define CDECL __attribute__ ((cdecl))
-#    endif /* _M_X64 || _M_AMD64 || __x86_64__ */
-#  endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+#define CDECL __cdecl
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
+#define CDECL /* not actual on x86_64 platform */
+#else         /* _M_X64 || _M_AMD64 || __x86_64__ */
+#define CDECL __attribute__((cdecl))
+#endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 #endif /* CDECL */
 
 #ifndef STDCALL
-#  if ITT_PLATFORM==ITT_PLATFORM_WIN
-#    define STDCALL __stdcall
-#  else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#    if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-#      define STDCALL /* not supported on x86_64 platform */
-#    else  /* _M_X64 || _M_AMD64 || __x86_64__ */
-#      define STDCALL __attribute__ ((stdcall))
-#    endif /* _M_X64 || _M_AMD64 || __x86_64__ */
-#  endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+#define STDCALL __stdcall
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
+#define STDCALL /* not supported on x86_64 platform */
+#else           /* _M_X64 || _M_AMD64 || __x86_64__ */
+#define STDCALL __attribute__((stdcall))
+#endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+#endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 #endif /* STDCALL */
 
-#define ITTAPI    CDECL
+#define ITTAPI CDECL
 #define LIBITTAPI CDECL
 
 /* TODO: Temporary for compatibility! */
-#define ITTAPI_CALL    CDECL
+#define ITTAPI_CALL CDECL
 #define LIBITTAPI_CALL CDECL
 
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 /* use __forceinline (VC++ specific) */
-#define ITT_INLINE           __forceinline
+#define ITT_INLINE __forceinline
 #define ITT_INLINE_ATTRIBUTE /* nothing */
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#else                        /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 /*
  * Generally, functions are not inlined unless optimization is specified.
  * For functions declared inline, this attribute inlines the function even
  * if no optimization level was specified.
  */
 #ifdef __STRICT_ANSI__
-#define ITT_INLINE           static
-#else  /* __STRICT_ANSI__ */
-#define ITT_INLINE           static inline
+#define ITT_INLINE static
+#else /* __STRICT_ANSI__ */
+#define ITT_INLINE static inline
 #endif /* __STRICT_ANSI__ */
-#define ITT_INLINE_ATTRIBUTE __attribute__ ((always_inline))
+#define ITT_INLINE_ATTRIBUTE __attribute__((always_inline))
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 /** @endcond */
 
 #ifndef ITT_ARCH_IA32
-#  define ITT_ARCH_IA32  1
+#define ITT_ARCH_IA32 1
 #endif /* ITT_ARCH_IA32 */
 
 #ifndef ITT_ARCH_IA32E
-#  define ITT_ARCH_IA32E 2
+#define ITT_ARCH_IA32E 2
 #endif /* ITT_ARCH_IA32E */
 
 #ifndef ITT_ARCH_IA64
-#  define ITT_ARCH_IA64  3
+#define ITT_ARCH_IA64 3
 #endif /* ITT_ARCH_IA64 */
 
 #ifndef ITT_ARCH
-#  if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-#    define ITT_ARCH ITT_ARCH_IA32E
-#  elif defined _M_IA64 || defined __ia64
-#    define ITT_ARCH ITT_ARCH_IA64
-#  else
-#    define ITT_ARCH ITT_ARCH_IA32
-#  endif
+#if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
+#define ITT_ARCH ITT_ARCH_IA32E
+#elif defined _M_IA64 || defined __ia64
+#define ITT_ARCH ITT_ARCH_IA64
+#else
+#define ITT_ARCH ITT_ARCH_IA32
+#endif
 #endif
 
 #ifdef __cplusplus
-#  define ITT_EXTERN_C extern "C"
+#define ITT_EXTERN_C extern "C"
 #else
-#  define ITT_EXTERN_C /* nothing */
-#endif /* __cplusplus */
+#define ITT_EXTERN_C /* nothing */
+#endif               /* __cplusplus */
 
 #define ITT_TO_STR_AUX(x) #x
-#define ITT_TO_STR(x)     ITT_TO_STR_AUX(x)
+#define ITT_TO_STR(x) ITT_TO_STR_AUX(x)
 
-#define __ITT_BUILD_ASSERT(expr, suffix) do { \
-    static char __itt_build_check_##suffix[(expr) ? 1 : -1]; \
-    __itt_build_check_##suffix[0] = 0; \
-} while(0)
-#define _ITT_BUILD_ASSERT(expr, suffix)  __ITT_BUILD_ASSERT((expr), suffix)
-#define ITT_BUILD_ASSERT(expr)           _ITT_BUILD_ASSERT((expr), __LINE__)
+#define __ITT_BUILD_ASSERT(expr, suffix)                                       \
+  do {                                                                         \
+    static char __itt_build_check_##suffix[(expr) ? 1 : -1];                   \
+    __itt_build_check_##suffix[0] = 0;                                         \
+  } while (0)
+#define _ITT_BUILD_ASSERT(expr, suffix) __ITT_BUILD_ASSERT((expr), suffix)
+#define ITT_BUILD_ASSERT(expr) _ITT_BUILD_ASSERT((expr), __LINE__)
 
-#define ITT_MAGIC { 0xED, 0xAB, 0xAB, 0xEC, 0x0D, 0xEE, 0xDA, 0x30 }
+#define ITT_MAGIC                                                              \
+  { 0xED, 0xAB, 0xAB, 0xEC, 0x0D, 0xEE, 0xDA, 0x30 }
 
 /* Replace with snapshot date YYYYMMDD for promotion build. */
-#define API_VERSION_BUILD    20111111
+#define API_VERSION_BUILD 20111111
 
 #ifndef API_VERSION_NUM
 #define API_VERSION_NUM 0.0.0
 #endif /* API_VERSION_NUM */
 
-#define API_VERSION "ITT-API-Version " ITT_TO_STR(API_VERSION_NUM) \
-                                " (" ITT_TO_STR(API_VERSION_BUILD) ")"
+#define API_VERSION                                                            \
+  "ITT-API-Version " ITT_TO_STR(API_VERSION_NUM) " (" ITT_TO_STR(              \
+      API_VERSION_BUILD) ")"
 
 /* OS communication functions */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 #include <windows.h>
-typedef HMODULE           lib_t;
-typedef DWORD             TIDT;
-typedef CRITICAL_SECTION  mutex_t;
-#define MUTEX_INITIALIZER { 0 }
+typedef HMODULE lib_t;
+typedef DWORD TIDT;
+typedef CRITICAL_SECTION mutex_t;
+#define MUTEX_INITIALIZER                                                      \
+  { 0 }
 #define strong_alias(name, aliasname) /* empty for Windows */
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#else                                 /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 #include <dlfcn.h>
 #if defined(UNICODE) || defined(_UNICODE)
 #include <wchar.h>
 #endif /* UNICODE */
 #ifndef _GNU_SOURCE
 #define _GNU_SOURCE 1 /* need for PTHREAD_MUTEX_RECURSIVE */
-#endif /* _GNU_SOURCE */
+#endif                /* _GNU_SOURCE */
 #include <pthread.h>
-typedef void*             lib_t;
-typedef pthread_t         TIDT;
-typedef pthread_mutex_t   mutex_t;
+typedef void *lib_t;
+typedef pthread_t TIDT;
+typedef pthread_mutex_t mutex_t;
 #define MUTEX_INITIALIZER PTHREAD_MUTEX_INITIALIZER
-#define _strong_alias(name, aliasname) \
-            extern __typeof (name) aliasname __attribute__ ((alias (#name)));
+#define _strong_alias(name, aliasname)                                         \
+  extern __typeof(name) aliasname __attribute__((alias(#name)));
 #define strong_alias(name, aliasname) _strong_alias(name, aliasname)
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 #define __itt_get_proc(lib, name) GetProcAddress(lib, name)
-#define __itt_mutex_init(mutex)   InitializeCriticalSection(mutex)
-#define __itt_mutex_lock(mutex)   EnterCriticalSection(mutex)
+#define __itt_mutex_init(mutex) InitializeCriticalSection(mutex)
+#define __itt_mutex_lock(mutex) EnterCriticalSection(mutex)
 #define __itt_mutex_unlock(mutex) LeaveCriticalSection(mutex)
-#define __itt_load_lib(name)      LoadLibraryA(name)
-#define __itt_unload_lib(handle)  FreeLibrary(handle)
-#define __itt_system_error()      (int)GetLastError()
-#define __itt_fstrcmp(s1, s2)     lstrcmpA(s1, s2)
-#define __itt_fstrlen(s)          lstrlenA(s)
+#define __itt_load_lib(name) LoadLibraryA(name)
+#define __itt_unload_lib(handle) FreeLibrary(handle)
+#define __itt_system_error() (int)GetLastError()
+#define __itt_fstrcmp(s1, s2) lstrcmpA(s1, s2)
+#define __itt_fstrlen(s) lstrlenA(s)
 #define __itt_fstrcpyn(s1, s2, l) lstrcpynA(s1, s2, l)
-#define __itt_fstrdup(s)          _strdup(s)
-#define __itt_thread_id()         GetCurrentThreadId()
-#define __itt_thread_yield()      SwitchToThread()
+#define __itt_fstrdup(s) _strdup(s)
+#define __itt_thread_id() GetCurrentThreadId()
+#define __itt_thread_yield() SwitchToThread()
 #ifndef ITT_SIMPLE_INIT
 ITT_INLINE long
-__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
-ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
-{
-    return InterlockedIncrement(ptr);
+__itt_interlocked_increment(volatile long *ptr) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __itt_interlocked_increment(volatile long *ptr) {
+  return InterlockedIncrement(ptr);
 }
 #endif /* ITT_SIMPLE_INIT */
-#else /* ITT_PLATFORM!=ITT_PLATFORM_WIN */
+#else  /* ITT_PLATFORM!=ITT_PLATFORM_WIN */
 #define __itt_get_proc(lib, name) dlsym(lib, name)
-#define __itt_mutex_init(mutex)   {\
-    pthread_mutexattr_t mutex_attr;                                         \
-    int error_code = pthread_mutexattr_init(&mutex_attr);                   \
-    if (error_code)                                                         \
-        __itt_report_error(__itt_error_system, "pthread_mutexattr_init",    \
-                           error_code);                                     \
-    error_code = pthread_mutexattr_settype(&mutex_attr,                     \
-                                           PTHREAD_MUTEX_RECURSIVE);        \
-    if (error_code)                                                         \
-        __itt_report_error(__itt_error_system, "pthread_mutexattr_settype", \
-                           error_code);                                     \
-    error_code = pthread_mutex_init(mutex, &mutex_attr);                    \
-    if (error_code)                                                         \
-        __itt_report_error(__itt_error_system, "pthread_mutex_init",        \
-                           error_code);                                     \
-    error_code = pthread_mutexattr_destroy(&mutex_attr);                    \
-    if (error_code)                                                         \
-        __itt_report_error(__itt_error_system, "pthread_mutexattr_destroy", \
-                           error_code);                                     \
-}
-#define __itt_mutex_lock(mutex)   pthread_mutex_lock(mutex)
+#define __itt_mutex_init(mutex)                                                \
+  {                                                                            \
+    pthread_mutexattr_t mutex_attr;                                            \
+    int error_code = pthread_mutexattr_init(&mutex_attr);                      \
+    if (error_code)                                                            \
+      __itt_report_error(__itt_error_system, "pthread_mutexattr_init",         \
+                         error_code);                                          \
+    error_code =                                                               \
+        pthread_mutexattr_settype(&mutex_attr, PTHREAD_MUTEX_RECURSIVE);       \
+    if (error_code)                                                            \
+      __itt_report_error(__itt_error_system, "pthread_mutexattr_settype",      \
+                         error_code);                                          \
+    error_code = pthread_mutex_init(mutex, &mutex_attr);                       \
+    if (error_code)                                                            \
+      __itt_report_error(__itt_error_system, "pthread_mutex_init",             \
+                         error_code);                                          \
+    error_code = pthread_mutexattr_destroy(&mutex_attr);                       \
+    if (error_code)                                                            \
+      __itt_report_error(__itt_error_system, "pthread_mutexattr_destroy",      \
+                         error_code);                                          \
+  }
+#define __itt_mutex_lock(mutex) pthread_mutex_lock(mutex)
 #define __itt_mutex_unlock(mutex) pthread_mutex_unlock(mutex)
-#define __itt_load_lib(name)      dlopen(name, RTLD_LAZY)
-#define __itt_unload_lib(handle)  dlclose(handle)
-#define __itt_system_error()      errno
-#define __itt_fstrcmp(s1, s2)     strcmp(s1, s2)
-#define __itt_fstrlen(s)          strlen(s)
+#define __itt_load_lib(name) dlopen(name, RTLD_LAZY)
+#define __itt_unload_lib(handle) dlclose(handle)
+#define __itt_system_error() errno
+#define __itt_fstrcmp(s1, s2) strcmp(s1, s2)
+#define __itt_fstrlen(s) strlen(s)
 #define __itt_fstrcpyn(s1, s2, l) strncpy(s1, s2, l)
-#define __itt_fstrdup(s)          strdup(s)
-#define __itt_thread_id()         pthread_self()
-#define __itt_thread_yield()      sched_yield()
-#if ITT_ARCH==ITT_ARCH_IA64
+#define __itt_fstrdup(s) strdup(s)
+#define __itt_thread_id() pthread_self()
+#define __itt_thread_yield() sched_yield()
+#if ITT_ARCH == ITT_ARCH_IA64
 #ifdef __INTEL_COMPILER
 #define __TBB_machine_fetchadd4(addr, val) __fetchadd4_acq((void *)addr, val)
 #else  /* __INTEL_COMPILER */
 /* TODO: Add Support for not Intel compilers for IA64 */
 #endif /* __INTEL_COMPILER */
-#else /* ITT_ARCH!=ITT_ARCH_IA64 */
-ITT_INLINE long
-__TBB_machine_fetchadd4(volatile void* ptr, long addend) ITT_INLINE_ATTRIBUTE;
-ITT_INLINE long __TBB_machine_fetchadd4(volatile void* ptr, long addend)
-{
-    long result;
-    __asm__ __volatile__("lock\nxadd %0,%1"
-                          : "=r"(result),"=m"(*(long*)ptr)
-                          : "0"(addend), "m"(*(long*)ptr)
-                          : "memory");
-    return result;
+#else  /* ITT_ARCH!=ITT_ARCH_IA64 */
+ITT_INLINE long __TBB_machine_fetchadd4(volatile void *ptr,
+                                        long addend) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __TBB_machine_fetchadd4(volatile void *ptr, long addend) {
+  long result;
+  __asm__ __volatile__("lock\nxadd %0,%1"
+                       : "=r"(result), "=m"(*(long *)ptr)
+                       : "0"(addend), "m"(*(long *)ptr)
+                       : "memory");
+  return result;
 }
 #endif /* ITT_ARCH==ITT_ARCH_IA64 */
 #ifndef ITT_SIMPLE_INIT
 ITT_INLINE long
-__itt_interlocked_increment(volatile long* ptr) ITT_INLINE_ATTRIBUTE;
-ITT_INLINE long __itt_interlocked_increment(volatile long* ptr)
-{
-    return __TBB_machine_fetchadd4(ptr, 1) + 1L;
+__itt_interlocked_increment(volatile long *ptr) ITT_INLINE_ATTRIBUTE;
+ITT_INLINE long __itt_interlocked_increment(volatile long *ptr) {
+  return __TBB_machine_fetchadd4(ptr, 1) + 1L;
 }
 #endif /* ITT_SIMPLE_INIT */
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
 typedef enum {
-    __itt_collection_normal = 0,
-    __itt_collection_paused = 1
+  __itt_collection_normal = 0,
+  __itt_collection_paused = 1
 } __itt_collection_state;
 
 typedef enum {
-    __itt_thread_normal  = 0,
-    __itt_thread_ignored = 1
+  __itt_thread_normal = 0,
+  __itt_thread_ignored = 1
 } __itt_thread_state;
 
 #pragma pack(push, 8)
 
-typedef struct ___itt_thread_info
-{
-    const char* nameA; /*!< Copy of original name in ASCII. */
+typedef struct ___itt_thread_info {
+  const char *nameA; /*!< Copy of original name in ASCII. */
 #if defined(UNICODE) || defined(_UNICODE)
-    const wchar_t* nameW; /*!< Copy of original name in UNICODE. */
-#else  /* UNICODE || _UNICODE */
-    void* nameW;
-#endif /* UNICODE || _UNICODE */
-    TIDT               tid;
-    __itt_thread_state state;   /*!< Thread state (paused or normal) */
-    int                extra1;  /*!< Reserved to the runtime */
-    void*              extra2;  /*!< Reserved to the runtime */
-    struct ___itt_thread_info* next;
+  const wchar_t *nameW; /*!< Copy of original name in UNICODE. */
+#else                   /* UNICODE || _UNICODE */
+  void *nameW;
+#endif                  /* UNICODE || _UNICODE */
+  TIDT tid;
+  __itt_thread_state state; /*!< Thread state (paused or normal) */
+  int extra1;               /*!< Reserved to the runtime */
+  void *extra2;             /*!< Reserved to the runtime */
+  struct ___itt_thread_info *next;
 } __itt_thread_info;
 
 #include "ittnotify_types.h" /* For __itt_group_id definition */
 
-typedef struct ___itt_api_info_20101001
-{
-    const char*    name;
-    void**         func_ptr;
-    void*          init_func;
-    __itt_group_id group;
-}  __itt_api_info_20101001;
-
-typedef struct ___itt_api_info
-{
-    const char*    name;
-    void**         func_ptr;
-    void*          init_func;
-    void*          null_func;
-    __itt_group_id group;
-}  __itt_api_info;
+typedef struct ___itt_api_info_20101001 {
+  const char *name;
+  void **func_ptr;
+  void *init_func;
+  __itt_group_id group;
+} __itt_api_info_20101001;
+
+typedef struct ___itt_api_info {
+  const char *name;
+  void **func_ptr;
+  void *init_func;
+  void *null_func;
+  __itt_group_id group;
+} __itt_api_info;
 
 struct ___itt_domain;
 struct ___itt_string_handle;
 
-typedef struct ___itt_global
-{
-    unsigned char          magic[8];
-    unsigned long          version_major;
-    unsigned long          version_minor;
-    unsigned long          version_build;
-    volatile long          api_initialized;
-    volatile long          mutex_initialized;
-    volatile long          atomic_counter;
-    mutex_t                mutex;
-    lib_t                  lib;
-    void*                  error_handler;
-    const char**           dll_path_ptr;
-    __itt_api_info*        api_list_ptr;
-    struct ___itt_global*  next;
-    /* Joinable structures below */
-    __itt_thread_info*     thread_list;
-    struct ___itt_domain*  domain_list;
-    struct ___itt_string_handle* string_list;
-    __itt_collection_state state;
+typedef struct ___itt_global {
+  unsigned char magic[8];
+  unsigned long version_major;
+  unsigned long version_minor;
+  unsigned long version_build;
+  volatile long api_initialized;
+  volatile long mutex_initialized;
+  volatile long atomic_counter;
+  mutex_t mutex;
+  lib_t lib;
+  void *error_handler;
+  const char **dll_path_ptr;
+  __itt_api_info *api_list_ptr;
+  struct ___itt_global *next;
+  /* Joinable structures below */
+  __itt_thread_info *thread_list;
+  struct ___itt_domain *domain_list;
+  struct ___itt_string_handle *string_list;
+  __itt_collection_state state;
 } __itt_global;
 
 #pragma pack(pop)
 
-#define NEW_THREAD_INFO_W(gptr,h,h_tail,t,s,n) { \
-    h = (__itt_thread_info*)malloc(sizeof(__itt_thread_info)); \
-    if (h != NULL) { \
-        h->tid    = t; \
-        h->nameA  = NULL; \
-        h->nameW  = n ? _wcsdup(n) : NULL; \
-        h->state  = s; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->thread_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
-
-#define NEW_THREAD_INFO_A(gptr,h,h_tail,t,s,n) { \
-    h = (__itt_thread_info*)malloc(sizeof(__itt_thread_info)); \
-    if (h != NULL) { \
-        h->tid    = t; \
-        h->nameA  = n ? __itt_fstrdup(n) : NULL; \
-        h->nameW  = NULL; \
-        h->state  = s; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->thread_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
-
-#define NEW_DOMAIN_W(gptr,h,h_tail,name) { \
-    h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
-    if (h != NULL) { \
-        h->flags  = 0;    /* domain is disabled by default */ \
-        h->nameA  = NULL; \
-        h->nameW  = name ? _wcsdup(name) : NULL; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->domain_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
-
-#define NEW_DOMAIN_A(gptr,h,h_tail,name) { \
-    h = (__itt_domain*)malloc(sizeof(__itt_domain)); \
-    if (h != NULL) { \
-        h->flags  = 0;    /* domain is disabled by default */ \
-        h->nameA  = name ? __itt_fstrdup(name) : NULL; \
-        h->nameW  = NULL; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->domain_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
-
-#define NEW_STRING_HANDLE_W(gptr,h,h_tail,name) { \
-    h = (__itt_string_handle*)malloc(sizeof(__itt_string_handle)); \
-    if (h != NULL) { \
-        h->strA   = NULL; \
-        h->strW   = name ? _wcsdup(name) : NULL; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->string_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
-
-#define NEW_STRING_HANDLE_A(gptr,h,h_tail,name) { \
-    h = (__itt_string_handle*)malloc(sizeof(__itt_string_handle)); \
-    if (h != NULL) { \
-        h->strA   = name ? __itt_fstrdup(name) : NULL; \
-        h->strW   = NULL; \
-        h->extra1 = 0;    /* reserved */ \
-        h->extra2 = NULL; /* reserved */ \
-        h->next   = NULL; \
-        if (h_tail == NULL) \
-            (gptr)->string_list = h; \
-        else \
-            h_tail->next = h; \
-    } \
-}
+#define NEW_THREAD_INFO_W(gptr, h, h_tail, t, s, n)                            \
+  {                                                                            \
+    h = (__itt_thread_info *)malloc(sizeof(__itt_thread_info));                \
+    if (h != NULL) {                                                           \
+      h->tid = t;                                                              \
+      h->nameA = NULL;                                                         \
+      h->nameW = n ? _wcsdup(n) : NULL;                                        \
+      h->state = s;                                                            \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->thread_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
+
+#define NEW_THREAD_INFO_A(gptr, h, h_tail, t, s, n)                            \
+  {                                                                            \
+    h = (__itt_thread_info *)malloc(sizeof(__itt_thread_info));                \
+    if (h != NULL) {                                                           \
+      h->tid = t;                                                              \
+      h->nameA = n ? __itt_fstrdup(n) : NULL;                                  \
+      h->nameW = NULL;                                                         \
+      h->state = s;                                                            \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->thread_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
+
+#define NEW_DOMAIN_W(gptr, h, h_tail, name)                                    \
+  {                                                                            \
+    h = (__itt_domain *)malloc(sizeof(__itt_domain));                          \
+    if (h != NULL) {                                                           \
+      h->flags = 0; /* domain is disabled by default */                        \
+      h->nameA = NULL;                                                         \
+      h->nameW = name ? _wcsdup(name) : NULL;                                  \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->domain_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
+
+#define NEW_DOMAIN_A(gptr, h, h_tail, name)                                    \
+  {                                                                            \
+    h = (__itt_domain *)malloc(sizeof(__itt_domain));                          \
+    if (h != NULL) {                                                           \
+      h->flags = 0; /* domain is disabled by default */                        \
+      h->nameA = name ? __itt_fstrdup(name) : NULL;                            \
+      h->nameW = NULL;                                                         \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->domain_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
+
+#define NEW_STRING_HANDLE_W(gptr, h, h_tail, name)                             \
+  {                                                                            \
+    h = (__itt_string_handle *)malloc(sizeof(__itt_string_handle));            \
+    if (h != NULL) {                                                           \
+      h->strA = NULL;                                                          \
+      h->strW = name ? _wcsdup(name) : NULL;                                   \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->string_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
+
+#define NEW_STRING_HANDLE_A(gptr, h, h_tail, name)                             \
+  {                                                                            \
+    h = (__itt_string_handle *)malloc(sizeof(__itt_string_handle));            \
+    if (h != NULL) {                                                           \
+      h->strA = name ? __itt_fstrdup(name) : NULL;                             \
+      h->strW = NULL;                                                          \
+      h->extra1 = 0;    /* reserved */                                         \
+      h->extra2 = NULL; /* reserved */                                         \
+      h->next = NULL;                                                          \
+      if (h_tail == NULL)                                                      \
+        (gptr)->string_list = h;                                               \
+      else                                                                     \
+        h_tail->next = h;                                                      \
+    }                                                                          \
+  }
 
 #endif /* _ITTNOTIFY_CONFIG_H_ */
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_types.h b/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_types.h
index 15008fe93e607bf..4a6483897125cb8 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_types.h
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/ittnotify_types.h
@@ -15,55 +15,51 @@
 #ifndef _ITTNOTIFY_TYPES_H_
 #define _ITTNOTIFY_TYPES_H_
 
-typedef enum ___itt_group_id
-{
-    __itt_group_none      = 0,
-    __itt_group_legacy    = 1<<0,
-    __itt_group_control   = 1<<1,
-    __itt_group_thread    = 1<<2,
-    __itt_group_mark      = 1<<3,
-    __itt_group_sync      = 1<<4,
-    __itt_group_fsync     = 1<<5,
-    __itt_group_jit       = 1<<6,
-    __itt_group_model     = 1<<7,
-    __itt_group_splitter_min = 1<<7,
-    __itt_group_counter   = 1<<8,
-    __itt_group_frame     = 1<<9,
-    __itt_group_stitch    = 1<<10,
-    __itt_group_heap      = 1<<11,
-    __itt_group_splitter_max = 1<<12,
-    __itt_group_structure = 1<<12,
-    __itt_group_suppress = 1<<13,
-    __itt_group_all       = -1
+typedef enum ___itt_group_id {
+  __itt_group_none = 0,
+  __itt_group_legacy = 1 << 0,
+  __itt_group_control = 1 << 1,
+  __itt_group_thread = 1 << 2,
+  __itt_group_mark = 1 << 3,
+  __itt_group_sync = 1 << 4,
+  __itt_group_fsync = 1 << 5,
+  __itt_group_jit = 1 << 6,
+  __itt_group_model = 1 << 7,
+  __itt_group_splitter_min = 1 << 7,
+  __itt_group_counter = 1 << 8,
+  __itt_group_frame = 1 << 9,
+  __itt_group_stitch = 1 << 10,
+  __itt_group_heap = 1 << 11,
+  __itt_group_splitter_max = 1 << 12,
+  __itt_group_structure = 1 << 12,
+  __itt_group_suppress = 1 << 13,
+  __itt_group_all = -1
 } __itt_group_id;
 
 #pragma pack(push, 8)
 
-typedef struct ___itt_group_list
-{
-    __itt_group_id id;
-    const char*    name;
+typedef struct ___itt_group_list {
+  __itt_group_id id;
+  const char *name;
 } __itt_group_list;
 
 #pragma pack(pop)
 
-#define ITT_GROUP_LIST(varname) \
-    static __itt_group_list varname[] = {       \
-        { __itt_group_all,       "all"       }, \
-        { __itt_group_control,   "control"   }, \
-        { __itt_group_thread,    "thread"    }, \
-        { __itt_group_mark,      "mark"      }, \
-        { __itt_group_sync,      "sync"      }, \
-        { __itt_group_fsync,     "fsync"     }, \
-        { __itt_group_jit,       "jit"       }, \
-        { __itt_group_model,     "model"     }, \
-        { __itt_group_counter,   "counter"   }, \
-        { __itt_group_frame,     "frame"     }, \
-        { __itt_group_stitch,    "stitch"    }, \
-        { __itt_group_heap,      "heap"      }, \
-        { __itt_group_structure, "structure" }, \
-        { __itt_group_suppress,  "suppress"  }, \
-        { __itt_group_none,      NULL        }  \
-    }
+#define ITT_GROUP_LIST(varname)                                                \
+  static __itt_group_list varname[] = {{__itt_group_all, "all"},               \
+                                       {__itt_group_control, "control"},       \
+                                       {__itt_group_thread, "thread"},         \
+                                       {__itt_group_mark, "mark"},             \
+                                       {__itt_group_sync, "sync"},             \
+                                       {__itt_group_fsync, "fsync"},           \
+                                       {__itt_group_jit, "jit"},               \
+                                       {__itt_group_model, "model"},           \
+                                       {__itt_group_counter, "counter"},       \
+                                       {__itt_group_frame, "frame"},           \
+                                       {__itt_group_stitch, "stitch"},         \
+                                       {__itt_group_heap, "heap"},             \
+                                       {__itt_group_structure, "structure"},   \
+                                       {__itt_group_suppress, "suppress"},     \
+                                       {__itt_group_none, NULL}}
 
 #endif /* _ITTNOTIFY_TYPES_H_ */
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.c b/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.c
index 50d64d70c98a417..48ba94e27318481 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.c
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.c
@@ -6,8 +6,8 @@
  *
  *===----------------------------------------------------------------------===*
  *
- * This file provides Intel(R) Performance Analyzer JIT (Just-In-Time) 
- * Profiling API implementation. 
+ * This file provides Intel(R) Performance Analyzer JIT (Just-In-Time)
+ * Profiling API implementation.
  *
  * NOTE: This file comes in a style different from the rest of LLVM
  * source base since  this is a piece of code shared from Intel(R)
@@ -17,10 +17,10 @@
  *===----------------------------------------------------------------------===*/
 #include "ittnotify_config.h"
 
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 #include <windows.h>
 #pragma optimize("", off)
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 #include <dlfcn.h>
 #include <pthread.h>
 #include <stdint.h>
@@ -31,44 +31,44 @@
 
 static const char rcsid[] = "\n@(#) $Revision: 243501 $\n";
 
-#define DLL_ENVIRONMENT_VAR             "VS_PROFILER"
+#define DLL_ENVIRONMENT_VAR "VS_PROFILER"
 
 #ifndef NEW_DLL_ENVIRONMENT_VAR
-#if ITT_ARCH==ITT_ARCH_IA32
-#define NEW_DLL_ENVIRONMENT_VAR	        "INTEL_JIT_PROFILER32"
+#if ITT_ARCH == ITT_ARCH_IA32
+#define NEW_DLL_ENVIRONMENT_VAR "INTEL_JIT_PROFILER32"
 #else
-#define NEW_DLL_ENVIRONMENT_VAR	        "INTEL_JIT_PROFILER64"
+#define NEW_DLL_ENVIRONMENT_VAR "INTEL_JIT_PROFILER64"
 #endif
 #endif /* NEW_DLL_ENVIRONMENT_VAR */
 
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-#define DEFAULT_DLLNAME                 "JitPI.dll"
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+#define DEFAULT_DLLNAME "JitPI.dll"
 HINSTANCE m_libHandle = NULL;
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-#define DEFAULT_DLLNAME                 "libJitPI.so"
-void* m_libHandle = NULL;
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+#define DEFAULT_DLLNAME "libJitPI.so"
+void *m_libHandle = NULL;
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
 /* default location of JIT profiling agent on Android */
-#define ANDROID_JIT_AGENT_PATH  "/data/intel/libittnotify.so"
+#define ANDROID_JIT_AGENT_PATH "/data/intel/libittnotify.so"
 
 /* the function pointers */
-typedef unsigned int(*TPInitialize)(void);
-static TPInitialize FUNC_Initialize=NULL;
+typedef unsigned int (*TPInitialize)(void);
+static TPInitialize FUNC_Initialize = NULL;
 
-typedef unsigned int(*TPNotify)(unsigned int, void*);
-static TPNotify FUNC_NotifyEvent=NULL;
+typedef unsigned int (*TPNotify)(unsigned int, void *);
+static TPNotify FUNC_NotifyEvent = NULL;
 
 static iJIT_IsProfilingActiveFlags executionMode = iJIT_NOTHING_RUNNING;
 
 /* end collector dll part. */
 
-/* loadiJIT_Funcs() : this function is called just in the beginning 
+/* loadiJIT_Funcs() : this function is called just in the beginning
  *  and is responsible to load the functions from BistroJavaCollector.dll
  * result:
  *  on success: the functions loads, iJIT_DLL_is_missing=0, return value = 1
  *  on failure: the functions are NULL, iJIT_DLL_is_missing=1, return value = 0
- */ 
+ */
 static int loadiJIT_Funcs(void);
 
 /* global representing whether the BistroJavaCollector can't be loaded */
@@ -77,12 +77,12 @@ static int iJIT_DLL_is_missing = 0;
 /* Virtual stack - the struct is used as a virtual stack for each thread.
  * Every thread initializes with a stack of size INIT_TOP_STACK.
  * Every method entry decreases from the current stack point,
- * and when a thread stack reaches its top of stack (return from the global 
- * function), the top of stack and the current stack increase. Notice that 
- * when returning from a function the stack pointer is the address of 
+ * and when a thread stack reaches its top of stack (return from the global
+ * function), the top of stack and the current stack increase. Notice that
+ * when returning from a function the stack pointer is the address of
  * the function return.
-*/
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
+ */
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
 static DWORD threadLocalStorageHandle = 0;
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 static pthread_key_t threadLocalStorageHandle = (pthread_key_t)0;
@@ -90,375 +90,342 @@ static pthread_key_t threadLocalStorageHandle = (pthread_key_t)0;
 
 #define INIT_TOP_Stack 10000
 
-typedef struct 
-{
-    unsigned int TopStack;
-    unsigned int CurrentStack;
+typedef struct {
+  unsigned int TopStack;
+  unsigned int CurrentStack;
 } ThreadStack, *pThreadStack;
 
 /* end of virtual stack. */
 
 /*
  * The function for reporting virtual-machine related events to VTune.
- * Note: when reporting iJVM_EVENT_TYPE_ENTER_NIDS, there is no need to fill 
+ * Note: when reporting iJVM_EVENT_TYPE_ENTER_NIDS, there is no need to fill
  * in the stack_id field in the iJIT_Method_NIDS structure, as VTune fills it.
- * The return value in iJVM_EVENT_TYPE_ENTER_NIDS && 
+ * The return value in iJVM_EVENT_TYPE_ENTER_NIDS &&
  * iJVM_EVENT_TYPE_LEAVE_NIDS events will be 0 in case of failure.
- * in iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED event 
+ * in iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED event
  * it will be -1 if EventSpecificData == 0 otherwise it will be 0.
-*/
-
-ITT_EXTERN_C int JITAPI 
-iJIT_NotifyEvent(iJIT_JVM_EVENT event_type, void *EventSpecificData)
-{
-    int ReturnValue;
-
-    /*
-     * This section is for debugging outside of VTune. 
-     * It creates the environment variables that indicates call graph mode.
-     * If running outside of VTune remove the remark.
-     *
-     *
-     * static int firstTime = 1;
-     * char DoCallGraph[12] = "DoCallGraph";
-     * if (firstTime)
-     * {
-     * firstTime = 0;
-     * SetEnvironmentVariable( "BISTRO_COLLECTORS_DO_CALLGRAPH", DoCallGraph);
-     * }
-     *
-     * end of section.
-    */
-
-    /* initialization part - the functions have not been loaded yet. This part
-     *        will load the functions, and check if we are in Call Graph mode. 
-     *        (for special treatment).
-     */
-    if (!FUNC_NotifyEvent) 
-    {
-        if (iJIT_DLL_is_missing) 
-            return 0;
-
-        /* load the Function from the DLL */
-        if (!loadiJIT_Funcs()) 
-            return 0;
-
-        /* Call Graph initialization. */
-    }
+ */
 
-    /* If the event is method entry/exit, check that in the current mode 
-     * VTune is allowed to receive it
-     */
-    if ((event_type == iJVM_EVENT_TYPE_ENTER_NIDS || 
-         event_type == iJVM_EVENT_TYPE_LEAVE_NIDS) &&
-        (executionMode != iJIT_CALLGRAPH_ON))
-    {
-        return 0;
-    }
-    /* This section is performed when method enter event occurs.
-     * It updates the virtual stack, or creates it if this is the first 
-     * method entry in the thread. The stack pointer is decreased.
-     */
-    if (event_type == iJVM_EVENT_TYPE_ENTER_NIDS)
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        pThreadStack threadStack = 
-            (pThreadStack)TlsGetValue (threadLocalStorageHandle);
+ITT_EXTERN_C int JITAPI iJIT_NotifyEvent(iJIT_JVM_EVENT event_type,
+                                         void *EventSpecificData) {
+  int ReturnValue;
+
+  /*
+   * This section is for debugging outside of VTune.
+   * It creates the environment variable that indicates call graph mode.
+   * If running outside of VTune, uncomment the code below.
+   *
+   *
+   * static int firstTime = 1;
+   * char DoCallGraph[12] = "DoCallGraph";
+   * if (firstTime)
+   * {
+   * firstTime = 0;
+   * SetEnvironmentVariable( "BISTRO_COLLECTORS_DO_CALLGRAPH", DoCallGraph);
+   * }
+   *
+   * end of section.
+   */
+
+  /* Initialization: the functions have not been loaded yet. This part
+   * loads the functions and checks whether we are in Call Graph mode
+   * (which requires special treatment).
+   */
+  if (!FUNC_NotifyEvent) {
+    if (iJIT_DLL_is_missing)
+      return 0;
+
+    /* load the Function from the DLL */
+    if (!loadiJIT_Funcs())
+      return 0;
+
+    /* Call Graph initialization. */
+  }
+
+  /* If the event is a method entry/exit, check that the current mode
+   * allows VTune to receive it.
+   */
+  if ((event_type == iJVM_EVENT_TYPE_ENTER_NIDS ||
+       event_type == iJVM_EVENT_TYPE_LEAVE_NIDS) &&
+      (executionMode != iJIT_CALLGRAPH_ON)) {
+    return 0;
+  }
+  /* This section runs when a method-enter event occurs.
+   * It updates the virtual stack, or creates it if this is the first
+   * method entry in the thread. The stack pointer is decreased.
+   */
+  if (event_type == iJVM_EVENT_TYPE_ENTER_NIDS) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    pThreadStack threadStack =
+        (pThreadStack)TlsGetValue(threadLocalStorageHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        pThreadStack threadStack = 
-            (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
+    pThreadStack threadStack =
+        (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
-        /* check for use of reserved method IDs */
-        if ( ((piJIT_Method_NIDS) EventSpecificData)->method_id <= 999 )
-            return 0;
-
-        if (!threadStack)
-        {
-            /* initialize the stack. */
-            threadStack = (pThreadStack) calloc (sizeof(ThreadStack), 1);
-            threadStack->TopStack = INIT_TOP_Stack;
-            threadStack->CurrentStack = INIT_TOP_Stack;
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-            TlsSetValue(threadLocalStorageHandle,(void*)threadStack);
+    /* check for use of reserved method IDs */
+    if (((piJIT_Method_NIDS)EventSpecificData)->method_id <= 999)
+      return 0;
+
+    if (!threadStack) {
+      /* initialize the stack. */
+      threadStack = (pThreadStack)calloc(sizeof(ThreadStack), 1);
+      threadStack->TopStack = INIT_TOP_Stack;
+      threadStack->CurrentStack = INIT_TOP_Stack;
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+      TlsSetValue(threadLocalStorageHandle, (void *)threadStack);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-            pthread_setspecific(threadLocalStorageHandle,(void*)threadStack);
+      pthread_setspecific(threadLocalStorageHandle, (void *)threadStack);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        }
-
-        /* decrease the stack. */
-        ((piJIT_Method_NIDS) EventSpecificData)->stack_id = 
-            (threadStack->CurrentStack)--;
     }
 
-    /* This section is performed when method leave event occurs
-     * It updates the virtual stack.
-     *    Increases the stack pointer.
-     *    If the stack pointer reached the top (left the global function)
-     *        increase the pointer and the top pointer.
-     */
-    if (event_type == iJVM_EVENT_TYPE_LEAVE_NIDS)
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        pThreadStack threadStack = 
-           (pThreadStack)TlsGetValue (threadLocalStorageHandle);
+    /* decrease the stack. */
+    ((piJIT_Method_NIDS)EventSpecificData)->stack_id =
+        (threadStack->CurrentStack)--;
+  }
+
+  /* This section runs when a method-leave event occurs.
+   * It updates the virtual stack.
+   *    Increases the stack pointer.
+   *    If the stack pointer reached the top (left the global function)
+   *        increase the pointer and the top pointer.
+   */
+  if (event_type == iJVM_EVENT_TYPE_LEAVE_NIDS) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    pThreadStack threadStack =
+        (pThreadStack)TlsGetValue(threadLocalStorageHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        pThreadStack threadStack = 
-            (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
+    pThreadStack threadStack =
+        (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
-        /* check for use of reserved method IDs */
-        if ( ((piJIT_Method_NIDS) EventSpecificData)->method_id <= 999 )
-            return 0;
+    /* check for use of reserved method IDs */
+    if (((piJIT_Method_NIDS)EventSpecificData)->method_id <= 999)
+      return 0;
 
-        if (!threadStack)
-        {
-            /* Error: first report in this thread is method exit */
-            exit (1);
-        }
+    if (!threadStack) {
+      /* Error: first report in this thread is method exit */
+      exit(1);
+    }
 
-        ((piJIT_Method_NIDS) EventSpecificData)->stack_id = 
-            ++(threadStack->CurrentStack) + 1;
+    ((piJIT_Method_NIDS)EventSpecificData)->stack_id =
+        ++(threadStack->CurrentStack) + 1;
 
-        if (((piJIT_Method_NIDS) EventSpecificData)->stack_id 
-               > threadStack->TopStack)
-            ((piJIT_Method_NIDS) EventSpecificData)->stack_id = 
-                (unsigned int)-1;
-    }
+    if (((piJIT_Method_NIDS)EventSpecificData)->stack_id >
+        threadStack->TopStack)
+      ((piJIT_Method_NIDS)EventSpecificData)->stack_id = (unsigned int)-1;
+  }
 
-    if (event_type == iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED)
-    {
-        /* check for use of reserved method IDs */
-        if ( ((piJIT_Method_Load) EventSpecificData)->method_id <= 999 )
-            return 0;
-    }
+  if (event_type == iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED) {
+    /* check for use of reserved method IDs */
+    if (((piJIT_Method_Load)EventSpecificData)->method_id <= 999)
+      return 0;
+  }
 
-    ReturnValue = (int)FUNC_NotifyEvent(event_type, EventSpecificData);   
+  ReturnValue = (int)FUNC_NotifyEvent(event_type, EventSpecificData);
 
-    return ReturnValue;
+  return ReturnValue;
 }
 
 /* The new mode call back routine */
-ITT_EXTERN_C void JITAPI 
-iJIT_RegisterCallbackEx(void *userdata, iJIT_ModeChangedEx 
-                        NewModeCallBackFuncEx) 
-{
-    /* is it already missing... or the load of functions from the DLL failed */
-    if (iJIT_DLL_is_missing || !loadiJIT_Funcs())
-    {
-        /* then do not bother with notifications */
-        NewModeCallBackFuncEx(userdata, iJIT_NO_NOTIFICATIONS);  
-        /* Error: could not load JIT functions. */
-        return;
-    }
-    /* nothing to do with the callback */
+ITT_EXTERN_C void JITAPI iJIT_RegisterCallbackEx(
+    void *userdata, iJIT_ModeChangedEx NewModeCallBackFuncEx) {
+  /* the DLL is already known to be missing, or loading its functions failed */
+  if (iJIT_DLL_is_missing || !loadiJIT_Funcs()) {
+    /* then do not bother with notifications */
+    NewModeCallBackFuncEx(userdata, iJIT_NO_NOTIFICATIONS);
+    /* Error: could not load JIT functions. */
+    return;
+  }
+  /* nothing to do with the callback */
 }
 
 /*
- * This function allows the user to query in which mode, if at all, 
+ * This function allows the user to query in which mode, if at all,
  *VTune is running
  */
-ITT_EXTERN_C iJIT_IsProfilingActiveFlags JITAPI iJIT_IsProfilingActive(void)
-{
-    if (!iJIT_DLL_is_missing)
-    {
-        loadiJIT_Funcs();
-    }
+ITT_EXTERN_C iJIT_IsProfilingActiveFlags JITAPI iJIT_IsProfilingActive(void) {
+  if (!iJIT_DLL_is_missing) {
+    loadiJIT_Funcs();
+  }
 
-    return executionMode;
+  return executionMode;
 }
 
-/* this function loads the collector dll (BistroJavaCollector) 
+/* this function loads the collector dll (BistroJavaCollector)
  * and the relevant functions.
  * on success: all functions load,     iJIT_DLL_is_missing = 0, return value = 1
  * on failure: all functions are NULL, iJIT_DLL_is_missing = 1, return value = 0
- */ 
-static int loadiJIT_Funcs(void)
-{
-    static int bDllWasLoaded = 0;
-    char *dllName = (char*)rcsid; /* !! Just to avoid unused code elimination */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-    DWORD dNameLength = 0;
+ */
+static int loadiJIT_Funcs(void) {
+  static int bDllWasLoaded = 0;
+  char *dllName = (char *)rcsid; /* !! Just to avoid unused code elimination */
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+  DWORD dNameLength = 0;
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
-    if(bDllWasLoaded)
-    {
-        /* dll was already loaded, no need to do it for the second time */
-        return 1;
-    }
+  if (bDllWasLoaded) {
+    /* the DLL was already loaded; no need to do it a second time */
+    return 1;
+  }
 
-    /* Assumes that the DLL will not be found */
-    iJIT_DLL_is_missing = 1;
-    FUNC_NotifyEvent = NULL;
+  /* Assumes that the DLL will not be found */
+  iJIT_DLL_is_missing = 1;
+  FUNC_NotifyEvent = NULL;
 
-    if (m_libHandle) 
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        FreeLibrary(m_libHandle);
+  if (m_libHandle) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    FreeLibrary(m_libHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        dlclose(m_libHandle);
+    dlclose(m_libHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        m_libHandle = NULL;
+    m_libHandle = NULL;
+  }
+
+  /* Try to get the dll name from the environment */
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+  dNameLength = GetEnvironmentVariableA(NEW_DLL_ENVIRONMENT_VAR, NULL, 0);
+  if (dNameLength) {
+    DWORD envret = 0;
+    dllName = (char *)malloc(sizeof(char) * (dNameLength + 1));
+    envret =
+        GetEnvironmentVariableA(NEW_DLL_ENVIRONMENT_VAR, dllName, dNameLength);
+    if (envret) {
+      /* Try to load the dll from the PATH... */
+      m_libHandle =
+          LoadLibraryExA(dllName, NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
     }
-
-    /* Try to get the dll name from the environment */
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-    dNameLength = GetEnvironmentVariableA(NEW_DLL_ENVIRONMENT_VAR, NULL, 0);
-    if (dNameLength)
-    {
-        DWORD envret = 0;
-        dllName = (char*)malloc(sizeof(char) * (dNameLength + 1));
-        envret = GetEnvironmentVariableA(NEW_DLL_ENVIRONMENT_VAR, 
-                                         dllName, dNameLength);
-        if (envret)
-        {
-            /* Try to load the dll from the PATH... */
-            m_libHandle = LoadLibraryExA(dllName, 
-                                         NULL, LOAD_WITH_ALTERED_SEARCH_PATH);
-        }
-        free(dllName);
-    } else {
-        /* Try to use old VS_PROFILER variable */
-        dNameLength = GetEnvironmentVariableA(DLL_ENVIRONMENT_VAR, NULL, 0);
-        if (dNameLength)
-        {
-            DWORD envret = 0;
-            dllName = (char*)malloc(sizeof(char) * (dNameLength + 1));
-            envret = GetEnvironmentVariableA(DLL_ENVIRONMENT_VAR, 
-                                             dllName, dNameLength);
-            if (envret)
-            {
-                /* Try to load the dll from the PATH... */
-                m_libHandle = LoadLibraryA(dllName);
-            }
-            free(dllName);
-        }
+    free(dllName);
+  } else {
+    /* Try to use old VS_PROFILER variable */
+    dNameLength = GetEnvironmentVariableA(DLL_ENVIRONMENT_VAR, NULL, 0);
+    if (dNameLength) {
+      DWORD envret = 0;
+      dllName = (char *)malloc(sizeof(char) * (dNameLength + 1));
+      envret =
+          GetEnvironmentVariableA(DLL_ENVIRONMENT_VAR, dllName, dNameLength);
+      if (envret) {
+        /* Try to load the dll from the PATH... */
+        m_libHandle = LoadLibraryA(dllName);
+      }
+      free(dllName);
     }
-#else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    dllName = getenv(NEW_DLL_ENVIRONMENT_VAR);
-    if (!dllName)
-        dllName = getenv(DLL_ENVIRONMENT_VAR);
+  }
+#else /* ITT_PLATFORM==ITT_PLATFORM_WIN */
+  dllName = getenv(NEW_DLL_ENVIRONMENT_VAR);
+  if (!dllName)
+    dllName = getenv(DLL_ENVIRONMENT_VAR);
 #ifdef ANDROID
-    if (!dllName)
-        dllName = ANDROID_JIT_AGENT_PATH;
+  if (!dllName)
+    dllName = ANDROID_JIT_AGENT_PATH;
 #endif
-    if (dllName)
-    {
-        /* Try to load the dll from the PATH... */
-        m_libHandle = dlopen(dllName, RTLD_LAZY);
-    }
+  if (dllName) {
+    /* Try to load the dll from the PATH... */
+    m_libHandle = dlopen(dllName, RTLD_LAZY);
+  }
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
 
-    if (!m_libHandle)
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        m_libHandle = LoadLibraryA(DEFAULT_DLLNAME);
+  if (!m_libHandle) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    m_libHandle = LoadLibraryA(DEFAULT_DLLNAME);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        m_libHandle = dlopen(DEFAULT_DLLNAME, RTLD_LAZY);
+    m_libHandle = dlopen(DEFAULT_DLLNAME, RTLD_LAZY);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    }
-
-    /* if the dll wasn't loaded - exit. */
-    if (!m_libHandle)
-    {
-        iJIT_DLL_is_missing = 1; /* don't try to initialize 
-                                  * JIT agent the second time 
-                                  */
-        return 0;
-    }
-
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-    FUNC_NotifyEvent = (TPNotify)GetProcAddress(m_libHandle, "NotifyEvent");
+  }
+
+  /* if the dll wasn't loaded - exit. */
+  if (!m_libHandle) {
+    iJIT_DLL_is_missing = 1; /* don't try to initialize
+                              * the JIT agent a second time
+                              */
+    return 0;
+  }
+
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+  FUNC_NotifyEvent = (TPNotify)GetProcAddress(m_libHandle, "NotifyEvent");
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    FUNC_NotifyEvent = (TPNotify)(intptr_t)dlsym(m_libHandle, "NotifyEvent");
+  FUNC_NotifyEvent = (TPNotify)(intptr_t)dlsym(m_libHandle, "NotifyEvent");
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    if (!FUNC_NotifyEvent) 
-    {
-        FUNC_Initialize = NULL;
-        return 0;
-    }
+  if (!FUNC_NotifyEvent) {
+    FUNC_Initialize = NULL;
+    return 0;
+  }
 
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-    FUNC_Initialize = (TPInitialize)GetProcAddress(m_libHandle, "Initialize");
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+  FUNC_Initialize = (TPInitialize)GetProcAddress(m_libHandle, "Initialize");
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    FUNC_Initialize = (TPInitialize)(intptr_t)dlsym(m_libHandle, "Initialize");
+  FUNC_Initialize = (TPInitialize)(intptr_t)dlsym(m_libHandle, "Initialize");
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    if (!FUNC_Initialize) 
-    {
-        FUNC_NotifyEvent = NULL;
-        return 0;
-    }
-
-    executionMode = (iJIT_IsProfilingActiveFlags)FUNC_Initialize();
-
-    bDllWasLoaded = 1;
-    iJIT_DLL_is_missing = 0; /* DLL is ok. */
-
-    /*
-     * Call Graph mode: init the thread local storage
-     * (need to store the virtual stack there).
-     */
-    if ( executionMode == iJIT_CALLGRAPH_ON )
-    {
-        /* Allocate a thread local storage slot for the thread "stack" */
-        if (!threadLocalStorageHandle)
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-            threadLocalStorageHandle = TlsAlloc();
+  if (!FUNC_Initialize) {
+    FUNC_NotifyEvent = NULL;
+    return 0;
+  }
+
+  executionMode = (iJIT_IsProfilingActiveFlags)FUNC_Initialize();
+
+  bDllWasLoaded = 1;
+  iJIT_DLL_is_missing = 0; /* DLL is ok. */
+
+  /*
+   * Call Graph mode: init the thread local storage
+   * (need to store the virtual stack there).
+   */
+  if (executionMode == iJIT_CALLGRAPH_ON) {
+    /* Allocate a thread local storage slot for the thread "stack" */
+    if (!threadLocalStorageHandle)
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+      threadLocalStorageHandle = TlsAlloc();
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        pthread_key_create(&threadLocalStorageHandle, NULL);
+      pthread_key_create(&threadLocalStorageHandle, NULL);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-    }
+  }
 
-    return 1;
+  return 1;
 }
 
 /*
- * This function should be called by the user whenever a thread ends, 
+ * This function should be called by the user whenever a thread ends,
  * to free the thread "virtual stack" storage
  */
-ITT_EXTERN_C void JITAPI FinalizeThread(void)
-{
-    if (threadLocalStorageHandle)
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        pThreadStack threadStack = 
-            (pThreadStack)TlsGetValue (threadLocalStorageHandle);
+ITT_EXTERN_C void JITAPI FinalizeThread(void) {
+  if (threadLocalStorageHandle) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    pThreadStack threadStack =
+        (pThreadStack)TlsGetValue(threadLocalStorageHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        pThreadStack threadStack = 
-            (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
+    pThreadStack threadStack =
+        (pThreadStack)pthread_getspecific(threadLocalStorageHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        if (threadStack)
-        {
-            free (threadStack);
-            threadStack = NULL;
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-            TlsSetValue (threadLocalStorageHandle, threadStack);
+    if (threadStack) {
+      free(threadStack);
+      threadStack = NULL;
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+      TlsSetValue(threadLocalStorageHandle, threadStack);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-            pthread_setspecific(threadLocalStorageHandle, threadStack);
+      pthread_setspecific(threadLocalStorageHandle, threadStack);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        }
     }
+  }
 }
 
 /*
- * This function should be called by the user when the process ends, 
+ * This function should be called by the user when the process ends,
  * to free the local storage index
-*/
-ITT_EXTERN_C void JITAPI FinalizeProcess(void)
-{
-    if (m_libHandle) 
-    {
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        FreeLibrary(m_libHandle);
+ */
+ITT_EXTERN_C void JITAPI FinalizeProcess(void) {
+  if (m_libHandle) {
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    FreeLibrary(m_libHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        dlclose(m_libHandle);
+    dlclose(m_libHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
-        m_libHandle = NULL;
-    }
+    m_libHandle = NULL;
+  }
 
-    if (threadLocalStorageHandle)
-#if ITT_PLATFORM==ITT_PLATFORM_WIN
-        TlsFree (threadLocalStorageHandle);
+  if (threadLocalStorageHandle)
+#if ITT_PLATFORM == ITT_PLATFORM_WIN
+    TlsFree(threadLocalStorageHandle);
 #else  /* ITT_PLATFORM==ITT_PLATFORM_WIN */
     pthread_key_delete(threadLocalStorageHandle);
 #endif /* ITT_PLATFORM==ITT_PLATFORM_WIN */
@@ -466,15 +433,14 @@ ITT_EXTERN_C void JITAPI FinalizeProcess(void)
 
 /*
  * This function should be called by the user for any method once.
- * The function will return a unique method ID, the user should maintain 
+ * The function will return a unique method ID, the user should maintain
  * the ID for each method
  */
-ITT_EXTERN_C unsigned int JITAPI iJIT_GetNewMethodID(void)
-{
-    static unsigned int methodID = 0x100000;
+ITT_EXTERN_C unsigned int JITAPI iJIT_GetNewMethodID(void) {
+  static unsigned int methodID = 0x100000;
 
-    if (methodID == 0)
-        return 0;  /* ERROR : this is not a valid value */
+  if (methodID == 0)
+    return 0; /* ERROR : this is not a valid value */
 
-    return methodID++;
+  return methodID++;
 }
diff --git a/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.h b/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.h
index ba627b430ff1272..ade66b3d98e097d 100644
--- a/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.h
+++ b/llvm/lib/ExecutionEngine/IntelJITEvents/jitprofiling.h
@@ -23,103 +23,96 @@
  */
 
 /* event notification */
-typedef enum iJIT_jvm_event
-{
-
-    /* shutdown  */
-
-    /*
-     * Program exiting EventSpecificData NA
-     */
-    iJVM_EVENT_TYPE_SHUTDOWN = 2,
-
-    /* JIT profiling  */
-
-    /*
-     * issued after method code jitted into memory but before code is executed
-     * EventSpecificData is an iJIT_Method_Load
-     */
-    iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED=13,
-
-    /* issued before unload. Method code will no longer be executed, but code
-     * and info are still in memory. The VTune profiler may capture method
-     * code only at this point EventSpecificData is iJIT_Method_Id
-     */
-    iJVM_EVENT_TYPE_METHOD_UNLOAD_START,
-
-    /* Method Profiling */
-
-    /* method name, Id and stack is supplied
-     * issued when a method is about to be entered EventSpecificData is
-     * iJIT_Method_NIDS
-     */
-    iJVM_EVENT_TYPE_ENTER_NIDS = 19,
-
-    /* method name, Id and stack is supplied
-     * issued when a method is about to be left EventSpecificData is
-     * iJIT_Method_NIDS
-     */
-    iJVM_EVENT_TYPE_LEAVE_NIDS
+typedef enum iJIT_jvm_event {
+
+  /* shutdown  */
+
+  /*
+   * Program exiting EventSpecificData NA
+   */
+  iJVM_EVENT_TYPE_SHUTDOWN = 2,
+
+  /* JIT profiling  */
+
+  /*
+   * issued after method code jitted into memory but before code is executed
+   * EventSpecificData is an iJIT_Method_Load
+   */
+  iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED = 13,
+
+  /* issued before unload. Method code will no longer be executed, but code
+   * and info are still in memory. The VTune profiler may capture method
+   * code only at this point EventSpecificData is iJIT_Method_Id
+   */
+  iJVM_EVENT_TYPE_METHOD_UNLOAD_START,
+
+  /* Method Profiling */
+
+  /* method name, Id and stack is supplied
+   * issued when a method is about to be entered EventSpecificData is
+   * iJIT_Method_NIDS
+   */
+  iJVM_EVENT_TYPE_ENTER_NIDS = 19,
+
+  /* method name, Id and stack is supplied
+   * issued when a method is about to be left EventSpecificData is
+   * iJIT_Method_NIDS
+   */
+  iJVM_EVENT_TYPE_LEAVE_NIDS
 } iJIT_JVM_EVENT;
 
-typedef enum _iJIT_ModeFlags
-{
-    /* No need to Notify VTune, since VTune is not running */
-    iJIT_NO_NOTIFICATIONS          = 0x0000,
-
-    /* when turned on the jit must call
-     * iJIT_NotifyEvent
-     * (
-     *     iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED,
-     * )
-     * for all the method already jitted
-     */
-    iJIT_BE_NOTIFY_ON_LOAD         = 0x0001,
-
-    /* when turned on the jit must call
-     * iJIT_NotifyEvent
-     * (
-     *     iJVM_EVENT_TYPE_METHOD_UNLOAD_FINISHED,
-     *  ) for all the method that are unloaded
-     */
-    iJIT_BE_NOTIFY_ON_UNLOAD       = 0x0002,
-
-    /* when turned on the jit must instrument all
-     * the currently jited code with calls on
-     * method entries
-     */
-    iJIT_BE_NOTIFY_ON_METHOD_ENTRY = 0x0004,
-
-    /* when turned on the jit must instrument all
-     * the currently jited code with calls
-     * on method exit
-     */
-    iJIT_BE_NOTIFY_ON_METHOD_EXIT  = 0x0008
+typedef enum _iJIT_ModeFlags {
+  /* No need to Notify VTune, since VTune is not running */
+  iJIT_NO_NOTIFICATIONS = 0x0000,
 
-} iJIT_ModeFlags;
+  /* when turned on the jit must call
+   * iJIT_NotifyEvent
+   * (
+   *     iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED,
+   * )
+   * for all the method already jitted
+   */
+  iJIT_BE_NOTIFY_ON_LOAD = 0x0001,
 
+  /* when turned on the jit must call
+   * iJIT_NotifyEvent
+   * (
+   *     iJVM_EVENT_TYPE_METHOD_UNLOAD_FINISHED,
+   *  ) for all the method that are unloaded
+   */
+  iJIT_BE_NOTIFY_ON_UNLOAD = 0x0002,
 
- /* Flags used by iJIT_IsProfilingActive() */
-typedef enum _iJIT_IsProfilingActiveFlags
-{
-    /* No profiler is running. Currently not used */
-    iJIT_NOTHING_RUNNING           = 0x0000,
+  /* when turned on the jit must instrument all
+   * the currently jited code with calls on
+   * method entries
+   */
+  iJIT_BE_NOTIFY_ON_METHOD_ENTRY = 0x0004,
 
-    /* Sampling is running. This is the default value
-     * returned by iJIT_IsProfilingActive()
-     */
-    iJIT_SAMPLING_ON               = 0x0001,
+  /* when turned on the jit must instrument all
+   * the currently jited code with calls
+   * on method exit
+   */
+  iJIT_BE_NOTIFY_ON_METHOD_EXIT = 0x0008
+
+} iJIT_ModeFlags;
 
-      /* Call Graph is running */
-    iJIT_CALLGRAPH_ON              = 0x0002
+/* Flags used by iJIT_IsProfilingActive() */
+typedef enum _iJIT_IsProfilingActiveFlags {
+  /* No profiler is running. Currently not used */
+  iJIT_NOTHING_RUNNING = 0x0000,
+
+  /* Sampling is running. This is the default value
+   * returned by iJIT_IsProfilingActive()
+   */
+  iJIT_SAMPLING_ON = 0x0001,
+
+  /* Call Graph is running */
+  iJIT_CALLGRAPH_ON = 0x0002
 
 } iJIT_IsProfilingActiveFlags;
 
 /* Enumerator for the environment of methods*/
-typedef enum _iJDEnvironmentType
-{
-    iJDE_JittingAPI = 2
-} iJDEnvironmentType;
+typedef enum _iJDEnvironmentType { iJDE_JittingAPI = 2 } iJDEnvironmentType;
 
 /**********************************
  * Data structures for the events *
@@ -129,89 +122,84 @@ typedef enum _iJDEnvironmentType
  * iJVM_EVENT_TYPE_METHOD_UNLOAD_START
  */
 
-typedef struct _iJIT_Method_Id
-{
-   /* Id of the method (same as the one passed in
+typedef struct _iJIT_Method_Id {
+  /* Id of the method (same as the one passed in
    * the iJIT_Method_Load struct
    */
-    unsigned int       method_id;
+  unsigned int method_id;
 
 } *piJIT_Method_Id, iJIT_Method_Id;
 
-
 /* structure for the events:
  * iJVM_EVENT_TYPE_ENTER_NIDS,
  * iJVM_EVENT_TYPE_LEAVE_NIDS,
  * iJVM_EVENT_TYPE_EXCEPTION_OCCURRED_NIDS
  */
 
-typedef struct _iJIT_Method_NIDS
-{
-    /* unique method ID */
-    unsigned int       method_id;
+typedef struct _iJIT_Method_NIDS {
+  /* unique method ID */
+  unsigned int method_id;
 
-    /* NOTE: no need to fill this field, it's filled by VTune */
-    unsigned int       stack_id;
+  /* NOTE: no need to fill this field, it's filled by VTune */
+  unsigned int stack_id;
 
-    /* method name (just the method, without the class) */
-    char*              method_name;
+  /* method name (just the method, without the class) */
+  char *method_name;
 } *piJIT_Method_NIDS, iJIT_Method_NIDS;
 
 /* structures for the events:
  * iJVM_EVENT_TYPE_METHOD_LOAD_FINISHED
  */
 
-typedef struct _LineNumberInfo
-{
+typedef struct _LineNumberInfo {
   /* x86 Offset from the beginning of the method*/
   unsigned int Offset;
 
   /* source line number from the beginning of the source file */
-    unsigned int        LineNumber;
+  unsigned int LineNumber;
 
 } *pLineNumberInfo, LineNumberInfo;
 
-typedef struct _iJIT_Method_Load
-{
-    /* unique method ID - can be any unique value, (except 0 - 999) */
-    unsigned int        method_id;
+typedef struct _iJIT_Method_Load {
+  /* unique method ID - can be any unique value, (except 0 - 999) */
+  unsigned int method_id;
 
-    /* method name (can be with or without the class and signature, in any case
-     * the class name will be added to it)
-     */
-    char*               method_name;
+  /* method name (can be with or without the class and signature, in any case
+   * the class name will be added to it)
+   */
+  char *method_name;
 
-    /* virtual address of that method - This determines the method range for the
-     * iJVM_EVENT_TYPE_ENTER/LEAVE_METHOD_ADDR events
-     */
-    void*               method_load_address;
+  /* virtual address of that method - This determines the method range for the
+   * iJVM_EVENT_TYPE_ENTER/LEAVE_METHOD_ADDR events
+   */
+  void *method_load_address;
 
-    /* Size in memory - Must be exact */
-    unsigned int        method_size;
+  /* Size in memory - Must be exact */
+  unsigned int method_size;
 
-    /* Line Table size in number of entries - Zero if none */
-    unsigned int line_number_size;
+  /* Line Table size in number of entries - Zero if none */
+  unsigned int line_number_size;
 
-    /* Pointer to the beginning of the line numbers info array */
-    pLineNumberInfo     line_number_table;
+  /* Pointer to the beginning of the line numbers info array */
+  pLineNumberInfo line_number_table;
 
-    /* unique class ID */
-    unsigned int        class_id;
+  /* unique class ID */
+  unsigned int class_id;
 
-    /* class file name */
-    char*               class_file_name;
+  /* class file name */
+  char *class_file_name;
 
-    /* source file name */
-    char*               source_file_name;
+  /* source file name */
+  char *source_file_name;
 
-    /* bits supplied by the user for saving in the JIT file */
-    void*               user_data;
+  /* bits supplied by the user for saving in the JIT file */
+  void *user_data;
 
-    /* the size of the user data buffer */
-    unsigned int        user_data_size;
+  /* the size of the user data buffer */
+  unsigned int user_data_size;
 
-    /* NOTE: no need to fill this field, it's filled by VTune */
-    iJDEnvironmentType  env;
+  /* NOTE: no need to fill this field, it's filled by VTune */
+  iJDEnvironmentType env;
 
 } *piJIT_Method_Load, iJIT_Method_Load;
 
@@ -221,15 +209,15 @@ extern "C" {
 #endif
 
 #ifndef CDECL
-#  if defined WIN32 || defined _WIN32
-#    define CDECL __cdecl
-#  else /* defined WIN32 || defined _WIN32 */
-#    if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
-#      define CDECL /* not actual on x86_64 platform */
-#    else  /* _M_X64 || _M_AMD64 || __x86_64__ */
-#      define CDECL __attribute__ ((cdecl))
-#    endif /* _M_X64 || _M_AMD64 || __x86_64__ */
-#  endif /* defined WIN32 || defined _WIN32 */
+#if defined WIN32 || defined _WIN32
+#define CDECL __cdecl
+#else /* defined WIN32 || defined _WIN32 */
+#if defined _M_X64 || defined _M_AMD64 || defined __x86_64__
+#define CDECL /* not actual on x86_64 platform */
+#else         /* _M_X64 || _M_AMD64 || __x86_64__ */
+#define CDECL __attribute__((cdecl))
+#endif /* _M_X64 || _M_AMD64 || __x86_64__ */
+#endif /* defined WIN32 || defined _WIN32 */
 #endif /* CDECL */
 
 #define JITAPI CDECL
diff --git a/llvm/lib/ExecutionEngine/Interpreter/CMakeLists.txt b/llvm/lib/ExecutionEngine/Interpreter/CMakeLists.txt
index 14522ba2a1bff9c..de20778a70c45f5 100644
--- a/llvm/lib/ExecutionEngine/Interpreter/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/Interpreter/CMakeLists.txt
@@ -1,18 +1,9 @@
-add_llvm_component_library(LLVMInterpreter
-  Execution.cpp
-  ExternalFunctions.cpp
-  Interpreter.cpp
+add_llvm_component_library(
+    LLVMInterpreter Execution.cpp ExternalFunctions.cpp Interpreter.cpp
 
-  DEPENDS
-  intrinsics_gen
+        DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  CodeGen
-  Core
-  ExecutionEngine
-  Support
-  )
+            LINK_COMPONENTS CodeGen Core ExecutionEngine Support)
 
-if( LLVM_ENABLE_FFI )
-  target_link_libraries( LLVMInterpreter PRIVATE FFI::ffi )
-endif()
+    if (LLVM_ENABLE_FFI) target_link_libraries(LLVMInterpreter PRIVATE FFI::ffi)
+        endif()
diff --git a/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp b/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
index 770fc93490835d1..01080e99d199a08 100644
--- a/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
+++ b/llvm/lib/ExecutionEngine/Interpreter/Execution.cpp
@@ -31,8 +31,9 @@ using namespace llvm;
 
 STATISTIC(NumDynamicInsts, "Number of dynamic instructions executed");
 
-static cl::opt<bool> PrintVolatile("interpreter-print-volatile", cl::Hidden,
-          cl::desc("make the interpreter print every volatile load and store"));
+static cl::opt<bool> PrintVolatile(
+    "interpreter-print-volatile", cl::Hidden,
+    cl::desc("make the interpreter print every volatile load and store"));
 
 //===----------------------------------------------------------------------===//
 //                     Various Helper Functions
@@ -69,7 +70,7 @@ void Interpreter::visitUnaryOperator(UnaryOperator &I) {
   if (Ty->isVectorTy()) {
     R.AggregateVal.resize(Src.AggregateVal.size());
 
-    switch(I.getOpcode()) {
+    switch (I.getOpcode()) {
     default:
       llvm_unreachable("Don't know how to handle this unary operator");
       break;
@@ -90,7 +91,9 @@ void Interpreter::visitUnaryOperator(UnaryOperator &I) {
     default:
       llvm_unreachable("Don't know how to handle this unary operator");
       break;
-    case Instruction::FNeg: executeFNegInst(R, Src, Ty); break;
+    case Instruction::FNeg:
+      executeFNegInst(R, Src, Ty);
+      break;
     }
   }
   SetValue(&I, R, SF);
@@ -100,10 +103,10 @@ void Interpreter::visitUnaryOperator(UnaryOperator &I) {
 //                    Binary Instruction Implementations
 //===----------------------------------------------------------------------===//
 
-#define IMPLEMENT_BINARY_OPERATOR(OP, TY) \
-   case Type::TY##TyID: \
-     Dest.TY##Val = Src1.TY##Val OP Src2.TY##Val; \
-     break
+#define IMPLEMENT_BINARY_OPERATOR(OP, TY)                                      \
+  case Type::TY##TyID:                                                         \
+    Dest.TY##Val = Src1.TY##Val OP Src2.TY##Val;                               \
+    break
 
 static void executeFAddInst(GenericValue &Dest, GenericValue Src1,
                             GenericValue Src2, Type *Ty) {
@@ -164,10 +167,10 @@ static void executeFRemInst(GenericValue &Dest, GenericValue Src1,
   }
 }
 
-#define IMPLEMENT_INTEGER_ICMP(OP, TY) \
-   case Type::IntegerTyID:  \
-      Dest.IntVal = APInt(1,Src1.IntVal.OP(Src2.IntVal)); \
-      break;
+#define IMPLEMENT_INTEGER_ICMP(OP, TY)                                         \
+  case Type::IntegerTyID:                                                      \
+    Dest.IntVal = APInt(1, Src1.IntVal.OP(Src2.IntVal));                       \
+    break;
 
 #define IMPLEMENT_VECTOR_INTEGER_ICMP(OP, TY)                                  \
   case Type::FixedVectorTyID:                                                  \
@@ -183,18 +186,19 @@ static void executeFRemInst(GenericValue &Dest, GenericValue Src1,
 // width as the host has.  We _do not_ want to be comparing 64 bit values when
 // running on a 32-bit target, otherwise the upper 32 bits might mess up
 // comparisons if they contain garbage.
-#define IMPLEMENT_POINTER_ICMP(OP) \
-   case Type::PointerTyID: \
-      Dest.IntVal = APInt(1,(void*)(intptr_t)Src1.PointerVal OP \
-                            (void*)(intptr_t)Src2.PointerVal); \
-      break;
+#define IMPLEMENT_POINTER_ICMP(OP)                                             \
+  case Type::PointerTyID:                                                      \
+    Dest.IntVal =                                                              \
+        APInt(1, (void *)(intptr_t)Src1.PointerVal OP(void *)(intptr_t)        \
+                     Src2.PointerVal);                                         \
+    break;
 
 static GenericValue executeICMP_EQ(GenericValue Src1, GenericValue Src2,
                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(eq,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(eq,Ty);
+    IMPLEMENT_INTEGER_ICMP(eq, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(eq, Ty);
     IMPLEMENT_POINTER_ICMP(==);
   default:
     dbgs() << "Unhandled type for ICMP_EQ predicate: " << *Ty << "\n";
@@ -207,8 +211,8 @@ static GenericValue executeICMP_NE(GenericValue Src1, GenericValue Src2,
                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(ne,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(ne,Ty);
+    IMPLEMENT_INTEGER_ICMP(ne, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(ne, Ty);
     IMPLEMENT_POINTER_ICMP(!=);
   default:
     dbgs() << "Unhandled type for ICMP_NE predicate: " << *Ty << "\n";
@@ -221,8 +225,8 @@ static GenericValue executeICMP_ULT(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(ult,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(ult,Ty);
+    IMPLEMENT_INTEGER_ICMP(ult, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(ult, Ty);
     IMPLEMENT_POINTER_ICMP(<);
   default:
     dbgs() << "Unhandled type for ICMP_ULT predicate: " << *Ty << "\n";
@@ -235,8 +239,8 @@ static GenericValue executeICMP_SLT(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(slt,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(slt,Ty);
+    IMPLEMENT_INTEGER_ICMP(slt, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(slt, Ty);
     IMPLEMENT_POINTER_ICMP(<);
   default:
     dbgs() << "Unhandled type for ICMP_SLT predicate: " << *Ty << "\n";
@@ -249,8 +253,8 @@ static GenericValue executeICMP_UGT(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(ugt,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(ugt,Ty);
+    IMPLEMENT_INTEGER_ICMP(ugt, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(ugt, Ty);
     IMPLEMENT_POINTER_ICMP(>);
   default:
     dbgs() << "Unhandled type for ICMP_UGT predicate: " << *Ty << "\n";
@@ -263,8 +267,8 @@ static GenericValue executeICMP_SGT(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(sgt,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(sgt,Ty);
+    IMPLEMENT_INTEGER_ICMP(sgt, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(sgt, Ty);
     IMPLEMENT_POINTER_ICMP(>);
   default:
     dbgs() << "Unhandled type for ICMP_SGT predicate: " << *Ty << "\n";
@@ -277,8 +281,8 @@ static GenericValue executeICMP_ULE(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(ule,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(ule,Ty);
+    IMPLEMENT_INTEGER_ICMP(ule, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(ule, Ty);
     IMPLEMENT_POINTER_ICMP(<=);
   default:
     dbgs() << "Unhandled type for ICMP_ULE predicate: " << *Ty << "\n";
@@ -291,8 +295,8 @@ static GenericValue executeICMP_SLE(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(sle,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(sle,Ty);
+    IMPLEMENT_INTEGER_ICMP(sle, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(sle, Ty);
     IMPLEMENT_POINTER_ICMP(<=);
   default:
     dbgs() << "Unhandled type for ICMP_SLE predicate: " << *Ty << "\n";
@@ -305,8 +309,8 @@ static GenericValue executeICMP_UGE(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(uge,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(uge,Ty);
+    IMPLEMENT_INTEGER_ICMP(uge, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(uge, Ty);
     IMPLEMENT_POINTER_ICMP(>=);
   default:
     dbgs() << "Unhandled type for ICMP_UGE predicate: " << *Ty << "\n";
@@ -319,8 +323,8 @@ static GenericValue executeICMP_SGE(GenericValue Src1, GenericValue Src2,
                                     Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
-    IMPLEMENT_INTEGER_ICMP(sge,Ty);
-    IMPLEMENT_VECTOR_INTEGER_ICMP(sge,Ty);
+    IMPLEMENT_INTEGER_ICMP(sge, Ty);
+    IMPLEMENT_VECTOR_INTEGER_ICMP(sge, Ty);
     IMPLEMENT_POINTER_ICMP(>=);
   default:
     dbgs() << "Unhandled type for ICMP_SGE predicate: " << *Ty << "\n";
@@ -331,22 +335,42 @@ static GenericValue executeICMP_SGE(GenericValue Src1, GenericValue Src2,
 
 void Interpreter::visitICmpInst(ICmpInst &I) {
   ExecutionContext &SF = ECStack.back();
-  Type *Ty    = I.getOperand(0)->getType();
+  Type *Ty = I.getOperand(0)->getType();
   GenericValue Src1 = getOperandValue(I.getOperand(0), SF);
   GenericValue Src2 = getOperandValue(I.getOperand(1), SF);
-  GenericValue R;   // Result
+  GenericValue R; // Result
 
   switch (I.getPredicate()) {
-  case ICmpInst::ICMP_EQ:  R = executeICMP_EQ(Src1,  Src2, Ty); break;
-  case ICmpInst::ICMP_NE:  R = executeICMP_NE(Src1,  Src2, Ty); break;
-  case ICmpInst::ICMP_ULT: R = executeICMP_ULT(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_SLT: R = executeICMP_SLT(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_UGT: R = executeICMP_UGT(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_SGT: R = executeICMP_SGT(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_ULE: R = executeICMP_ULE(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_SLE: R = executeICMP_SLE(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_UGE: R = executeICMP_UGE(Src1, Src2, Ty); break;
-  case ICmpInst::ICMP_SGE: R = executeICMP_SGE(Src1, Src2, Ty); break;
+  case ICmpInst::ICMP_EQ:
+    R = executeICMP_EQ(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_NE:
+    R = executeICMP_NE(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_ULT:
+    R = executeICMP_ULT(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_SLT:
+    R = executeICMP_SLT(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_UGT:
+    R = executeICMP_UGT(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_SGT:
+    R = executeICMP_SGT(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_ULE:
+    R = executeICMP_ULE(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_SLE:
+    R = executeICMP_SLE(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_UGE:
+    R = executeICMP_UGE(Src1, Src2, Ty);
+    break;
+  case ICmpInst::ICMP_SGE:
+    R = executeICMP_SGE(Src1, Src2, Ty);
+    break;
   default:
     dbgs() << "Don't know how to handle this ICmp predicate!\n-->" << I;
     llvm_unreachable(nullptr);
@@ -355,17 +379,17 @@ void Interpreter::visitICmpInst(ICmpInst &I) {
   SetValue(&I, R, SF);
 }
 
-#define IMPLEMENT_FCMP(OP, TY) \
-   case Type::TY##TyID: \
-     Dest.IntVal = APInt(1,Src1.TY##Val OP Src2.TY##Val); \
-     break
+#define IMPLEMENT_FCMP(OP, TY)                                                 \
+  case Type::TY##TyID:                                                         \
+    Dest.IntVal = APInt(1, Src1.TY##Val OP Src2.TY##Val);                      \
+    break
 
-#define IMPLEMENT_VECTOR_FCMP_T(OP, TY)                             \
-  assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());     \
-  Dest.AggregateVal.resize( Src1.AggregateVal.size() );             \
-  for( uint32_t _i=0;_i<Src1.AggregateVal.size();_i++)              \
-    Dest.AggregateVal[_i].IntVal = APInt(1,                         \
-    Src1.AggregateVal[_i].TY##Val OP Src2.AggregateVal[_i].TY##Val);\
+#define IMPLEMENT_VECTOR_FCMP_T(OP, TY)                                        \
+  assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());                \
+  Dest.AggregateVal.resize(Src1.AggregateVal.size());                          \
+  for (uint32_t _i = 0; _i < Src1.AggregateVal.size(); _i++)                   \
+    Dest.AggregateVal[_i].IntVal = APInt(                                      \
+        1, Src1.AggregateVal[_i].TY##Val OP Src2.AggregateVal[_i].TY##Val);    \
   break;
 
 #define IMPLEMENT_VECTOR_FCMP(OP)                                              \
@@ -378,7 +402,7 @@ void Interpreter::visitICmpInst(ICmpInst &I) {
     }
 
 static GenericValue executeFCMP_OEQ(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
     IMPLEMENT_FCMP(==, Float);
@@ -391,45 +415,42 @@ static GenericValue executeFCMP_OEQ(GenericValue Src1, GenericValue Src2,
   return Dest;
 }
 
-#define IMPLEMENT_SCALAR_NANS(TY, X,Y)                                      \
-  if (TY->isFloatTy()) {                                                    \
-    if (X.FloatVal != X.FloatVal || Y.FloatVal != Y.FloatVal) {             \
-      Dest.IntVal = APInt(1,false);                                         \
-      return Dest;                                                          \
-    }                                                                       \
-  } else {                                                                  \
-    if (X.DoubleVal != X.DoubleVal || Y.DoubleVal != Y.DoubleVal) {         \
-      Dest.IntVal = APInt(1,false);                                         \
-      return Dest;                                                          \
-    }                                                                       \
-  }
-
-#define MASK_VECTOR_NANS_T(X,Y, TZ, FLAG)                                   \
-  assert(X.AggregateVal.size() == Y.AggregateVal.size());                   \
-  Dest.AggregateVal.resize( X.AggregateVal.size() );                        \
-  for( uint32_t _i=0;_i<X.AggregateVal.size();_i++) {                       \
-    if (X.AggregateVal[_i].TZ##Val != X.AggregateVal[_i].TZ##Val ||         \
-        Y.AggregateVal[_i].TZ##Val != Y.AggregateVal[_i].TZ##Val)           \
-      Dest.AggregateVal[_i].IntVal = APInt(1,FLAG);                         \
-    else  {                                                                 \
-      Dest.AggregateVal[_i].IntVal = APInt(1,!FLAG);                        \
-    }                                                                       \
+#define IMPLEMENT_SCALAR_NANS(TY, X, Y)                                        \
+  if (TY->isFloatTy()) {                                                       \
+    if (X.FloatVal != X.FloatVal || Y.FloatVal != Y.FloatVal) {                \
+      Dest.IntVal = APInt(1, false);                                           \
+      return Dest;                                                             \
+    }                                                                          \
+  } else {                                                                     \
+    if (X.DoubleVal != X.DoubleVal || Y.DoubleVal != Y.DoubleVal) {            \
+      Dest.IntVal = APInt(1, false);                                           \
+      return Dest;                                                             \
+    }                                                                          \
+  }
+
+#define MASK_VECTOR_NANS_T(X, Y, TZ, FLAG)                                     \
+  assert(X.AggregateVal.size() == Y.AggregateVal.size());                      \
+  Dest.AggregateVal.resize(X.AggregateVal.size());                             \
+  for (uint32_t _i = 0; _i < X.AggregateVal.size(); _i++) {                    \
+    if (X.AggregateVal[_i].TZ##Val != X.AggregateVal[_i].TZ##Val ||            \
+        Y.AggregateVal[_i].TZ##Val != Y.AggregateVal[_i].TZ##Val)              \
+      Dest.AggregateVal[_i].IntVal = APInt(1, FLAG);                           \
+    else {                                                                     \
+      Dest.AggregateVal[_i].IntVal = APInt(1, !FLAG);                          \
+    }                                                                          \
+  }
+
+#define MASK_VECTOR_NANS(TY, X, Y, FLAG)                                       \
+  if (TY->isVectorTy()) {                                                      \
+    if (cast<VectorType>(TY)->getElementType()->isFloatTy()) {                 \
+      MASK_VECTOR_NANS_T(X, Y, Float, FLAG)                                    \
+    } else {                                                                   \
+      MASK_VECTOR_NANS_T(X, Y, Double, FLAG)                                   \
+    }                                                                          \
   }
 
-#define MASK_VECTOR_NANS(TY, X,Y, FLAG)                                     \
-  if (TY->isVectorTy()) {                                                   \
-    if (cast<VectorType>(TY)->getElementType()->isFloatTy()) {              \
-      MASK_VECTOR_NANS_T(X, Y, Float, FLAG)                                 \
-    } else {                                                                \
-      MASK_VECTOR_NANS_T(X, Y, Double, FLAG)                                \
-    }                                                                       \
-  }                                                                         \
-
-
-
 static GenericValue executeFCMP_ONE(GenericValue Src1, GenericValue Src2,
-                                    Type *Ty)
-{
+                                    Type *Ty) {
   GenericValue Dest;
   // if input is scalar value and Src1 or Src2 is NaN return false
   IMPLEMENT_SCALAR_NANS(Ty, Src1, Src2)
@@ -440,21 +461,21 @@ static GenericValue executeFCMP_ONE(GenericValue Src1, GenericValue Src2,
     IMPLEMENT_FCMP(!=, Float);
     IMPLEMENT_FCMP(!=, Double);
     IMPLEMENT_VECTOR_FCMP(!=);
-    default:
-      dbgs() << "Unhandled type for FCmp NE instruction: " << *Ty << "\n";
-      llvm_unreachable(nullptr);
+  default:
+    dbgs() << "Unhandled type for FCmp NE instruction: " << *Ty << "\n";
+    llvm_unreachable(nullptr);
   }
   // in vector case mask out NaN elements
   if (Ty->isVectorTy())
-    for( size_t _i=0; _i<Src1.AggregateVal.size(); _i++)
+    for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
       if (DestMask.AggregateVal[_i].IntVal == false)
-        Dest.AggregateVal[_i].IntVal = APInt(1,false);
+        Dest.AggregateVal[_i].IntVal = APInt(1, false);
 
   return Dest;
 }
 
 static GenericValue executeFCMP_OLE(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
     IMPLEMENT_FCMP(<=, Float);
@@ -468,7 +489,7 @@ static GenericValue executeFCMP_OLE(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_OGE(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
     IMPLEMENT_FCMP(>=, Float);
@@ -482,7 +503,7 @@ static GenericValue executeFCMP_OGE(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_OLT(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
     IMPLEMENT_FCMP(<, Float);
@@ -496,7 +517,7 @@ static GenericValue executeFCMP_OLT(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_OGT(GenericValue Src1, GenericValue Src2,
-                                     Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   switch (Ty->getTypeID()) {
     IMPLEMENT_FCMP(>, Float);
@@ -509,15 +530,15 @@ static GenericValue executeFCMP_OGT(GenericValue Src1, GenericValue Src2,
   return Dest;
 }
 
-#define IMPLEMENT_UNORDERED(TY, X,Y)                                     \
-  if (TY->isFloatTy()) {                                                 \
-    if (X.FloatVal != X.FloatVal || Y.FloatVal != Y.FloatVal) {          \
-      Dest.IntVal = APInt(1,true);                                       \
-      return Dest;                                                       \
-    }                                                                    \
-  } else if (X.DoubleVal != X.DoubleVal || Y.DoubleVal != Y.DoubleVal) { \
-    Dest.IntVal = APInt(1,true);                                         \
-    return Dest;                                                         \
+#define IMPLEMENT_UNORDERED(TY, X, Y)                                          \
+  if (TY->isFloatTy()) {                                                       \
+    if (X.FloatVal != X.FloatVal || Y.FloatVal != Y.FloatVal) {                \
+      Dest.IntVal = APInt(1, true);                                            \
+      return Dest;                                                             \
+    }                                                                          \
+  } else if (X.DoubleVal != X.DoubleVal || Y.DoubleVal != Y.DoubleVal) {       \
+    Dest.IntVal = APInt(1, true);                                              \
+    return Dest;                                                               \
   }
 
 #define IMPLEMENT_VECTOR_UNORDERED(TY, X, Y, FUNC)                             \
@@ -531,17 +552,16 @@ static GenericValue executeFCMP_OGT(GenericValue Src1, GenericValue Src2,
   }
 
 static GenericValue executeFCMP_UEQ(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
   IMPLEMENT_VECTOR_UNORDERED(Ty, Src1, Src2, executeFCMP_OEQ)
   return executeFCMP_OEQ(Src1, Src2, Ty);
-
 }
 
 static GenericValue executeFCMP_UNE(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
@@ -550,7 +570,7 @@ static GenericValue executeFCMP_UNE(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_ULE(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
@@ -559,7 +579,7 @@ static GenericValue executeFCMP_ULE(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_UGE(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
@@ -568,7 +588,7 @@ static GenericValue executeFCMP_UGE(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_ULT(GenericValue Src1, GenericValue Src2,
-                                   Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
@@ -577,7 +597,7 @@ static GenericValue executeFCMP_ULT(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_UGT(GenericValue Src1, GenericValue Src2,
-                                     Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
   IMPLEMENT_UNORDERED(Ty, Src1, Src2)
   MASK_VECTOR_NANS(Ty, Src1, Src2, true)
@@ -586,63 +606,63 @@ static GenericValue executeFCMP_UGT(GenericValue Src1, GenericValue Src2,
 }
 
 static GenericValue executeFCMP_ORD(GenericValue Src1, GenericValue Src2,
-                                     Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
-  if(Ty->isVectorTy()) {
+  if (Ty->isVectorTy()) {
     assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
-    Dest.AggregateVal.resize( Src1.AggregateVal.size() );
+    Dest.AggregateVal.resize(Src1.AggregateVal.size());
     if (cast<VectorType>(Ty)->getElementType()->isFloatTy()) {
-      for( size_t _i=0;_i<Src1.AggregateVal.size();_i++)
-        Dest.AggregateVal[_i].IntVal = APInt(1,
-        ( (Src1.AggregateVal[_i].FloatVal ==
-        Src1.AggregateVal[_i].FloatVal) &&
-        (Src2.AggregateVal[_i].FloatVal ==
-        Src2.AggregateVal[_i].FloatVal)));
+      for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
+        Dest.AggregateVal[_i].IntVal =
+            APInt(1, ((Src1.AggregateVal[_i].FloatVal ==
+                       Src1.AggregateVal[_i].FloatVal) &&
+                      (Src2.AggregateVal[_i].FloatVal ==
+                       Src2.AggregateVal[_i].FloatVal)));
     } else {
-      for( size_t _i=0;_i<Src1.AggregateVal.size();_i++)
-        Dest.AggregateVal[_i].IntVal = APInt(1,
-        ( (Src1.AggregateVal[_i].DoubleVal ==
-        Src1.AggregateVal[_i].DoubleVal) &&
-        (Src2.AggregateVal[_i].DoubleVal ==
-        Src2.AggregateVal[_i].DoubleVal)));
+      for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
+        Dest.AggregateVal[_i].IntVal =
+            APInt(1, ((Src1.AggregateVal[_i].DoubleVal ==
+                       Src1.AggregateVal[_i].DoubleVal) &&
+                      (Src2.AggregateVal[_i].DoubleVal ==
+                       Src2.AggregateVal[_i].DoubleVal)));
     }
   } else if (Ty->isFloatTy())
-    Dest.IntVal = APInt(1,(Src1.FloatVal == Src1.FloatVal &&
-                           Src2.FloatVal == Src2.FloatVal));
+    Dest.IntVal = APInt(
+        1, (Src1.FloatVal == Src1.FloatVal && Src2.FloatVal == Src2.FloatVal));
   else {
-    Dest.IntVal = APInt(1,(Src1.DoubleVal == Src1.DoubleVal &&
-                           Src2.DoubleVal == Src2.DoubleVal));
+    Dest.IntVal = APInt(1, (Src1.DoubleVal == Src1.DoubleVal &&
+                            Src2.DoubleVal == Src2.DoubleVal));
   }
   return Dest;
 }
 
 static GenericValue executeFCMP_UNO(GenericValue Src1, GenericValue Src2,
-                                     Type *Ty) {
+                                    Type *Ty) {
   GenericValue Dest;
-  if(Ty->isVectorTy()) {
+  if (Ty->isVectorTy()) {
     assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
-    Dest.AggregateVal.resize( Src1.AggregateVal.size() );
+    Dest.AggregateVal.resize(Src1.AggregateVal.size());
     if (cast<VectorType>(Ty)->getElementType()->isFloatTy()) {
-      for( size_t _i=0;_i<Src1.AggregateVal.size();_i++)
-        Dest.AggregateVal[_i].IntVal = APInt(1,
-        ( (Src1.AggregateVal[_i].FloatVal !=
-           Src1.AggregateVal[_i].FloatVal) ||
-          (Src2.AggregateVal[_i].FloatVal !=
-           Src2.AggregateVal[_i].FloatVal)));
-      } else {
-        for( size_t _i=0;_i<Src1.AggregateVal.size();_i++)
-          Dest.AggregateVal[_i].IntVal = APInt(1,
-          ( (Src1.AggregateVal[_i].DoubleVal !=
-             Src1.AggregateVal[_i].DoubleVal) ||
-            (Src2.AggregateVal[_i].DoubleVal !=
-             Src2.AggregateVal[_i].DoubleVal)));
-      }
+      for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
+        Dest.AggregateVal[_i].IntVal =
+            APInt(1, ((Src1.AggregateVal[_i].FloatVal !=
+                       Src1.AggregateVal[_i].FloatVal) ||
+                      (Src2.AggregateVal[_i].FloatVal !=
+                       Src2.AggregateVal[_i].FloatVal)));
+    } else {
+      for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
+        Dest.AggregateVal[_i].IntVal =
+            APInt(1, ((Src1.AggregateVal[_i].DoubleVal !=
+                       Src1.AggregateVal[_i].DoubleVal) ||
+                      (Src2.AggregateVal[_i].DoubleVal !=
+                       Src2.AggregateVal[_i].DoubleVal)));
+    }
   } else if (Ty->isFloatTy())
-    Dest.IntVal = APInt(1,(Src1.FloatVal != Src1.FloatVal ||
-                           Src2.FloatVal != Src2.FloatVal));
+    Dest.IntVal = APInt(
+        1, (Src1.FloatVal != Src1.FloatVal || Src2.FloatVal != Src2.FloatVal));
   else {
-    Dest.IntVal = APInt(1,(Src1.DoubleVal != Src1.DoubleVal ||
-                           Src2.DoubleVal != Src2.DoubleVal));
+    Dest.IntVal = APInt(1, (Src1.DoubleVal != Src1.DoubleVal ||
+                            Src2.DoubleVal != Src2.DoubleVal));
   }
   return Dest;
 }
@@ -650,48 +670,78 @@ static GenericValue executeFCMP_UNO(GenericValue Src1, GenericValue Src2,
 static GenericValue executeFCMP_BOOL(GenericValue Src1, GenericValue Src2,
                                      Type *Ty, const bool val) {
   GenericValue Dest;
-    if(Ty->isVectorTy()) {
-      assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
-      Dest.AggregateVal.resize( Src1.AggregateVal.size() );
-      for( size_t _i=0; _i<Src1.AggregateVal.size(); _i++)
-        Dest.AggregateVal[_i].IntVal = APInt(1,val);
-    } else {
-      Dest.IntVal = APInt(1, val);
-    }
+  if (Ty->isVectorTy()) {
+    assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
+    Dest.AggregateVal.resize(Src1.AggregateVal.size());
+    for (size_t _i = 0; _i < Src1.AggregateVal.size(); _i++)
+      Dest.AggregateVal[_i].IntVal = APInt(1, val);
+  } else {
+    Dest.IntVal = APInt(1, val);
+  }
 
-    return Dest;
+  return Dest;
 }
 
 void Interpreter::visitFCmpInst(FCmpInst &I) {
   ExecutionContext &SF = ECStack.back();
-  Type *Ty    = I.getOperand(0)->getType();
+  Type *Ty = I.getOperand(0)->getType();
   GenericValue Src1 = getOperandValue(I.getOperand(0), SF);
   GenericValue Src2 = getOperandValue(I.getOperand(1), SF);
-  GenericValue R;   // Result
+  GenericValue R; // Result
 
   switch (I.getPredicate()) {
   default:
     dbgs() << "Don't know how to handle this FCmp predicate!\n-->" << I;
     llvm_unreachable(nullptr);
-  break;
-  case FCmpInst::FCMP_FALSE: R = executeFCMP_BOOL(Src1, Src2, Ty, false);
-  break;
-  case FCmpInst::FCMP_TRUE:  R = executeFCMP_BOOL(Src1, Src2, Ty, true);
-  break;
-  case FCmpInst::FCMP_ORD:   R = executeFCMP_ORD(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_UNO:   R = executeFCMP_UNO(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_UEQ:   R = executeFCMP_UEQ(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_OEQ:   R = executeFCMP_OEQ(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_UNE:   R = executeFCMP_UNE(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_ONE:   R = executeFCMP_ONE(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_ULT:   R = executeFCMP_ULT(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_OLT:   R = executeFCMP_OLT(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_UGT:   R = executeFCMP_UGT(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_OGT:   R = executeFCMP_OGT(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_ULE:   R = executeFCMP_ULE(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_OLE:   R = executeFCMP_OLE(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_UGE:   R = executeFCMP_UGE(Src1, Src2, Ty); break;
-  case FCmpInst::FCMP_OGE:   R = executeFCMP_OGE(Src1, Src2, Ty); break;
+    break;
+  case FCmpInst::FCMP_FALSE:
+    R = executeFCMP_BOOL(Src1, Src2, Ty, false);
+    break;
+  case FCmpInst::FCMP_TRUE:
+    R = executeFCMP_BOOL(Src1, Src2, Ty, true);
+    break;
+  case FCmpInst::FCMP_ORD:
+    R = executeFCMP_ORD(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_UNO:
+    R = executeFCMP_UNO(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_UEQ:
+    R = executeFCMP_UEQ(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_OEQ:
+    R = executeFCMP_OEQ(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_UNE:
+    R = executeFCMP_UNE(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_ONE:
+    R = executeFCMP_ONE(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_ULT:
+    R = executeFCMP_ULT(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_OLT:
+    R = executeFCMP_OLT(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_UGT:
+    R = executeFCMP_UGT(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_OGT:
+    R = executeFCMP_OGT(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_ULE:
+    R = executeFCMP_ULE(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_OLE:
+    R = executeFCMP_OLE(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_UGE:
+    R = executeFCMP_UGE(Src1, Src2, Ty);
+    break;
+  case FCmpInst::FCMP_OGE:
+    R = executeFCMP_OGE(Src1, Src2, Ty);
+    break;
   }
 
   SetValue(&I, R, SF);
@@ -701,32 +751,58 @@ static GenericValue executeCmpInst(unsigned predicate, GenericValue Src1,
                                    GenericValue Src2, Type *Ty) {
   GenericValue Result;
   switch (predicate) {
-  case ICmpInst::ICMP_EQ:    return executeICMP_EQ(Src1, Src2, Ty);
-  case ICmpInst::ICMP_NE:    return executeICMP_NE(Src1, Src2, Ty);
-  case ICmpInst::ICMP_UGT:   return executeICMP_UGT(Src1, Src2, Ty);
-  case ICmpInst::ICMP_SGT:   return executeICMP_SGT(Src1, Src2, Ty);
-  case ICmpInst::ICMP_ULT:   return executeICMP_ULT(Src1, Src2, Ty);
-  case ICmpInst::ICMP_SLT:   return executeICMP_SLT(Src1, Src2, Ty);
-  case ICmpInst::ICMP_UGE:   return executeICMP_UGE(Src1, Src2, Ty);
-  case ICmpInst::ICMP_SGE:   return executeICMP_SGE(Src1, Src2, Ty);
-  case ICmpInst::ICMP_ULE:   return executeICMP_ULE(Src1, Src2, Ty);
-  case ICmpInst::ICMP_SLE:   return executeICMP_SLE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_ORD:   return executeFCMP_ORD(Src1, Src2, Ty);
-  case FCmpInst::FCMP_UNO:   return executeFCMP_UNO(Src1, Src2, Ty);
-  case FCmpInst::FCMP_OEQ:   return executeFCMP_OEQ(Src1, Src2, Ty);
-  case FCmpInst::FCMP_UEQ:   return executeFCMP_UEQ(Src1, Src2, Ty);
-  case FCmpInst::FCMP_ONE:   return executeFCMP_ONE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_UNE:   return executeFCMP_UNE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_OLT:   return executeFCMP_OLT(Src1, Src2, Ty);
-  case FCmpInst::FCMP_ULT:   return executeFCMP_ULT(Src1, Src2, Ty);
-  case FCmpInst::FCMP_OGT:   return executeFCMP_OGT(Src1, Src2, Ty);
-  case FCmpInst::FCMP_UGT:   return executeFCMP_UGT(Src1, Src2, Ty);
-  case FCmpInst::FCMP_OLE:   return executeFCMP_OLE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_ULE:   return executeFCMP_ULE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_OGE:   return executeFCMP_OGE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_UGE:   return executeFCMP_UGE(Src1, Src2, Ty);
-  case FCmpInst::FCMP_FALSE: return executeFCMP_BOOL(Src1, Src2, Ty, false);
-  case FCmpInst::FCMP_TRUE:  return executeFCMP_BOOL(Src1, Src2, Ty, true);
+  case ICmpInst::ICMP_EQ:
+    return executeICMP_EQ(Src1, Src2, Ty);
+  case ICmpInst::ICMP_NE:
+    return executeICMP_NE(Src1, Src2, Ty);
+  case ICmpInst::ICMP_UGT:
+    return executeICMP_UGT(Src1, Src2, Ty);
+  case ICmpInst::ICMP_SGT:
+    return executeICMP_SGT(Src1, Src2, Ty);
+  case ICmpInst::ICMP_ULT:
+    return executeICMP_ULT(Src1, Src2, Ty);
+  case ICmpInst::ICMP_SLT:
+    return executeICMP_SLT(Src1, Src2, Ty);
+  case ICmpInst::ICMP_UGE:
+    return executeICMP_UGE(Src1, Src2, Ty);
+  case ICmpInst::ICMP_SGE:
+    return executeICMP_SGE(Src1, Src2, Ty);
+  case ICmpInst::ICMP_ULE:
+    return executeICMP_ULE(Src1, Src2, Ty);
+  case ICmpInst::ICMP_SLE:
+    return executeICMP_SLE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_ORD:
+    return executeFCMP_ORD(Src1, Src2, Ty);
+  case FCmpInst::FCMP_UNO:
+    return executeFCMP_UNO(Src1, Src2, Ty);
+  case FCmpInst::FCMP_OEQ:
+    return executeFCMP_OEQ(Src1, Src2, Ty);
+  case FCmpInst::FCMP_UEQ:
+    return executeFCMP_UEQ(Src1, Src2, Ty);
+  case FCmpInst::FCMP_ONE:
+    return executeFCMP_ONE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_UNE:
+    return executeFCMP_UNE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_OLT:
+    return executeFCMP_OLT(Src1, Src2, Ty);
+  case FCmpInst::FCMP_ULT:
+    return executeFCMP_ULT(Src1, Src2, Ty);
+  case FCmpInst::FCMP_OGT:
+    return executeFCMP_OGT(Src1, Src2, Ty);
+  case FCmpInst::FCMP_UGT:
+    return executeFCMP_UGT(Src1, Src2, Ty);
+  case FCmpInst::FCMP_OLE:
+    return executeFCMP_OLE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_ULE:
+    return executeFCMP_ULE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_OGE:
+    return executeFCMP_OGE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_UGE:
+    return executeFCMP_UGE(Src1, Src2, Ty);
+  case FCmpInst::FCMP_FALSE:
+    return executeFCMP_BOOL(Src1, Src2, Ty, false);
+  case FCmpInst::FCMP_TRUE:
+    return executeFCMP_BOOL(Src1, Src2, Ty, true);
   default:
     dbgs() << "Unhandled Cmp predicate\n";
     llvm_unreachable(nullptr);
@@ -735,10 +811,10 @@ static GenericValue executeCmpInst(unsigned predicate, GenericValue Src1,
 
 void Interpreter::visitBinaryOperator(BinaryOperator &I) {
   ExecutionContext &SF = ECStack.back();
-  Type *Ty    = I.getOperand(0)->getType();
+  Type *Ty = I.getOperand(0)->getType();
   GenericValue Src1 = getOperandValue(I.getOperand(0), SF);
   GenericValue Src2 = getOperandValue(I.getOperand(1), SF);
-  GenericValue R;   // Result
+  GenericValue R; // Result
 
   // First process vector operation
   if (Ty->isVectorTy()) {
@@ -746,69 +822,83 @@ void Interpreter::visitBinaryOperator(BinaryOperator &I) {
     R.AggregateVal.resize(Src1.AggregateVal.size());
 
     // Macros to execute binary operation 'OP' over integer vectors
-#define INTEGER_VECTOR_OPERATION(OP)                               \
-    for (unsigned i = 0; i < R.AggregateVal.size(); ++i)           \
-      R.AggregateVal[i].IntVal =                                   \
-      Src1.AggregateVal[i].IntVal OP Src2.AggregateVal[i].IntVal;
+#define INTEGER_VECTOR_OPERATION(OP)                                           \
+  for (unsigned i = 0; i < R.AggregateVal.size(); ++i)                         \
+    R.AggregateVal[i].IntVal =                                                 \
+        Src1.AggregateVal[i].IntVal OP Src2.AggregateVal[i].IntVal;
 
     // Additional macros to execute binary operations udiv/sdiv/urem/srem since
     // they have different notation.
-#define INTEGER_VECTOR_FUNCTION(OP)                                \
-    for (unsigned i = 0; i < R.AggregateVal.size(); ++i)           \
-      R.AggregateVal[i].IntVal =                                   \
-      Src1.AggregateVal[i].IntVal.OP(Src2.AggregateVal[i].IntVal);
+#define INTEGER_VECTOR_FUNCTION(OP)                                            \
+  for (unsigned i = 0; i < R.AggregateVal.size(); ++i)                         \
+    R.AggregateVal[i].IntVal =                                                 \
+        Src1.AggregateVal[i].IntVal.OP(Src2.AggregateVal[i].IntVal);
 
     // Macros to execute binary operation 'OP' over floating point type TY
     // (float or double) vectors
-#define FLOAT_VECTOR_FUNCTION(OP, TY)                               \
-      for (unsigned i = 0; i < R.AggregateVal.size(); ++i)          \
-        R.AggregateVal[i].TY =                                      \
-        Src1.AggregateVal[i].TY OP Src2.AggregateVal[i].TY;
+#define FLOAT_VECTOR_FUNCTION(OP, TY)                                          \
+  for (unsigned i = 0; i < R.AggregateVal.size(); ++i)                         \
+    R.AggregateVal[i].TY = Src1.AggregateVal[i].TY OP Src2.AggregateVal[i].TY;
 
     // Macros to choose appropriate TY: float or double and run operation
     // execution
-#define FLOAT_VECTOR_OP(OP) {                                         \
-  if (cast<VectorType>(Ty)->getElementType()->isFloatTy())            \
-    FLOAT_VECTOR_FUNCTION(OP, FloatVal)                               \
-  else {                                                              \
-    if (cast<VectorType>(Ty)->getElementType()->isDoubleTy())         \
-      FLOAT_VECTOR_FUNCTION(OP, DoubleVal)                            \
-    else {                                                            \
-      dbgs() << "Unhandled type for OP instruction: " << *Ty << "\n"; \
-      llvm_unreachable(0);                                            \
-    }                                                                 \
-  }                                                                   \
-}
-
-    switch(I.getOpcode()){
+#define FLOAT_VECTOR_OP(OP)                                                    \
+  {                                                                            \
+    if (cast<VectorType>(Ty)->getElementType()->isFloatTy())                   \
+      FLOAT_VECTOR_FUNCTION(OP, FloatVal)                                      \
+    else {                                                                     \
+      if (cast<VectorType>(Ty)->getElementType()->isDoubleTy())                \
+        FLOAT_VECTOR_FUNCTION(OP, DoubleVal)                                   \
+      else {                                                                   \
+        dbgs() << "Unhandled type for OP instruction: " << *Ty << "\n";        \
+        llvm_unreachable(0);                                                   \
+      }                                                                        \
+    }                                                                          \
+  }
+
+    switch (I.getOpcode()) {
     default:
       dbgs() << "Don't know how to handle this binary operator!\n-->" << I;
       llvm_unreachable(nullptr);
       break;
-    case Instruction::Add:   INTEGER_VECTOR_OPERATION(+) break;
-    case Instruction::Sub:   INTEGER_VECTOR_OPERATION(-) break;
-    case Instruction::Mul:   INTEGER_VECTOR_OPERATION(*) break;
-    case Instruction::UDiv:  INTEGER_VECTOR_FUNCTION(udiv) break;
-    case Instruction::SDiv:  INTEGER_VECTOR_FUNCTION(sdiv) break;
-    case Instruction::URem:  INTEGER_VECTOR_FUNCTION(urem) break;
-    case Instruction::SRem:  INTEGER_VECTOR_FUNCTION(srem) break;
-    case Instruction::And:   INTEGER_VECTOR_OPERATION(&) break;
-    case Instruction::Or:    INTEGER_VECTOR_OPERATION(|) break;
-    case Instruction::Xor:   INTEGER_VECTOR_OPERATION(^) break;
-    case Instruction::FAdd:  FLOAT_VECTOR_OP(+) break;
-    case Instruction::FSub:  FLOAT_VECTOR_OP(-) break;
-    case Instruction::FMul:  FLOAT_VECTOR_OP(*) break;
-    case Instruction::FDiv:  FLOAT_VECTOR_OP(/) break;
+    case Instruction::Add:
+      INTEGER_VECTOR_OPERATION(+) break;
+    case Instruction::Sub:
+      INTEGER_VECTOR_OPERATION(-) break;
+    case Instruction::Mul:
+      INTEGER_VECTOR_OPERATION(*) break;
+    case Instruction::UDiv:
+      INTEGER_VECTOR_FUNCTION(udiv) break;
+    case Instruction::SDiv:
+      INTEGER_VECTOR_FUNCTION(sdiv) break;
+    case Instruction::URem:
+      INTEGER_VECTOR_FUNCTION(urem) break;
+    case Instruction::SRem:
+      INTEGER_VECTOR_FUNCTION(srem) break;
+    case Instruction::And:
+      INTEGER_VECTOR_OPERATION(&) break;
+    case Instruction::Or:
+      INTEGER_VECTOR_OPERATION(|) break;
+    case Instruction::Xor:
+      INTEGER_VECTOR_OPERATION(^) break;
+    case Instruction::FAdd:
+      FLOAT_VECTOR_OP(+) break;
+    case Instruction::FSub:
+      FLOAT_VECTOR_OP(-) break;
+    case Instruction::FMul:
+      FLOAT_VECTOR_OP(*) break;
+    case Instruction::FDiv:
+      FLOAT_VECTOR_OP(/) break;
     case Instruction::FRem:
       if (cast<VectorType>(Ty)->getElementType()->isFloatTy())
         for (unsigned i = 0; i < R.AggregateVal.size(); ++i)
-          R.AggregateVal[i].FloatVal =
-          fmod(Src1.AggregateVal[i].FloatVal, Src2.AggregateVal[i].FloatVal);
+          R.AggregateVal[i].FloatVal = fmod(Src1.AggregateVal[i].FloatVal,
+                                            Src2.AggregateVal[i].FloatVal);
       else {
         if (cast<VectorType>(Ty)->getElementType()->isDoubleTy())
           for (unsigned i = 0; i < R.AggregateVal.size(); ++i)
-            R.AggregateVal[i].DoubleVal =
-            fmod(Src1.AggregateVal[i].DoubleVal, Src2.AggregateVal[i].DoubleVal);
+            R.AggregateVal[i].DoubleVal = fmod(Src1.AggregateVal[i].DoubleVal,
+                                               Src2.AggregateVal[i].DoubleVal);
         else {
           dbgs() << "Unhandled type for Rem instruction: " << *Ty << "\n";
           llvm_unreachable(nullptr);
@@ -822,21 +912,51 @@ void Interpreter::visitBinaryOperator(BinaryOperator &I) {
       dbgs() << "Don't know how to handle this binary operator!\n-->" << I;
       llvm_unreachable(nullptr);
       break;
-    case Instruction::Add:   R.IntVal = Src1.IntVal + Src2.IntVal; break;
-    case Instruction::Sub:   R.IntVal = Src1.IntVal - Src2.IntVal; break;
-    case Instruction::Mul:   R.IntVal = Src1.IntVal * Src2.IntVal; break;
-    case Instruction::FAdd:  executeFAddInst(R, Src1, Src2, Ty); break;
-    case Instruction::FSub:  executeFSubInst(R, Src1, Src2, Ty); break;
-    case Instruction::FMul:  executeFMulInst(R, Src1, Src2, Ty); break;
-    case Instruction::FDiv:  executeFDivInst(R, Src1, Src2, Ty); break;
-    case Instruction::FRem:  executeFRemInst(R, Src1, Src2, Ty); break;
-    case Instruction::UDiv:  R.IntVal = Src1.IntVal.udiv(Src2.IntVal); break;
-    case Instruction::SDiv:  R.IntVal = Src1.IntVal.sdiv(Src2.IntVal); break;
-    case Instruction::URem:  R.IntVal = Src1.IntVal.urem(Src2.IntVal); break;
-    case Instruction::SRem:  R.IntVal = Src1.IntVal.srem(Src2.IntVal); break;
-    case Instruction::And:   R.IntVal = Src1.IntVal & Src2.IntVal; break;
-    case Instruction::Or:    R.IntVal = Src1.IntVal | Src2.IntVal; break;
-    case Instruction::Xor:   R.IntVal = Src1.IntVal ^ Src2.IntVal; break;
+    case Instruction::Add:
+      R.IntVal = Src1.IntVal + Src2.IntVal;
+      break;
+    case Instruction::Sub:
+      R.IntVal = Src1.IntVal - Src2.IntVal;
+      break;
+    case Instruction::Mul:
+      R.IntVal = Src1.IntVal * Src2.IntVal;
+      break;
+    case Instruction::FAdd:
+      executeFAddInst(R, Src1, Src2, Ty);
+      break;
+    case Instruction::FSub:
+      executeFSubInst(R, Src1, Src2, Ty);
+      break;
+    case Instruction::FMul:
+      executeFMulInst(R, Src1, Src2, Ty);
+      break;
+    case Instruction::FDiv:
+      executeFDivInst(R, Src1, Src2, Ty);
+      break;
+    case Instruction::FRem:
+      executeFRemInst(R, Src1, Src2, Ty);
+      break;
+    case Instruction::UDiv:
+      R.IntVal = Src1.IntVal.udiv(Src2.IntVal);
+      break;
+    case Instruction::SDiv:
+      R.IntVal = Src1.IntVal.sdiv(Src2.IntVal);
+      break;
+    case Instruction::URem:
+      R.IntVal = Src1.IntVal.urem(Src2.IntVal);
+      break;
+    case Instruction::SRem:
+      R.IntVal = Src1.IntVal.srem(Src2.IntVal);
+      break;
+    case Instruction::And:
+      R.IntVal = Src1.IntVal & Src2.IntVal;
+      break;
+    case Instruction::Or:
+      R.IntVal = Src1.IntVal | Src2.IntVal;
+      break;
+    case Instruction::Xor:
+      R.IntVal = Src1.IntVal ^ Src2.IntVal;
+      break;
     }
   }
   SetValue(&I, R, SF);
@@ -844,23 +964,24 @@ void Interpreter::visitBinaryOperator(BinaryOperator &I) {
 
 static GenericValue executeSelectInst(GenericValue Src1, GenericValue Src2,
                                       GenericValue Src3, Type *Ty) {
-    GenericValue Dest;
-    if(Ty->isVectorTy()) {
-      assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
-      assert(Src2.AggregateVal.size() == Src3.AggregateVal.size());
-      Dest.AggregateVal.resize( Src1.AggregateVal.size() );
-      for (size_t i = 0; i < Src1.AggregateVal.size(); ++i)
-        Dest.AggregateVal[i] = (Src1.AggregateVal[i].IntVal == 0) ?
-          Src3.AggregateVal[i] : Src2.AggregateVal[i];
-    } else {
-      Dest = (Src1.IntVal == 0) ? Src3 : Src2;
-    }
-    return Dest;
+  GenericValue Dest;
+  if (Ty->isVectorTy()) {
+    assert(Src1.AggregateVal.size() == Src2.AggregateVal.size());
+    assert(Src2.AggregateVal.size() == Src3.AggregateVal.size());
+    Dest.AggregateVal.resize(Src1.AggregateVal.size());
+    for (size_t i = 0; i < Src1.AggregateVal.size(); ++i)
+      Dest.AggregateVal[i] = (Src1.AggregateVal[i].IntVal == 0)
+                                 ? Src3.AggregateVal[i]
+                                 : Src2.AggregateVal[i];
+  } else {
+    Dest = (Src1.IntVal == 0) ? Src3 : Src2;
+  }
+  return Dest;
 }
 
 void Interpreter::visitSelectInst(SelectInst &I) {
   ExecutionContext &SF = ECStack.back();
-  Type * Ty = I.getOperand(0)->getType();
+  Type *Ty = I.getOperand(0)->getType();
   GenericValue Src1 = getOperandValue(I.getOperand(0), SF);
   GenericValue Src2 = getOperandValue(I.getOperand(1), SF);
   GenericValue Src3 = getOperandValue(I.getOperand(2), SF);
@@ -894,9 +1015,9 @@ void Interpreter::popStackAndReturnValueToCaller(Type *RetTy,
   // Pop the current stack frame.
   ECStack.pop_back();
 
-  if (ECStack.empty()) {  // Finished main.  Put result into exit code...
-    if (RetTy && !RetTy->isVoidTy()) {          // Nonvoid return type?
-      ExitValue = Result;   // Capture the exit value of the program
+  if (ECStack.empty()) { // Finished main.  Put result into exit code...
+    if (RetTy && !RetTy->isVoidTy()) { // Nonvoid return type?
+      ExitValue = Result;              // Capture the exit value of the program
     } else {
       memset(&ExitValue.Untyped, 0, sizeof(ExitValue.Untyped));
     }
@@ -909,8 +1030,8 @@ void Interpreter::popStackAndReturnValueToCaller(Type *RetTy,
       if (!CallingSF.Caller->getType()->isVoidTy())
         SetValue(CallingSF.Caller, Result, CallingSF);
       if (InvokeInst *II = dyn_cast<InvokeInst>(CallingSF.Caller))
-        SwitchToNewBasicBlock (II->getNormalDest (), CallingSF);
-      CallingSF.Caller = nullptr;             // We returned from the call...
+        SwitchToNewBasicBlock(II->getNormalDest(), CallingSF);
+      CallingSF.Caller = nullptr; // We returned from the call...
     }
   }
 }
@@ -922,7 +1043,7 @@ void Interpreter::visitReturnInst(ReturnInst &I) {
 
   // Save away the return value... (if we are not 'ret void')
   if (I.getNumOperands()) {
-    RetTy  = I.getReturnValue()->getType();
+    RetTy = I.getReturnValue()->getType();
     Result = getOperandValue(I.getReturnValue(), SF);
   }
 
@@ -937,7 +1058,7 @@ void Interpreter::visitBranchInst(BranchInst &I) {
   ExecutionContext &SF = ECStack.back();
   BasicBlock *Dest;
 
-  Dest = I.getSuccessor(0);          // Uncond branches have a fixed dest...
+  Dest = I.getSuccessor(0); // Uncond branches have a fixed dest...
   if (!I.isUnconditional()) {
     Value *Cond = I.getCondition();
     if (getOperandValue(Cond, SF).IntVal == 0) // If false cond...
@@ -948,7 +1069,7 @@ void Interpreter::visitBranchInst(BranchInst &I) {
 
 void Interpreter::visitSwitchInst(SwitchInst &I) {
   ExecutionContext &SF = ECStack.back();
-  Value* Cond = I.getCondition();
+  Value *Cond = I.getCondition();
   Type *ElTy = Cond->getType();
   GenericValue CondVal = getOperandValue(Cond, SF);
 
@@ -961,17 +1082,17 @@ void Interpreter::visitSwitchInst(SwitchInst &I) {
       break;
     }
   }
-  if (!Dest) Dest = I.getDefaultDest();   // No cases matched: use default
+  if (!Dest)
+    Dest = I.getDefaultDest(); // No cases matched: use default
   SwitchToNewBasicBlock(Dest, SF);
 }
 
 void Interpreter::visitIndirectBrInst(IndirectBrInst &I) {
   ExecutionContext &SF = ECStack.back();
   void *Dest = GVTOP(getOperandValue(I.getAddress(), SF));
-  SwitchToNewBasicBlock((BasicBlock*)Dest, SF);
+  SwitchToNewBasicBlock((BasicBlock *)Dest, SF);
 }
 
-
 // SwitchToNewBasicBlock - This method is used to jump to a new basic block.
 // This function handles the actual updating of block and instruction iterators
 // as well as execution of all of the PHI nodes in the destination block.
@@ -982,12 +1103,14 @@ void Interpreter::visitIndirectBrInst(IndirectBrInst &I) {
 // their inputs.  If the input PHI node is updated before it is read, incorrect
 // results can happen.  Thus we use a two phase approach.
 //
-void Interpreter::SwitchToNewBasicBlock(BasicBlock *Dest, ExecutionContext &SF){
-  BasicBlock *PrevBB = SF.CurBB;      // Remember where we came from...
-  SF.CurBB   = Dest;                  // Update CurBB to branch destination
-  SF.CurInst = SF.CurBB->begin();     // Update new instruction ptr...
+void Interpreter::SwitchToNewBasicBlock(BasicBlock *Dest,
+                                        ExecutionContext &SF) {
+  BasicBlock *PrevBB = SF.CurBB;  // Remember where we came from...
+  SF.CurBB = Dest;                // Update CurBB to branch destination
+  SF.CurInst = SF.CurBB->begin(); // Update new instruction ptr...
 
-  if (!isa<PHINode>(SF.CurInst)) return;  // Nothing fancy to do
+  if (!isa<PHINode>(SF.CurInst))
+    return; // Nothing fancy to do
 
   // Loop over all of the PHI nodes in the current block, reading their inputs.
   std::vector<GenericValue> ResultValues;
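The two-phase PHI update described in the comment above can be sketched independently of the interpreter. This is a hypothetical mini-model (plain `std::map` instead of the real `GenericValue`/`PHINode` machinery), showing why all inputs must be read before any result is written:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Hypothetical simplified environment: value name -> integer value.
using Env = std::map<std::string, int>;

struct Phi {
  std::string dest;     // value defined by this PHI node
  std::string incoming; // value flowing in from the predecessor block
};

// Two-phase update: writing as we read would corrupt PHIs that feed
// each other (e.g. a swap: a <- b, b <- a).
void executePhis(Env &env, const std::vector<Phi> &phis) {
  // Phase 1: read every incoming value before any update.
  std::vector<int> results;
  results.reserve(phis.size());
  for (const Phi &p : phis)
    results.push_back(env.at(p.incoming));
  // Phase 2: commit all results.
  for (size_t i = 0; i < phis.size(); ++i)
    env[phis[i].dest] = results[i];
}
```

With a single-phase update the swap below would assign `b`'s old value to `a` and then copy it straight back, losing one of the two values.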
@@ -1021,7 +1144,7 @@ void Interpreter::visitAllocaInst(AllocaInst &I) {
 
   // Get the number of elements being allocated by the array...
   unsigned NumElements =
-    getOperandValue(I.getOperand(0), SF).IntVal.getZExtValue();
+      getOperandValue(I.getOperand(0), SF).IntVal.getZExtValue();
 
   unsigned TypeSize = (size_t)getDataLayout().getTypeAllocSize(Ty);
 
@@ -1067,7 +1190,7 @@ GenericValue Interpreter::executeGEPOperation(Value *Ptr, gep_type_iterator I,
 
       int64_t Idx;
       unsigned BitWidth =
-        cast<IntegerType>(I.getOperand()->getType())->getBitWidth();
+          cast<IntegerType>(I.getOperand()->getType())->getBitWidth();
       if (BitWidth == 32)
         Idx = (int64_t)(int32_t)IdxGV.IntVal.getZExtValue();
       else {
@@ -1079,21 +1202,23 @@ GenericValue Interpreter::executeGEPOperation(Value *Ptr, gep_type_iterator I,
   }
 
   GenericValue Result;
-  Result.PointerVal = ((char*)getOperandValue(Ptr, SF).PointerVal) + Total;
+  Result.PointerVal = ((char *)getOperandValue(Ptr, SF).PointerVal) + Total;
   LLVM_DEBUG(dbgs() << "GEP Index " << Total << " bytes.\n");
   return Result;
 }
 
 void Interpreter::visitGetElementPtrInst(GetElementPtrInst &I) {
   ExecutionContext &SF = ECStack.back();
-  SetValue(&I, executeGEPOperation(I.getPointerOperand(),
-                                   gep_type_begin(I), gep_type_end(I), SF), SF);
+  SetValue(&I,
+           executeGEPOperation(I.getPointerOperand(), gep_type_begin(I),
+                               gep_type_end(I), SF),
+           SF);
 }
 
 void Interpreter::visitLoadInst(LoadInst &I) {
   ExecutionContext &SF = ECStack.back();
   GenericValue SRC = getOperandValue(I.getPointerOperand(), SF);
-  GenericValue *Ptr = (GenericValue*)GVTOP(SRC);
+  GenericValue *Ptr = (GenericValue *)GVTOP(SRC);
   GenericValue Result;
   LoadValueFromMemory(Result, Ptr, I.getType());
   SetValue(&I, Result, SF);
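The GEP arithmetic in the hunk above boils down to "base pointer plus an accumulated byte total". A minimal sketch, assuming a flat list of (index, element size) pairs rather than the real `gep_type_iterator` (`gepOffset` is a hypothetical name):

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Each pair is (signed index, element size in bytes). The final pointer
// is the base address plus the sum of index * elementSize terms, which
// mirrors how executeGEPOperation accumulates Total before adding it to
// PointerVal.
char *gepOffset(char *base,
                const std::vector<std::pair<int64_t, uint64_t>> &idx) {
  int64_t total = 0;
  for (auto [index, elemSize] : idx)
    total += index * (int64_t)elemSize;
  return base + total;
}
```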
@@ -1168,7 +1293,7 @@ void Interpreter::visitCallBase(CallBase &I) {
   // To handle indirect calls, we must get the pointer value from the argument
   // and treat it as a function pointer.
   GenericValue SRC = getOperandValue(SF.Caller->getCalledOperand(), SF);
-  callFunction((Function*)GVTOP(SRC), ArgVals);
+  callFunction((Function *)GVTOP(SRC), ArgVals);
 }
 
 // auxiliary function for shift operations
@@ -1179,10 +1304,9 @@ static unsigned getShiftAmount(uint64_t orgShiftAmount,
     return orgShiftAmount;
   // according to the llvm documentation, if orgShiftAmount > valueWidth,
   // the result is undefined, but we do shift by this rule:
-  return (NextPowerOf2(valueWidth-1) - 1) & orgShiftAmount;
+  return (NextPowerOf2(valueWidth - 1) - 1) & orgShiftAmount;
 }
 
-
 void Interpreter::visitShl(BinaryOperator &I) {
   ExecutionContext &SF = ECStack.back();
   GenericValue Src1 = getOperandValue(I.getOperand(0), SF);
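The masking rule in `getShiftAmount` can be checked in isolation. Here `nextPowerOf2` is a local stand-in for LLVM's `NextPowerOf2` helper (smallest power of two strictly greater than its argument); the assumption is that the value width fits in 64 bits:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for llvm::NextPowerOf2: fills in all bits below the highest
// set bit, then adds one.
static uint64_t nextPowerOf2(uint64_t A) {
  A |= (A >> 1);
  A |= (A >> 2);
  A |= (A >> 4);
  A |= (A >> 8);
  A |= (A >> 16);
  A |= (A >> 32);
  return A + 1;
}

// Mirrors the interpreter's rule: in-range shift counts pass through;
// oversized counts are masked down to the bit width's range (e.g. mod
// 32 for i32, mod 64 for i64).
static unsigned getShiftAmount(uint64_t orgShiftAmount,
                               unsigned valueWidth) {
  if (orgShiftAmount < valueWidth)
    return orgShiftAmount;
  return (nextPowerOf2(valueWidth - 1) - 1) & orgShiftAmount;
}
```

For a 32-bit value the mask is `NextPowerOf2(31) - 1 = 31`, so a count of 33 becomes 1.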
@@ -1197,7 +1321,8 @@ void Interpreter::visitShl(BinaryOperator &I) {
       GenericValue Result;
       uint64_t shiftAmount = Src2.AggregateVal[i].IntVal.getZExtValue();
       llvm::APInt valueToShift = Src1.AggregateVal[i].IntVal;
-      Result.IntVal = valueToShift.shl(getShiftAmount(shiftAmount, valueToShift));
+      Result.IntVal =
+          valueToShift.shl(getShiftAmount(shiftAmount, valueToShift));
       Dest.AggregateVal.push_back(Result);
     }
   } else {
@@ -1224,7 +1349,8 @@ void Interpreter::visitLShr(BinaryOperator &I) {
       GenericValue Result;
       uint64_t shiftAmount = Src2.AggregateVal[i].IntVal.getZExtValue();
       llvm::APInt valueToShift = Src1.AggregateVal[i].IntVal;
-      Result.IntVal = valueToShift.lshr(getShiftAmount(shiftAmount, valueToShift));
+      Result.IntVal =
+          valueToShift.lshr(getShiftAmount(shiftAmount, valueToShift));
       Dest.AggregateVal.push_back(Result);
     }
   } else {
@@ -1251,7 +1377,8 @@ void Interpreter::visitAShr(BinaryOperator &I) {
       GenericValue Result;
       uint64_t shiftAmount = Src2.AggregateVal[i].IntVal.getZExtValue();
       llvm::APInt valueToShift = Src1.AggregateVal[i].IntVal;
-      Result.IntVal = valueToShift.ashr(getShiftAmount(shiftAmount, valueToShift));
+      Result.IntVal =
+          valueToShift.ashr(getShiftAmount(shiftAmount, valueToShift));
       Dest.AggregateVal.push_back(Result);
     }
   } else {
@@ -1517,7 +1644,7 @@ GenericValue Interpreter::executePtrToIntInst(Value *SrcVal, Type *DstTy,
   GenericValue Dest, Src = getOperandValue(SrcVal, SF);
   assert(SrcVal->getType()->isPointerTy() && "Invalid PtrToInt instruction");
 
-  Dest.IntVal = APInt(DBitWidth, (intptr_t) Src.PointerVal);
+  Dest.IntVal = APInt(DBitWidth, (intptr_t)Src.PointerVal);
   return Dest;
 }
 
@@ -1759,8 +1886,10 @@ void Interpreter::visitBitCastInst(BitCastInst &I) {
   SetValue(&I, executeBitCastInst(I.getOperand(0), I.getType(), SF), SF);
 }
 
-#define IMPLEMENT_VAARG(TY) \
-   case Type::TY##TyID: Dest.TY##Val = Src.TY##Val; break
+#define IMPLEMENT_VAARG(TY)                                                    \
+  case Type::TY##TyID:                                                         \
+    Dest.TY##Val = Src.TY##Val;                                                \
+    break
 
 void Interpreter::visitVAArgInst(VAArgInst &I) {
   ExecutionContext &SF = ECStack.back();
@@ -1769,16 +1898,16 @@ void Interpreter::visitVAArgInst(VAArgInst &I) {
   // (ec-stack-depth var-arg-index) pair.
   GenericValue VAList = getOperandValue(I.getOperand(0), SF);
   GenericValue Dest;
-  GenericValue Src = ECStack[VAList.UIntPairVal.first]
-                      .VarArgs[VAList.UIntPairVal.second];
+  GenericValue Src =
+      ECStack[VAList.UIntPairVal.first].VarArgs[VAList.UIntPairVal.second];
   Type *Ty = I.getType();
   switch (Ty->getTypeID()) {
   case Type::IntegerTyID:
     Dest.IntVal = Src.IntVal;
     break;
-  IMPLEMENT_VAARG(Pointer);
-  IMPLEMENT_VAARG(Float);
-  IMPLEMENT_VAARG(Double);
+    IMPLEMENT_VAARG(Pointer);
+    IMPLEMENT_VAARG(Float);
+    IMPLEMENT_VAARG(Double);
   default:
     dbgs() << "Unhandled dest type for vaarg instruction: " << *Ty << "\n";
     llvm_unreachable(nullptr);
@@ -1800,11 +1929,11 @@ void Interpreter::visitExtractElementInst(ExtractElementInst &I) {
   Type *Ty = I.getType();
   const unsigned indx = unsigned(Src2.IntVal.getZExtValue());
 
-  if(Src1.AggregateVal.size() > indx) {
+  if (Src1.AggregateVal.size() > indx) {
     switch (Ty->getTypeID()) {
     default:
       dbgs() << "Unhandled destination type for extractelement instruction: "
-      << *Ty << "\n";
+             << *Ty << "\n";
       llvm_unreachable(nullptr);
       break;
     case Type::IntegerTyID:
@@ -1838,25 +1967,25 @@ void Interpreter::visitInsertElementInst(InsertElementInst &I) {
   const unsigned indx = unsigned(Src3.IntVal.getZExtValue());
   Dest.AggregateVal = Src1.AggregateVal;
 
-  if(Src1.AggregateVal.size() <= indx)
-      llvm_unreachable("Invalid index in insertelement instruction");
+  if (Src1.AggregateVal.size() <= indx)
+    llvm_unreachable("Invalid index in insertelement instruction");
   switch (TyContained->getTypeID()) {
-    default:
-      llvm_unreachable("Unhandled dest type for insertelement instruction");
-    case Type::IntegerTyID:
-      Dest.AggregateVal[indx].IntVal = Src2.IntVal;
-      break;
-    case Type::FloatTyID:
-      Dest.AggregateVal[indx].FloatVal = Src2.FloatVal;
-      break;
-    case Type::DoubleTyID:
-      Dest.AggregateVal[indx].DoubleVal = Src2.DoubleVal;
-      break;
+  default:
+    llvm_unreachable("Unhandled dest type for insertelement instruction");
+  case Type::IntegerTyID:
+    Dest.AggregateVal[indx].IntVal = Src2.IntVal;
+    break;
+  case Type::FloatTyID:
+    Dest.AggregateVal[indx].FloatVal = Src2.FloatVal;
+    break;
+  case Type::DoubleTyID:
+    Dest.AggregateVal[indx].DoubleVal = Src2.DoubleVal;
+    break;
   }
   SetValue(&I, Dest, SF);
 }
 
-void Interpreter::visitShuffleVectorInst(ShuffleVectorInst &I){
+void Interpreter::visitShuffleVectorInst(ShuffleVectorInst &I) {
   ExecutionContext &SF = ECStack.back();
 
   VectorType *Ty = cast<VectorType>(I.getType());
@@ -1877,48 +2006,49 @@ void Interpreter::visitShuffleVectorInst(ShuffleVectorInst &I){
   Dest.AggregateVal.resize(src3Size);
 
   switch (TyContained->getTypeID()) {
-    default:
-      llvm_unreachable("Unhandled dest type for insertelement instruction");
-      break;
-    case Type::IntegerTyID:
-      for( unsigned i=0; i<src3Size; i++) {
-        unsigned j = std::max(0, I.getMaskValue(i));
-        if(j < src1Size)
-          Dest.AggregateVal[i].IntVal = Src1.AggregateVal[j].IntVal;
-        else if(j < src1Size + src2Size)
-          Dest.AggregateVal[i].IntVal = Src2.AggregateVal[j-src1Size].IntVal;
-        else
-          // The selector may not be greater than sum of lengths of first and
-          // second operands and llasm should not allow situation like
-          // %tmp = shufflevector <2 x i32> <i32 3, i32 4>, <2 x i32> undef,
-          //                      <2 x i32> < i32 0, i32 5 >,
-          // where i32 5 is invalid, but let it be additional check here:
-          llvm_unreachable("Invalid mask in shufflevector instruction");
-      }
-      break;
-    case Type::FloatTyID:
-      for( unsigned i=0; i<src3Size; i++) {
-        unsigned j = std::max(0, I.getMaskValue(i));
-        if(j < src1Size)
-          Dest.AggregateVal[i].FloatVal = Src1.AggregateVal[j].FloatVal;
-        else if(j < src1Size + src2Size)
-          Dest.AggregateVal[i].FloatVal = Src2.AggregateVal[j-src1Size].FloatVal;
-        else
-          llvm_unreachable("Invalid mask in shufflevector instruction");
-        }
-      break;
-    case Type::DoubleTyID:
-      for( unsigned i=0; i<src3Size; i++) {
-        unsigned j = std::max(0, I.getMaskValue(i));
-        if(j < src1Size)
-          Dest.AggregateVal[i].DoubleVal = Src1.AggregateVal[j].DoubleVal;
-        else if(j < src1Size + src2Size)
-          Dest.AggregateVal[i].DoubleVal =
-            Src2.AggregateVal[j-src1Size].DoubleVal;
-        else
-          llvm_unreachable("Invalid mask in shufflevector instruction");
-      }
-      break;
+  default:
+    llvm_unreachable("Unhandled dest type for insertelement instruction");
+    break;
+  case Type::IntegerTyID:
+    for (unsigned i = 0; i < src3Size; i++) {
+      unsigned j = std::max(0, I.getMaskValue(i));
+      if (j < src1Size)
+        Dest.AggregateVal[i].IntVal = Src1.AggregateVal[j].IntVal;
+      else if (j < src1Size + src2Size)
+        Dest.AggregateVal[i].IntVal = Src2.AggregateVal[j - src1Size].IntVal;
+      else
+        // The selector may not be greater than sum of lengths of first and
+        // second operands, and the LLVM assembler should not allow a situation like
+        // %tmp = shufflevector <2 x i32> <i32 3, i32 4>, <2 x i32> undef,
+        //                      <2 x i32> < i32 0, i32 5 >,
+        // where i32 5 is invalid, but let it be additional check here:
+        llvm_unreachable("Invalid mask in shufflevector instruction");
+    }
+    break;
+  case Type::FloatTyID:
+    for (unsigned i = 0; i < src3Size; i++) {
+      unsigned j = std::max(0, I.getMaskValue(i));
+      if (j < src1Size)
+        Dest.AggregateVal[i].FloatVal = Src1.AggregateVal[j].FloatVal;
+      else if (j < src1Size + src2Size)
+        Dest.AggregateVal[i].FloatVal =
+            Src2.AggregateVal[j - src1Size].FloatVal;
+      else
+        llvm_unreachable("Invalid mask in shufflevector instruction");
+    }
+    break;
+  case Type::DoubleTyID:
+    for (unsigned i = 0; i < src3Size; i++) {
+      unsigned j = std::max(0, I.getMaskValue(i));
+      if (j < src1Size)
+        Dest.AggregateVal[i].DoubleVal = Src1.AggregateVal[j].DoubleVal;
+      else if (j < src1Size + src2Size)
+        Dest.AggregateVal[i].DoubleVal =
+            Src2.AggregateVal[j - src1Size].DoubleVal;
+      else
+        llvm_unreachable("Invalid mask in shufflevector instruction");
+    }
+    break;
   }
   SetValue(&I, Dest, SF);
 }
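The shufflevector selection rule applied per element type in the hunk above can be sketched once, over plain `int` vectors (a simplified model, not the real `GenericValue` aggregates): a mask index below the first operand's length selects from it, the next range selects from the second operand, and anything beyond their combined length is invalid IR.

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Hypothetical scalar model of the shufflevector mask: each mask entry
// j picks src1[j] if it is in range, otherwise src2[j - src1.size()].
std::vector<int> shuffle(const std::vector<int> &src1,
                         const std::vector<int> &src2,
                         const std::vector<unsigned> &mask) {
  std::vector<int> dest;
  dest.reserve(mask.size());
  for (unsigned j : mask) {
    if (j < src1.size())
      dest.push_back(src1[j]);
    else if (j < src1.size() + src2.size())
      dest.push_back(src2[j - src1.size()]);
    else
      throw std::invalid_argument("Invalid mask in shufflevector");
  }
  return dest;
}
```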
@@ -1933,33 +2063,34 @@ void Interpreter::visitExtractValueInst(ExtractValueInst &I) {
   unsigned Num = I.getNumIndices();
   GenericValue *pSrc = &Src;
 
-  for (unsigned i = 0 ; i < Num; ++i) {
+  for (unsigned i = 0; i < Num; ++i) {
     pSrc = &pSrc->AggregateVal[*IdxBegin];
     ++IdxBegin;
   }
 
-  Type *IndexedType = ExtractValueInst::getIndexedType(Agg->getType(), I.getIndices());
+  Type *IndexedType =
+      ExtractValueInst::getIndexedType(Agg->getType(), I.getIndices());
   switch (IndexedType->getTypeID()) {
-    default:
-      llvm_unreachable("Unhandled dest type for extractelement instruction");
+  default:
+    llvm_unreachable("Unhandled dest type for extractelement instruction");
     break;
-    case Type::IntegerTyID:
-      Dest.IntVal = pSrc->IntVal;
+  case Type::IntegerTyID:
+    Dest.IntVal = pSrc->IntVal;
     break;
-    case Type::FloatTyID:
-      Dest.FloatVal = pSrc->FloatVal;
+  case Type::FloatTyID:
+    Dest.FloatVal = pSrc->FloatVal;
     break;
-    case Type::DoubleTyID:
-      Dest.DoubleVal = pSrc->DoubleVal;
+  case Type::DoubleTyID:
+    Dest.DoubleVal = pSrc->DoubleVal;
     break;
-    case Type::ArrayTyID:
-    case Type::StructTyID:
-    case Type::FixedVectorTyID:
-    case Type::ScalableVectorTyID:
-      Dest.AggregateVal = pSrc->AggregateVal;
+  case Type::ArrayTyID:
+  case Type::StructTyID:
+  case Type::FixedVectorTyID:
+  case Type::ScalableVectorTyID:
+    Dest.AggregateVal = pSrc->AggregateVal;
     break;
-    case Type::PointerTyID:
-      Dest.PointerVal = pSrc->PointerVal;
+  case Type::PointerTyID:
+    Dest.PointerVal = pSrc->PointerVal;
     break;
   }
 
@@ -1979,83 +2110,83 @@ void Interpreter::visitInsertValueInst(InsertValueInst &I) {
   unsigned Num = I.getNumIndices();
 
   GenericValue *pDest = &Dest;
-  for (unsigned i = 0 ; i < Num; ++i) {
+  for (unsigned i = 0; i < Num; ++i) {
     pDest = &pDest->AggregateVal[*IdxBegin];
     ++IdxBegin;
   }
   // pDest points to the target value in the Dest now
 
-  Type *IndexedType = ExtractValueInst::getIndexedType(Agg->getType(), I.getIndices());
+  Type *IndexedType =
+      ExtractValueInst::getIndexedType(Agg->getType(), I.getIndices());
 
   switch (IndexedType->getTypeID()) {
-    default:
-      llvm_unreachable("Unhandled dest type for insertelement instruction");
+  default:
+    llvm_unreachable("Unhandled dest type for insertelement instruction");
     break;
-    case Type::IntegerTyID:
-      pDest->IntVal = Src2.IntVal;
+  case Type::IntegerTyID:
+    pDest->IntVal = Src2.IntVal;
     break;
-    case Type::FloatTyID:
-      pDest->FloatVal = Src2.FloatVal;
+  case Type::FloatTyID:
+    pDest->FloatVal = Src2.FloatVal;
     break;
-    case Type::DoubleTyID:
-      pDest->DoubleVal = Src2.DoubleVal;
+  case Type::DoubleTyID:
+    pDest->DoubleVal = Src2.DoubleVal;
     break;
-    case Type::ArrayTyID:
-    case Type::StructTyID:
-    case Type::FixedVectorTyID:
-    case Type::ScalableVectorTyID:
-      pDest->AggregateVal = Src2.AggregateVal;
+  case Type::ArrayTyID:
+  case Type::StructTyID:
+  case Type::FixedVectorTyID:
+  case Type::ScalableVectorTyID:
+    pDest->AggregateVal = Src2.AggregateVal;
     break;
-    case Type::PointerTyID:
-      pDest->PointerVal = Src2.PointerVal;
+  case Type::PointerTyID:
+    pDest->PointerVal = Src2.PointerVal;
     break;
   }
 
   SetValue(&I, Dest, SF);
 }
 
-GenericValue Interpreter::getConstantExprValue (ConstantExpr *CE,
-                                                ExecutionContext &SF) {
+GenericValue Interpreter::getConstantExprValue(ConstantExpr *CE,
+                                               ExecutionContext &SF) {
   switch (CE->getOpcode()) {
   case Instruction::Trunc:
-      return executeTruncInst(CE->getOperand(0), CE->getType(), SF);
+    return executeTruncInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::ZExt:
-      return executeZExtInst(CE->getOperand(0), CE->getType(), SF);
+    return executeZExtInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::SExt:
-      return executeSExtInst(CE->getOperand(0), CE->getType(), SF);
+    return executeSExtInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::FPTrunc:
-      return executeFPTruncInst(CE->getOperand(0), CE->getType(), SF);
+    return executeFPTruncInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::FPExt:
-      return executeFPExtInst(CE->getOperand(0), CE->getType(), SF);
+    return executeFPExtInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::UIToFP:
-      return executeUIToFPInst(CE->getOperand(0), CE->getType(), SF);
+    return executeUIToFPInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::SIToFP:
-      return executeSIToFPInst(CE->getOperand(0), CE->getType(), SF);
+    return executeSIToFPInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::FPToUI:
-      return executeFPToUIInst(CE->getOperand(0), CE->getType(), SF);
+    return executeFPToUIInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::FPToSI:
-      return executeFPToSIInst(CE->getOperand(0), CE->getType(), SF);
+    return executeFPToSIInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::PtrToInt:
-      return executePtrToIntInst(CE->getOperand(0), CE->getType(), SF);
+    return executePtrToIntInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::IntToPtr:
-      return executeIntToPtrInst(CE->getOperand(0), CE->getType(), SF);
+    return executeIntToPtrInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::BitCast:
-      return executeBitCastInst(CE->getOperand(0), CE->getType(), SF);
+    return executeBitCastInst(CE->getOperand(0), CE->getType(), SF);
   case Instruction::GetElementPtr:
     return executeGEPOperation(CE->getOperand(0), gep_type_begin(CE),
                                gep_type_end(CE), SF);
   case Instruction::FCmp:
   case Instruction::ICmp:
-    return executeCmpInst(CE->getPredicate(),
-                          getOperandValue(CE->getOperand(0), SF),
-                          getOperandValue(CE->getOperand(1), SF),
-                          CE->getOperand(0)->getType());
+    return executeCmpInst(
+        CE->getPredicate(), getOperandValue(CE->getOperand(0), SF),
+        getOperandValue(CE->getOperand(1), SF), CE->getOperand(0)->getType());
   case Instruction::Select:
     return executeSelectInst(getOperandValue(CE->getOperand(0), SF),
                              getOperandValue(CE->getOperand(1), SF),
                              getOperandValue(CE->getOperand(2), SF),
                              CE->getOperand(0)->getType());
-  default :
+  default:
     break;
   }
 
@@ -2064,23 +2195,53 @@ GenericValue Interpreter::getConstantExprValue (ConstantExpr *CE,
   GenericValue Op0 = getOperandValue(CE->getOperand(0), SF);
   GenericValue Op1 = getOperandValue(CE->getOperand(1), SF);
   GenericValue Dest;
-  Type * Ty = CE->getOperand(0)->getType();
+  Type *Ty = CE->getOperand(0)->getType();
   switch (CE->getOpcode()) {
-  case Instruction::Add:  Dest.IntVal = Op0.IntVal + Op1.IntVal; break;
-  case Instruction::Sub:  Dest.IntVal = Op0.IntVal - Op1.IntVal; break;
-  case Instruction::Mul:  Dest.IntVal = Op0.IntVal * Op1.IntVal; break;
-  case Instruction::FAdd: executeFAddInst(Dest, Op0, Op1, Ty); break;
-  case Instruction::FSub: executeFSubInst(Dest, Op0, Op1, Ty); break;
-  case Instruction::FMul: executeFMulInst(Dest, Op0, Op1, Ty); break;
-  case Instruction::FDiv: executeFDivInst(Dest, Op0, Op1, Ty); break;
-  case Instruction::FRem: executeFRemInst(Dest, Op0, Op1, Ty); break;
-  case Instruction::SDiv: Dest.IntVal = Op0.IntVal.sdiv(Op1.IntVal); break;
-  case Instruction::UDiv: Dest.IntVal = Op0.IntVal.udiv(Op1.IntVal); break;
-  case Instruction::URem: Dest.IntVal = Op0.IntVal.urem(Op1.IntVal); break;
-  case Instruction::SRem: Dest.IntVal = Op0.IntVal.srem(Op1.IntVal); break;
-  case Instruction::And:  Dest.IntVal = Op0.IntVal & Op1.IntVal; break;
-  case Instruction::Or:   Dest.IntVal = Op0.IntVal | Op1.IntVal; break;
-  case Instruction::Xor:  Dest.IntVal = Op0.IntVal ^ Op1.IntVal; break;
+  case Instruction::Add:
+    Dest.IntVal = Op0.IntVal + Op1.IntVal;
+    break;
+  case Instruction::Sub:
+    Dest.IntVal = Op0.IntVal - Op1.IntVal;
+    break;
+  case Instruction::Mul:
+    Dest.IntVal = Op0.IntVal * Op1.IntVal;
+    break;
+  case Instruction::FAdd:
+    executeFAddInst(Dest, Op0, Op1, Ty);
+    break;
+  case Instruction::FSub:
+    executeFSubInst(Dest, Op0, Op1, Ty);
+    break;
+  case Instruction::FMul:
+    executeFMulInst(Dest, Op0, Op1, Ty);
+    break;
+  case Instruction::FDiv:
+    executeFDivInst(Dest, Op0, Op1, Ty);
+    break;
+  case Instruction::FRem:
+    executeFRemInst(Dest, Op0, Op1, Ty);
+    break;
+  case Instruction::SDiv:
+    Dest.IntVal = Op0.IntVal.sdiv(Op1.IntVal);
+    break;
+  case Instruction::UDiv:
+    Dest.IntVal = Op0.IntVal.udiv(Op1.IntVal);
+    break;
+  case Instruction::URem:
+    Dest.IntVal = Op0.IntVal.urem(Op1.IntVal);
+    break;
+  case Instruction::SRem:
+    Dest.IntVal = Op0.IntVal.srem(Op1.IntVal);
+    break;
+  case Instruction::And:
+    Dest.IntVal = Op0.IntVal & Op1.IntVal;
+    break;
+  case Instruction::Or:
+    Dest.IntVal = Op0.IntVal | Op1.IntVal;
+    break;
+  case Instruction::Xor:
+    Dest.IntVal = Op0.IntVal ^ Op1.IntVal;
+    break;
   case Instruction::Shl:
     Dest.IntVal = Op0.IntVal.shl(Op1.IntVal.getZExtValue());
     break;
@@ -2127,42 +2288,42 @@ void Interpreter::callFunction(Function *F, ArrayRef<GenericValue> ArgVals) {
 
   // Special handling for external functions.
   if (F->isDeclaration()) {
-    GenericValue Result = callExternalFunction (F, ArgVals);
+    GenericValue Result = callExternalFunction(F, ArgVals);
     // Simulate a 'ret' instruction of the appropriate type.
-    popStackAndReturnValueToCaller (F->getReturnType (), Result);
+    popStackAndReturnValueToCaller(F->getReturnType(), Result);
     return;
   }
 
   // Get pointers to first LLVM BB & Instruction in function.
-  StackFrame.CurBB     = &F->front();
-  StackFrame.CurInst   = StackFrame.CurBB->begin();
+  StackFrame.CurBB = &F->front();
+  StackFrame.CurInst = StackFrame.CurBB->begin();
 
   // Run through the function arguments and initialize their values...
-  assert((ArgVals.size() == F->arg_size() ||
-         (ArgVals.size() > F->arg_size() && F->getFunctionType()->isVarArg()))&&
-         "Invalid number of values passed to function invocation!");
+  assert(
+      (ArgVals.size() == F->arg_size() ||
+       (ArgVals.size() > F->arg_size() && F->getFunctionType()->isVarArg())) &&
+      "Invalid number of values passed to function invocation!");
 
   // Handle non-varargs arguments...
   unsigned i = 0;
-  for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end();
-       AI != E; ++AI, ++i)
+  for (Function::arg_iterator AI = F->arg_begin(), E = F->arg_end(); AI != E;
+       ++AI, ++i)
     SetValue(&*AI, ArgVals[i], StackFrame);
 
   // Handle varargs arguments...
-  StackFrame.VarArgs.assign(ArgVals.begin()+i, ArgVals.end());
+  StackFrame.VarArgs.assign(ArgVals.begin() + i, ArgVals.end());
 }
 
-
 void Interpreter::run() {
   while (!ECStack.empty()) {
     // Interpret a single instruction & increment the "PC".
-    ExecutionContext &SF = ECStack.back();  // Current stack frame
-    Instruction &I = *SF.CurInst++;         // Increment before execute
+    ExecutionContext &SF = ECStack.back(); // Current stack frame
+    Instruction &I = *SF.CurInst++;        // Increment before execute
 
     // Track the number of dynamic instructions executed.
     ++NumDynamicInsts;
 
     LLVM_DEBUG(dbgs() << "About to interpret: " << I << "\n");
-    visit(I);   // Dispatch to one of the visit* methods...
+    visit(I); // Dispatch to one of the visit* methods...
   }
 }
diff --git a/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp b/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
index 4f8f883a75f322d..225424cfdb9bb0f 100644
--- a/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
+++ b/llvm/lib/ExecutionEngine/Interpreter/ExternalFunctions.cpp
@@ -81,23 +81,37 @@ static Interpreter *TheInterpreter;
 
 static char getTypeID(Type *Ty) {
   switch (Ty->getTypeID()) {
-  case Type::VoidTyID:    return 'V';
+  case Type::VoidTyID:
+    return 'V';
   case Type::IntegerTyID:
     switch (cast<IntegerType>(Ty)->getBitWidth()) {
-      case 1:  return 'o';
-      case 8:  return 'B';
-      case 16: return 'S';
-      case 32: return 'I';
-      case 64: return 'L';
-      default: return 'N';
+    case 1:
+      return 'o';
+    case 8:
+      return 'B';
+    case 16:
+      return 'S';
+    case 32:
+      return 'I';
+    case 64:
+      return 'L';
+    default:
+      return 'N';
     }
-  case Type::FloatTyID:   return 'F';
-  case Type::DoubleTyID:  return 'D';
-  case Type::PointerTyID: return 'P';
-  case Type::FunctionTyID:return 'M';
-  case Type::StructTyID:  return 'T';
-  case Type::ArrayTyID:   return 'A';
-  default: return 'U';
+  case Type::FloatTyID:
+    return 'F';
+  case Type::DoubleTyID:
+    return 'D';
+  case Type::PointerTyID:
+    return 'P';
+  case Type::FunctionTyID:
+    return 'M';
+  case Type::StructTyID:
+    return 'T';
+  case Type::ArrayTyID:
+    return 'A';
+  default:
+    return 'U';
   }
 }
 
@@ -121,7 +135,7 @@ static ExFunc lookupFunction(const Function *F) {
   ExFunc FnPtr = Fns.FuncNames[ExtName];
   if (!FnPtr)
     FnPtr = Fns.FuncNames[("lle_X_" + F->getName()).str()];
-  if (!FnPtr)  // Try calling a generic function... if it exists...
+  if (!FnPtr) // Try calling a generic function... if it exists...
     FnPtr = (ExFunc)(intptr_t)sys::DynamicLibrary::SearchForAddressOfSymbol(
         ("lle_X_" + F->getName()).str());
   if (FnPtr)
@@ -132,68 +146,77 @@ static ExFunc lookupFunction(const Function *F) {
 #ifdef USE_LIBFFI
 static ffi_type *ffiTypeFor(Type *Ty) {
   switch (Ty->getTypeID()) {
-    case Type::VoidTyID: return &ffi_type_void;
-    case Type::IntegerTyID:
-      switch (cast<IntegerType>(Ty)->getBitWidth()) {
-        case 8:  return &ffi_type_sint8;
-        case 16: return &ffi_type_sint16;
-        case 32: return &ffi_type_sint32;
-        case 64: return &ffi_type_sint64;
-      }
-      llvm_unreachable("Unhandled integer type bitwidth");
-    case Type::FloatTyID:   return &ffi_type_float;
-    case Type::DoubleTyID:  return &ffi_type_double;
-    case Type::PointerTyID: return &ffi_type_pointer;
-    default: break;
+  case Type::VoidTyID:
+    return &ffi_type_void;
+  case Type::IntegerTyID:
+    switch (cast<IntegerType>(Ty)->getBitWidth()) {
+    case 8:
+      return &ffi_type_sint8;
+    case 16:
+      return &ffi_type_sint16;
+    case 32:
+      return &ffi_type_sint32;
+    case 64:
+      return &ffi_type_sint64;
+    }
+    llvm_unreachable("Unhandled integer type bitwidth");
+  case Type::FloatTyID:
+    return &ffi_type_float;
+  case Type::DoubleTyID:
+    return &ffi_type_double;
+  case Type::PointerTyID:
+    return &ffi_type_pointer;
+  default:
+    break;
   }
   // TODO: Support other types such as StructTyID, ArrayTyID, OpaqueTyID, etc.
   report_fatal_error("Type could not be mapped for use with libffi.");
   return NULL;
 }
 
-static void *ffiValueFor(Type *Ty, const GenericValue &AV,
-                         void *ArgDataPtr) {
+static void *ffiValueFor(Type *Ty, const GenericValue &AV, void *ArgDataPtr) {
   switch (Ty->getTypeID()) {
-    case Type::IntegerTyID:
-      switch (cast<IntegerType>(Ty)->getBitWidth()) {
-        case 8: {
-          int8_t *I8Ptr = (int8_t *) ArgDataPtr;
-          *I8Ptr = (int8_t) AV.IntVal.getZExtValue();
-          return ArgDataPtr;
-        }
-        case 16: {
-          int16_t *I16Ptr = (int16_t *) ArgDataPtr;
-          *I16Ptr = (int16_t) AV.IntVal.getZExtValue();
-          return ArgDataPtr;
-        }
-        case 32: {
-          int32_t *I32Ptr = (int32_t *) ArgDataPtr;
-          *I32Ptr = (int32_t) AV.IntVal.getZExtValue();
-          return ArgDataPtr;
-        }
-        case 64: {
-          int64_t *I64Ptr = (int64_t *) ArgDataPtr;
-          *I64Ptr = (int64_t) AV.IntVal.getZExtValue();
-          return ArgDataPtr;
-        }
-      }
-      llvm_unreachable("Unhandled integer type bitwidth");
-    case Type::FloatTyID: {
-      float *FloatPtr = (float *) ArgDataPtr;
-      *FloatPtr = AV.FloatVal;
+  case Type::IntegerTyID:
+    switch (cast<IntegerType>(Ty)->getBitWidth()) {
+    case 8: {
+      int8_t *I8Ptr = (int8_t *)ArgDataPtr;
+      *I8Ptr = (int8_t)AV.IntVal.getZExtValue();
+      return ArgDataPtr;
+    }
+    case 16: {
+      int16_t *I16Ptr = (int16_t *)ArgDataPtr;
+      *I16Ptr = (int16_t)AV.IntVal.getZExtValue();
       return ArgDataPtr;
     }
-    case Type::DoubleTyID: {
-      double *DoublePtr = (double *) ArgDataPtr;
-      *DoublePtr = AV.DoubleVal;
+    case 32: {
+      int32_t *I32Ptr = (int32_t *)ArgDataPtr;
+      *I32Ptr = (int32_t)AV.IntVal.getZExtValue();
       return ArgDataPtr;
     }
-    case Type::PointerTyID: {
-      void **PtrPtr = (void **) ArgDataPtr;
-      *PtrPtr = GVTOP(AV);
+    case 64: {
+      int64_t *I64Ptr = (int64_t *)ArgDataPtr;
+      *I64Ptr = (int64_t)AV.IntVal.getZExtValue();
       return ArgDataPtr;
     }
-    default: break;
+    }
+    llvm_unreachable("Unhandled integer type bitwidth");
+  case Type::FloatTyID: {
+    float *FloatPtr = (float *)ArgDataPtr;
+    *FloatPtr = AV.FloatVal;
+    return ArgDataPtr;
+  }
+  case Type::DoubleTyID: {
+    double *DoublePtr = (double *)ArgDataPtr;
+    *DoublePtr = AV.DoubleVal;
+    return ArgDataPtr;
+  }
+  case Type::PointerTyID: {
+    void **PtrPtr = (void **)ArgDataPtr;
+    *PtrPtr = GVTOP(AV);
+    return ArgDataPtr;
+  }
+  default:
+    break;
   }
   // TODO: Support other types such as StructTyID, ArrayTyID, OpaqueTyID, etc.
   report_fatal_error("Type value could not be mapped for use with libffi.");
@@ -209,13 +232,13 @@ static bool ffiInvoke(RawFunc Fn, Function *F, ArrayRef<GenericValue> ArgVals,
   // TODO: We don't have type information about the remaining arguments, because
   // this information is never passed into ExecutionEngine::runFunction().
   if (ArgVals.size() > NumArgs && F->isVarArg()) {
-    report_fatal_error("Calling external var arg function '" + F->getName()
-                      + "' is not supported by the Interpreter.");
+    report_fatal_error("Calling external var arg function '" + F->getName() +
+                       "' is not supported by the Interpreter.");
   }
 
   unsigned ArgBytes = 0;
 
-  std::vector<ffi_type*> args(NumArgs);
+  std::vector<ffi_type *> args(NumArgs);
   for (Function::const_arg_iterator A = F->arg_begin(), E = F->arg_end();
        A != E; ++A) {
     const unsigned ArgNo = A->getArgNo();
@@ -227,7 +250,7 @@ static bool ffiInvoke(RawFunc Fn, Function *F, ArrayRef<GenericValue> ArgVals,
   SmallVector<uint8_t, 128> ArgData;
   ArgData.resize(ArgBytes);
   uint8_t *ArgDataPtr = ArgData.data();
-  SmallVector<void*, 16> values(NumArgs);
+  SmallVector<void *, 16> values(NumArgs);
   for (Function::const_arg_iterator A = F->arg_begin(), E = F->arg_end();
        A != E; ++A) {
     const unsigned ArgNo = A->getArgNo();
@@ -246,18 +269,33 @@ static bool ffiInvoke(RawFunc Fn, Function *F, ArrayRef<GenericValue> ArgVals,
       ret.resize(TD.getTypeStoreSize(RetTy));
     ffi_call(&cif, Fn, ret.data(), values.data());
     switch (RetTy->getTypeID()) {
-      case Type::IntegerTyID:
-        switch (cast<IntegerType>(RetTy)->getBitWidth()) {
-          case 8:  Result.IntVal = APInt(8 , *(int8_t *) ret.data()); break;
-          case 16: Result.IntVal = APInt(16, *(int16_t*) ret.data()); break;
-          case 32: Result.IntVal = APInt(32, *(int32_t*) ret.data()); break;
-          case 64: Result.IntVal = APInt(64, *(int64_t*) ret.data()); break;
-        }
+    case Type::IntegerTyID:
+      switch (cast<IntegerType>(RetTy)->getBitWidth()) {
+      case 8:
+        Result.IntVal = APInt(8, *(int8_t *)ret.data());
         break;
-      case Type::FloatTyID:   Result.FloatVal   = *(float *) ret.data(); break;
-      case Type::DoubleTyID:  Result.DoubleVal  = *(double*) ret.data(); break;
-      case Type::PointerTyID: Result.PointerVal = *(void **) ret.data(); break;
-      default: break;
+      case 16:
+        Result.IntVal = APInt(16, *(int16_t *)ret.data());
+        break;
+      case 32:
+        Result.IntVal = APInt(32, *(int32_t *)ret.data());
+        break;
+      case 64:
+        Result.IntVal = APInt(64, *(int64_t *)ret.data());
+        break;
+      }
+      break;
+    case Type::FloatTyID:
+      Result.FloatVal = *(float *)ret.data();
+      break;
+    case Type::DoubleTyID:
+      Result.DoubleVal = *(double *)ret.data();
+      break;
+    case Type::PointerTyID:
+      Result.PointerVal = *(void **)ret.data();
+      break;
+    default:
+      break;
     }
     return true;
   }
@@ -287,8 +325,8 @@ GenericValue Interpreter::callExternalFunction(Function *F,
   std::map<const Function *, RawFunc>::iterator RF = Fns.RawFunctions.find(F);
   RawFunc RawFn;
   if (RF == Fns.RawFunctions.end()) {
-    RawFn = (RawFunc)(intptr_t)
-      sys::DynamicLibrary::SearchForAddressOfSymbol(std::string(F->getName()));
+    RawFn = (RawFunc)(intptr_t)sys::DynamicLibrary::SearchForAddressOfSymbol(
+        std::string(F->getName()));
     if (!RawFn)
       RawFn = (RawFunc)(intptr_t)getPointerToGlobalIfAvailable(F);
     if (RawFn != 0)
@@ -305,8 +343,8 @@ GenericValue Interpreter::callExternalFunction(Function *F,
 #endif // USE_LIBFFI
 
   if (F->getName() == "__main")
-    errs() << "Tried to execute an unknown external function: "
-      << *F->getType() << " __main\n";
+    errs() << "Tried to execute an unknown external function: " << *F->getType()
+           << " __main\n";
   else
     report_fatal_error("Tried to execute an unknown external function: " +
                        F->getName());
@@ -324,7 +362,7 @@ GenericValue Interpreter::callExternalFunction(Function *F,
 static GenericValue lle_X_atexit(FunctionType *FT,
                                  ArrayRef<GenericValue> Args) {
   assert(Args.size() == 1);
-  TheInterpreter->addAtExitHandler((Function*)GVTOP(Args[0]));
+  TheInterpreter->addAtExitHandler((Function *)GVTOP(Args[0]));
   GenericValue GV;
   GV.IntVal = 0;
   return GV;
@@ -338,9 +376,9 @@ static GenericValue lle_X_exit(FunctionType *FT, ArrayRef<GenericValue> Args) {
 
 // void abort(void)
 static GenericValue lle_X_abort(FunctionType *FT, ArrayRef<GenericValue> Args) {
-  //FIXME: should we report or raise here?
-  //report_fatal_error("Interpreted program raised SIGABRT");
-  raise (SIGABRT);
+  // FIXME: should we report or raise here?
+  // report_fatal_error("Interpreted program raised SIGABRT");
+  raise(SIGABRT);
   return GenericValue();
 }
 
@@ -364,16 +402,18 @@ static GenericValue lle_X_sprintf(FunctionType *FT,
   GV.IntVal = APInt(32, strlen(FmtStr));
   while (true) {
     switch (*FmtStr) {
-    case 0: return GV;             // Null terminator...
-    default:                       // Normal nonspecial character
+    case 0:
+      return GV; // Null terminator...
+    default:     // Normal nonspecial character
       sprintf(OutputBuffer++, "%c", *FmtStr++);
       break;
-    case '\\': {                   // Handle escape codes
-      sprintf(OutputBuffer, "%c%c", *FmtStr, *(FmtStr+1));
-      FmtStr += 2; OutputBuffer += 2;
+    case '\\': { // Handle escape codes
+      sprintf(OutputBuffer, "%c%c", *FmtStr, *(FmtStr + 1));
+      FmtStr += 2;
+      OutputBuffer += 2;
       break;
     }
-    case '%': {                    // Handle format specifiers
+    case '%': { // Handle format specifiers
       char FmtBuf[100] = "", Buffer[1000] = "";
       char *FB = FmtBuf;
       *FB++ = *FmtStr++;
@@ -383,20 +423,25 @@ static GenericValue lle_X_sprintf(FunctionType *FT,
              Last != 'o' && Last != 'x' && Last != 'X' && Last != 'e' &&
              Last != 'E' && Last != 'g' && Last != 'G' && Last != 'f' &&
              Last != 'p' && Last != 's' && Last != '%') {
-        if (Last == 'l' || Last == 'L') HowLong++;  // Keep track of l's
+        if (Last == 'l' || Last == 'L')
+          HowLong++; // Keep track of l's
         Last = *FB++ = *FmtStr++;
       }
       *FB = 0;
 
       switch (Last) {
       case '%':
-        memcpy(Buffer, "%", 2); break;
+        memcpy(Buffer, "%", 2);
+        break;
       case 'c':
         sprintf(Buffer, FmtBuf, uint32_t(Args[ArgNo++].IntVal.getZExtValue()));
         break;
-      case 'd': case 'i':
-      case 'u': case 'o':
-      case 'x': case 'X':
+      case 'd':
+      case 'i':
+      case 'u':
+      case 'o':
+      case 'x':
+      case 'X':
         if (HowLong >= 1) {
           if (HowLong == 1 &&
               TheInterpreter->getDataLayout().getPointerSizeInBits() == 64 &&
@@ -404,29 +449,37 @@ static GenericValue lle_X_sprintf(FunctionType *FT,
             // Make sure we use %lld with a 64 bit argument because we might be
             // compiling LLI on a 32 bit compiler.
             unsigned Size = strlen(FmtBuf);
-            FmtBuf[Size] = FmtBuf[Size-1];
-            FmtBuf[Size+1] = 0;
-            FmtBuf[Size-1] = 'l';
+            FmtBuf[Size] = FmtBuf[Size - 1];
+            FmtBuf[Size + 1] = 0;
+            FmtBuf[Size - 1] = 'l';
           }
           sprintf(Buffer, FmtBuf, Args[ArgNo++].IntVal.getZExtValue());
         } else
-          sprintf(Buffer, FmtBuf,uint32_t(Args[ArgNo++].IntVal.getZExtValue()));
+          sprintf(Buffer, FmtBuf,
+                  uint32_t(Args[ArgNo++].IntVal.getZExtValue()));
+        break;
+      case 'e':
+      case 'E':
+      case 'g':
+      case 'G':
+      case 'f':
+        sprintf(Buffer, FmtBuf, Args[ArgNo++].DoubleVal);
         break;
-      case 'e': case 'E': case 'g': case 'G': case 'f':
-        sprintf(Buffer, FmtBuf, Args[ArgNo++].DoubleVal); break;
       case 'p':
-        sprintf(Buffer, FmtBuf, (void*)GVTOP(Args[ArgNo++])); break;
+        sprintf(Buffer, FmtBuf, (void *)GVTOP(Args[ArgNo++]));
+        break;
       case 's':
-        sprintf(Buffer, FmtBuf, (char*)GVTOP(Args[ArgNo++])); break;
+        sprintf(Buffer, FmtBuf, (char *)GVTOP(Args[ArgNo++]));
+        break;
       default:
         errs() << "<unknown printf code '" << *FmtStr << "'!>";
-        ArgNo++; break;
+        ArgNo++;
+        break;
       }
       size_t Len = strlen(Buffer);
       memcpy(OutputBuffer, Buffer, Len + 1);
       OutputBuffer += Len;
-      }
-      break;
+    } break;
     }
   }
   return GV;
@@ -441,7 +494,7 @@ static GenericValue lle_X_printf(FunctionType *FT,
                                  ArrayRef<GenericValue> Args) {
   char Buffer[10000];
   std::vector<GenericValue> NewArgs;
-  NewArgs.push_back(PTOGV((void*)&Buffer[0]));
+  NewArgs.push_back(PTOGV((void *)&Buffer[0]));
   llvm::append_range(NewArgs, Args);
   GenericValue GV = lle_X_sprintf(FT, NewArgs);
   outs() << Buffer;
@@ -455,11 +508,11 @@ static GenericValue lle_X_sscanf(FunctionType *FT,
 
   char *Args[10];
   for (unsigned i = 0; i < args.size(); ++i)
-    Args[i] = (char*)GVTOP(args[i]);
+    Args[i] = (char *)GVTOP(args[i]);
 
   GenericValue GV;
   GV.IntVal = APInt(32, sscanf(Args[0], Args[1], Args[2], Args[3], Args[4],
-                    Args[5], Args[6], Args[7], Args[8], Args[9]));
+                               Args[5], Args[6], Args[7], Args[8], Args[9]));
   return GV;
 }
 
@@ -469,11 +522,11 @@ static GenericValue lle_X_scanf(FunctionType *FT, ArrayRef<GenericValue> args) {
 
   char *Args[10];
   for (unsigned i = 0; i < args.size(); ++i)
-    Args[i] = (char*)GVTOP(args[i]);
+    Args[i] = (char *)GVTOP(args[i]);
 
   GenericValue GV;
-  GV.IntVal = APInt(32, scanf( Args[0], Args[1], Args[2], Args[3], Args[4],
-                    Args[5], Args[6], Args[7], Args[8], Args[9]));
+  GV.IntVal = APInt(32, scanf(Args[0], Args[1], Args[2], Args[3], Args[4],
+                              Args[5], Args[6], Args[7], Args[8], Args[9]));
   return GV;
 }
 
@@ -485,10 +538,10 @@ static GenericValue lle_X_fprintf(FunctionType *FT,
   char Buffer[10000];
   std::vector<GenericValue> NewArgs;
   NewArgs.push_back(PTOGV(Buffer));
-  NewArgs.insert(NewArgs.end(), Args.begin()+1, Args.end());
+  NewArgs.insert(NewArgs.end(), Args.begin() + 1, Args.end());
   GenericValue GV = lle_X_sprintf(FT, NewArgs);
 
-  fputs(Buffer, (FILE *) GVTOP(Args[0]));
+  fputs(Buffer, (FILE *)GVTOP(Args[0]));
   return GV;
 }
 
@@ -519,15 +572,15 @@ static GenericValue lle_X_memcpy(FunctionType *FT,
 void Interpreter::initializeExternalFunctions() {
   auto &Fns = getFunctions();
   sys::ScopedLock Writer(Fns.Lock);
-  Fns.FuncNames["lle_X_atexit"]       = lle_X_atexit;
-  Fns.FuncNames["lle_X_exit"]         = lle_X_exit;
-  Fns.FuncNames["lle_X_abort"]        = lle_X_abort;
-
-  Fns.FuncNames["lle_X_printf"]       = lle_X_printf;
-  Fns.FuncNames["lle_X_sprintf"]      = lle_X_sprintf;
-  Fns.FuncNames["lle_X_sscanf"]       = lle_X_sscanf;
-  Fns.FuncNames["lle_X_scanf"]        = lle_X_scanf;
-  Fns.FuncNames["lle_X_fprintf"]      = lle_X_fprintf;
-  Fns.FuncNames["lle_X_memset"]       = lle_X_memset;
-  Fns.FuncNames["lle_X_memcpy"]       = lle_X_memcpy;
+  Fns.FuncNames["lle_X_atexit"] = lle_X_atexit;
+  Fns.FuncNames["lle_X_exit"] = lle_X_exit;
+  Fns.FuncNames["lle_X_abort"] = lle_X_abort;
+
+  Fns.FuncNames["lle_X_printf"] = lle_X_printf;
+  Fns.FuncNames["lle_X_sprintf"] = lle_X_sprintf;
+  Fns.FuncNames["lle_X_sscanf"] = lle_X_sscanf;
+  Fns.FuncNames["lle_X_scanf"] = lle_X_scanf;
+  Fns.FuncNames["lle_X_fprintf"] = lle_X_fprintf;
+  Fns.FuncNames["lle_X_memset"] = lle_X_memset;
+  Fns.FuncNames["lle_X_memcpy"] = lle_X_memcpy;
 }
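[Editor's note: the Type::IntegerTyID switch above stores an argument value into the libffi argument buffer using exactly the declared bit width. A minimal standalone sketch of that marshalling idea, with a hypothetical `storeIntArg` helper (not the Interpreter's API) and `memcpy` in place of the raw pointer casts:]

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Write a 64-bit value into a byte buffer using only as many bytes as the
// declared bit width, mirroring the Type::IntegerTyID switch above.
// memcpy is used instead of the original's pointer casts to avoid alignment
// concerns in this self-contained sketch.
static void *storeIntArg(uint64_t Val, unsigned BitWidth, uint8_t *ArgDataPtr) {
  switch (BitWidth) {
  case 8: {
    int8_t V = static_cast<int8_t>(Val);
    std::memcpy(ArgDataPtr, &V, sizeof(V));
    return ArgDataPtr;
  }
  case 16: {
    int16_t V = static_cast<int16_t>(Val);
    std::memcpy(ArgDataPtr, &V, sizeof(V));
    return ArgDataPtr;
  }
  case 32: {
    int32_t V = static_cast<int32_t>(Val);
    std::memcpy(ArgDataPtr, &V, sizeof(V));
    return ArgDataPtr;
  }
  case 64: {
    int64_t V = static_cast<int64_t>(Val);
    std::memcpy(ArgDataPtr, &V, sizeof(V));
    return ArgDataPtr;
  }
  }
  return nullptr; // unhandled bit width (the real code is unreachable here)
}
```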
diff --git a/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp b/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
index d4235cfa2ccf5f0..d41f513c854eab2 100644
--- a/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
+++ b/llvm/lib/ExecutionEngine/Interpreter/Interpreter.cpp
@@ -25,9 +25,9 @@ static struct RegisterInterp {
   RegisterInterp() { Interpreter::Register(); }
 } InterpRegistrator;
 
-}
+} // namespace
 
-extern "C" void LLVMLinkInInterpreter() { }
+extern "C" void LLVMLinkInInterpreter() {}
 
 /// Create a new interpreter object.
 ///
@@ -36,9 +36,8 @@ ExecutionEngine *Interpreter::create(std::unique_ptr<Module> M,
   // Tell this Module to materialize everything and release the GVMaterializer.
   if (Error Err = M->materializeAll()) {
     std::string Msg;
-    handleAllErrors(std::move(Err), [&](ErrorInfoBase &EIB) {
-      Msg = EIB.message();
-    });
+    handleAllErrors(std::move(Err),
+                    [&](ErrorInfoBase &EIB) { Msg = EIB.message(); });
     if (ErrStr)
       *ErrStr = Msg;
     // We got an error, just return 0
@@ -63,11 +62,9 @@ Interpreter::Interpreter(std::unique_ptr<Module> M)
   IL = new IntrinsicLowering(getDataLayout());
 }
 
-Interpreter::~Interpreter() {
-  delete IL;
-}
+Interpreter::~Interpreter() { delete IL; }
 
-void Interpreter::runAtExitHandlers () {
+void Interpreter::runAtExitHandlers() {
   while (!AtExitHandlers.empty()) {
     callFunction(AtExitHandlers.back(), std::nullopt);
     AtExitHandlers.pop_back();
@@ -79,7 +76,7 @@ void Interpreter::runAtExitHandlers () {
 ///
 GenericValue Interpreter::runFunction(Function *F,
                                       ArrayRef<GenericValue> ArgValues) {
-  assert (F && "Function *F was null at entry to run()");
+  assert(F && "Function *F was null at entry to run()");
 
   // Try extra hard not to pass extra args to a function that isn't
   // expecting them.  C programmers frequently bend the rules and
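[Editor's note: `runAtExitHandlers()` above drains `AtExitHandlers` back-to-front, matching the LIFO contract of atexit(3). A minimal sketch of that discipline, using an illustrative `AtExitStack` type rather than the Interpreter's actual members:]

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Handlers registered later run first; each runs exactly once and is then
// popped, just as the while-loop in runAtExitHandlers() does.
struct AtExitStack {
  std::vector<std::function<void()>> Handlers;
  void add(std::function<void()> F) { Handlers.push_back(std::move(F)); }
  void runAll() {
    while (!Handlers.empty()) {
      Handlers.back()();   // call the most recently registered handler
      Handlers.pop_back(); // then drop it
    }
  }
};
```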
diff --git a/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h b/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
index 41a0389442d3865..e17cb42e53c8c65 100644
--- a/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
+++ b/llvm/lib/ExecutionEngine/Interpreter/Interpreter.h
@@ -24,11 +24,10 @@
 namespace llvm {
 
 class IntrinsicLowering;
-template<typename T> class generic_gep_type_iterator;
+template <typename T> class generic_gep_type_iterator;
 class ConstantExpr;
 typedef generic_gep_type_iterator<User::const_op_iterator> gep_type_iterator;
 
-
 // AllocaHolder - Object to track all of the blocks of memory allocated by
 // alloca.  When the function returns, this object is popped off the execution
 // stack, which causes the dtor to be run, which frees all the alloca'd memory.
@@ -57,14 +56,14 @@ typedef std::vector<GenericValue> ValuePlaneTy;
 // executing.
 //
 struct ExecutionContext {
-  Function             *CurFunction;// The currently executing function
-  BasicBlock           *CurBB;      // The currently executing BB
-  BasicBlock::iterator  CurInst;    // The next instruction to execute
-  CallBase             *Caller;     // Holds the call that called subframes.
-                                    // NULL if main func or debugger invoked fn
+  Function *CurFunction;        // The currently executing function
+  BasicBlock *CurBB;            // The currently executing BB
+  BasicBlock::iterator CurInst; // The next instruction to execute
+  CallBase *Caller;             // Holds the call that called subframes.
+                                // NULL if main func or debugger invoked fn
   std::map<Value *, GenericValue> Values; // LLVM values used in this invocation
-  std::vector<GenericValue>  VarArgs; // Values passed through an ellipsis
-  AllocaHolder Allocas;            // Track memory allocated by alloca
+  std::vector<GenericValue> VarArgs;      // Values passed through an ellipsis
+  AllocaHolder Allocas;                   // Track memory allocated by alloca
 
   ExecutionContext() : CurFunction(nullptr), CurBB(nullptr), CurInst(nullptr) {}
 };
@@ -72,7 +71,7 @@ struct ExecutionContext {
 // Interpreter - This class represents the entirety of the interpreter.
 //
 class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
-  GenericValue ExitValue;          // The return value of the called function
+  GenericValue ExitValue; // The return value of the called function
   IntrinsicLowering *IL;
 
   // The runtime stack of executing code.  The top of the stack is the current
@@ -81,7 +80,7 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
 
   // AtExitHandlers - List of functions to call when the program exits,
   // registered with the atexit() library function.
-  std::vector<Function*> AtExitHandlers;
+  std::vector<Function *> AtExitHandlers;
 
 public:
   explicit Interpreter(std::unique_ptr<Module> M);
@@ -92,9 +91,7 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
   ///
   void runAtExitHandlers();
 
-  static void Register() {
-    InterpCtor = create;
-  }
+  static void Register() { InterpCtor = create; }
 
   /// Create an interpreter ExecutionEngine.
   ///
@@ -115,7 +112,7 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
   // Methods used to execute code:
   // Place a call on the stack
   void callFunction(Function *F, ArrayRef<GenericValue> ArgVals);
-  void run();                // Execute instructions until nothing left to do
+  void run(); // Execute instructions until nothing left to do
 
   // Opcode Implementations
   void visitReturnInst(ReturnInst &I);
@@ -176,15 +173,11 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
                                     ArrayRef<GenericValue> ArgVals);
   void exitCalled(GenericValue GV);
 
-  void addAtExitHandler(Function *F) {
-    AtExitHandlers.push_back(F);
-  }
+  void addAtExitHandler(Function *F) { AtExitHandlers.push_back(F); }
 
-  GenericValue *getFirstVarArg () {
-    return &(ECStack.back ().VarArgs[0]);
-  }
+  GenericValue *getFirstVarArg() { return &(ECStack.back().VarArgs[0]); }
 
-private:  // Helper functions
+private: // Helper functions
   GenericValue executeGEPOperation(Value *Ptr, gep_type_iterator I,
                                    gep_type_iterator E, ExecutionContext &SF);
 
@@ -194,9 +187,9 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
   //
   void SwitchToNewBasicBlock(BasicBlock *Dest, ExecutionContext &SF);
 
-  void *getPointerToFunction(Function *F) override { return (void*)F; }
+  void *getPointerToFunction(Function *F) override { return (void *)F; }
 
-  void initializeExecutionEngine() { }
+  void initializeExecutionEngine() {}
   void initializeExternalFunctions();
   GenericValue getConstantExprValue(ConstantExpr *CE, ExecutionContext &SF);
   GenericValue getOperandValue(Value *V, ExecutionContext &SF);
@@ -225,9 +218,8 @@ class Interpreter : public ExecutionEngine, public InstVisitor<Interpreter> {
   GenericValue executeBitCastInst(Value *SrcVal, Type *DstTy,
                                   ExecutionContext &SF);
   void popStackAndReturnValueToCaller(Type *RetTy, GenericValue Result);
-
 };
 
-} // End llvm namespace
+} // namespace llvm
 
 #endif
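[Editor's note: the `ExecutionContext` struct reformatted above is one stack frame of the interpreter; `ECStack` holds one per active call, and returning pops back to the caller's frame. A toy sketch of that shape, with hypothetical `Frame`/`MiniInterp` names standing in for the real types:]

```cpp
#include <cassert>
#include <string>
#include <vector>

// One frame per active call; the back of the stack is the running frame.
struct Frame {
  std::string CurFunction; // stand-in for Function *CurFunction
  unsigned CurInst = 0;    // stand-in for BasicBlock::iterator CurInst
};

struct MiniInterp {
  std::vector<Frame> ECStack;
  void call(const std::string &Fn) { ECStack.push_back(Frame{Fn, 0}); }
  // Popping the stack returns control to the caller's frame.
  void ret() { ECStack.pop_back(); }
  const std::string &current() const { return ECStack.back().CurFunction; }
};
```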
diff --git a/llvm/lib/ExecutionEngine/JITLink/CMakeLists.txt b/llvm/lib/ExecutionEngine/JITLink/CMakeLists.txt
index e5f5a99c39bc004..24acb56a418ef7f 100644
--- a/llvm/lib/ExecutionEngine/JITLink/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/JITLink/CMakeLists.txt
@@ -1,69 +1,42 @@
-set(LLVM_TARGET_DEFINITIONS COFFOptions.td)
-tablegen(LLVM COFFOptions.inc -gen-opt-parser-defs)
-add_public_tablegen_target(JITLinkTableGen)
+set(LLVM_TARGET_DEFINITIONS COFFOptions.td)
+tablegen(LLVM COFFOptions.inc -gen-opt-parser-defs)
+add_public_tablegen_target(JITLinkTableGen)
 
-add_llvm_component_library(LLVMJITLink
-  DWARFRecordSectionSplitter.cpp
-  EHFrameSupport.cpp
-  JITLink.cpp
-  JITLinkGeneric.cpp
-  JITLinkMemoryManager.cpp
+add_llvm_component_library(LLVMJITLink
+  DWARFRecordSectionSplitter.cpp
+  EHFrameSupport.cpp
+  JITLink.cpp JITLinkGeneric.cpp JITLinkMemoryManager.cpp
 
-  # Formats:
+  # Formats:
 
-  # MachO
-  MachO.cpp
-  MachO_arm64.cpp
-  MachO_x86_64.cpp
-  MachOLinkGraphBuilder.cpp
+  # MachO
+  MachO.cpp MachO_arm64.cpp MachO_x86_64.cpp
+  MachOLinkGraphBuilder.cpp
 
-  # ELF
-  ELF.cpp
-  ELFLinkGraphBuilder.cpp
-  ELF_aarch32.cpp
-  ELF_aarch64.cpp
-  ELF_i386.cpp
-  ELF_loongarch.cpp
-  ELF_ppc64.cpp
-  ELF_riscv.cpp
-  ELF_x86_64.cpp
+  # ELF
+  ELF.cpp ELFLinkGraphBuilder.cpp ELF_aarch32.cpp ELF_aarch64.cpp
+  ELF_i386.cpp ELF_loongarch.cpp ELF_ppc64.cpp ELF_riscv.cpp
+  ELF_x86_64.cpp
 
-  # COFF
-  COFF.cpp
-  COFFDirectiveParser.cpp
-  COFFLinkGraphBuilder.cpp
-  COFF_x86_64.cpp
+  # COFF
+  COFF.cpp COFFDirectiveParser.cpp COFFLinkGraphBuilder.cpp
+  COFF_x86_64.cpp
 
-  # Architectures:
-  aarch32.cpp
-  aarch64.cpp
-  i386.cpp
-  loongarch.cpp
-  ppc64.cpp
-  riscv.cpp
-  x86_64.cpp
+  # Architectures:
+  aarch32.cpp aarch64.cpp i386.cpp loongarch.cpp ppc64.cpp
+  riscv.cpp x86_64.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine/JITLink
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine/JITLink
 
-  DEPENDS
-  intrinsics_gen
-  JITLinkTableGen
+  DEPENDS intrinsics_gen JITLinkTableGen
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Object
-  Option
-  OrcTargetProcess
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS BinaryFormat Object Option OrcTargetProcess Support
+  TargetParser)
 
-target_link_libraries(LLVMJITLink
-  PRIVATE
-  LLVMObject
-  LLVMOrcShared
-  LLVMOrcTargetProcess
-  LLVMSupport
-  LLVMTargetParser
-)
+target_link_libraries(LLVMJITLink PRIVATE LLVMObject LLVMOrcShared
+  LLVMOrcTargetProcess LLVMSupport LLVMTargetParser)
diff --git a/llvm/lib/ExecutionEngine/JITLink/COFFOptions.td b/llvm/lib/ExecutionEngine/JITLink/COFFOptions.td
index 0a0ce2fc76dde91..6af1dcfc4754e94 100644
--- a/llvm/lib/ExecutionEngine/JITLink/COFFOptions.td
+++ b/llvm/lib/ExecutionEngine/JITLink/COFFOptions.td
@@ -6,16 +6,15 @@ include "llvm/Option/OptParser.td"
 class F<string name> : Flag<["/", "-", "/?", "-?"], name>;
 
 // Flag that takes one argument after ":".
-class P<string name> :
-      Joined<["/", "-", "/?", "-?"], name#":">;
+class P<string name> : Joined<["/", "-", "/?", "-?"], name #":">;
 
 // Boolean flag which can be suffixed by ":no". Using it unsuffixed turns the
 // flag on and using it suffixed by ":no" turns it off.
 multiclass B_priv<string name> {
   def "" : F<name>;
-  def _no : F<name#":no">;
+  def _no : F<name #":no">;
 }
 
-def export  : P<"export">;
+def export : P<"export">;
 def alternatename : P<"alternatename">;
 def incl : Joined<["/", "-", "/?", "-?"], "include:">;
\ No newline at end of file
diff --git a/llvm/lib/ExecutionEngine/JITLink/EHFrameSupport.cpp b/llvm/lib/ExecutionEngine/JITLink/EHFrameSupport.cpp
index 86249591a9be053..531f7509467c74e 100644
--- a/llvm/lib/ExecutionEngine/JITLink/EHFrameSupport.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/EHFrameSupport.cpp
@@ -326,9 +326,9 @@ Error EHFrameEdgeFixer::processFDE(ParseContext &PC, Block &B,
   {
     // Process the CIE pointer field.
     auto CIEEdgeItr = BlockEdges.find(RecordOffset + CIEDeltaFieldOffset);
-    orc::ExecutorAddr CIEAddress =
-        RecordAddress + orc::ExecutorAddrDiff(CIEDeltaFieldOffset) -
-        orc::ExecutorAddrDiff(CIEDelta);
+    orc::ExecutorAddr CIEAddress = RecordAddress +
+                                   orc::ExecutorAddrDiff(CIEDeltaFieldOffset) -
+                                   orc::ExecutorAddrDiff(CIEDelta);
     if (CIEEdgeItr == BlockEdges.end()) {
 
       LLVM_DEBUG({
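[Editor's note: the re-wrapped expression above computes the CIE's address from an FDE's CIE-delta field: the stored value is the distance back from the field's own address to the CIE. A small sketch of that arithmetic, assuming the same addressing convention as the hunk:]

```cpp
#include <cassert>
#include <cstdint>

// CIE address = address of the delta field itself, minus the stored delta.
static uint64_t cieAddress(uint64_t RecordAddress, uint64_t CIEDeltaFieldOffset,
                           uint64_t CIEDelta) {
  return RecordAddress + CIEDeltaFieldOffset - CIEDelta;
}
```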
diff --git a/llvm/lib/ExecutionEngine/JITLink/EHFrameSupportImpl.h b/llvm/lib/ExecutionEngine/JITLink/EHFrameSupportImpl.h
index 55cf7fc63ee7957..c4df0e73778d5be 100644
--- a/llvm/lib/ExecutionEngine/JITLink/EHFrameSupportImpl.h
+++ b/llvm/lib/ExecutionEngine/JITLink/EHFrameSupportImpl.h
@@ -35,7 +35,6 @@ class EHFrameEdgeFixer {
   Error operator()(LinkGraph &G);
 
 private:
-
   struct AugmentationInfo {
     bool AugmentationDataPresent = false;
     bool EHDataFieldPresent = false;
diff --git a/llvm/lib/ExecutionEngine/JITLink/ELFLinkGraphBuilder.h b/llvm/lib/ExecutionEngine/JITLink/ELFLinkGraphBuilder.h
index f221497b5efe4d8..7e719cd3b9ead29 100644
--- a/llvm/lib/ExecutionEngine/JITLink/ELFLinkGraphBuilder.h
+++ b/llvm/lib/ExecutionEngine/JITLink/ELFLinkGraphBuilder.h
@@ -529,11 +529,9 @@ template <typename ELFT> Error ELFLinkGraphBuilder<ELFT>::graphifySymbols() {
         // anonymous symbol.
         auto &GSym =
             Name->empty()
-                ? G->addAnonymousSymbol(*B, Offset, Sym.st_size,
-                                        false, false)
-                : G->addDefinedSymbol(*B, Offset, *Name, Sym.st_size, L,
-                                      S, Sym.getType() == ELF::STT_FUNC,
-                                      false);
+                ? G->addAnonymousSymbol(*B, Offset, Sym.st_size, false, false)
+                : G->addDefinedSymbol(*B, Offset, *Name, Sym.st_size, L, S,
+                                      Sym.getType() == ELF::STT_FUNC, false);
 
         GSym.setTargetFlags(Flags);
         setGraphSymbol(SymIndex, GSym);
diff --git a/llvm/lib/ExecutionEngine/JITLink/ELF_aarch32.cpp b/llvm/lib/ExecutionEngine/JITLink/ELF_aarch32.cpp
index a1bc4c853323102..a3cab1dbb589749 100644
--- a/llvm/lib/ExecutionEngine/JITLink/ELF_aarch32.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/ELF_aarch32.cpp
@@ -74,8 +74,8 @@ Expected<uint32_t> getELFRelocationType(Edge::Kind Kind) {
     return ELF::R_ARM_THM_MOVT_ABS;
   }
 
-  return make_error<JITLinkError>(formatv("Invalid aarch32 edge {0:d}: ",
-                                          Kind));
+  return make_error<JITLinkError>(
+      formatv("Invalid aarch32 edge {0:d}: ", Kind));
 }
 
 /// Get a human-readable name for the given ELF AArch32 edge kind.
diff --git a/llvm/lib/ExecutionEngine/JITLink/ELF_riscv.cpp b/llvm/lib/ExecutionEngine/JITLink/ELF_riscv.cpp
index 410dd7fedad1a40..d70370aa8133f6a 100644
--- a/llvm/lib/ExecutionEngine/JITLink/ELF_riscv.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/ELF_riscv.cpp
@@ -270,8 +270,8 @@ class ELFJITLinker_riscv : public JITLinker<ELFJITLinker_riscv> {
       auto RelHI20 = getRISCVPCRelHi20(E);
       if (!RelHI20)
         return RelHI20.takeError();
-      int64_t Value = RelHI20->getTarget().getAddress() +
-                      RelHI20->getAddend() - E.getTarget().getAddress();
+      int64_t Value = RelHI20->getTarget().getAddress() + RelHI20->getAddend() -
+                      E.getTarget().getAddress();
       int64_t Lo = Value & 0xFFF;
       uint32_t RawInstr = *(little32_t *)FixupPtr;
       *(little32_t *)FixupPtr =
@@ -285,8 +285,8 @@ class ELFJITLinker_riscv : public JITLinker<ELFJITLinker_riscv> {
       auto RelHI20 = getRISCVPCRelHi20(E);
       if (!RelHI20)
         return RelHI20.takeError();
-      int64_t Value = RelHI20->getTarget().getAddress() +
-                      RelHI20->getAddend() - E.getTarget().getAddress();
+      int64_t Value = RelHI20->getTarget().getAddress() + RelHI20->getAddend() -
+                      E.getTarget().getAddress();
       int64_t Lo = Value & 0xFFF;
       uint32_t Imm11_5 = extractBits(Lo, 5, 7) << 25;
       uint32_t Imm4_0 = extractBits(Lo, 0, 5) << 7;
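[Editor's note: the hunk above splits the low 12 bits of a PC-relative value across the two immediate fields of a RISC-V S-type instruction. A hedged sketch of that packing; `extractBits` here is my own helper with an assumed (value, low-bit, bit-count) signature, which may differ from the LLVM implementation:]

```cpp
#include <cassert>
#include <cstdint>

static uint32_t extractBits(uint64_t Num, unsigned Low, unsigned Size) {
  return static_cast<uint32_t>((Num >> Low) & ((1u << Size) - 1));
}

// S-type instructions scatter the low 12 bits of an offset across two fields:
// imm[11:5] in instruction bits 31..25 and imm[4:0] in bits 11..7.
static uint32_t packSTypeImm(int64_t Value) {
  int64_t Lo = Value & 0xFFF;
  uint32_t Imm11_5 = extractBits(Lo, 5, 7) << 25;
  uint32_t Imm4_0 = extractBits(Lo, 0, 5) << 7;
  return Imm11_5 | Imm4_0;
}
```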
diff --git a/llvm/lib/ExecutionEngine/JITLink/JITLinkGeneric.h b/llvm/lib/ExecutionEngine/JITLink/JITLinkGeneric.h
index 25c533a5103be5c..e96397667188348 100644
--- a/llvm/lib/ExecutionEngine/JITLink/JITLinkGeneric.h
+++ b/llvm/lib/ExecutionEngine/JITLink/JITLinkGeneric.h
@@ -111,7 +111,7 @@ template <typename LinkerImpl> class JITLinker : public JITLinkerBase {
   /// Link constructs a LinkerImpl instance and calls linkPhase1.
   /// Link should be called with the constructor arguments for LinkerImpl, which
   /// will be forwarded to the constructor.
-  template <typename... ArgTs> static void link(ArgTs &&... Args) {
+  template <typename... ArgTs> static void link(ArgTs &&...Args) {
     auto L = std::make_unique<LinkerImpl>(std::forward<ArgTs>(Args)...);
 
     // Ownership of the linker is passed into the linker's doLink function to
diff --git a/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.cpp b/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.cpp
index e876876014abb95..149e068402c3501 100644
--- a/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.cpp
@@ -516,7 +516,7 @@ Error MachOLinkGraphBuilder::graphifyRegularSymbols() {
       while (!SecNSymStack.empty() &&
              (isAltEntry(*SecNSymStack.back()) ||
               SecNSymStack.back()->Value == BlockSyms.back()->Value ||
-             !SubsectionsViaSymbols)) {
+              !SubsectionsViaSymbols)) {
         BlockSyms.push_back(SecNSymStack.back());
         SecNSymStack.pop_back();
       }
@@ -680,10 +680,17 @@ Error MachOLinkGraphBuilder::graphifyCStringSection(
                << ", align-ofs = " << B.getAlignmentOffset() << " for \"";
         for (size_t J = 0; J != std::min(B.getSize(), size_t(16)); ++J)
           switch (B.getContent()[J]) {
-          case '\0': break;
-          case '\n': dbgs() << "\\n"; break;
-          case '\t': dbgs() << "\\t"; break;
-          default:   dbgs() << B.getContent()[J]; break;
+          case '\0':
+            break;
+          case '\n':
+            dbgs() << "\\n";
+            break;
+          case '\t':
+            dbgs() << "\\t";
+            break;
+          default:
+            dbgs() << B.getContent()[J];
+            break;
           }
         if (B.getSize() > 16)
           dbgs() << "...";
diff --git a/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.h b/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.h
index 2805c2960b9bdda..3504ab3c8724ed6 100644
--- a/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.h
+++ b/llvm/lib/ExecutionEngine/JITLink/MachOLinkGraphBuilder.h
@@ -32,7 +32,6 @@ class MachOLinkGraphBuilder {
   Expected<std::unique_ptr<LinkGraph>> buildGraph();
 
 protected:
-
   struct NormalizedSymbol {
     friend class MachOLinkGraphBuilder;
 
@@ -98,7 +97,7 @@ class MachOLinkGraphBuilder {
 
   /// Create a symbol.
   template <typename... ArgTs>
-  NormalizedSymbol &createNormalizedSymbol(ArgTs &&... Args) {
+  NormalizedSymbol &createNormalizedSymbol(ArgTs &&...Args) {
     NormalizedSymbol *Sym = reinterpret_cast<NormalizedSymbol *>(
         Allocator.Allocate<NormalizedSymbol>());
     new (Sym) NormalizedSymbol(std::forward<ArgTs>(Args)...);
diff --git a/llvm/lib/ExecutionEngine/JITLink/PerGraphGOTAndPLTStubsBuilder.h b/llvm/lib/ExecutionEngine/JITLink/PerGraphGOTAndPLTStubsBuilder.h
index 6e325f92bafbe4d..9ff890abb33a9fa 100644
--- a/llvm/lib/ExecutionEngine/JITLink/PerGraphGOTAndPLTStubsBuilder.h
+++ b/llvm/lib/ExecutionEngine/JITLink/PerGraphGOTAndPLTStubsBuilder.h
@@ -28,8 +28,7 @@ namespace jitlink {
 /// in the JITLinkDylib, but allows graphs to be trivially removed independently
 /// without affecting other graphs (since those other graphs will have their own
 /// copies of any required entries).
-template <typename BuilderImplT>
-class PerGraphGOTAndPLTStubsBuilder {
+template <typename BuilderImplT> class PerGraphGOTAndPLTStubsBuilder {
 public:
   PerGraphGOTAndPLTStubsBuilder(LinkGraph &G) : G(G) {}
 
diff --git a/llvm/lib/ExecutionEngine/JITLink/aarch32.cpp b/llvm/lib/ExecutionEngine/JITLink/aarch32.cpp
index e9221a898ff63da..009142a56e4565f 100644
--- a/llvm/lib/ExecutionEngine/JITLink/aarch32.cpp
+++ b/llvm/lib/ExecutionEngine/JITLink/aarch32.cpp
@@ -243,8 +243,8 @@ Expected<int64_t> readAddendThumb(LinkGraph &G, Block &B, const Edge &E,
     if (!checkOpcode<Thumb_Jump24>(R))
       return makeUnexpectedOpcodeError(G, R, Kind);
     return LLVM_LIKELY(ArmCfg.J1J2BranchEncoding)
-                  ? decodeImmBT4BlT1BlxT2_J1J2(R.Hi, R.Lo)
-                  : decodeImmBT4BlT1BlxT2(R.Hi, R.Lo);
+               ? decodeImmBT4BlT1BlxT2_J1J2(R.Hi, R.Lo)
+               : decodeImmBT4BlT1BlxT2(R.Hi, R.Lo);
 
   case Thumb_MovwAbsNC:
     if (!checkOpcode<Thumb_MovwAbsNC>(R))
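[Editor's note: a sketch of the J1/J2 branch-immediate decoding that `decodeImmBT4BlT1BlxT2_J1J2` selects above, per the Arm ARM's T1 BL encoding: I1 = NOT(J1 EOR S), I2 = NOT(J2 EOR S), imm32 = SignExtend(S:I1:I2:imm10:imm11:'0'). This is an illustrative reimplementation under those assumptions, not the JITLink code.]

```cpp
#include <cassert>
#include <cstdint>

// Hi halfword: 11110 S imm10; Lo halfword: 11 J1 1 J2 imm11 (T1 BL encoding).
inline int64_t decodeImmBlJ1J2(uint16_t Hi, uint16_t Lo) {
  uint32_t S = (Hi >> 10) & 1;
  uint32_t Imm10 = Hi & 0x03FF;
  uint32_t J1 = (Lo >> 13) & 1;
  uint32_t J2 = (Lo >> 11) & 1;
  uint32_t Imm11 = Lo & 0x07FF;
  uint32_t I1 = (~(J1 ^ S)) & 1;
  uint32_t I2 = (~(J2 ^ S)) & 1;
  // Assemble S:I1:I2:imm10:imm11:'0' (25 bits), then sign-extend from bit 24.
  uint32_t Imm25 =
      (S << 24) | (I1 << 23) | (I2 << 22) | (Imm10 << 12) | (Imm11 << 1);
  return (int64_t)((int32_t)(Imm25 << 7) >> 7);
}
```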
diff --git a/llvm/lib/ExecutionEngine/MCJIT/CMakeLists.txt b/llvm/lib/ExecutionEngine/MCJIT/CMakeLists.txt
index f7a4610e0500f38..6c4535f5b69ee79 100644
--- a/llvm/lib/ExecutionEngine/MCJIT/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/MCJIT/CMakeLists.txt
@@ -1,14 +1,6 @@
-add_llvm_component_library(LLVMMCJIT
-  MCJIT.cpp
+add_llvm_component_library(LLVMMCJIT MCJIT.cpp
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Core
-  ExecutionEngine
-  Object
-  RuntimeDyld
-  Support
-  Target
-  )
+  LINK_COMPONENTS Core ExecutionEngine Object
+  RuntimeDyld Support Target)
diff --git a/llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp b/llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp
index 869b383dd064f26..1bfc30b0ab54dd5 100644
--- a/llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp
+++ b/llvm/lib/ExecutionEngine/MCJIT/MCJIT.cpp
@@ -36,10 +36,9 @@ static struct RegisterJIT {
   RegisterJIT() { MCJIT::Register(); }
 } JITRegistrator;
 
-}
+} // namespace
 
-extern "C" void LLVMLinkInMCJIT() {
-}
+extern "C" void LLVMLinkInMCJIT() {}
 
 ExecutionEngine *
 MCJIT::createJIT(std::unique_ptr<Module> M, std::string *ErrorStr,
@@ -138,7 +137,7 @@ void MCJIT::addArchive(object::OwningBinary<object::Archive> A) {
   Archives.push_back(std::move(A));
 }
 
-void MCJIT::setObjectCache(ObjectCache* NewCache) {
+void MCJIT::setObjectCache(ObjectCache *NewCache) {
   std::lock_guard<sys::Mutex> locked(lock);
   ObjCache = NewCache;
 }
@@ -214,7 +213,7 @@ void MCJIT::generateCodeForModule(Module *M) {
   // Load the object into the dynamic linker.
   // MCJIT now owns the ObjectImage pointer (via its LoadedObjects list).
   Expected<std::unique_ptr<object::ObjectFile>> LoadedObject =
-    object::ObjectFile::createObjectFile(ObjectToLoad->getMemBufferRef());
+      object::ObjectFile::createObjectFile(ObjectToLoad->getMemBufferRef());
   if (!LoadedObject) {
     std::string Buf;
     raw_string_ostream OS(Buf);
@@ -222,7 +221,7 @@ void MCJIT::generateCodeForModule(Module *M) {
     report_fatal_error(Twine(OS.str()));
   }
   std::unique_ptr<RuntimeDyld::LoadedObjectInfo> L =
-    Dyld.loadObject(*LoadedObject.get());
+      Dyld.loadObject(*LoadedObject.get());
 
   if (Dyld.hasError())
     report_fatal_error(Dyld.getErrorString());
@@ -260,7 +259,7 @@ void MCJIT::finalizeObject() {
 
   // Generate code for module is going to move objects out of the 'added' list,
   // so we need to copy that out before using it:
-  SmallVector<Module*, 16> ModsToAdd;
+  SmallVector<Module *, 16> ModsToAdd;
   for (auto *M : OwnedModules.added())
     ModsToAdd.push_back(M);
 
@@ -274,7 +273,8 @@ void MCJIT::finalizeModule(Module *M) {
   std::lock_guard<sys::Mutex> locked(lock);
 
   // This must be a module which has already been added to this MCJIT instance.
-  assert(OwnedModules.ownsModule(M) && "MCJIT::finalizeModule: Unknown module.");
+  assert(OwnedModules.ownsModule(M) &&
+         "MCJIT::finalizeModule: Unknown module.");
 
   // If the module hasn't been compiled, just do that.
   if (!OwnedModules.hasModuleBeenLoaded(M))
@@ -285,8 +285,7 @@ void MCJIT::finalizeModule(Module *M) {
 
 JITSymbol MCJIT::findExistingSymbol(const std::string &Name) {
   if (void *Addr = getPointerToGlobalIfAvailable(Name))
-    return JITSymbol(static_cast<uint64_t>(
-                         reinterpret_cast<uintptr_t>(Addr)),
+    return JITSymbol(static_cast<uint64_t>(reinterpret_cast<uintptr_t>(Addr)),
                      JITSymbolFlags::Exported);
 
   return Dyld.getSymbol(Name);
@@ -336,8 +335,7 @@ uint64_t MCJIT::getSymbolAddress(const std::string &Name,
   return 0;
 }
 
-JITSymbol MCJIT::findSymbol(const std::string &Name,
-                            bool CheckFunctionsOnly) {
+JITSymbol MCJIT::findSymbol(const std::string &Name, bool CheckFunctionsOnly) {
   std::lock_guard<sys::Mutex> locked(lock);
 
   // First, check to see if we already have this symbol.
@@ -386,7 +384,7 @@ JITSymbol MCJIT::findSymbol(const std::string &Name,
   // FIXME: Should we instead have a LazySymbolCreator callback?
   if (LazyFunctionCreator) {
     auto Addr = static_cast<uint64_t>(
-                  reinterpret_cast<uintptr_t>(LazyFunctionCreator(Name)));
+        reinterpret_cast<uintptr_t>(LazyFunctionCreator(Name)));
     return JITSymbol(Addr, JITSymbolFlags::Exported);
   }
 
@@ -425,7 +423,8 @@ void *MCJIT::getPointerToFunction(Function *F) {
   }
 
   Module *M = F->getParent();
-  bool HasBeenAddedButNotLoaded = OwnedModules.hasModuleBeenAddedButNotLoaded(M);
+  bool HasBeenAddedButNotLoaded =
+      OwnedModules.hasModuleBeenAddedButNotLoaded(M);
 
   // Make sure the relevant module has been compiled and loaded.
   if (HasBeenAddedButNotLoaded)
@@ -442,7 +441,7 @@ void *MCJIT::getPointerToFunction(Function *F) {
   //
   // This is the accessor for the target address, so make sure to check the
   // load address of the symbol, not the local address.
-  return (void*)Dyld.getSymbol(Name).getAddress();
+  return (void *)Dyld.getSymbol(Name).getAddress();
 }
 
 void MCJIT::runStaticConstructorsDestructorsInModulePtrSet(
@@ -473,10 +472,10 @@ Function *MCJIT::FindFunctionNamedInModulePtrSet(StringRef FnName,
   return nullptr;
 }
 
-GlobalVariable *MCJIT::FindGlobalVariableNamedInModulePtrSet(StringRef Name,
-                                                             bool AllowInternal,
-                                                             ModulePtrSet::iterator I,
-                                                             ModulePtrSet::iterator E) {
+GlobalVariable *
+MCJIT::FindGlobalVariableNamedInModulePtrSet(StringRef Name, bool AllowInternal,
+                                             ModulePtrSet::iterator I,
+                                             ModulePtrSet::iterator E) {
   for (; I != E; ++I) {
     GlobalVariable *GV = (*I)->getGlobalVariable(Name, AllowInternal);
     if (GV && !GV->isDeclaration())
@@ -485,7 +484,6 @@ GlobalVariable *MCJIT::FindGlobalVariableNamedInModulePtrSet(StringRef Name,
   return nullptr;
 }
 
-
 Function *MCJIT::FindFunctionNamed(StringRef FnName) {
   Function *F = FindFunctionNamedInModulePtrSet(
       FnName, OwnedModules.begin_added(), OwnedModules.end_added());
@@ -498,15 +496,19 @@ Function *MCJIT::FindFunctionNamed(StringRef FnName) {
   return F;
 }
 
-GlobalVariable *MCJIT::FindGlobalVariableNamed(StringRef Name, bool AllowInternal) {
+GlobalVariable *MCJIT::FindGlobalVariableNamed(StringRef Name,
+                                               bool AllowInternal) {
   GlobalVariable *GV = FindGlobalVariableNamedInModulePtrSet(
-      Name, AllowInternal, OwnedModules.begin_added(), OwnedModules.end_added());
+      Name, AllowInternal, OwnedModules.begin_added(),
+      OwnedModules.end_added());
   if (!GV)
-    GV = FindGlobalVariableNamedInModulePtrSet(Name, AllowInternal, OwnedModules.begin_loaded(),
-                                        OwnedModules.end_loaded());
+    GV = FindGlobalVariableNamedInModulePtrSet(Name, AllowInternal,
+                                               OwnedModules.begin_loaded(),
+                                               OwnedModules.end_loaded());
   if (!GV)
-    GV = FindGlobalVariableNamedInModulePtrSet(Name, AllowInternal, OwnedModules.begin_finalized(),
-                                        OwnedModules.end_finalized());
+    GV = FindGlobalVariableNamedInModulePtrSet(Name, AllowInternal,
+                                               OwnedModules.begin_finalized(),
+                                               OwnedModules.end_finalized());
   return GV;
 }
 
@@ -534,7 +536,7 @@ GenericValue MCJIT::runFunction(Function *F, ArrayRef<GenericValue> ArgValues) {
           FTy->getParamType(1)->isPointerTy() &&
           FTy->getParamType(2)->isPointerTy()) {
         int (*PF)(int, char **, const char **) =
-          (int(*)(int, char **, const char **))(intptr_t)FPtr;
+            (int (*)(int, char **, const char **))(intptr_t)FPtr;
 
         // Call the function.
         GenericValue rv;
@@ -547,7 +549,7 @@ GenericValue MCJIT::runFunction(Function *F, ArrayRef<GenericValue> ArgValues) {
     case 2:
       if (FTy->getParamType(0)->isIntegerTy(32) &&
           FTy->getParamType(1)->isPointerTy()) {
-        int (*PF)(int, char **) = (int(*)(int, char **))(intptr_t)FPtr;
+        int (*PF)(int, char **) = (int (*)(int, char **))(intptr_t)FPtr;
 
         // Call the function.
         GenericValue rv;
@@ -557,10 +559,9 @@ GenericValue MCJIT::runFunction(Function *F, ArrayRef<GenericValue> ArgValues) {
       }
       break;
     case 1:
-      if (FTy->getNumParams() == 1 &&
-          FTy->getParamType(0)->isIntegerTy(32)) {
+      if (FTy->getNumParams() == 1 && FTy->getParamType(0)->isIntegerTy(32)) {
         GenericValue rv;
-        int (*PF)(int) = (int(*)(int))(intptr_t)FPtr;
+        int (*PF)(int) = (int (*)(int))(intptr_t)FPtr;
         rv.IntVal = APInt(32, PF(ArgValues[0].IntVal.getZExtValue()));
         return rv;
       }
@@ -572,17 +573,18 @@ GenericValue MCJIT::runFunction(Function *F, ArrayRef<GenericValue> ArgValues) {
   if (ArgValues.empty()) {
     GenericValue rv;
     switch (RetTy->getTypeID()) {
-    default: llvm_unreachable("Unknown return type for function call!");
+    default:
+      llvm_unreachable("Unknown return type for function call!");
     case Type::IntegerTyID: {
       unsigned BitWidth = cast<IntegerType>(RetTy)->getBitWidth();
       if (BitWidth == 1)
-        rv.IntVal = APInt(BitWidth, ((bool(*)())(intptr_t)FPtr)());
+        rv.IntVal = APInt(BitWidth, ((bool (*)())(intptr_t)FPtr)());
       else if (BitWidth <= 8)
-        rv.IntVal = APInt(BitWidth, ((char(*)())(intptr_t)FPtr)());
+        rv.IntVal = APInt(BitWidth, ((char (*)())(intptr_t)FPtr)());
       else if (BitWidth <= 16)
-        rv.IntVal = APInt(BitWidth, ((short(*)())(intptr_t)FPtr)());
+        rv.IntVal = APInt(BitWidth, ((short (*)())(intptr_t)FPtr)());
       else if (BitWidth <= 32)
-        rv.IntVal = APInt(BitWidth, ((int(*)())(intptr_t)FPtr)());
+        rv.IntVal = APInt(BitWidth, ((int (*)())(intptr_t)FPtr)());
       else if (BitWidth <= 64)
         rv.IntVal = APInt(BitWidth, ((int64_t(*)())(intptr_t)FPtr)());
       else
@@ -590,20 +592,20 @@ GenericValue MCJIT::runFunction(Function *F, ArrayRef<GenericValue> ArgValues) {
       return rv;
     }
     case Type::VoidTyID:
-      rv.IntVal = APInt(32, ((int(*)())(intptr_t)FPtr)());
+      rv.IntVal = APInt(32, ((int (*)())(intptr_t)FPtr)());
       return rv;
     case Type::FloatTyID:
-      rv.FloatVal = ((float(*)())(intptr_t)FPtr)();
+      rv.FloatVal = ((float (*)())(intptr_t)FPtr)();
       return rv;
     case Type::DoubleTyID:
-      rv.DoubleVal = ((double(*)())(intptr_t)FPtr)();
+      rv.DoubleVal = ((double (*)())(intptr_t)FPtr)();
       return rv;
     case Type::X86_FP80TyID:
     case Type::FP128TyID:
     case Type::PPC_FP128TyID:
       llvm_unreachable("long double not supported yet");
     case Type::PointerTyID:
-      return PTOGV(((void*(*)())(intptr_t)FPtr)());
+      return PTOGV(((void *(*)())(intptr_t)FPtr)());
     }
   }
 
@@ -617,8 +619,7 @@ void *MCJIT::getPointerToNamedFunction(StringRef Name, bool AbortOnFailure) {
   if (!isSymbolSearchingDisabled()) {
     if (auto Sym = Resolver.findSymbol(std::string(Name))) {
       if (auto AddrOrErr = Sym.getAddress())
-        return reinterpret_cast<void*>(
-                 static_cast<uintptr_t>(*AddrOrErr));
+        return reinterpret_cast<void *>(static_cast<uintptr_t>(*AddrOrErr));
     } else if (auto Err = Sym.takeError())
       report_fatal_error(std::move(Err));
   }
@@ -629,7 +630,7 @@ void *MCJIT::getPointerToNamedFunction(StringRef Name, bool AbortOnFailure) {
       return RP;
 
   if (AbortOnFailure) {
-    report_fatal_error("Program used external function '"+Name+
+    report_fatal_error("Program used external function '" + Name +
                        "' which could not be resolved!");
   }
   return nullptr;
@@ -671,8 +672,7 @@ void MCJIT::notifyFreeingObject(const object::ObjectFile &Obj) {
     L->notifyFreeingObject(Key);
 }
 
-JITSymbol
-LinkingSymbolResolver::findSymbol(const std::string &Name) {
+JITSymbol LinkingSymbolResolver::findSymbol(const std::string &Name) {
   auto Result = ParentEngine.findSymbol(Name, false);
   if (Result)
     return Result;
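[Editor's note: several hunks above reformat MCJIT's function-pointer casts. Stripped of the JIT machinery, the pattern is: the engine resolves a symbol to a `uint64_t` address, and the caller casts it to a typed function pointer before invoking it. The sketch below simulates this with an ordinary function standing in for JITed code; `lookupSymbolAddress` is a hypothetical stand-in for `Dyld.getSymbol(Name).getAddress()`.]

```cpp
#include <cassert>
#include <cstdint>

static int addOne(int X) { return X + 1; }

// Stand-in for symbol resolution: in MCJIT this would come from RuntimeDyld.
inline uint64_t lookupSymbolAddress() {
  return static_cast<uint64_t>(reinterpret_cast<std::uintptr_t>(&addOne));
}

// The same cast chain MCJIT::runFunction uses for an int(int) signature.
inline int callAsIntFn(uint64_t Addr, int Arg) {
  int (*PF)(int) = (int (*)(int))(std::intptr_t)Addr;
  return PF(Arg);
}
```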
diff --git a/llvm/lib/ExecutionEngine/MCJIT/MCJIT.h b/llvm/lib/ExecutionEngine/MCJIT/MCJIT.h
index f6c4cdbb8c91a9d..043ac1ffc37cf66 100644
--- a/llvm/lib/ExecutionEngine/MCJIT/MCJIT.h
+++ b/llvm/lib/ExecutionEngine/MCJIT/MCJIT.h
@@ -88,7 +88,9 @@ class MCJIT : public ExecutionEngine {
     ModulePtrSet::iterator begin_loaded() { return LoadedModules.begin(); }
     ModulePtrSet::iterator end_loaded() { return LoadedModules.end(); }
 
-    ModulePtrSet::iterator begin_finalized() { return FinalizedModules.begin(); }
+    ModulePtrSet::iterator begin_finalized() {
+      return FinalizedModules.begin();
+    }
     ModulePtrSet::iterator end_finalized() { return FinalizedModules.end(); }
 
     void addModule(std::unique_ptr<Module> M) {
@@ -114,7 +116,7 @@ class MCJIT : public ExecutionEngine {
       return FinalizedModules.contains(M);
     }
 
-    bool ownsModule(Module* M) {
+    bool ownsModule(Module *M) {
       return AddedModules.contains(M) || LoadedModules.contains(M) ||
              FinalizedModules.contains(M);
     }
@@ -160,7 +162,7 @@ class MCJIT : public ExecutionEngine {
     ModulePtrSet LoadedModules;
     ModulePtrSet FinalizedModules;
 
-    void freeModulePtrSet(ModulePtrSet& MPS) {
+    void freeModulePtrSet(ModulePtrSet &MPS) {
       // Go through the module set and delete everything.
       for (Module *M : MPS)
         delete M;
@@ -173,7 +175,7 @@ class MCJIT : public ExecutionEngine {
   std::shared_ptr<MCJITMemoryManager> MemMgr;
   LinkingSymbolResolver Resolver;
   RuntimeDyld Dyld;
-  std::vector<JITEventListener*> EventListeners;
+  std::vector<JITEventListener *> EventListeners;
 
   OwningModuleContainer OwnedModules;
 
@@ -190,10 +192,10 @@ class MCJIT : public ExecutionEngine {
                                             ModulePtrSet::iterator I,
                                             ModulePtrSet::iterator E);
 
-  GlobalVariable *FindGlobalVariableNamedInModulePtrSet(StringRef Name,
-                                                        bool AllowInternal,
-                                                        ModulePtrSet::iterator I,
-                                                        ModulePtrSet::iterator E);
+  GlobalVariable *
+  FindGlobalVariableNamedInModulePtrSet(StringRef Name, bool AllowInternal,
+                                        ModulePtrSet::iterator I,
+                                        ModulePtrSet::iterator E);
 
   void runStaticConstructorsDestructorsInModulePtrSet(bool isDtors,
                                                       ModulePtrSet::iterator I,
@@ -210,9 +212,9 @@ class MCJIT : public ExecutionEngine {
   void addArchive(object::OwningBinary<object::Archive> O) override;
   bool removeModule(Module *M) override;
 
-  /// FindFunctionNamed - Search all of the active modules to find the function that
-  /// defines FnName.  This is very slow operation and shouldn't be used for
-  /// general code.
+  /// FindFunctionNamed - Search all of the active modules to find the function
+  /// that defines FnName.  This is a very slow operation and shouldn't be used
+  /// for general code.
   Function *FindFunctionNamed(StringRef FnName) override;
 
   /// FindGlobalVariableNamed - Search all of the active modules to find the
@@ -288,9 +290,7 @@ class MCJIT : public ExecutionEngine {
   /// @name (Private) Registration Interfaces
   /// @{
 
-  static void Register() {
-    MCJITCtor = createJIT;
-  }
+  static void Register() { MCJITCtor = createJIT; }
 
   static ExecutionEngine *
   createJIT(std::unique_ptr<Module> M, std::string *ErrorStr,
@@ -311,8 +311,7 @@ class MCJIT : public ExecutionEngine {
   //
   // getSymbolAddress takes an unmangled name and returns the corresponding
   // JITSymbol if a definition of the name has been added to the JIT.
-  uint64_t getSymbolAddress(const std::string &Name,
-                            bool CheckFunctionsOnly);
+  uint64_t getSymbolAddress(const std::string &Name, bool CheckFunctionsOnly);
 
 protected:
   /// emitObject -- Generate a JITed object in memory from the specified module
@@ -330,6 +329,6 @@ class MCJIT : public ExecutionEngine {
   Module *findModuleForSymbol(const std::string &Name, bool CheckFunctionsOnly);
 };
 
-} // end llvm namespace
+} // namespace llvm
 
 #endif // LLVM_LIB_EXECUTIONENGINE_MCJIT_MCJIT_H
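[Editor's note: the MCJIT.h hunks touch `OwningModuleContainer`, which tracks each module in exactly one of three states: added, loaded, or finalized, with `ownsModule` checking all three sets. A toy model of that bookkeeping, using strings in place of `Module *` purely for illustration:]

```cpp
#include <cassert>
#include <set>
#include <string>

// Simplified analogue of OwningModuleContainer: a module moves
// added -> loaded -> finalized, and "ownership" means membership in any set.
struct ModuleStates {
  std::set<std::string> Added, Loaded, Finalized;
  void add(const std::string &M) { Added.insert(M); }
  void markLoaded(const std::string &M) {
    Added.erase(M);
    Loaded.insert(M);
  }
  void markFinalized(const std::string &M) {
    Loaded.erase(M);
    Finalized.insert(M);
  }
  bool owns(const std::string &M) const {
    return Added.count(M) || Loaded.count(M) || Finalized.count(M);
  }
};
```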
diff --git a/llvm/lib/ExecutionEngine/OProfileJIT/CMakeLists.txt b/llvm/lib/ExecutionEngine/OProfileJIT/CMakeLists.txt
index 8d37238221f9f5b..9d59243706b7432 100644
--- a/llvm/lib/ExecutionEngine/OProfileJIT/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/OProfileJIT/CMakeLists.txt
@@ -1,13 +1,7 @@
 
-include_directories( ${LLVM_OPROFILE_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/.. )
+include_directories(${LLVM_OPROFILE_DIR} ${CMAKE_CURRENT_SOURCE_DIR}/..)
 
-add_llvm_component_library(LLVMOProfileJIT
-  OProfileJITEventListener.cpp
-  OProfileWrapper.cpp
+add_llvm_component_library(LLVMOProfileJIT
+  OProfileJITEventListener.cpp OProfileWrapper.cpp
 
-  LINK_COMPONENTS
-  DebugInfoDWARF
-  Support
-  Object
-  ExecutionEngine
-  )
+  LINK_COMPONENTS DebugInfoDWARF Support Object ExecutionEngine)
diff --git a/llvm/lib/ExecutionEngine/OProfileJIT/OProfileJITEventListener.cpp b/llvm/lib/ExecutionEngine/OProfileJIT/OProfileJITEventListener.cpp
index bb5d96051da942e..9061fa4dbb42a7e 100644
--- a/llvm/lib/ExecutionEngine/OProfileJIT/OProfileJITEventListener.cpp
+++ b/llvm/lib/ExecutionEngine/OProfileJIT/OProfileJITEventListener.cpp
@@ -43,7 +43,7 @@ class OProfileJITEventListener : public JITEventListener {
 
 public:
   OProfileJITEventListener(std::unique_ptr<OProfileWrapper> LibraryWrapper)
-    : Wrapper(std::move(LibraryWrapper)) {
+      : Wrapper(std::move(LibraryWrapper)) {
     initialize();
   }
 
@@ -119,7 +119,7 @@ void OProfileJITEventListener::notifyObjectLoaded(
     debug_line = (struct debug_line_info *)calloc(
         num_entries, sizeof(struct debug_line_info));
 
-    for (auto& It : Lines) {
+    for (auto &It : Lines) {
       debug_line[i].vma = (unsigned long)It.first;
       debug_line[i].lineno = It.second.Line;
       debug_line[i].filename =
@@ -150,8 +150,7 @@ void OProfileJITEventListener::notifyFreeingObject(ObjectKey Key) {
     const ObjectFile &DebugObj = *DebugObjects[Key].getBinary();
 
     // Use symbol info to iterate functions in the object.
-    for (symbol_iterator I = DebugObj.symbol_begin(),
-                         E = DebugObj.symbol_end();
+    for (symbol_iterator I = DebugObj.symbol_begin(), E = DebugObj.symbol_end();
          I != E; ++I) {
       if (I->getType() && *I->getType() == SymbolRef::ST_Function) {
         Expected<uint64_t> AddrOrErr = I->getAddress();
@@ -173,7 +172,7 @@ void OProfileJITEventListener::notifyFreeingObject(ObjectKey Key) {
   DebugObjects.erase(Key);
 }
 
-}  // anonymous namespace.
+} // anonymous namespace.
 
 namespace llvm {
 JITEventListener *JITEventListener::createOProfileJITEventListener() {
@@ -182,7 +181,6 @@ JITEventListener *JITEventListener::createOProfileJITEventListener() {
 
 } // namespace llvm
 
-LLVMJITEventListenerRef LLVMCreateOProfileJITEventListener(void)
-{
+LLVMJITEventListenerRef LLVMCreateOProfileJITEventListener(void) {
   return wrap(JITEventListener::createOProfileJITEventListener());
 }
diff --git a/llvm/lib/ExecutionEngine/OProfileJIT/OProfileWrapper.cpp b/llvm/lib/ExecutionEngine/OProfileJIT/OProfileWrapper.cpp
index b78d2531382d039..24a7dd73b1b4d54 100644
--- a/llvm/lib/ExecutionEngine/OProfileJIT/OProfileWrapper.cpp
+++ b/llvm/lib/ExecutionEngine/OProfileJIT/OProfileWrapper.cpp
@@ -38,17 +38,9 @@ llvm::sys::Mutex OProfileInitializationMutex;
 namespace llvm {
 
 OProfileWrapper::OProfileWrapper()
-: Agent(0),
-  OpenAgentFunc(0),
-  CloseAgentFunc(0),
-  WriteNativeCodeFunc(0),
-  WriteDebugLineInfoFunc(0),
-  UnloadNativeCodeFunc(0),
-  MajorVersionFunc(0),
-  MinorVersionFunc(0),
-  IsOProfileRunningFunc(0),
-  Initialized(false) {
-}
+    : Agent(0), OpenAgentFunc(0), CloseAgentFunc(0), WriteNativeCodeFunc(0),
+      WriteDebugLineInfoFunc(0), UnloadNativeCodeFunc(0), MajorVersionFunc(0),
+      MinorVersionFunc(0), IsOProfileRunningFunc(0), Initialized(false) {}
 
 bool OProfileWrapper::initialize() {
   using namespace llvm;
@@ -68,7 +60,7 @@ bool OProfileWrapper::initialize() {
   }
 
   std::string error;
-  if(!DynamicLibrary::LoadLibraryPermanently("libopagent.so", &error)) {
+  if (!DynamicLibrary::LoadLibraryPermanently("libopagent.so", &error)) {
     LLVM_DEBUG(
         dbgs()
         << "OProfile connector library libopagent.so could not be loaded: "
@@ -76,27 +68,26 @@ bool OProfileWrapper::initialize() {
   }
 
   // Get the addresses of the opagent functions
-  OpenAgentFunc = (op_open_agent_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_open_agent");
-  CloseAgentFunc = (op_close_agent_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_close_agent");
+  OpenAgentFunc =
+      (op_open_agent_ptr_t)(intptr_t)DynamicLibrary::SearchForAddressOfSymbol(
+          "op_open_agent");
+  CloseAgentFunc =
+      (op_close_agent_ptr_t)(intptr_t)DynamicLibrary::SearchForAddressOfSymbol(
+          "op_close_agent");
   WriteNativeCodeFunc = (op_write_native_code_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_write_native_code");
+      DynamicLibrary::SearchForAddressOfSymbol("op_write_native_code");
   WriteDebugLineInfoFunc = (op_write_debug_line_info_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_write_debug_line_info");
+      DynamicLibrary::SearchForAddressOfSymbol("op_write_debug_line_info");
   UnloadNativeCodeFunc = (op_unload_native_code_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_unload_native_code");
+      DynamicLibrary::SearchForAddressOfSymbol("op_unload_native_code");
   MajorVersionFunc = (op_major_version_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_major_version");
+      DynamicLibrary::SearchForAddressOfSymbol("op_major_version");
   MinorVersionFunc = (op_major_version_ptr_t)(intptr_t)
-          DynamicLibrary::SearchForAddressOfSymbol("op_minor_version");
+      DynamicLibrary::SearchForAddressOfSymbol("op_minor_version");
 
   // With missing functions, we can do nothing
-  if (!OpenAgentFunc
-      || !CloseAgentFunc
-      || !WriteNativeCodeFunc
-      || !WriteDebugLineInfoFunc
-      || !UnloadNativeCodeFunc) {
+  if (!OpenAgentFunc || !CloseAgentFunc || !WriteNativeCodeFunc ||
+      !WriteDebugLineInfoFunc || !UnloadNativeCodeFunc) {
     OpenAgentFunc = 0;
     CloseAgentFunc = 0;
     WriteNativeCodeFunc = 0;
@@ -115,29 +106,29 @@ bool OProfileWrapper::isOProfileRunning() {
 }
 
 bool OProfileWrapper::checkForOProfileProcEntry() {
-  DIR* ProcDir;
+  DIR *ProcDir;
 
   ProcDir = opendir("/proc");
   if (!ProcDir)
     return false;
 
   // Walk the /proc tree looking for the oprofile daemon
-  struct dirent* Entry;
+  struct dirent *Entry;
   while (0 != (Entry = readdir(ProcDir))) {
     if (Entry->d_type == DT_DIR) {
       // Build a path from the current entry name
       SmallString<256> CmdLineFName;
-      raw_svector_ostream(CmdLineFName) << "/proc/" << Entry->d_name
-                                        << "/cmdline";
+      raw_svector_ostream(CmdLineFName)
+          << "/proc/" << Entry->d_name << "/cmdline";
 
       // Open the cmdline file
       int CmdLineFD = open(CmdLineFName.c_str(), S_IRUSR);
       if (CmdLineFD != -1) {
-        char    ExeName[PATH_MAX+1];
-        char*   BaseName = 0;
+        char ExeName[PATH_MAX + 1];
+        char *BaseName = 0;
 
         // Read the cmdline file
-        ssize_t NumRead = read(CmdLineFD, ExeName, PATH_MAX+1);
+        ssize_t NumRead = read(CmdLineFD, ExeName, PATH_MAX + 1);
         close(CmdLineFD);
         ssize_t Idx = 0;
 
@@ -146,7 +137,7 @@ bool OProfileWrapper::checkForOProfileProcEntry() {
         }
 
         // Find the terminator for the first string
-        while (Idx < NumRead-1 && ExeName[Idx] != 0) {
+        while (Idx < NumRead - 1 && ExeName[Idx] != 0) {
           Idx++;
         }
 
@@ -163,8 +154,8 @@ bool OProfileWrapper::checkForOProfileProcEntry() {
         }
 
         // Test this to see if it is the oprofile daemon
-        if (BaseName != 0 && (!strcmp("oprofiled", BaseName) ||
-                              !strcmp("operf", BaseName))) {
+        if (BaseName != 0 &&
+            (!strcmp("oprofiled", BaseName) || !strcmp("operf", BaseName))) {
           // If it is, we're done
           closedir(ProcDir);
           return true;
@@ -204,13 +195,10 @@ int OProfileWrapper::op_close_agent() {
   return ret;
 }
 
-bool OProfileWrapper::isAgentAvailable() {
-  return Agent != 0;
-}
+bool OProfileWrapper::isAgentAvailable() { return Agent != 0; }
 
-int OProfileWrapper::op_write_native_code(const char* Name,
-                                          uint64_t Addr,
-                                          void const* Code,
+int OProfileWrapper::op_write_native_code(const char *Name, uint64_t Addr,
+                                          void const *Code,
                                           const unsigned int Size) {
   if (!Initialized)
     initialize();
@@ -222,9 +210,7 @@ int OProfileWrapper::op_write_native_code(const char* Name,
 }
 
 int OProfileWrapper::op_write_debug_line_info(
-  void const* Code,
-  size_t NumEntries,
-  struct debug_line_info const* Info) {
+    void const *Code, size_t NumEntries, struct debug_line_info const *Info) {
   if (!Initialized)
     initialize();
 
@@ -254,7 +240,7 @@ int OProfileWrapper::op_minor_version() {
   return -1;
 }
 
-int  OProfileWrapper::op_unload_native_code(uint64_t Addr) {
+int OProfileWrapper::op_unload_native_code(uint64_t Addr) {
   if (!Initialized)
     initialize();
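[Editor's note: `checkForOProfileProcEntry`, reformatted above, walks `/proc`, reads each entry's `cmdline`, and compares the basename of argv[0] against the daemon names. A C++17 sketch of the same idea, parameterized on the root directory so it can be exercised against a fake tree rather than a live `/proc`; this is an illustration, not the OProfileWrapper code.]

```cpp
#include <cassert>
#include <filesystem>
#include <fstream>
#include <string>

inline bool findDaemon(const std::filesystem::path &ProcRoot,
                       const std::string &DaemonName) {
  namespace fs = std::filesystem;
  for (const auto &Entry : fs::directory_iterator(ProcRoot)) {
    if (!Entry.is_directory())
      continue;
    // cmdline holds NUL-separated argv strings; read argv[0].
    std::ifstream CmdLine(Entry.path() / "cmdline", std::ios::binary);
    std::string Argv0;
    std::getline(CmdLine, Argv0, '\0');
    auto Slash = Argv0.find_last_of('/');
    std::string Base =
        Slash == std::string::npos ? Argv0 : Argv0.substr(Slash + 1);
    if (Base == DaemonName)
      return true;
  }
  return false;
}
```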
 
diff --git a/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt b/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt
index c15c2eac0d044d2..c0c18d18c04ab50 100644
--- a/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/Orc/CMakeLists.txt
@@ -1,88 +1,41 @@
 if (NOT HAVE_CXX_ATOMICS64_WITHOUT_LIB)
-  set (atomic_lib atomic)
-endif()
+  set(atomic_lib atomic)
+endif()
 
-if( CMAKE_HOST_UNIX AND HAVE_LIBRT )
-  set(rt_lib rt)
-endif()
+if (CMAKE_HOST_UNIX AND HAVE_LIBRT)
+  set(rt_lib rt)
+endif()
 
-add_llvm_component_library(LLVMOrcJIT
-  COFFVCRuntimeSupport.cpp
-  COFFPlatform.cpp
-  CompileOnDemandLayer.cpp
-  CompileUtils.cpp
-  Core.cpp
-  DebugObjectManagerPlugin.cpp
-  DebuggerSupportPlugin.cpp
-  DebugUtils.cpp
-  EPCDynamicLibrarySearchGenerator.cpp
-  EPCDebugObjectRegistrar.cpp
-  EPCEHFrameRegistrar.cpp
-  EPCGenericDylibManager.cpp
-  EPCGenericJITLinkMemoryManager.cpp
-  EPCGenericRTDyldMemoryManager.cpp
-  EPCIndirectionUtils.cpp
-  ExecutionUtils.cpp
-  ObjectFileInterface.cpp
-  IndirectionUtils.cpp
-  IRCompileLayer.cpp
-  IRTransformLayer.cpp
-  JITTargetMachineBuilder.cpp
-  LazyReexports.cpp
-  Layer.cpp
-  LookupAndRecordAddrs.cpp
-  LLJIT.cpp
-  MachOPlatform.cpp
-  MapperJITLinkMemoryManager.cpp
-  MemoryMapper.cpp
-  ELFNixPlatform.cpp
-  Mangling.cpp
-  ObjectLinkingLayer.cpp
-  ObjectTransformLayer.cpp
-  OrcABISupport.cpp
-  OrcV2CBindings.cpp
-  RTDyldObjectLinkingLayer.cpp
-  SimpleRemoteEPC.cpp
-  Speculation.cpp
-  SpeculateAnalyses.cpp
-  ExecutorProcessControl.cpp
-  TaskDispatch.cpp
-  ThreadSafeModule.cpp
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine/Orc
+add_llvm_component_library(LLVMOrcJIT
+  COFFVCRuntimeSupport.cpp COFFPlatform.cpp CompileOnDemandLayer.cpp
+  CompileUtils.cpp Core.cpp DebugObjectManagerPlugin.cpp
+  DebuggerSupportPlugin.cpp DebugUtils.cpp
+  EPCDynamicLibrarySearchGenerator.cpp EPCDebugObjectRegistrar.cpp
+  EPCEHFrameRegistrar.cpp EPCGenericDylibManager.cpp
+  EPCGenericJITLinkMemoryManager.cpp EPCGenericRTDyldMemoryManager.cpp
+  EPCIndirectionUtils.cpp ExecutionUtils.cpp ObjectFileInterface.cpp
+  IndirectionUtils.cpp IRCompileLayer.cpp IRTransformLayer.cpp
+  JITTargetMachineBuilder.cpp LazyReexports.cpp Layer.cpp
+  LookupAndRecordAddrs.cpp LLJIT.cpp MachOPlatform.cpp
+  MapperJITLinkMemoryManager.cpp MemoryMapper.cpp ELFNixPlatform.cpp
+  Mangling.cpp ObjectLinkingLayer.cpp ObjectTransformLayer.cpp
+  OrcABISupport.cpp OrcV2CBindings.cpp RTDyldObjectLinkingLayer.cpp
+  SimpleRemoteEPC.cpp Speculation.cpp SpeculateAnalyses.cpp
+  ExecutorProcessControl.cpp TaskDispatch.cpp ThreadSafeModule.cpp
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/ExecutionEngine/Orc
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_LIBS
-  ${LLVM_PTHREAD_LIB}
-  ${rt_lib}
-  ${atomic_lib}
+  LINK_LIBS ${LLVM_PTHREAD_LIB} ${rt_lib} ${atomic_lib}
 
-  LINK_COMPONENTS
-  Core
-  ExecutionEngine
-  JITLink
-  Object
-  OrcShared
-  OrcTargetProcess
-  WindowsDriver
-  MC
-  Passes
-  RuntimeDyld
-  Support
-  Target
-  TargetParser
-  TransformUtils
-  )
+  LINK_COMPONENTS Core ExecutionEngine JITLink Object OrcShared
+  OrcTargetProcess WindowsDriver MC Passes RuntimeDyld Support
+  Target TargetParser TransformUtils)
 
-add_subdirectory(Shared)
-add_subdirectory(TargetProcess)
+add_subdirectory(Shared)
+add_subdirectory(TargetProcess)
 
-target_link_libraries(LLVMOrcJIT
-  PRIVATE
-  LLVMAnalysis
-  LLVMBitReader
-  LLVMBitWriter
-  LLVMPasses
-  )
+target_link_libraries(LLVMOrcJIT PRIVATE
+  LLVMAnalysis LLVMBitReader LLVMBitWriter LLVMPasses)
diff --git a/llvm/lib/ExecutionEngine/Orc/COFFPlatform.cpp b/llvm/lib/ExecutionEngine/Orc/COFFPlatform.cpp
index 7c869bead0b0028..1d7a685b280d150 100644
--- a/llvm/lib/ExecutionEngine/Orc/COFFPlatform.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/COFFPlatform.cpp
@@ -432,30 +432,30 @@ COFFPlatform::COFFPlatform(
     return;
   }
 
-  for (auto& Lib : DylibsToPreload)
+  for (auto &Lib : DylibsToPreload)
     if (auto E2 = this->LoadDynLibrary(PlatformJD, Lib)) {
       Err = std::move(E2);
       return;
     }
 
   if (StaticVCRuntime)
-      if (auto E2 = VCRuntimeBootstrap->initializeStaticVCRuntime(PlatformJD)) {
-          Err = std::move(E2);
-          return;
-      }
+    if (auto E2 = VCRuntimeBootstrap->initializeStaticVCRuntime(PlatformJD)) {
+      Err = std::move(E2);
+      return;
+    }
 
   // Associate wrapper function tags with JIT-side function implementations.
   if (auto E2 = associateRuntimeSupportFunctions(PlatformJD)) {
-      Err = std::move(E2);
-      return;
+    Err = std::move(E2);
+    return;
   }
 
   // Lookup addresses of runtime functions callable by the platform,
   // call the platform bootstrap function to initialize the platform-state
   // object in the executor.
   if (auto E2 = bootstrapCOFFRuntime(PlatformJD)) {
-      Err = std::move(E2);
-      return;
+    Err = std::move(E2);
+    return;
   }
 
   Bootstrapping.store(false);
diff --git a/llvm/lib/ExecutionEngine/Orc/CompileOnDemandLayer.cpp b/llvm/lib/ExecutionEngine/Orc/CompileOnDemandLayer.cpp
index 6448adaa0ceb36f..4c05c548f1f465f 100644
--- a/llvm/lib/ExecutionEngine/Orc/CompileOnDemandLayer.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/CompileOnDemandLayer.cpp
@@ -315,53 +315,53 @@ void CompileOnDemandLayer::emitPartition(
   //
   // FIXME: We apply this promotion once per partitioning. It's safe, but
   // overkill.
-  auto ExtractedTSM =
-      TSM.withModuleDo([&](Module &M) -> Expected<ThreadSafeModule> {
-        auto PromotedGlobals = PromoteSymbols(M);
-        if (!PromotedGlobals.empty()) {
-
-          MangleAndInterner Mangle(ES, M.getDataLayout());
-          SymbolFlagsMap SymbolFlags;
-          IRSymbolMapper::add(ES, *getManglingOptions(),
-                              PromotedGlobals, SymbolFlags);
-
-          if (auto Err = R->defineMaterializing(SymbolFlags))
-            return std::move(Err);
-        }
-
-        expandPartition(*GVsToExtract);
-
-        // Submodule name is given by hashing the names of the globals.
-        std::string SubModuleName;
-        {
-          std::vector<const GlobalValue*> HashGVs;
-          HashGVs.reserve(GVsToExtract->size());
-          for (const auto *GV : *GVsToExtract)
-            HashGVs.push_back(GV);
-          llvm::sort(HashGVs, [](const GlobalValue *LHS, const GlobalValue *RHS) {
-              return LHS->getName() < RHS->getName();
-            });
-          hash_code HC(0);
-          for (const auto *GV : HashGVs) {
-            assert(GV->hasName() && "All GVs to extract should be named by now");
-            auto GVName = GV->getName();
-            HC = hash_combine(HC, hash_combine_range(GVName.begin(), GVName.end()));
-          }
-          raw_string_ostream(SubModuleName)
-            << ".submodule."
-            << formatv(sizeof(size_t) == 8 ? "{0:x16}" : "{0:x8}",
-                       static_cast<size_t>(HC))
-            << ".ll";
-        }
-
-        // Extract the requested partiton (plus any necessary aliases) and
-        // put the rest back into the impl dylib.
-        auto ShouldExtract = [&](const GlobalValue &GV) -> bool {
-          return GVsToExtract->count(&GV);
-        };
-
-        return extractSubModule(TSM, SubModuleName , ShouldExtract);
+  auto ExtractedTSM = TSM.withModuleDo([&](Module &M)
+                                           -> Expected<ThreadSafeModule> {
+    auto PromotedGlobals = PromoteSymbols(M);
+    if (!PromotedGlobals.empty()) {
+
+      MangleAndInterner Mangle(ES, M.getDataLayout());
+      SymbolFlagsMap SymbolFlags;
+      IRSymbolMapper::add(ES, *getManglingOptions(), PromotedGlobals,
+                          SymbolFlags);
+
+      if (auto Err = R->defineMaterializing(SymbolFlags))
+        return std::move(Err);
+    }
+
+    expandPartition(*GVsToExtract);
+
+    // Submodule name is given by hashing the names of the globals.
+    std::string SubModuleName;
+    {
+      std::vector<const GlobalValue *> HashGVs;
+      HashGVs.reserve(GVsToExtract->size());
+      for (const auto *GV : *GVsToExtract)
+        HashGVs.push_back(GV);
+      llvm::sort(HashGVs, [](const GlobalValue *LHS, const GlobalValue *RHS) {
+        return LHS->getName() < RHS->getName();
       });
+      hash_code HC(0);
+      for (const auto *GV : HashGVs) {
+        assert(GV->hasName() && "All GVs to extract should be named by now");
+        auto GVName = GV->getName();
+        HC = hash_combine(HC, hash_combine_range(GVName.begin(), GVName.end()));
+      }
+      raw_string_ostream(SubModuleName)
+          << ".submodule."
+          << formatv(sizeof(size_t) == 8 ? "{0:x16}" : "{0:x8}",
+                     static_cast<size_t>(HC))
+          << ".ll";
+    }
+
+    // Extract the requested partition (plus any necessary aliases) and
+    // put the rest back into the impl dylib.
+    auto ShouldExtract = [&](const GlobalValue &GV) -> bool {
+      return GVsToExtract->count(&GV);
+    };
+
+    return extractSubModule(TSM, SubModuleName, ShouldExtract);
+  });
 
   if (!ExtractedTSM) {
     ES.reportError(ExtractedTSM.takeError());
diff --git a/llvm/lib/ExecutionEngine/Orc/Core.cpp b/llvm/lib/ExecutionEngine/Orc/Core.cpp
index f4c0ecf784cdea4..692f05bbfa9237f 100644
--- a/llvm/lib/ExecutionEngine/Orc/Core.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/Core.cpp
@@ -144,8 +144,7 @@ std::error_code MissingSymbolDefinitions::convertToErrorCode() const {
 }
 
 void MissingSymbolDefinitions::log(raw_ostream &OS) const {
-  OS << "Missing definitions in module " << ModuleName
-     << ": " << Symbols;
+  OS << "Missing definitions in module " << ModuleName << ": " << Symbols;
 }
 
 std::error_code UnexpectedSymbolDefinitions::convertToErrorCode() const {
@@ -153,8 +152,7 @@ std::error_code UnexpectedSymbolDefinitions::convertToErrorCode() const {
 }
 
 void UnexpectedSymbolDefinitions::log(raw_ostream &OS) const {
-  OS << "Unexpected definitions in module " << ModuleName
-     << ": " << Symbols;
+  OS << "Unexpected definitions in module " << ModuleName << ": " << Symbols;
 }
 
 AsynchronousSymbolQuery::AsynchronousSymbolQuery(
@@ -742,7 +740,7 @@ JITDylib::defineMaterializing(MaterializationResponsibility &FromMR,
         continue;
       } else
         EntryItr =
-          Symbols.insert(std::make_pair(Name, SymbolTableEntry(Flags))).first;
+            Symbols.insert(std::make_pair(Name, SymbolTableEntry(Flags))).first;
 
       AddedSyms.push_back(NonOwningSymbolStringPtr(Name));
       EntryItr->second.setState(SymbolState::Materializing);
@@ -764,63 +762,62 @@ Error JITDylib::replace(MaterializationResponsibility &FromMR,
   std::unique_ptr<MaterializationUnit> MustRunMU;
   std::unique_ptr<MaterializationResponsibility> MustRunMR;
 
-  auto Err =
-      ES.runSessionLocked([&, this]() -> Error {
-        if (FromMR.RT->isDefunct())
-          return make_error<ResourceTrackerDefunct>(std::move(FromMR.RT));
+  auto Err = ES.runSessionLocked([&, this]() -> Error {
+    if (FromMR.RT->isDefunct())
+      return make_error<ResourceTrackerDefunct>(std::move(FromMR.RT));
 
 #ifndef NDEBUG
-        for (auto &KV : MU->getSymbols()) {
-          auto SymI = Symbols.find(KV.first);
-          assert(SymI != Symbols.end() && "Replacing unknown symbol");
-          assert(SymI->second.getState() == SymbolState::Materializing &&
-                 "Can not replace a symbol that ha is not materializing");
-          assert(!SymI->second.hasMaterializerAttached() &&
-                 "Symbol should not have materializer attached already");
-          assert(UnmaterializedInfos.count(KV.first) == 0 &&
-                 "Symbol being replaced should have no UnmaterializedInfo");
-        }
+    for (auto &KV : MU->getSymbols()) {
+      auto SymI = Symbols.find(KV.first);
+      assert(SymI != Symbols.end() && "Replacing unknown symbol");
+      assert(SymI->second.getState() == SymbolState::Materializing &&
+             "Can not replace a symbol that is not materializing");
+      assert(!SymI->second.hasMaterializerAttached() &&
+             "Symbol should not have materializer attached already");
+      assert(UnmaterializedInfos.count(KV.first) == 0 &&
+             "Symbol being replaced should have no UnmaterializedInfo");
+    }
 #endif // NDEBUG
 
-        // If the tracker is defunct we need to bail out immediately.
-
-        // If any symbol has pending queries against it then we need to
-        // materialize MU immediately.
-        for (auto &KV : MU->getSymbols()) {
-          auto MII = MaterializingInfos.find(KV.first);
-          if (MII != MaterializingInfos.end()) {
-            if (MII->second.hasQueriesPending()) {
-              MustRunMR = ES.createMaterializationResponsibility(
-                  *FromMR.RT, std::move(MU->SymbolFlags),
-                  std::move(MU->InitSymbol));
-              MustRunMU = std::move(MU);
-              return Error::success();
-            }
-          }
+    // If the tracker is defunct we need to bail out immediately.
+
+    // If any symbol has pending queries against it then we need to
+    // materialize MU immediately.
+    for (auto &KV : MU->getSymbols()) {
+      auto MII = MaterializingInfos.find(KV.first);
+      if (MII != MaterializingInfos.end()) {
+        if (MII->second.hasQueriesPending()) {
+          MustRunMR = ES.createMaterializationResponsibility(
+              *FromMR.RT, std::move(MU->SymbolFlags),
+              std::move(MU->InitSymbol));
+          MustRunMU = std::move(MU);
+          return Error::success();
         }
+      }
+    }
 
-        // Otherwise, make MU responsible for all the symbols.
-        auto UMI = std::make_shared<UnmaterializedInfo>(std::move(MU),
-                                                        FromMR.RT.get());
-        for (auto &KV : UMI->MU->getSymbols()) {
-          auto SymI = Symbols.find(KV.first);
-          assert(SymI->second.getState() == SymbolState::Materializing &&
-                 "Can not replace a symbol that is not materializing");
-          assert(!SymI->second.hasMaterializerAttached() &&
-                 "Can not replace a symbol that has a materializer attached");
-          assert(UnmaterializedInfos.count(KV.first) == 0 &&
-                 "Unexpected materializer entry in map");
-          SymI->second.setAddress(SymI->second.getAddress());
-          SymI->second.setMaterializerAttached(true);
-
-          auto &UMIEntry = UnmaterializedInfos[KV.first];
-          assert((!UMIEntry || !UMIEntry->MU) &&
-                 "Replacing symbol with materializer still attached");
-          UMIEntry = UMI;
-        }
+    // Otherwise, make MU responsible for all the symbols.
+    auto UMI =
+        std::make_shared<UnmaterializedInfo>(std::move(MU), FromMR.RT.get());
+    for (auto &KV : UMI->MU->getSymbols()) {
+      auto SymI = Symbols.find(KV.first);
+      assert(SymI->second.getState() == SymbolState::Materializing &&
+             "Can not replace a symbol that is not materializing");
+      assert(!SymI->second.hasMaterializerAttached() &&
+             "Can not replace a symbol that has a materializer attached");
+      assert(UnmaterializedInfos.count(KV.first) == 0 &&
+             "Unexpected materializer entry in map");
+      SymI->second.setAddress(SymI->second.getAddress());
+      SymI->second.setMaterializerAttached(true);
+
+      auto &UMIEntry = UnmaterializedInfos[KV.first];
+      assert((!UMIEntry || !UMIEntry->MU) &&
+             "Replacing symbol with materializer still attached");
+      UMIEntry = UMI;
+    }
 
-        return Error::success();
-      });
+    return Error::success();
+  });
 
   if (Err)
     return Err;
@@ -3156,8 +3153,8 @@ void ExecutionSession::OL_addDependenciesForAll(
     MaterializationResponsibility &MR,
     const SymbolDependenceMap &Dependencies) {
   LLVM_DEBUG({
-    dbgs() << "Adding dependencies for all symbols in " << MR.SymbolFlags << ": "
-           << Dependencies << "\n";
+    dbgs() << "Adding dependencies for all symbols in " << MR.SymbolFlags
+           << ": " << Dependencies << "\n";
   });
   for (auto &KV : MR.SymbolFlags)
     MR.JD.addDependencies(KV.first, Dependencies);
diff --git a/llvm/lib/ExecutionEngine/Orc/DebugObjectManagerPlugin.cpp b/llvm/lib/ExecutionEngine/Orc/DebugObjectManagerPlugin.cpp
index acbf33888adee51..56f02bc02e0dd08 100644
--- a/llvm/lib/ExecutionEngine/Orc/DebugObjectManagerPlugin.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/DebugObjectManagerPlugin.cpp
@@ -458,28 +458,27 @@ Error DebugObjectManagerPlugin::notifyEmitted(
   std::promise<MSVCPError> FinalizePromise;
   std::future<MSVCPError> FinalizeErr = FinalizePromise.get_future();
 
-  It->second->finalizeAsync(
-      [this, &FinalizePromise, &MR](Expected<ExecutorAddrRange> TargetMem) {
-        // Any failure here will fail materialization.
-        if (!TargetMem) {
-          FinalizePromise.set_value(TargetMem.takeError());
-          return;
-        }
-        if (Error Err =
-                Target->registerDebugObject(*TargetMem, AutoRegisterCode)) {
-          FinalizePromise.set_value(std::move(Err));
-          return;
-        }
-
-        // Once our tracking info is updated, notifyEmitted() can return and
-        // finish materialization.
-        FinalizePromise.set_value(MR.withResourceKeyDo([&](ResourceKey K) {
-          assert(PendingObjs.count(&MR) && "We still hold PendingObjsLock");
-          std::lock_guard<std::mutex> Lock(RegisteredObjsLock);
-          RegisteredObjs[K].push_back(std::move(PendingObjs[&MR]));
-          PendingObjs.erase(&MR);
-        }));
-      });
+  It->second->finalizeAsync([this, &FinalizePromise,
+                             &MR](Expected<ExecutorAddrRange> TargetMem) {
+    // Any failure here will fail materialization.
+    if (!TargetMem) {
+      FinalizePromise.set_value(TargetMem.takeError());
+      return;
+    }
+    if (Error Err = Target->registerDebugObject(*TargetMem, AutoRegisterCode)) {
+      FinalizePromise.set_value(std::move(Err));
+      return;
+    }
+
+    // Once our tracking info is updated, notifyEmitted() can return and
+    // finish materialization.
+    FinalizePromise.set_value(MR.withResourceKeyDo([&](ResourceKey K) {
+      assert(PendingObjs.count(&MR) && "We still hold PendingObjsLock");
+      std::lock_guard<std::mutex> Lock(RegisteredObjsLock);
+      RegisteredObjs[K].push_back(std::move(PendingObjs[&MR]));
+      PendingObjs.erase(&MR);
+    }));
+  });
 
   return FinalizeErr.get();
 }
diff --git a/llvm/lib/ExecutionEngine/Orc/ELFNixPlatform.cpp b/llvm/lib/ExecutionEngine/Orc/ELFNixPlatform.cpp
index c08b8b037fa298d..d34a8ed27f1ae81 100644
--- a/llvm/lib/ExecutionEngine/Orc/ELFNixPlatform.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/ELFNixPlatform.cpp
@@ -428,9 +428,8 @@ void ELFNixPlatform::rt_getDeinitializers(
 void ELFNixPlatform::rt_lookupSymbol(SendSymbolAddressFn SendResult,
                                      ExecutorAddr Handle,
                                      StringRef SymbolName) {
-  LLVM_DEBUG({
-    dbgs() << "ELFNixPlatform::rt_lookupSymbol(\"" << Handle << "\")\n";
-  });
+  LLVM_DEBUG(
+      { dbgs() << "ELFNixPlatform::rt_lookupSymbol(\"" << Handle << "\")\n"; });
 
   JITDylib *JD = nullptr;
 
diff --git a/llvm/lib/ExecutionEngine/Orc/EPCGenericJITLinkMemoryManager.cpp b/llvm/lib/ExecutionEngine/Orc/EPCGenericJITLinkMemoryManager.cpp
index b05f08fd7cdfe82..3e84ca8af047fde 100644
--- a/llvm/lib/ExecutionEngine/Orc/EPCGenericJITLinkMemoryManager.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/EPCGenericJITLinkMemoryManager.cpp
@@ -22,7 +22,6 @@ namespace orc {
 class EPCGenericJITLinkMemoryManager::InFlightAlloc
     : public jitlink::JITLinkMemoryManager::InFlightAlloc {
 public:
-
   // FIXME: The C++98 initializer is an attempt to work around compile failures
   // due to http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.html#1397.
   // We should be able to switch this back to member initialization once that
diff --git a/llvm/lib/ExecutionEngine/Orc/ExecutionUtils.cpp b/llvm/lib/ExecutionEngine/Orc/ExecutionUtils.cpp
index 8ed9cadd2abdced..3d95e0d81e80fb1 100644
--- a/llvm/lib/ExecutionEngine/Orc/ExecutionUtils.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/ExecutionUtils.cpp
@@ -24,10 +24,9 @@ namespace llvm {
 namespace orc {
 
 CtorDtorIterator::CtorDtorIterator(const GlobalVariable *GV, bool End)
-  : InitList(
-      GV ? dyn_cast_or_null<ConstantArray>(GV->getInitializer()) : nullptr),
-    I((InitList && End) ? InitList->getNumOperands() : 0) {
-}
+    : InitList(GV ? dyn_cast_or_null<ConstantArray>(GV->getInitializer())
+                  : nullptr),
+      I((InitList && End) ? InitList->getNumOperands() : 0) {}
 
 bool CtorDtorIterator::operator==(const CtorDtorIterator &Other) const {
   assert(InitList == Other.InitList && "Incomparable iterators.");
@@ -38,7 +37,7 @@ bool CtorDtorIterator::operator!=(const CtorDtorIterator &Other) const {
   return !(*this == Other);
 }
 
-CtorDtorIterator& CtorDtorIterator::operator++() {
+CtorDtorIterator &CtorDtorIterator::operator++() {
   ++I;
   return *this;
 }
@@ -167,7 +166,7 @@ Error CtorDtorRunner::run() {
 }
 
 void LocalCXXRuntimeOverridesBase::runDestructors() {
-  auto& CXXDestructorDataPairs = DSOHandleOverride;
+  auto &CXXDestructorDataPairs = DSOHandleOverride;
   for (auto &P : CXXDestructorDataPairs)
     P.first(P.second);
   CXXDestructorDataPairs.clear();
@@ -176,14 +175,14 @@ void LocalCXXRuntimeOverridesBase::runDestructors() {
 int LocalCXXRuntimeOverridesBase::CXAAtExitOverride(DestructorPtr Destructor,
                                                     void *Arg,
                                                     void *DSOHandle) {
-  auto& CXXDestructorDataPairs =
-    *reinterpret_cast<CXXDestructorDataPairList*>(DSOHandle);
+  auto &CXXDestructorDataPairs =
+      *reinterpret_cast<CXXDestructorDataPairList *>(DSOHandle);
   CXXDestructorDataPairs.push_back(std::make_pair(Destructor, Arg));
   return 0;
 }
 
 Error LocalCXXRuntimeOverrides::enable(JITDylib &JD,
-                                        MangleAndInterner &Mangle) {
+                                       MangleAndInterner &Mangle) {
   SymbolMap RuntimeInterposes;
   RuntimeInterposes[Mangle("__dso_handle")] = {
       ExecutorAddr::fromPtr(&DSOHandleOverride), JITSymbolFlags::Exported};
diff --git a/llvm/lib/ExecutionEngine/Orc/ExecutorProcessControl.cpp b/llvm/lib/ExecutionEngine/Orc/ExecutorProcessControl.cpp
index fc928f2e6146bf5..d0bf469d011f908 100644
--- a/llvm/lib/ExecutionEngine/Orc/ExecutorProcessControl.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/ExecutorProcessControl.cpp
@@ -53,8 +53,7 @@ SelfExecutorProcessControl::SelfExecutorProcessControl(
 
 Expected<std::unique_ptr<SelfExecutorProcessControl>>
 SelfExecutorProcessControl::Create(
-    std::shared_ptr<SymbolStringPool> SSP,
-    std::unique_ptr<TaskDispatcher> D,
+    std::shared_ptr<SymbolStringPool> SSP, std::unique_ptr<TaskDispatcher> D,
     std::unique_ptr<jitlink::JITLinkMemoryManager> MemMgr) {
 
   if (!SSP)
diff --git a/llvm/lib/ExecutionEngine/Orc/IndirectionUtils.cpp b/llvm/lib/ExecutionEngine/Orc/IndirectionUtils.cpp
index a0d81cdf2086722..6dbbc8e40c5b3eb 100644
--- a/llvm/lib/ExecutionEngine/Orc/IndirectionUtils.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/IndirectionUtils.cpp
@@ -126,131 +126,126 @@ createLocalCompileCallbackManager(const Triple &T, ExecutionSession &ES,
   case Triple::aarch64_32: {
     typedef orc::LocalJITCompileCallbackManager<orc::OrcAArch64> CCMgrT;
     return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  }
 
-    case Triple::x86: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcI386> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  case Triple::x86: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcI386> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
 
-    case Triple::loongarch64: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcLoongArch64> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  case Triple::loongarch64: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcLoongArch64> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
 
-    case Triple::mips: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcMips32Be> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
-    case Triple::mipsel: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcMips32Le> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  case Triple::mips: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcMips32Be> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
+  case Triple::mipsel: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcMips32Le> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
 
-    case Triple::mips64:
-    case Triple::mips64el: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcMips64> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  case Triple::mips64:
+  case Triple::mips64el: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcMips64> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
 
-    case Triple::riscv64: {
-      typedef orc::LocalJITCompileCallbackManager<orc::OrcRiscv64> CCMgrT;
-      return CCMgrT::Create(ES, ErrorHandlerAddress);
-    }
+  case Triple::riscv64: {
+    typedef orc::LocalJITCompileCallbackManager<orc::OrcRiscv64> CCMgrT;
+    return CCMgrT::Create(ES, ErrorHandlerAddress);
+  }
 
-    case Triple::x86_64: {
-      if (T.getOS() == Triple::OSType::Win32) {
-        typedef orc::LocalJITCompileCallbackManager<orc::OrcX86_64_Win32> CCMgrT;
-        return CCMgrT::Create(ES, ErrorHandlerAddress);
-      } else {
-        typedef orc::LocalJITCompileCallbackManager<orc::OrcX86_64_SysV> CCMgrT;
-        return CCMgrT::Create(ES, ErrorHandlerAddress);
-      }
+  case Triple::x86_64: {
+    if (T.getOS() == Triple::OSType::Win32) {
+      typedef orc::LocalJITCompileCallbackManager<orc::OrcX86_64_Win32> CCMgrT;
+      return CCMgrT::Create(ES, ErrorHandlerAddress);
+    } else {
+      typedef orc::LocalJITCompileCallbackManager<orc::OrcX86_64_SysV> CCMgrT;
+      return CCMgrT::Create(ES, ErrorHandlerAddress);
     }
-
+  }
   }
 }
 
 std::function<std::unique_ptr<IndirectStubsManager>()>
 createLocalIndirectStubsManagerBuilder(const Triple &T) {
   switch (T.getArch()) {
-    default:
-      return [](){
-        return std::make_unique<
-                       orc::LocalIndirectStubsManager<orc::OrcGenericABI>>();
-      };
-
-    case Triple::aarch64:
-    case Triple::aarch64_32:
-      return [](){
-        return std::make_unique<
-                       orc::LocalIndirectStubsManager<orc::OrcAArch64>>();
-      };
-
-    case Triple::x86:
-      return [](){
-        return std::make_unique<
-                       orc::LocalIndirectStubsManager<orc::OrcI386>>();
-      };
+  default:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcGenericABI>>();
+    };
 
-    case Triple::loongarch64:
+  case Triple::aarch64:
+  case Triple::aarch64_32:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcAArch64>>();
+    };
+
+  case Triple::x86:
+    return []() {
+      return std::make_unique<orc::LocalIndirectStubsManager<orc::OrcI386>>();
+    };
+
+  case Triple::loongarch64:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcLoongArch64>>();
+    };
+
+  case Triple::mips:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcMips32Be>>();
+    };
+
+  case Triple::mipsel:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcMips32Le>>();
+    };
+
+  case Triple::mips64:
+  case Triple::mips64el:
+    return []() {
+      return std::make_unique<orc::LocalIndirectStubsManager<orc::OrcMips64>>();
+    };
+
+  case Triple::riscv64:
+    return []() {
+      return std::make_unique<
+          orc::LocalIndirectStubsManager<orc::OrcRiscv64>>();
+    };
+
+  case Triple::x86_64:
+    if (T.getOS() == Triple::OSType::Win32) {
       return []() {
         return std::make_unique<
-            orc::LocalIndirectStubsManager<orc::OrcLoongArch64>>();
-      };
-
-    case Triple::mips:
-      return [](){
-          return std::make_unique<
-                      orc::LocalIndirectStubsManager<orc::OrcMips32Be>>();
-      };
-
-    case Triple::mipsel:
-      return [](){
-          return std::make_unique<
-                      orc::LocalIndirectStubsManager<orc::OrcMips32Le>>();
+            orc::LocalIndirectStubsManager<orc::OrcX86_64_Win32>>();
       };
-
-    case Triple::mips64:
-    case Triple::mips64el:
-      return [](){
-          return std::make_unique<
-                      orc::LocalIndirectStubsManager<orc::OrcMips64>>();
-      };
-
-    case Triple::riscv64:
+    } else {
       return []() {
         return std::make_unique<
-            orc::LocalIndirectStubsManager<orc::OrcRiscv64>>();
+            orc::LocalIndirectStubsManager<orc::OrcX86_64_SysV>>();
       };
-
-    case Triple::x86_64:
-      if (T.getOS() == Triple::OSType::Win32) {
-        return [](){
-          return std::make_unique<
-                     orc::LocalIndirectStubsManager<orc::OrcX86_64_Win32>>();
-        };
-      } else {
-        return [](){
-          return std::make_unique<
-                     orc::LocalIndirectStubsManager<orc::OrcX86_64_SysV>>();
-        };
-      }
-
+    }
   }
 }
 
-Constant* createIRTypedAddress(FunctionType &FT, ExecutorAddr Addr) {
+Constant *createIRTypedAddress(FunctionType &FT, ExecutorAddr Addr) {
   Constant *AddrIntVal =
-    ConstantInt::get(Type::getInt64Ty(FT.getContext()), Addr.getValue());
-  Constant *AddrPtrVal =
-    ConstantExpr::getCast(Instruction::IntToPtr, AddrIntVal,
-                          PointerType::get(&FT, 0));
+      ConstantInt::get(Type::getInt64Ty(FT.getContext()), Addr.getValue());
+  Constant *AddrPtrVal = ConstantExpr::getCast(
+      Instruction::IntToPtr, AddrIntVal, PointerType::get(&FT, 0));
   return AddrPtrVal;
 }
 
-GlobalVariable* createImplPointer(PointerType &PT, Module &M,
-                                  const Twine &Name, Constant *Initializer) {
+GlobalVariable *createImplPointer(PointerType &PT, Module &M, const Twine &Name,
+                                  Constant *Initializer) {
   auto IP = new GlobalVariable(M, &PT, false, GlobalValue::ExternalLinkage,
                                Initializer, Name, nullptr,
                                GlobalValue::NotThreadLocal, 0, true);
@@ -265,7 +260,7 @@ void makeStub(Function &F, Value &ImplPointer) {
   BasicBlock *EntryBlock = BasicBlock::Create(M.getContext(), "entry", &F);
   IRBuilder<> Builder(EntryBlock);
   LoadInst *ImplAddr = Builder.CreateLoad(F.getType(), &ImplPointer);
-  std::vector<Value*> CallArgs;
+  std::vector<Value *> CallArgs;
   for (auto &A : F.args())
     CallArgs.push_back(&A);
   CallInst *Call = Builder.CreateCall(F.getFunctionType(), ImplAddr, CallArgs);
@@ -307,11 +302,10 @@ std::vector<GlobalValue *> SymbolLinkagePromoter::operator()(Module &M) {
   return PromotedGlobals;
 }
 
-Function* cloneFunctionDecl(Module &Dst, const Function &F,
+Function *cloneFunctionDecl(Module &Dst, const Function &F,
                             ValueToValueMapTy *VMap) {
-  Function *NewF =
-    Function::Create(cast<FunctionType>(F.getValueType()),
-                     F.getLinkage(), F.getName(), &Dst);
+  Function *NewF = Function::Create(cast<FunctionType>(F.getValueType()),
+                                    F.getLinkage(), F.getName(), &Dst);
   NewF->copyAttributesFrom(&F);
 
   if (VMap) {
@@ -325,19 +319,19 @@ Function* cloneFunctionDecl(Module &Dst, const Function &F,
   return NewF;
 }
 
-GlobalVariable* cloneGlobalVariableDecl(Module &Dst, const GlobalVariable &GV,
+GlobalVariable *cloneGlobalVariableDecl(Module &Dst, const GlobalVariable &GV,
                                         ValueToValueMapTy *VMap) {
   GlobalVariable *NewGV = new GlobalVariable(
-      Dst, GV.getValueType(), GV.isConstant(),
-      GV.getLinkage(), nullptr, GV.getName(), nullptr,
-      GV.getThreadLocalMode(), GV.getType()->getAddressSpace());
+      Dst, GV.getValueType(), GV.isConstant(), GV.getLinkage(), nullptr,
+      GV.getName(), nullptr, GV.getThreadLocalMode(),
+      GV.getType()->getAddressSpace());
   NewGV->copyAttributesFrom(&GV);
   if (VMap)
     (*VMap)[&GV] = NewGV;
   return NewGV;
 }
 
-GlobalAlias* cloneGlobalAliasDecl(Module &Dst, const GlobalAlias &OrigA,
+GlobalAlias *cloneGlobalAliasDecl(Module &Dst, const GlobalAlias &OrigA,
                                   ValueToValueMapTy &VMap) {
   assert(OrigA.getAliasee() && "Original alias doesn't have an aliasee?");
   auto *NewA = GlobalAlias::create(OrigA.getValueType(),
diff --git a/llvm/lib/ExecutionEngine/Orc/JITTargetMachineBuilder.cpp b/llvm/lib/ExecutionEngine/Orc/JITTargetMachineBuilder.cpp
index b66f52f1ec5d692..6a892ead6c42fca 100644
--- a/llvm/lib/ExecutionEngine/Orc/JITTargetMachineBuilder.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/JITTargetMachineBuilder.cpp
@@ -99,8 +99,7 @@ void JITTargetMachineBuilderPrinter::print(raw_ostream &OS) const {
   } else
     OS << "unspecified (will use target default)";
 
-  OS << "\n"
-     << Indent << "  Code Model = ";
+  OS << "\n" << Indent << "  Code Model = ";
 
   if (JTMB.CM) {
     switch (*JTMB.CM) {
@@ -123,8 +122,7 @@ void JITTargetMachineBuilderPrinter::print(raw_ostream &OS) const {
   } else
     OS << "unspecified (will use target default)";
 
-  OS << "\n"
-     << Indent << "  Optimization Level = ";
+  OS << "\n" << Indent << "  Optimization Level = ";
   switch (JTMB.OptLevel) {
   case CodeGenOpt::None:
     OS << "None";
diff --git a/llvm/lib/ExecutionEngine/Orc/LLJIT.cpp b/llvm/lib/ExecutionEngine/Orc/LLJIT.cpp
index b92c30dd6b722f9..96463ff6451f1a5 100644
--- a/llvm/lib/ExecutionEngine/Orc/LLJIT.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/LLJIT.cpp
@@ -330,9 +330,8 @@ class GenericLLVMIRPlatformSupport : public LLJIT::PlatformSupport {
   }
 
   void registerInitFunc(JITDylib &JD, SymbolStringPtr InitName) {
-    getExecutionSession().runSessionLocked([&]() {
-        InitFunctions[&JD].add(InitName);
-      });
+    getExecutionSession().runSessionLocked(
+        [&]() { InitFunctions[&JD].add(InitName); });
   }
 
   void registerDeInitFunc(JITDylib &JD, SymbolStringPtr DeInitName) {
@@ -885,8 +884,8 @@ Error LLJIT::addObjectFile(JITDylib &JD, std::unique_ptr<MemoryBuffer> Obj) {
 Expected<ExecutorAddr> LLJIT::lookupLinkerMangled(JITDylib &JD,
                                                   SymbolStringPtr Name) {
   if (auto Sym = ES->lookup(
-        makeJITDylibSearchOrder(&JD, JITDylibLookupFlags::MatchAllSymbols),
-        Name))
+          makeJITDylibSearchOrder(&JD, JITDylibLookupFlags::MatchAllSymbols),
+          Name))
     return Sym->getAddress();
   else
     return Sym.takeError();
diff --git a/llvm/lib/ExecutionEngine/Orc/MachOPlatform.cpp b/llvm/lib/ExecutionEngine/Orc/MachOPlatform.cpp
index a3a766d602c1ae7..7378ef2e1516541 100644
--- a/llvm/lib/ExecutionEngine/Orc/MachOPlatform.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/MachOPlatform.cpp
@@ -667,9 +667,8 @@ void MachOPlatform::rt_pushInitializers(PushInitializersSendResultFn SendResult,
 
 void MachOPlatform::rt_lookupSymbol(SendSymbolAddressFn SendResult,
                                     ExecutorAddr Handle, StringRef SymbolName) {
-  LLVM_DEBUG({
-    dbgs() << "MachOPlatform::rt_lookupSymbol(\"" << Handle << "\")\n";
-  });
+  LLVM_DEBUG(
+      { dbgs() << "MachOPlatform::rt_lookupSymbol(\"" << Handle << "\")\n"; });
 
   JITDylib *JD = nullptr;
 
diff --git a/llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp b/llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp
index ca4950077ffe92a..4e0c6a2c94da3d8 100644
--- a/llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/MemoryMapper.cpp
@@ -305,7 +305,8 @@ char *SharedMemoryMapper::prepare(ExecutorAddr Addr, size_t ContentSize) {
 void SharedMemoryMapper::initialize(MemoryMapper::AllocInfo &AI,
                                     OnInitializedFunction OnInitialized) {
   auto Reservation = Reservations.upper_bound(AI.MappingBase);
-  assert(Reservation != Reservations.begin() && "Attempt to initialize unreserved range");
+  assert(Reservation != Reservations.begin() &&
+         "Attempt to initialize unreserved range");
   Reservation--;
 
   auto AllocationOffset = AI.MappingBase - Reservation->first;
diff --git a/llvm/lib/ExecutionEngine/Orc/ObjectLinkingLayer.cpp b/llvm/lib/ExecutionEngine/Orc/ObjectLinkingLayer.cpp
index a29f3d1c3aec8da..f626c08cd17d506 100644
--- a/llvm/lib/ExecutionEngine/Orc/ObjectLinkingLayer.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/ObjectLinkingLayer.cpp
@@ -165,7 +165,7 @@ class ObjectLinkingLayerJITLinkContext final : public JITLinkContext {
     for (auto &P : Layer.Plugins)
       P->notifyMaterializing(*MR, G, *this,
                              ObjBuffer ? ObjBuffer->getMemBufferRef()
-                             : MemoryBufferRef());
+                                       : MemoryBufferRef());
   }
 
   void notifyFailed(Error Err) override {
diff --git a/llvm/lib/ExecutionEngine/Orc/OrcABISupport.cpp b/llvm/lib/ExecutionEngine/Orc/OrcABISupport.cpp
index 6d568199378a025..2bb57620efad4e8 100644
--- a/llvm/lib/ExecutionEngine/Orc/OrcABISupport.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/OrcABISupport.cpp
@@ -49,79 +49,79 @@ void OrcAArch64::writeResolverCode(char *ResolverWorkingMem,
                                    ExecutorAddr ReentryCtxAddr) {
 
   const uint32_t ResolverCode[] = {
-    // resolver_entry:
-    0xa9bf47fd,        // 0x000:  stp  x29, x17, [sp, #-16]!
-    0x910003fd,        // 0x004:  mov  x29, sp
-    0xa9bf73fb,        // 0x008:  stp  x27, x28, [sp, #-16]!
-    0xa9bf6bf9,        // 0x00c:  stp  x25, x26, [sp, #-16]!
-    0xa9bf63f7,        // 0x010:  stp  x23, x24, [sp, #-16]!
-    0xa9bf5bf5,        // 0x014:  stp  x21, x22, [sp, #-16]!
-    0xa9bf53f3,        // 0x018:  stp  x19, x20, [sp, #-16]!
-    0xa9bf3fee,        // 0x01c:  stp  x14, x15, [sp, #-16]!
-    0xa9bf37ec,        // 0x020:  stp  x12, x13, [sp, #-16]!
-    0xa9bf2fea,        // 0x024:  stp  x10, x11, [sp, #-16]!
-    0xa9bf27e8,        // 0x028:  stp   x8,  x9, [sp, #-16]!
-    0xa9bf1fe6,        // 0x02c:  stp   x6,  x7, [sp, #-16]!
-    0xa9bf17e4,        // 0x030:  stp   x4,  x5, [sp, #-16]!
-    0xa9bf0fe2,        // 0x034:  stp   x2,  x3, [sp, #-16]!
-    0xa9bf07e0,        // 0x038:  stp   x0,  x1, [sp, #-16]!
-    0xadbf7ffe,        // 0x03c:  stp  q30, q31, [sp, #-32]!
-    0xadbf77fc,        // 0x040:  stp  q28, q29, [sp, #-32]!
-    0xadbf6ffa,        // 0x044:  stp  q26, q27, [sp, #-32]!
-    0xadbf67f8,        // 0x048:  stp  q24, q25, [sp, #-32]!
-    0xadbf5ff6,        // 0x04c:  stp  q22, q23, [sp, #-32]!
-    0xadbf57f4,        // 0x050:  stp  q20, q21, [sp, #-32]!
-    0xadbf4ff2,        // 0x054:  stp  q18, q19, [sp, #-32]!
-    0xadbf47f0,        // 0x058:  stp  q16, q17, [sp, #-32]!
-    0xadbf3fee,        // 0x05c:  stp  q14, q15, [sp, #-32]!
-    0xadbf37ec,        // 0x060:  stp  q12, q13, [sp, #-32]!
-    0xadbf2fea,        // 0x064:  stp  q10, q11, [sp, #-32]!
-    0xadbf27e8,        // 0x068:  stp   q8,  q9, [sp, #-32]!
-    0xadbf1fe6,        // 0x06c:  stp   q6,  q7, [sp, #-32]!
-    0xadbf17e4,        // 0x070:  stp   q4,  q5, [sp, #-32]!
-    0xadbf0fe2,        // 0x074:  stp   q2,  q3, [sp, #-32]!
-    0xadbf07e0,        // 0x078:  stp   q0,  q1, [sp, #-32]!
-    0x580004e0,        // 0x07c:  ldr   x0, Lreentry_ctx_ptr
-    0xaa1e03e1,        // 0x080:  mov   x1, x30
-    0xd1003021,        // 0x084:  sub   x1,  x1, #12
-    0x58000442,        // 0x088:  ldr   x2, Lreentry_fn_ptr
-    0xd63f0040,        // 0x08c:  blr   x2
-    0xaa0003f1,        // 0x090:  mov   x17, x0
-    0xacc107e0,        // 0x094:  ldp   q0,  q1, [sp], #32
-    0xacc10fe2,        // 0x098:  ldp   q2,  q3, [sp], #32
-    0xacc117e4,        // 0x09c:  ldp   q4,  q5, [sp], #32
-    0xacc11fe6,        // 0x0a0:  ldp   q6,  q7, [sp], #32
-    0xacc127e8,        // 0x0a4:  ldp   q8,  q9, [sp], #32
-    0xacc12fea,        // 0x0a8:  ldp  q10, q11, [sp], #32
-    0xacc137ec,        // 0x0ac:  ldp  q12, q13, [sp], #32
-    0xacc13fee,        // 0x0b0:  ldp  q14, q15, [sp], #32
-    0xacc147f0,        // 0x0b4:  ldp  q16, q17, [sp], #32
-    0xacc14ff2,        // 0x0b8:  ldp  q18, q19, [sp], #32
-    0xacc157f4,        // 0x0bc:  ldp  q20, q21, [sp], #32
-    0xacc15ff6,        // 0x0c0:  ldp  q22, q23, [sp], #32
-    0xacc167f8,        // 0x0c4:  ldp  q24, q25, [sp], #32
-    0xacc16ffa,        // 0x0c8:  ldp  q26, q27, [sp], #32
-    0xacc177fc,        // 0x0cc:  ldp  q28, q29, [sp], #32
-    0xacc17ffe,        // 0x0d0:  ldp  q30, q31, [sp], #32
-    0xa8c107e0,        // 0x0d4:  ldp   x0,  x1, [sp], #16
-    0xa8c10fe2,        // 0x0d8:  ldp   x2,  x3, [sp], #16
-    0xa8c117e4,        // 0x0dc:  ldp   x4,  x5, [sp], #16
-    0xa8c11fe6,        // 0x0e0:  ldp   x6,  x7, [sp], #16
-    0xa8c127e8,        // 0x0e4:  ldp   x8,  x9, [sp], #16
-    0xa8c12fea,        // 0x0e8:  ldp  x10, x11, [sp], #16
-    0xa8c137ec,        // 0x0ec:  ldp  x12, x13, [sp], #16
-    0xa8c13fee,        // 0x0f0:  ldp  x14, x15, [sp], #16
-    0xa8c153f3,        // 0x0f4:  ldp  x19, x20, [sp], #16
-    0xa8c15bf5,        // 0x0f8:  ldp  x21, x22, [sp], #16
-    0xa8c163f7,        // 0x0fc:  ldp  x23, x24, [sp], #16
-    0xa8c16bf9,        // 0x100:  ldp  x25, x26, [sp], #16
-    0xa8c173fb,        // 0x104:  ldp  x27, x28, [sp], #16
-    0xa8c17bfd,        // 0x108:  ldp  x29, x30, [sp], #16
-    0xd65f0220,        // 0x10c:  ret  x17
-    0x01234567,        // 0x110:  Lreentry_fn_ptr:
-    0xdeadbeef,        // 0x114:      .quad 0
-    0x98765432,        // 0x118:  Lreentry_ctx_ptr:
-    0xcafef00d         // 0x11c:      .quad 0
+      // resolver_entry:
+      0xa9bf47fd, // 0x000:  stp  x29, x17, [sp, #-16]!
+      0x910003fd, // 0x004:  mov  x29, sp
+      0xa9bf73fb, // 0x008:  stp  x27, x28, [sp, #-16]!
+      0xa9bf6bf9, // 0x00c:  stp  x25, x26, [sp, #-16]!
+      0xa9bf63f7, // 0x010:  stp  x23, x24, [sp, #-16]!
+      0xa9bf5bf5, // 0x014:  stp  x21, x22, [sp, #-16]!
+      0xa9bf53f3, // 0x018:  stp  x19, x20, [sp, #-16]!
+      0xa9bf3fee, // 0x01c:  stp  x14, x15, [sp, #-16]!
+      0xa9bf37ec, // 0x020:  stp  x12, x13, [sp, #-16]!
+      0xa9bf2fea, // 0x024:  stp  x10, x11, [sp, #-16]!
+      0xa9bf27e8, // 0x028:  stp   x8,  x9, [sp, #-16]!
+      0xa9bf1fe6, // 0x02c:  stp   x6,  x7, [sp, #-16]!
+      0xa9bf17e4, // 0x030:  stp   x4,  x5, [sp, #-16]!
+      0xa9bf0fe2, // 0x034:  stp   x2,  x3, [sp, #-16]!
+      0xa9bf07e0, // 0x038:  stp   x0,  x1, [sp, #-16]!
+      0xadbf7ffe, // 0x03c:  stp  q30, q31, [sp, #-32]!
+      0xadbf77fc, // 0x040:  stp  q28, q29, [sp, #-32]!
+      0xadbf6ffa, // 0x044:  stp  q26, q27, [sp, #-32]!
+      0xadbf67f8, // 0x048:  stp  q24, q25, [sp, #-32]!
+      0xadbf5ff6, // 0x04c:  stp  q22, q23, [sp, #-32]!
+      0xadbf57f4, // 0x050:  stp  q20, q21, [sp, #-32]!
+      0xadbf4ff2, // 0x054:  stp  q18, q19, [sp, #-32]!
+      0xadbf47f0, // 0x058:  stp  q16, q17, [sp, #-32]!
+      0xadbf3fee, // 0x05c:  stp  q14, q15, [sp, #-32]!
+      0xadbf37ec, // 0x060:  stp  q12, q13, [sp, #-32]!
+      0xadbf2fea, // 0x064:  stp  q10, q11, [sp, #-32]!
+      0xadbf27e8, // 0x068:  stp   q8,  q9, [sp, #-32]!
+      0xadbf1fe6, // 0x06c:  stp   q6,  q7, [sp, #-32]!
+      0xadbf17e4, // 0x070:  stp   q4,  q5, [sp, #-32]!
+      0xadbf0fe2, // 0x074:  stp   q2,  q3, [sp, #-32]!
+      0xadbf07e0, // 0x078:  stp   q0,  q1, [sp, #-32]!
+      0x580004e0, // 0x07c:  ldr   x0, Lreentry_ctx_ptr
+      0xaa1e03e1, // 0x080:  mov   x1, x30
+      0xd1003021, // 0x084:  sub   x1,  x1, #12
+      0x58000442, // 0x088:  ldr   x2, Lreentry_fn_ptr
+      0xd63f0040, // 0x08c:  blr   x2
+      0xaa0003f1, // 0x090:  mov   x17, x0
+      0xacc107e0, // 0x094:  ldp   q0,  q1, [sp], #32
+      0xacc10fe2, // 0x098:  ldp   q2,  q3, [sp], #32
+      0xacc117e4, // 0x09c:  ldp   q4,  q5, [sp], #32
+      0xacc11fe6, // 0x0a0:  ldp   q6,  q7, [sp], #32
+      0xacc127e8, // 0x0a4:  ldp   q8,  q9, [sp], #32
+      0xacc12fea, // 0x0a8:  ldp  q10, q11, [sp], #32
+      0xacc137ec, // 0x0ac:  ldp  q12, q13, [sp], #32
+      0xacc13fee, // 0x0b0:  ldp  q14, q15, [sp], #32
+      0xacc147f0, // 0x0b4:  ldp  q16, q17, [sp], #32
+      0xacc14ff2, // 0x0b8:  ldp  q18, q19, [sp], #32
+      0xacc157f4, // 0x0bc:  ldp  q20, q21, [sp], #32
+      0xacc15ff6, // 0x0c0:  ldp  q22, q23, [sp], #32
+      0xacc167f8, // 0x0c4:  ldp  q24, q25, [sp], #32
+      0xacc16ffa, // 0x0c8:  ldp  q26, q27, [sp], #32
+      0xacc177fc, // 0x0cc:  ldp  q28, q29, [sp], #32
+      0xacc17ffe, // 0x0d0:  ldp  q30, q31, [sp], #32
+      0xa8c107e0, // 0x0d4:  ldp   x0,  x1, [sp], #16
+      0xa8c10fe2, // 0x0d8:  ldp   x2,  x3, [sp], #16
+      0xa8c117e4, // 0x0dc:  ldp   x4,  x5, [sp], #16
+      0xa8c11fe6, // 0x0e0:  ldp   x6,  x7, [sp], #16
+      0xa8c127e8, // 0x0e4:  ldp   x8,  x9, [sp], #16
+      0xa8c12fea, // 0x0e8:  ldp  x10, x11, [sp], #16
+      0xa8c137ec, // 0x0ec:  ldp  x12, x13, [sp], #16
+      0xa8c13fee, // 0x0f0:  ldp  x14, x15, [sp], #16
+      0xa8c153f3, // 0x0f4:  ldp  x19, x20, [sp], #16
+      0xa8c15bf5, // 0x0f8:  ldp  x21, x22, [sp], #16
+      0xa8c163f7, // 0x0fc:  ldp  x23, x24, [sp], #16
+      0xa8c16bf9, // 0x100:  ldp  x25, x26, [sp], #16
+      0xa8c173fb, // 0x104:  ldp  x27, x28, [sp], #16
+      0xa8c17bfd, // 0x108:  ldp  x29, x30, [sp], #16
+      0xd65f0220, // 0x10c:  ret  x17
+      0x01234567, // 0x110:  Lreentry_fn_ptr:
+      0xdeadbeef, // 0x114:      .quad 0
+      0x98765432, // 0x118:  Lreentry_ctx_ptr:
+      0xcafef00d  // 0x11c:      .quad 0
   };
 
   const unsigned ReentryFnAddrOffset = 0x110;
@@ -286,9 +286,9 @@ void OrcX86_64_SysV::writeResolverCode(char *ResolverWorkingMem,
       // 0x28: JIT re-entry ctx addr.
       0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 
-      0x48, 0x8b, 0x75, 0x08,                   // 0x30: movq      8(%rbp), %rsi
-      0x48, 0x83, 0xee, 0x06,                   // 0x34: subq      $6, %rsi
-      0x48, 0xb8,                               // 0x38: movabsq   <REntry>, %rax
+      0x48, 0x8b, 0x75, 0x08, // 0x30: movq      8(%rbp), %rsi
+      0x48, 0x83, 0xee, 0x06, // 0x34: subq      $6, %rsi
+      0x48, 0xb8,             // 0x38: movabsq   <REntry>, %rax
 
       // 0x3a: JIT re-entry fn addr:
       0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
@@ -313,7 +313,7 @@ void OrcX86_64_SysV::writeResolverCode(char *ResolverWorkingMem,
       0x58,                                     // 0x69: popq      %rax
       0x5d,                                     // 0x6a: popq      %rbp
       0xc3,                                     // 0x6b: retq
- };
+  };
 
   const unsigned ReentryFnAddrOffset = 0x3a;
   const unsigned ReentryCtxAddrOffset = 0x28;
@@ -335,62 +335,61 @@ void OrcX86_64_Win32::writeResolverCode(char *ResolverWorkingMem,
   // order, shadow space allocation on stack
   const uint8_t ResolverCode[] = {
       // resolver_entry:
-      0x55,                                      // 0x00: pushq     %rbp
-      0x48, 0x89, 0xe5,                          // 0x01: movq      %rsp, %rbp
-      0x50,                                      // 0x04: pushq     %rax
-      0x53,                                      // 0x05: pushq     %rbx
-      0x51,                                      // 0x06: pushq     %rcx
-      0x52,                                      // 0x07: pushq     %rdx
-      0x56,                                      // 0x08: pushq     %rsi
-      0x57,                                      // 0x09: pushq     %rdi
-      0x41, 0x50,                                // 0x0a: pushq     %r8
-      0x41, 0x51,                                // 0x0c: pushq     %r9
-      0x41, 0x52,                                // 0x0e: pushq     %r10
-      0x41, 0x53,                                // 0x10: pushq     %r11
-      0x41, 0x54,                                // 0x12: pushq     %r12
-      0x41, 0x55,                                // 0x14: pushq     %r13
-      0x41, 0x56,                                // 0x16: pushq     %r14
-      0x41, 0x57,                                // 0x18: pushq     %r15
-      0x48, 0x81, 0xec, 0x08, 0x02, 0x00, 0x00,  // 0x1a: subq      0x208, %rsp
-      0x48, 0x0f, 0xae, 0x04, 0x24,              // 0x21: fxsave64  (%rsp)
-
-      0x48, 0xb9,                                // 0x26: movabsq   <CBMgr>, %rcx
+      0x55,                                     // 0x00: pushq     %rbp
+      0x48, 0x89, 0xe5,                         // 0x01: movq      %rsp, %rbp
+      0x50,                                     // 0x04: pushq     %rax
+      0x53,                                     // 0x05: pushq     %rbx
+      0x51,                                     // 0x06: pushq     %rcx
+      0x52,                                     // 0x07: pushq     %rdx
+      0x56,                                     // 0x08: pushq     %rsi
+      0x57,                                     // 0x09: pushq     %rdi
+      0x41, 0x50,                               // 0x0a: pushq     %r8
+      0x41, 0x51,                               // 0x0c: pushq     %r9
+      0x41, 0x52,                               // 0x0e: pushq     %r10
+      0x41, 0x53,                               // 0x10: pushq     %r11
+      0x41, 0x54,                               // 0x12: pushq     %r12
+      0x41, 0x55,                               // 0x14: pushq     %r13
+      0x41, 0x56,                               // 0x16: pushq     %r14
+      0x41, 0x57,                               // 0x18: pushq     %r15
+      0x48, 0x81, 0xec, 0x08, 0x02, 0x00, 0x00, // 0x1a: subq      0x208, %rsp
+      0x48, 0x0f, 0xae, 0x04, 0x24,             // 0x21: fxsave64  (%rsp)
+
+      0x48, 0xb9, // 0x26: movabsq   <CBMgr>, %rcx
       // 0x28: JIT re-entry ctx addr.
       0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 
-      0x48, 0x8B, 0x55, 0x08,                    // 0x30: mov       rdx, [rbp+0x8]
-      0x48, 0x83, 0xea, 0x06,                    // 0x34: sub       rdx, 0x6
+      0x48, 0x8B, 0x55, 0x08, // 0x30: mov       rdx, [rbp+0x8]
+      0x48, 0x83, 0xea, 0x06, // 0x34: sub       rdx, 0x6
 
-      0x48, 0xb8,                                // 0x38: movabsq   <REntry>, %rax
+      0x48, 0xb8, // 0x38: movabsq   <REntry>, %rax
       // 0x3a: JIT re-entry fn addr:
       0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
 
       // 0x42: sub       rsp, 0x20 (Allocate shadow space)
-      0x48, 0x83, 0xEC, 0x20,
-      0xff, 0xd0,                                // 0x46: callq     *%rax
+      0x48, 0x83, 0xEC, 0x20, 0xff, 0xd0, // 0x46: callq     *%rax
 
       // 0x48: add       rsp, 0x20 (Free shadow space)
       0x48, 0x83, 0xC4, 0x20,
 
-      0x48, 0x89, 0x45, 0x08,                    // 0x4C: movq      %rax, 8(%rbp)
-      0x48, 0x0f, 0xae, 0x0c, 0x24,              // 0x50: fxrstor64 (%rsp)
-      0x48, 0x81, 0xc4, 0x08, 0x02, 0x00, 0x00,  // 0x55: addq      0x208, %rsp
-      0x41, 0x5f,                                // 0x5C: popq      %r15
-      0x41, 0x5e,                                // 0x5E: popq      %r14
-      0x41, 0x5d,                                // 0x60: popq      %r13
-      0x41, 0x5c,                                // 0x62: popq      %r12
-      0x41, 0x5b,                                // 0x64: popq      %r11
-      0x41, 0x5a,                                // 0x66: popq      %r10
-      0x41, 0x59,                                // 0x68: popq      %r9
-      0x41, 0x58,                                // 0x6a: popq      %r8
-      0x5f,                                      // 0x6c: popq      %rdi
-      0x5e,                                      // 0x6d: popq      %rsi
-      0x5a,                                      // 0x6e: popq      %rdx
-      0x59,                                      // 0x6f: popq      %rcx
-      0x5b,                                      // 0x70: popq      %rbx
-      0x58,                                      // 0x71: popq      %rax
-      0x5d,                                      // 0x72: popq      %rbp
-      0xc3,                                      // 0x73: retq
+      0x48, 0x89, 0x45, 0x08,                   // 0x4C: movq      %rax, 8(%rbp)
+      0x48, 0x0f, 0xae, 0x0c, 0x24,             // 0x50: fxrstor64 (%rsp)
+      0x48, 0x81, 0xc4, 0x08, 0x02, 0x00, 0x00, // 0x55: addq      0x208, %rsp
+      0x41, 0x5f,                               // 0x5C: popq      %r15
+      0x41, 0x5e,                               // 0x5E: popq      %r14
+      0x41, 0x5d,                               // 0x60: popq      %r13
+      0x41, 0x5c,                               // 0x62: popq      %r12
+      0x41, 0x5b,                               // 0x64: popq      %r11
+      0x41, 0x5a,                               // 0x66: popq      %r10
+      0x41, 0x59,                               // 0x68: popq      %r9
+      0x41, 0x58,                               // 0x6a: popq      %r8
+      0x5f,                                     // 0x6c: popq      %rdi
+      0x5e,                                     // 0x6d: popq      %rsi
+      0x5a,                                     // 0x6e: popq      %rdx
+      0x59,                                     // 0x6f: popq      %rcx
+      0x5b,                                     // 0x70: popq      %rbx
+      0x58,                                     // 0x71: popq      %rax
+      0x5d,                                     // 0x72: popq      %rbp
+      0xc3,                                     // 0x73: retq
   };
 
   const unsigned ReentryFnAddrOffset = 0x3a;
@@ -518,79 +517,79 @@ void OrcMips32_Base::writeResolverCode(char *ResolverWorkingMem,
 
   const uint32_t ResolverCode[] = {
       // resolver_entry:
-      0x27bdff98,                    // 0x00: addiu $sp,$sp,-104
-      0xafa20000,                    // 0x04: sw $v0,0($sp)
-      0xafa30004,                    // 0x08: sw $v1,4($sp)
-      0xafa40008,                    // 0x0c: sw $a0,8($sp)
-      0xafa5000c,                    // 0x10: sw $a1,12($sp)
-      0xafa60010,                    // 0x14: sw $a2,16($sp)
-      0xafa70014,                    // 0x18: sw $a3,20($sp)
-      0xafb00018,                    // 0x1c: sw $s0,24($sp)
-      0xafb1001c,                    // 0x20: sw $s1,28($sp)
-      0xafb20020,                    // 0x24: sw $s2,32($sp)
-      0xafb30024,                    // 0x28: sw $s3,36($sp)
-      0xafb40028,                    // 0x2c: sw $s4,40($sp)
-      0xafb5002c,                    // 0x30: sw $s5,44($sp)
-      0xafb60030,                    // 0x34: sw $s6,48($sp)
-      0xafb70034,                    // 0x38: sw $s7,52($sp)
-      0xafa80038,                    // 0x3c: sw $t0,56($sp)
-      0xafa9003c,                    // 0x40: sw $t1,60($sp)
-      0xafaa0040,                    // 0x44: sw $t2,64($sp)
-      0xafab0044,                    // 0x48: sw $t3,68($sp)
-      0xafac0048,                    // 0x4c: sw $t4,72($sp)
-      0xafad004c,                    // 0x50: sw $t5,76($sp)
-      0xafae0050,                    // 0x54: sw $t6,80($sp)
-      0xafaf0054,                    // 0x58: sw $t7,84($sp)
-      0xafb80058,                    // 0x5c: sw $t8,88($sp)
-      0xafb9005c,                    // 0x60: sw $t9,92($sp)
-      0xafbe0060,                    // 0x64: sw $fp,96($sp)
-      0xafbf0064,                    // 0x68: sw $ra,100($sp)
+      0x27bdff98, // 0x00: addiu $sp,$sp,-104
+      0xafa20000, // 0x04: sw $v0,0($sp)
+      0xafa30004, // 0x08: sw $v1,4($sp)
+      0xafa40008, // 0x0c: sw $a0,8($sp)
+      0xafa5000c, // 0x10: sw $a1,12($sp)
+      0xafa60010, // 0x14: sw $a2,16($sp)
+      0xafa70014, // 0x18: sw $a3,20($sp)
+      0xafb00018, // 0x1c: sw $s0,24($sp)
+      0xafb1001c, // 0x20: sw $s1,28($sp)
+      0xafb20020, // 0x24: sw $s2,32($sp)
+      0xafb30024, // 0x28: sw $s3,36($sp)
+      0xafb40028, // 0x2c: sw $s4,40($sp)
+      0xafb5002c, // 0x30: sw $s5,44($sp)
+      0xafb60030, // 0x34: sw $s6,48($sp)
+      0xafb70034, // 0x38: sw $s7,52($sp)
+      0xafa80038, // 0x3c: sw $t0,56($sp)
+      0xafa9003c, // 0x40: sw $t1,60($sp)
+      0xafaa0040, // 0x44: sw $t2,64($sp)
+      0xafab0044, // 0x48: sw $t3,68($sp)
+      0xafac0048, // 0x4c: sw $t4,72($sp)
+      0xafad004c, // 0x50: sw $t5,76($sp)
+      0xafae0050, // 0x54: sw $t6,80($sp)
+      0xafaf0054, // 0x58: sw $t7,84($sp)
+      0xafb80058, // 0x5c: sw $t8,88($sp)
+      0xafb9005c, // 0x60: sw $t9,92($sp)
+      0xafbe0060, // 0x64: sw $fp,96($sp)
+      0xafbf0064, // 0x68: sw $ra,100($sp)
 
       // JIT re-entry ctx addr.
-      0x00000000,                    // 0x6c: lui $a0,ctx
-      0x00000000,                    // 0x70: addiu $a0,$a0,ctx
+      0x00000000, // 0x6c: lui $a0,ctx
+      0x00000000, // 0x70: addiu $a0,$a0,ctx
 
-      0x03e02825,                    // 0x74: move $a1, $ra
-      0x24a5ffec,                    // 0x78: addiu $a1,$a1,-20
+      0x03e02825, // 0x74: move $a1, $ra
+      0x24a5ffec, // 0x78: addiu $a1,$a1,-20
 
       // JIT re-entry fn addr:
-      0x00000000,                    // 0x7c: lui $t9,reentry
-      0x00000000,                    // 0x80: addiu $t9,$t9,reentry
-
-      0x0320f809,                    // 0x84: jalr $t9
-      0x00000000,                    // 0x88: nop
-      0x8fbf0064,                    // 0x8c: lw $ra,100($sp)
-      0x8fbe0060,                    // 0x90: lw $fp,96($sp)
-      0x8fb9005c,                    // 0x94: lw $t9,92($sp)
-      0x8fb80058,                    // 0x98: lw $t8,88($sp)
-      0x8faf0054,                    // 0x9c: lw $t7,84($sp)
-      0x8fae0050,                    // 0xa0: lw $t6,80($sp)
-      0x8fad004c,                    // 0xa4: lw $t5,76($sp)
-      0x8fac0048,                    // 0xa8: lw $t4,72($sp)
-      0x8fab0044,                    // 0xac: lw $t3,68($sp)
-      0x8faa0040,                    // 0xb0: lw $t2,64($sp)
-      0x8fa9003c,                    // 0xb4: lw $t1,60($sp)
-      0x8fa80038,                    // 0xb8: lw $t0,56($sp)
-      0x8fb70034,                    // 0xbc: lw $s7,52($sp)
-      0x8fb60030,                    // 0xc0: lw $s6,48($sp)
-      0x8fb5002c,                    // 0xc4: lw $s5,44($sp)
-      0x8fb40028,                    // 0xc8: lw $s4,40($sp)
-      0x8fb30024,                    // 0xcc: lw $s3,36($sp)
-      0x8fb20020,                    // 0xd0: lw $s2,32($sp)
-      0x8fb1001c,                    // 0xd4: lw $s1,28($sp)
-      0x8fb00018,                    // 0xd8: lw $s0,24($sp)
-      0x8fa70014,                    // 0xdc: lw $a3,20($sp)
-      0x8fa60010,                    // 0xe0: lw $a2,16($sp)
-      0x8fa5000c,                    // 0xe4: lw $a1,12($sp)
-      0x8fa40008,                    // 0xe8: lw $a0,8($sp)
-      0x27bd0068,                    // 0xec: addiu $sp,$sp,104
-      0x0300f825,                    // 0xf0: move $ra, $t8
-      0x03200008,                    // 0xf4: jr $t9
-      0x00000000,                    // 0xf8: move $t9, $v0/v1
+      0x00000000, // 0x7c: lui $t9,reentry
+      0x00000000, // 0x80: addiu $t9,$t9,reentry
+
+      0x0320f809, // 0x84: jalr $t9
+      0x00000000, // 0x88: nop
+      0x8fbf0064, // 0x8c: lw $ra,100($sp)
+      0x8fbe0060, // 0x90: lw $fp,96($sp)
+      0x8fb9005c, // 0x94: lw $t9,92($sp)
+      0x8fb80058, // 0x98: lw $t8,88($sp)
+      0x8faf0054, // 0x9c: lw $t7,84($sp)
+      0x8fae0050, // 0xa0: lw $t6,80($sp)
+      0x8fad004c, // 0xa4: lw $t5,76($sp)
+      0x8fac0048, // 0xa8: lw $t4,72($sp)
+      0x8fab0044, // 0xac: lw $t3,68($sp)
+      0x8faa0040, // 0xb0: lw $t2,64($sp)
+      0x8fa9003c, // 0xb4: lw $t1,60($sp)
+      0x8fa80038, // 0xb8: lw $t0,56($sp)
+      0x8fb70034, // 0xbc: lw $s7,52($sp)
+      0x8fb60030, // 0xc0: lw $s6,48($sp)
+      0x8fb5002c, // 0xc4: lw $s5,44($sp)
+      0x8fb40028, // 0xc8: lw $s4,40($sp)
+      0x8fb30024, // 0xcc: lw $s3,36($sp)
+      0x8fb20020, // 0xd0: lw $s2,32($sp)
+      0x8fb1001c, // 0xd4: lw $s1,28($sp)
+      0x8fb00018, // 0xd8: lw $s0,24($sp)
+      0x8fa70014, // 0xdc: lw $a3,20($sp)
+      0x8fa60010, // 0xe0: lw $a2,16($sp)
+      0x8fa5000c, // 0xe4: lw $a1,12($sp)
+      0x8fa40008, // 0xe8: lw $a0,8($sp)
+      0x27bd0068, // 0xec: addiu $sp,$sp,104
+      0x0300f825, // 0xf0: move $ra, $t8
+      0x03200008, // 0xf4: jr $t9
+      0x00000000, // 0xf8: move $t9, $v0/v1
   };
 
-  const unsigned ReentryFnAddrOffset = 0x7c;   // JIT re-entry fn addr lui
-  const unsigned ReentryCtxAddrOffset = 0x6c;  // JIT re-entry context addr lui
+  const unsigned ReentryFnAddrOffset = 0x7c;  // JIT re-entry fn addr lui
+  const unsigned ReentryCtxAddrOffset = 0x6c; // JIT re-entry context addr lui
   const unsigned Offsett = 0xf8;
 
   memcpy(ResolverWorkingMem, ResolverCode, sizeof(ResolverCode));
@@ -693,88 +692,88 @@ void OrcMips64::writeResolverCode(char *ResolverWorkingMem,
                                   ExecutorAddr ReentryCtxAddr) {
 
   const uint32_t ResolverCode[] = {
-       //resolver_entry:
-      0x67bdff30,                     // 0x00: daddiu $sp,$sp,-208
-      0xffa20000,                     // 0x04: sd v0,0(sp)
-      0xffa30008,                     // 0x08: sd v1,8(sp)
-      0xffa40010,                     // 0x0c: sd a0,16(sp)
-      0xffa50018,                     // 0x10: sd a1,24(sp)
-      0xffa60020,                     // 0x14: sd a2,32(sp)
-      0xffa70028,                     // 0x18: sd a3,40(sp)
-      0xffa80030,                     // 0x1c: sd a4,48(sp)
-      0xffa90038,                     // 0x20: sd a5,56(sp)
-      0xffaa0040,                     // 0x24: sd a6,64(sp)
-      0xffab0048,                     // 0x28: sd a7,72(sp)
-      0xffac0050,                     // 0x2c: sd t0,80(sp)
-      0xffad0058,                     // 0x30: sd t1,88(sp)
-      0xffae0060,                     // 0x34: sd t2,96(sp)
-      0xffaf0068,                     // 0x38: sd t3,104(sp)
-      0xffb00070,                     // 0x3c: sd s0,112(sp)
-      0xffb10078,                     // 0x40: sd s1,120(sp)
-      0xffb20080,                     // 0x44: sd s2,128(sp)
-      0xffb30088,                     // 0x48: sd s3,136(sp)
-      0xffb40090,                     // 0x4c: sd s4,144(sp)
-      0xffb50098,                     // 0x50: sd s5,152(sp)
-      0xffb600a0,                     // 0x54: sd s6,160(sp)
-      0xffb700a8,                     // 0x58: sd s7,168(sp)
-      0xffb800b0,                     // 0x5c: sd t8,176(sp)
-      0xffb900b8,                     // 0x60: sd t9,184(sp)
-      0xffbe00c0,                     // 0x64: sd fp,192(sp)
-      0xffbf00c8,                     // 0x68: sd ra,200(sp)
+      // resolver_entry:
+      0x67bdff30, // 0x00: daddiu $sp,$sp,-208
+      0xffa20000, // 0x04: sd v0,0(sp)
+      0xffa30008, // 0x08: sd v1,8(sp)
+      0xffa40010, // 0x0c: sd a0,16(sp)
+      0xffa50018, // 0x10: sd a1,24(sp)
+      0xffa60020, // 0x14: sd a2,32(sp)
+      0xffa70028, // 0x18: sd a3,40(sp)
+      0xffa80030, // 0x1c: sd a4,48(sp)
+      0xffa90038, // 0x20: sd a5,56(sp)
+      0xffaa0040, // 0x24: sd a6,64(sp)
+      0xffab0048, // 0x28: sd a7,72(sp)
+      0xffac0050, // 0x2c: sd t0,80(sp)
+      0xffad0058, // 0x30: sd t1,88(sp)
+      0xffae0060, // 0x34: sd t2,96(sp)
+      0xffaf0068, // 0x38: sd t3,104(sp)
+      0xffb00070, // 0x3c: sd s0,112(sp)
+      0xffb10078, // 0x40: sd s1,120(sp)
+      0xffb20080, // 0x44: sd s2,128(sp)
+      0xffb30088, // 0x48: sd s3,136(sp)
+      0xffb40090, // 0x4c: sd s4,144(sp)
+      0xffb50098, // 0x50: sd s5,152(sp)
+      0xffb600a0, // 0x54: sd s6,160(sp)
+      0xffb700a8, // 0x58: sd s7,168(sp)
+      0xffb800b0, // 0x5c: sd t8,176(sp)
+      0xffb900b8, // 0x60: sd t9,184(sp)
+      0xffbe00c0, // 0x64: sd fp,192(sp)
+      0xffbf00c8, // 0x68: sd ra,200(sp)
 
       // JIT re-entry ctx addr.
-      0x00000000,                     // 0x6c: lui $a0,heighest(ctx)
-      0x00000000,                     // 0x70: daddiu $a0,$a0,heigher(ctx)
-      0x00000000,                     // 0x74: dsll $a0,$a0,16
-      0x00000000,                     // 0x78: daddiu $a0,$a0,hi(ctx)
-      0x00000000,                     // 0x7c: dsll $a0,$a0,16
-      0x00000000,                     // 0x80: daddiu $a0,$a0,lo(ctx)
+      0x00000000, // 0x6c: lui $a0,highest(ctx)
+      0x00000000, // 0x70: daddiu $a0,$a0,higher(ctx)
+      0x00000000, // 0x74: dsll $a0,$a0,16
+      0x00000000, // 0x78: daddiu $a0,$a0,hi(ctx)
+      0x00000000, // 0x7c: dsll $a0,$a0,16
+      0x00000000, // 0x80: daddiu $a0,$a0,lo(ctx)
 
-      0x03e02825,                     // 0x84: move $a1, $ra
-      0x64a5ffdc,                     // 0x88: daddiu $a1,$a1,-36
+      0x03e02825, // 0x84: move $a1, $ra
+      0x64a5ffdc, // 0x88: daddiu $a1,$a1,-36
 
       // JIT re-entry fn addr:
-      0x00000000,                     // 0x8c: lui $t9,reentry
-      0x00000000,                     // 0x90: daddiu $t9,$t9,reentry
-      0x00000000,                     // 0x94: dsll $t9,$t9,
-      0x00000000,                     // 0x98: daddiu $t9,$t9,
-      0x00000000,                     // 0x9c: dsll $t9,$t9,
-      0x00000000,                     // 0xa0: daddiu $t9,$t9,
-      0x0320f809,                     // 0xa4: jalr $t9
-      0x00000000,                     // 0xa8: nop
-      0xdfbf00c8,                     // 0xac: ld ra, 200(sp)
-      0xdfbe00c0,                     // 0xb0: ld fp, 192(sp)
-      0xdfb900b8,                     // 0xb4: ld t9, 184(sp)
-      0xdfb800b0,                     // 0xb8: ld t8, 176(sp)
-      0xdfb700a8,                     // 0xbc: ld s7, 168(sp)
-      0xdfb600a0,                     // 0xc0: ld s6, 160(sp)
-      0xdfb50098,                     // 0xc4: ld s5, 152(sp)
-      0xdfb40090,                     // 0xc8: ld s4, 144(sp)
-      0xdfb30088,                     // 0xcc: ld s3, 136(sp)
-      0xdfb20080,                     // 0xd0: ld s2, 128(sp)
-      0xdfb10078,                     // 0xd4: ld s1, 120(sp)
-      0xdfb00070,                     // 0xd8: ld s0, 112(sp)
-      0xdfaf0068,                     // 0xdc: ld t3, 104(sp)
-      0xdfae0060,                     // 0xe0: ld t2, 96(sp)
-      0xdfad0058,                     // 0xe4: ld t1, 88(sp)
-      0xdfac0050,                     // 0xe8: ld t0, 80(sp)
-      0xdfab0048,                     // 0xec: ld a7, 72(sp)
-      0xdfaa0040,                     // 0xf0: ld a6, 64(sp)
-      0xdfa90038,                     // 0xf4: ld a5, 56(sp)
-      0xdfa80030,                     // 0xf8: ld a4, 48(sp)
-      0xdfa70028,                     // 0xfc: ld a3, 40(sp)
-      0xdfa60020,                     // 0x100: ld a2, 32(sp)
-      0xdfa50018,                     // 0x104: ld a1, 24(sp)
-      0xdfa40010,                     // 0x108: ld a0, 16(sp)
-      0xdfa30008,                     // 0x10c: ld v1, 8(sp)
-      0x67bd00d0,                     // 0x110: daddiu $sp,$sp,208
-      0x0300f825,                     // 0x114: move $ra, $t8
-      0x03200008,                     // 0x118: jr $t9
-      0x0040c825,                     // 0x11c: move $t9, $v0
+      0x00000000, // 0x8c: lui $t9,reentry
+      0x00000000, // 0x90: daddiu $t9,$t9,reentry
+      0x00000000, // 0x94: dsll $t9,$t9,
+      0x00000000, // 0x98: daddiu $t9,$t9,
+      0x00000000, // 0x9c: dsll $t9,$t9,
+      0x00000000, // 0xa0: daddiu $t9,$t9,
+      0x0320f809, // 0xa4: jalr $t9
+      0x00000000, // 0xa8: nop
+      0xdfbf00c8, // 0xac: ld ra, 200(sp)
+      0xdfbe00c0, // 0xb0: ld fp, 192(sp)
+      0xdfb900b8, // 0xb4: ld t9, 184(sp)
+      0xdfb800b0, // 0xb8: ld t8, 176(sp)
+      0xdfb700a8, // 0xbc: ld s7, 168(sp)
+      0xdfb600a0, // 0xc0: ld s6, 160(sp)
+      0xdfb50098, // 0xc4: ld s5, 152(sp)
+      0xdfb40090, // 0xc8: ld s4, 144(sp)
+      0xdfb30088, // 0xcc: ld s3, 136(sp)
+      0xdfb20080, // 0xd0: ld s2, 128(sp)
+      0xdfb10078, // 0xd4: ld s1, 120(sp)
+      0xdfb00070, // 0xd8: ld s0, 112(sp)
+      0xdfaf0068, // 0xdc: ld t3, 104(sp)
+      0xdfae0060, // 0xe0: ld t2, 96(sp)
+      0xdfad0058, // 0xe4: ld t1, 88(sp)
+      0xdfac0050, // 0xe8: ld t0, 80(sp)
+      0xdfab0048, // 0xec: ld a7, 72(sp)
+      0xdfaa0040, // 0xf0: ld a6, 64(sp)
+      0xdfa90038, // 0xf4: ld a5, 56(sp)
+      0xdfa80030, // 0xf8: ld a4, 48(sp)
+      0xdfa70028, // 0xfc: ld a3, 40(sp)
+      0xdfa60020, // 0x100: ld a2, 32(sp)
+      0xdfa50018, // 0x104: ld a1, 24(sp)
+      0xdfa40010, // 0x108: ld a0, 16(sp)
+      0xdfa30008, // 0x10c: ld v1, 8(sp)
+      0x67bd00d0, // 0x110: daddiu $sp,$sp,208
+      0x0300f825, // 0x114: move $ra, $t8
+      0x03200008, // 0x118: jr $t9
+      0x0040c825, // 0x11c: move $t9, $v0
   };
 
-  const unsigned ReentryFnAddrOffset = 0x8c;   // JIT re-entry fn addr lui
-  const unsigned ReentryCtxAddrOffset = 0x6c;  // JIT re-entry ctx addr lui
+  const unsigned ReentryFnAddrOffset = 0x8c;  // JIT re-entry fn addr lui
+  const unsigned ReentryCtxAddrOffset = 0x6c; // JIT re-entry ctx addr lui
 
   memcpy(ResolverWorkingMem, ResolverCode, sizeof(ResolverCode));
 
@@ -846,17 +845,21 @@ void OrcMips64::writeTrampolines(char *TrampolineBlockWorkingMem,
   uint64_t HiAddr = ((ResolverAddr.getValue() + 0x8000) >> 16);
 
   for (unsigned I = 0; I < NumTrampolines; ++I) {
-    Trampolines[10 * I + 0] = 0x03e0c025;                            // move $t8,$ra
-    Trampolines[10 * I + 1] = 0x3c190000 | (HeighestAddr & 0xFFFF);  // lui $t9,resolveAddr
-    Trampolines[10 * I + 2] = 0x67390000 | (HeigherAddr & 0xFFFF);   // daddiu $t9,$t9,%higher(resolveAddr)
-    Trampolines[10 * I + 3] = 0x0019cc38;                            // dsll $t9,$t9,16
-    Trampolines[10 * I + 4] = 0x67390000 | (HiAddr & 0xFFFF);        // daddiu $t9,$t9,%hi(ptr)
-    Trampolines[10 * I + 5] = 0x0019cc38;                            // dsll $t9,$t9,16
+    Trampolines[10 * I + 0] = 0x03e0c025; // move $t8,$ra
+    Trampolines[10 * I + 1] =
+        0x3c190000 | (HeighestAddr & 0xFFFF); // lui $t9,resolveAddr
+    Trampolines[10 * I + 2] =
+        0x67390000 |
+        (HeigherAddr & 0xFFFF);           // daddiu $t9,$t9,%higher(resolveAddr)
+    Trampolines[10 * I + 3] = 0x0019cc38; // dsll $t9,$t9,16
+    Trampolines[10 * I + 4] =
+        0x67390000 | (HiAddr & 0xFFFF);   // daddiu $t9,$t9,%hi(ptr)
+    Trampolines[10 * I + 5] = 0x0019cc38; // dsll $t9,$t9,16
     Trampolines[10 * I + 6] = 0x67390000 | (ResolverAddr.getValue() &
                                             0xFFFF); // daddiu $t9,$t9,%lo(ptr)
-    Trampolines[10 * I + 7] = 0x0320f809;                            // jalr $t9
-    Trampolines[10 * I + 8] = 0x00000000;                            // nop
-    Trampolines[10 * I + 9] = 0x00000000;                            // nop
+    Trampolines[10 * I + 7] = 0x0320f809;            // jalr $t9
+    Trampolines[10 * I + 8] = 0x00000000;            // nop
+    Trampolines[10 * I + 9] = 0x00000000;            // nop
   }
 }
 
@@ -904,14 +907,15 @@ void OrcMips64::writeIndirectStubsBlock(char *StubsBlockWorkingMem,
     uint64_t HeighestAddr = ((PtrAddr + 0x800080008000) >> 48);
     uint64_t HeigherAddr = ((PtrAddr + 0x80008000) >> 32);
     uint64_t HiAddr = ((PtrAddr + 0x8000) >> 16);
-    Stub[8 * I + 0] = 0x3c190000 | (HeighestAddr & 0xFFFF);  // lui $t9,ptr1
-    Stub[8 * I + 1] = 0x67390000 | (HeigherAddr & 0xFFFF);   // daddiu $t9,$t9,%higher(ptr)
-    Stub[8 * I + 2] = 0x0019cc38;                            // dsll $t9,$t9,16
-    Stub[8 * I + 3] = 0x67390000 | (HiAddr & 0xFFFF);        // daddiu $t9,$t9,%hi(ptr)
-    Stub[8 * I + 4] = 0x0019cc38;                            // dsll $t9,$t9,16
-    Stub[8 * I + 5] = 0xdf390000 | (PtrAddr & 0xFFFF);       // ld $t9,%lo(ptr)
-    Stub[8 * I + 6] = 0x03200008;                            // jr $t9
-    Stub[8 * I + 7] = 0x00000000;                            // nop
+    Stub[8 * I + 0] = 0x3c190000 | (HeighestAddr & 0xFFFF); // lui $t9,ptr1
+    Stub[8 * I + 1] =
+        0x67390000 | (HeigherAddr & 0xFFFF); // daddiu $t9,$t9,%higher(ptr)
+    Stub[8 * I + 2] = 0x0019cc38;            // dsll $t9,$t9,16
+    Stub[8 * I + 3] = 0x67390000 | (HiAddr & 0xFFFF); // daddiu $t9,$t9,%hi(ptr)
+    Stub[8 * I + 4] = 0x0019cc38;                     // dsll $t9,$t9,16
+    Stub[8 * I + 5] = 0xdf390000 | (PtrAddr & 0xFFFF); // ld $t9,%lo(ptr)
+    Stub[8 * I + 6] = 0x03200008;                      // jr $t9
+    Stub[8 * I + 7] = 0x00000000;                      // nop
   }
 }
 
@@ -1032,9 +1036,9 @@ void OrcRiscv64::writeTrampolines(char *TrampolineBlockWorkingMem,
     uint32_t Lo12 = OffsetToPtr - Hi20;
     Trampolines[4 * I + 0] = 0x00000297 | Hi20; // auipc t0, %hi(Lptr)
     Trampolines[4 * I + 1] =
-        0x0002b283 | ((Lo12 & 0xFFF) << 20);    // ld t0, %lo(Lptr)
-    Trampolines[4 * I + 2] = 0x00028367;        // jalr t1, t0
-    Trampolines[4 * I + 3] = 0xdeadface;        // padding
+        0x0002b283 | ((Lo12 & 0xFFF) << 20); // ld t0, %lo(Lptr)
+    Trampolines[4 * I + 2] = 0x00028367;     // jalr t1, t0
+    Trampolines[4 * I + 3] = 0xdeadface;     // padding
   }
 }
 
@@ -1076,7 +1080,7 @@ void OrcRiscv64::writeIndirectStubsBlock(
         PointersBlockTargetAddress - StubsBlockTargetAddress;
     uint32_t Hi20 = (PtrDisplacement + 0x800) & 0xFFFFF000;
     uint32_t Lo12 = PtrDisplacement - Hi20;
-    Stub[4 * I + 0] = 0x00000297 | Hi20;                   // auipc t0, %hi(Lptr)
+    Stub[4 * I + 0] = 0x00000297 | Hi20; // auipc t0, %hi(Lptr)
     Stub[4 * I + 1] = 0x0002b283 | ((Lo12 & 0xFFF) << 20); // ld t0, %lo(Lptr)
     Stub[4 * I + 2] = 0x00028067;                          // jr t0
     Stub[4 * I + 3] = 0xfeedbeef;                          // padding
diff --git a/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp b/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp
index a73aec6d98c64c9..55e6bd28114fd41 100644
--- a/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/OrcV2CBindings.cpp
@@ -847,9 +847,10 @@ LLVMErrorRef LLVMOrcObjectLayerAddObjectFile(LLVMOrcObjectLayerRef ObjLayer,
       *unwrap(JD), std::unique_ptr<MemoryBuffer>(unwrap(ObjBuffer))));
 }
 
-LLVMErrorRef LLVMOrcObjectLayerAddObjectFileWithRT(LLVMOrcObjectLayerRef ObjLayer,
-                                                   LLVMOrcResourceTrackerRef RT,
-                                                   LLVMMemoryBufferRef ObjBuffer) {
+LLVMErrorRef
+LLVMOrcObjectLayerAddObjectFileWithRT(LLVMOrcObjectLayerRef ObjLayer,
+                                      LLVMOrcResourceTrackerRef RT,
+                                      LLVMMemoryBufferRef ObjBuffer) {
   return wrap(
       unwrap(ObjLayer)->add(ResourceTrackerSP(unwrap(RT)),
                             std::unique_ptr<MemoryBuffer>(unwrap(ObjBuffer))));
diff --git a/llvm/lib/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.cpp b/llvm/lib/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.cpp
index f9630161b95ed64..efbd3352a9e0c16 100644
--- a/llvm/lib/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/RTDyldObjectLinkingLayer.cpp
@@ -18,7 +18,8 @@ class JITDylibSearchOrderResolver : public JITSymbolResolver {
 public:
   JITDylibSearchOrderResolver(MaterializationResponsibility &MR) : MR(MR) {}
 
-  void lookup(const LookupSet &Symbols, OnResolvedFunction OnResolved) override {
+  void lookup(const LookupSet &Symbols,
+              OnResolvedFunction OnResolved) override {
     auto &ES = MR.getTargetJITDylib().getExecutionSession();
     SymbolLookupSet InternedSymbols;
 
diff --git a/llvm/lib/ExecutionEngine/Orc/SimpleRemoteEPC.cpp b/llvm/lib/ExecutionEngine/Orc/SimpleRemoteEPC.cpp
index 3d3ca891d881088..0c44be090c7c76b 100644
--- a/llvm/lib/ExecutionEngine/Orc/SimpleRemoteEPC.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/SimpleRemoteEPC.cpp
@@ -282,8 +282,7 @@ Error SimpleRemoteEPC::setup(Setup S) {
 
   // Prepare a handler for the setup packet.
   PendingCallWrapperResults[0] =
-    RunInPlace()(
-      [&](shared::WrapperFunctionResult SetupMsgBytes) {
+      RunInPlace()([&](shared::WrapperFunctionResult SetupMsgBytes) {
         if (const char *ErrMsg = SetupMsgBytes.getOutOfBandError()) {
           EIP.set_value(
               make_error<StringError>(ErrMsg, inconvertibleErrorCode()));
diff --git a/llvm/lib/ExecutionEngine/Orc/ThreadSafeModule.cpp b/llvm/lib/ExecutionEngine/Orc/ThreadSafeModule.cpp
index 2e128dd23744393..292ad114158182d 100644
--- a/llvm/lib/ExecutionEngine/Orc/ThreadSafeModule.cpp
+++ b/llvm/lib/ExecutionEngine/Orc/ThreadSafeModule.cpp
@@ -1,5 +1,5 @@
 //===-- ThreadSafeModule.cpp - Thread safe Module, Context, and Utilities
-//h-===//
+// h-===//
 //
 // Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
 // See https://llvm.org/LICENSE.txt for license information.
diff --git a/llvm/lib/ExecutionEngine/PerfJITEvents/CMakeLists.txt b/llvm/lib/ExecutionEngine/PerfJITEvents/CMakeLists.txt
index 52793fbe74879ca..522dc1ca9fbf724 100644
--- a/llvm/lib/ExecutionEngine/PerfJITEvents/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/PerfJITEvents/CMakeLists.txt
@@ -1,13 +1,6 @@
-add_llvm_component_library(LLVMPerfJITEvents
-  PerfJITEventListener.cpp
+add_llvm_component_library(LLVMPerfJITEvents PerfJITEventListener.cpp
 
-  LINK_COMPONENTS
-  CodeGen
-  Core
-  DebugInfoDWARF
-  ExecutionEngine
-  Object
-  Support
-  )
+  LINK_COMPONENTS CodeGen Core DebugInfoDWARF
+  ExecutionEngine Object Support)
 
-add_dependencies(LLVMPerfJITEvents LLVMCodeGen)
+add_dependencies(LLVMPerfJITEvents LLVMCodeGen)
diff --git a/llvm/lib/ExecutionEngine/PerfJITEvents/PerfJITEventListener.cpp b/llvm/lib/ExecutionEngine/PerfJITEvents/PerfJITEventListener.cpp
index e2b5ce49ba2ec1f..55423108216c8f1 100644
--- a/llvm/lib/ExecutionEngine/PerfJITEvents/PerfJITEventListener.cpp
+++ b/llvm/lib/ExecutionEngine/PerfJITEvents/PerfJITEventListener.cpp
@@ -32,9 +32,9 @@
 #include "llvm/Support/raw_ostream.h"
 #include <mutex>
 
-#include <sys/mman.h>  // mmap()
-#include <time.h>      // clock_gettime(), time(), localtime_r() */
-#include <unistd.h>    // for read(), close()
+#include <sys/mman.h> // mmap()
+#include <time.h>     // clock_gettime(), time(), localtime_r()
+#include <unistd.h>   // for read(), close()
 
 using namespace llvm;
 using namespace llvm::object;
@@ -195,9 +195,8 @@ PerfJITEventListener::PerfJITEventListener()
   // Need to open ourselves, because we need to hand the FD to OpenMarker() and
   // raw_fd_ostream doesn't expose the FD.
   using sys::fs::openFileForWrite;
-  if (auto EC =
-          openFileForReadWrite(FilenameBuf.str(), DumpFd,
-			       sys::fs::CD_CreateNew, sys::fs::OF_None)) {
+  if (auto EC = openFileForReadWrite(FilenameBuf.str(), DumpFd,
+                                     sys::fs::CD_CreateNew, sys::fs::OF_None)) {
     errs() << "could not open JIT dump file " << FilenameBuf.str() << ": "
            << EC.message() << "\n";
     return;
@@ -271,8 +270,8 @@ void PerfJITEventListener::notifyObjectLoaded(
 
     uint64_t SectionIndex = object::SectionedAddress::UndefSection;
     if (auto SectOrErr = Sym.getSection())
-        if (*SectOrErr != Obj.section_end())
-            SectionIndex = SectOrErr.get()->getIndex();
+      if (*SectOrErr != Obj.section_end())
+        SectionIndex = SectOrErr.get()->getIndex();
 
     // According to spec debugging info has to come before loading the
     // corresponding code load.
@@ -372,9 +371,7 @@ bool PerfJITEventListener::FillMachine(LLVMPerfJitHeader &hdr) {
   size_t RequiredMemory = sizeof(id) + sizeof(info);
 
   ErrorOr<std::unique_ptr<MemoryBuffer>> MB =
-    MemoryBuffer::getFileSlice("/proc/self/exe",
-			       RequiredMemory,
-			       0);
+      MemoryBuffer::getFileSlice("/proc/self/exe", RequiredMemory, 0);
 
   // This'll not guarantee that enough data was actually read from the
   // underlying file. Instead the trailing part of the buffer would be
@@ -499,7 +496,6 @@ JITEventListener *JITEventListener::createPerfJITEventListener() {
 
 } // namespace llvm
 
-LLVMJITEventListenerRef LLVMCreatePerfJITEventListener(void)
-{
+LLVMJITEventListenerRef LLVMCreatePerfJITEventListener(void) {
   return wrap(JITEventListener::createPerfJITEventListener());
 }
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/CMakeLists.txt b/llvm/lib/ExecutionEngine/RuntimeDyld/CMakeLists.txt
index 1278e2f43c3bcd6..9558eb6103a2910 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/CMakeLists.txt
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/CMakeLists.txt
@@ -1,21 +1,9 @@
-add_llvm_component_library(LLVMRuntimeDyld
-  JITSymbol.cpp
-  RTDyldMemoryManager.cpp
-  RuntimeDyld.cpp
-  RuntimeDyldChecker.cpp
-  RuntimeDyldCOFF.cpp
-  RuntimeDyldELF.cpp
-  RuntimeDyldMachO.cpp
-  Targets/RuntimeDyldELFMips.cpp
+add_llvm_component_library(LLVMRuntimeDyld
+    JITSymbol.cpp RTDyldMemoryManager.cpp RuntimeDyld.cpp
+    RuntimeDyldChecker.cpp RuntimeDyldCOFF.cpp RuntimeDyldELF.cpp
+    RuntimeDyldMachO.cpp
+    Targets/RuntimeDyldELFMips.cpp
 
-  DEPENDS
-  intrinsics_gen
+    DEPENDS intrinsics_gen
 
-
-  LINK_COMPONENTS
-  Core
-  MC
-  Object
-  Support
-  TargetParser
-  )
+    LINK_COMPONENTS Core MC Object Support TargetParser)
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/JITSymbol.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/JITSymbol.cpp
index 210fbf6e43e3504..facefe1c7b0c3da 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/JITSymbol.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/JITSymbol.cpp
@@ -56,7 +56,8 @@ JITSymbolFlags llvm::JITSymbolFlags::fromSummary(GlobalValueSummary *S) {
     Flags |= JITSymbolFlags::Weak;
   if (GlobalValue::isCommonLinkage(L))
     Flags |= JITSymbolFlags::Common;
-  if (GlobalValue::isExternalLinkage(L) || GlobalValue::isExternalWeakLinkage(L))
+  if (GlobalValue::isExternalLinkage(L) ||
+      GlobalValue::isExternalWeakLinkage(L))
     Flags |= JITSymbolFlags::Exported;
 
   if (isa<FunctionSummary>(S))
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RTDyldMemoryManager.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RTDyldMemoryManager.cpp
index fd11450b635b487..3223af66ae84cbd 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RTDyldMemoryManager.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RTDyldMemoryManager.cpp
@@ -10,21 +10,21 @@
 //
 //===----------------------------------------------------------------------===//
 
-#include "llvm/Config/config.h"
 #include "llvm/ExecutionEngine/RTDyldMemoryManager.h"
+#include "llvm/Config/config.h"
 #include "llvm/Support/Compiler.h"
 #include "llvm/Support/DynamicLibrary.h"
 #include "llvm/Support/ErrorHandling.h"
 #include <cstdlib>
 
 #ifdef __linux__
-  // These includes used by RTDyldMemoryManager::getPointerToNamedFunction()
-  // for Glibc trickery. See comments in this function for more information.
-  #ifdef HAVE_SYS_STAT_H
-    #include <sys/stat.h>
-  #endif
-  #include <fcntl.h>
-  #include <unistd.h>
+// These includes are used by RTDyldMemoryManager::getPointerToNamedFunction()
+// for Glibc trickery. See comments in this function for more information.
+#ifdef HAVE_SYS_STAT_H
+#include <sys/stat.h>
+#endif
+#include <fcntl.h>
+#include <unistd.h>
 #endif
 
 namespace llvm {
@@ -127,7 +127,7 @@ void RTDyldMemoryManager::deregisterEHFramesInProcess(uint8_t *Addr,
 #endif
 
 void RTDyldMemoryManager::registerEHFrames(uint8_t *Addr, uint64_t LoadAddr,
-                                          size_t Size) {
+                                           size_t Size) {
   registerEHFramesInProcess(Addr, Size);
   EHFrames.push_back({Addr, Size});
 }
@@ -138,9 +138,7 @@ void RTDyldMemoryManager::deregisterEHFrames() {
   EHFrames.clear();
 }
 
-static int jit_noop() {
-  return 0;
-}
+static int jit_noop() { return 0; }
 
 // ARM math functions are statically linked on Android from libgcc.a, but not
 // available at runtime for dynamic linking. On Linux these are usually placed
@@ -152,54 +150,54 @@ static int jit_noop() {
 // user-requested symbol in getSymbolAddress with ARM_MATH_CHECK. The test
 // assumes that all functions start with __aeabi_ and getSymbolAddress must be
 // modified if that changes.
-#define ARM_MATH_IMPORTS(PP) \
-  PP(__aeabi_d2f) \
-  PP(__aeabi_d2iz) \
-  PP(__aeabi_d2lz) \
-  PP(__aeabi_d2uiz) \
-  PP(__aeabi_d2ulz) \
-  PP(__aeabi_dadd) \
-  PP(__aeabi_dcmpeq) \
-  PP(__aeabi_dcmpge) \
-  PP(__aeabi_dcmpgt) \
-  PP(__aeabi_dcmple) \
-  PP(__aeabi_dcmplt) \
-  PP(__aeabi_dcmpun) \
-  PP(__aeabi_ddiv) \
-  PP(__aeabi_dmul) \
-  PP(__aeabi_dsub) \
-  PP(__aeabi_f2d) \
-  PP(__aeabi_f2iz) \
-  PP(__aeabi_f2lz) \
-  PP(__aeabi_f2uiz) \
-  PP(__aeabi_f2ulz) \
-  PP(__aeabi_fadd) \
-  PP(__aeabi_fcmpeq) \
-  PP(__aeabi_fcmpge) \
-  PP(__aeabi_fcmpgt) \
-  PP(__aeabi_fcmple) \
-  PP(__aeabi_fcmplt) \
-  PP(__aeabi_fcmpun) \
-  PP(__aeabi_fdiv) \
-  PP(__aeabi_fmul) \
-  PP(__aeabi_fsub) \
-  PP(__aeabi_i2d) \
-  PP(__aeabi_i2f) \
-  PP(__aeabi_idiv) \
-  PP(__aeabi_idivmod) \
-  PP(__aeabi_l2d) \
-  PP(__aeabi_l2f) \
-  PP(__aeabi_lasr) \
-  PP(__aeabi_ldivmod) \
-  PP(__aeabi_llsl) \
-  PP(__aeabi_llsr) \
-  PP(__aeabi_lmul) \
-  PP(__aeabi_ui2d) \
-  PP(__aeabi_ui2f) \
-  PP(__aeabi_uidiv) \
-  PP(__aeabi_uidivmod) \
-  PP(__aeabi_ul2d) \
-  PP(__aeabi_ul2f) \
+#define ARM_MATH_IMPORTS(PP)                                                   \
+  PP(__aeabi_d2f)                                                              \
+  PP(__aeabi_d2iz)                                                             \
+  PP(__aeabi_d2lz)                                                             \
+  PP(__aeabi_d2uiz)                                                            \
+  PP(__aeabi_d2ulz)                                                            \
+  PP(__aeabi_dadd)                                                             \
+  PP(__aeabi_dcmpeq)                                                           \
+  PP(__aeabi_dcmpge)                                                           \
+  PP(__aeabi_dcmpgt)                                                           \
+  PP(__aeabi_dcmple)                                                           \
+  PP(__aeabi_dcmplt)                                                           \
+  PP(__aeabi_dcmpun)                                                           \
+  PP(__aeabi_ddiv)                                                             \
+  PP(__aeabi_dmul)                                                             \
+  PP(__aeabi_dsub)                                                             \
+  PP(__aeabi_f2d)                                                              \
+  PP(__aeabi_f2iz)                                                             \
+  PP(__aeabi_f2lz)                                                             \
+  PP(__aeabi_f2uiz)                                                            \
+  PP(__aeabi_f2ulz)                                                            \
+  PP(__aeabi_fadd)                                                             \
+  PP(__aeabi_fcmpeq)                                                           \
+  PP(__aeabi_fcmpge)                                                           \
+  PP(__aeabi_fcmpgt)                                                           \
+  PP(__aeabi_fcmple)                                                           \
+  PP(__aeabi_fcmplt)                                                           \
+  PP(__aeabi_fcmpun)                                                           \
+  PP(__aeabi_fdiv)                                                             \
+  PP(__aeabi_fmul)                                                             \
+  PP(__aeabi_fsub)                                                             \
+  PP(__aeabi_i2d)                                                              \
+  PP(__aeabi_i2f)                                                              \
+  PP(__aeabi_idiv)                                                             \
+  PP(__aeabi_idivmod)                                                          \
+  PP(__aeabi_l2d)                                                              \
+  PP(__aeabi_l2f)                                                              \
+  PP(__aeabi_lasr)                                                             \
+  PP(__aeabi_ldivmod)                                                          \
+  PP(__aeabi_llsl)                                                             \
+  PP(__aeabi_llsr)                                                             \
+  PP(__aeabi_lmul)                                                             \
+  PP(__aeabi_ui2d)                                                             \
+  PP(__aeabi_ui2f)                                                             \
+  PP(__aeabi_uidiv)                                                            \
+  PP(__aeabi_uidivmod)                                                         \
+  PP(__aeabi_ul2d)                                                             \
+  PP(__aeabi_ul2f)                                                             \
   PP(__aeabi_uldivmod)
 
 // Declare statically linked math functions on ARM. The function declarations
@@ -212,8 +210,8 @@ ARM_MATH_IMPORTS(ARM_MATH_DECL)
 #undef ARM_MATH_DECL
 #endif
 
-#if defined(__linux__) && defined(__GLIBC__) && \
-      (defined(__i386__) || defined(__x86_64__))
+#if defined(__linux__) && defined(__GLIBC__) &&                                \
+    (defined(__i386__) || defined(__x86_64__))
 extern "C" LLVM_ATTRIBUTE_WEAK void __morestack();
 #endif
 
@@ -232,14 +230,22 @@ RTDyldMemoryManager::getSymbolAddressInProcess(const std::string &Name) {
   // not inlined, and hiding their real definitions in a separate archive file
   // that the dynamic linker can't see. For more info, search for
   // 'libc_nonshared.a' on Google, or read http://llvm.org/PR274.
-  if (Name == "stat") return (uint64_t)&stat;
-  if (Name == "fstat") return (uint64_t)&fstat;
-  if (Name == "lstat") return (uint64_t)&lstat;
-  if (Name == "stat64") return (uint64_t)&stat64;
-  if (Name == "fstat64") return (uint64_t)&fstat64;
-  if (Name == "lstat64") return (uint64_t)&lstat64;
-  if (Name == "atexit") return (uint64_t)&atexit;
-  if (Name == "mknod") return (uint64_t)&mknod;
+  if (Name == "stat")
+    return (uint64_t)&stat;
+  if (Name == "fstat")
+    return (uint64_t)&fstat;
+  if (Name == "lstat")
+    return (uint64_t)&lstat;
+  if (Name == "stat64")
+    return (uint64_t)&stat64;
+  if (Name == "fstat64")
+    return (uint64_t)&fstat64;
+  if (Name == "lstat64")
+    return (uint64_t)&lstat64;
+  if (Name == "atexit")
+    return (uint64_t)&atexit;
+  if (Name == "mknod")
+    return (uint64_t)&mknod;
 
 #if defined(__i386__) || defined(__x86_64__)
   // __morestack lives in libgcc, a static library.
@@ -248,12 +254,14 @@ RTDyldMemoryManager::getSymbolAddressInProcess(const std::string &Name) {
 #endif
 #endif // __linux__ && __GLIBC__
 
-  // See ARM_MATH_IMPORTS definition for explanation
+    // See ARM_MATH_IMPORTS definition for explanation
 #if defined(__BIONIC__) && defined(__arm__)
   if (Name.compare(0, 8, "__aeabi_") == 0) {
     // Check if the user has requested any of the functions listed in
     // ARM_MATH_IMPORTS, and if so redirect to the statically linked symbol.
-#define ARM_MATH_CHECK(fn) if (Name == #fn) return (uint64_t)&fn;
+#define ARM_MATH_CHECK(fn)                                                     \
+  if (Name == #fn)                                                             \
+    return (uint64_t)&fn;
     ARM_MATH_IMPORTS(ARM_MATH_CHECK)
 #undef ARM_MATH_CHECK
   }
@@ -265,7 +273,8 @@ RTDyldMemoryManager::getSymbolAddressInProcess(const std::string &Name) {
   // (and register wrong callee's dtors with atexit(3)).
   // We expect ExecutionEngine::runStaticConstructorsDestructors()
   // is called before ExecutionEngine::runFunctionAsMain() is called.
-  if (Name == "__main") return (uint64_t)&jit_noop;
+  if (Name == "__main")
+    return (uint64_t)&jit_noop;
 
   const char *NameStr = Name.c_str();
 
@@ -287,7 +296,7 @@ void *RTDyldMemoryManager::getPointerToNamedFunction(const std::string &Name,
     report_fatal_error(Twine("Program used external function '") + Name +
                        "' which could not be resolved!");
 
-  return (void*)Addr;
+  return (void *)Addr;
 }
 
 void RTDyldMemoryManager::anchor() {}
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp
index a9aaff42433f655..8d5b66a682f8230 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyld.cpp
@@ -31,9 +31,7 @@ using namespace llvm::object;
 
 namespace {
 
-enum RuntimeDyldErrorCode {
-  GenericRTDyldError = 1
-};
+enum RuntimeDyldErrorCode { GenericRTDyldError = 1 };
 
 // FIXME: This class is only here to support the transition to llvm::Error. It
 // will be removed once this transition is complete. Clients should prefer to
@@ -44,19 +42,18 @@ class RuntimeDyldErrorCategory : public std::error_category {
 
   std::string message(int Condition) const override {
     switch (static_cast<RuntimeDyldErrorCode>(Condition)) {
-      case GenericRTDyldError: return "Generic RuntimeDyld error";
+    case GenericRTDyldError:
+      return "Generic RuntimeDyld error";
     }
     llvm_unreachable("Unrecognized RuntimeDyldErrorCode");
   }
 };
 
-}
+} // namespace
 
 char RuntimeDyldError::ID = 0;
 
-void RuntimeDyldError::log(raw_ostream &OS) const {
-  OS << ErrMsg << "\n";
-}
+void RuntimeDyldError::log(raw_ostream &OS) const { OS << ErrMsg << "\n"; }
 
 std::error_code RuntimeDyldError::convertToErrorCode() const {
   static RuntimeDyldErrorCategory RTDyldErrorCategory;
@@ -73,9 +70,7 @@ namespace llvm {
 
 void RuntimeDyldImpl::registerEHFrames() {}
 
-void RuntimeDyldImpl::deregisterEHFrames() {
-  MemMgr.deregisterEHFrames();
-}
+void RuntimeDyldImpl::deregisterEHFrames() { MemMgr.deregisterEHFrames(); }
 
 #ifndef NDEBUG
 static void dumpSectionMemory(const SectionEntry &S, StringRef State) {
@@ -96,8 +91,9 @@ static void dumpSectionMemory(const SectionEntry &S, StringRef State) {
   unsigned BytesRemaining = S.getSize();
 
   if (StartPadding) {
-    dbgs() << "\n" << format("0x%016" PRIx64,
-                             LoadAddr & ~(uint64_t)(ColsPerRow - 1)) << ":";
+    dbgs() << "\n"
+           << format("0x%016" PRIx64, LoadAddr & ~(uint64_t)(ColsPerRow - 1))
+           << ":";
     while (StartPadding--)
       dbgs() << "   ";
   }
@@ -169,8 +165,7 @@ void RuntimeDyldImpl::mapSectionAddress(const void *LocalAddress,
   llvm_unreachable("Attempting to remap address of unknown section!");
 }
 
-static Error getOffset(const SymbolRef &Sym, SectionRef Sec,
-                       uint64_t &Result) {
+static Error getOffset(const SymbolRef &Sym, SectionRef Sec, uint64_t &Result) {
   Expected<uint64_t> AddressOrErr = Sym.getAddress();
   if (!AddressOrErr)
     return AddressOrErr.takeError();
@@ -381,8 +376,8 @@ RuntimeDyldImpl::loadObjectImpl(const object::ObjectFile &Obj) {
 
     bool IsCode = RelocatedSection->isText();
     unsigned SectionID = 0;
-    if (auto SectionIDOrErr = findOrEmitSection(Obj, *RelocatedSection, IsCode,
-                                                LocalSections))
+    if (auto SectionIDOrErr =
+            findOrEmitSection(Obj, *RelocatedSection, IsCode, LocalSections))
       SectionID = *SectionIDOrErr;
     else
       return SectionIDOrErr.takeError();
@@ -390,7 +385,8 @@ RuntimeDyldImpl::loadObjectImpl(const object::ObjectFile &Obj) {
     LLVM_DEBUG(dbgs() << "\tSectionID: " << SectionID << "\n");
 
     for (; I != E;)
-      if (auto IOrErr = processRelocationRef(SectionID, I, Obj, LocalSections, Stubs))
+      if (auto IOrErr =
+              processRelocationRef(SectionID, I, Obj, LocalSections, Stubs))
         I = *IOrErr;
       else
         return IOrErr.takeError();
@@ -412,7 +408,8 @@ RuntimeDyldImpl::loadObjectImpl(const object::ObjectFile &Obj) {
           continue;
         }
 
-        // Otherwise we will have to try a reverse lookup on the globla symbol table.
+        // Otherwise we will have to try a reverse lookup on the global symbol
+        // table.
         for (auto &GSTMapEntry : GlobalSymbolTable) {
           StringRef SymbolName = GSTMapEntry.first();
           auto &GSTEntry = GSTMapEntry.second;
@@ -450,8 +447,9 @@ RuntimeDyldImpl::loadObjectImpl(const object::ObjectFile &Obj) {
   if (auto Err = finalizeLoad(Obj, LocalSections))
     return std::move(Err);
 
-//   for (auto E : LocalSections)
-//     llvm::dbgs() << "Added: " << E.first.getRawDataRefImpl() << " -> " << E.second << "\n";
+  //   for (auto E : LocalSections)
+  //     llvm::dbgs() << "Added: " << E.first.getRawDataRefImpl() << " -> " <<
+  //     E.second << "\n";
 
   return LocalSections;
 }
@@ -498,12 +496,9 @@ static bool isReadOnlyData(const SectionRef Section) {
              (ELF::SHF_WRITE | ELF::SHF_EXECINSTR));
   if (auto *COFFObj = dyn_cast<object::COFFObjectFile>(Obj))
     return ((COFFObj->getCOFFSection(Section)->Characteristics &
-             (COFF::IMAGE_SCN_CNT_INITIALIZED_DATA
-             | COFF::IMAGE_SCN_MEM_READ
-             | COFF::IMAGE_SCN_MEM_WRITE))
-             ==
-             (COFF::IMAGE_SCN_CNT_INITIALIZED_DATA
-             | COFF::IMAGE_SCN_MEM_READ));
+             (COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ |
+              COFF::IMAGE_SCN_MEM_WRITE)) ==
+            (COFF::IMAGE_SCN_CNT_INITIALIZED_DATA | COFF::IMAGE_SCN_MEM_READ));
 
   assert(isa<MachOObjectFile>(Obj));
   return false;
@@ -515,7 +510,7 @@ static bool isZeroInit(const SectionRef Section) {
     return ELFSectionRef(Section).getType() == ELF::SHT_NOBITS;
   if (auto *COFFObj = dyn_cast<object::COFFObjectFile>(Obj))
     return COFFObj->getCOFFSection(Section)->Characteristics &
-            COFF::IMAGE_SCN_CNT_UNINITIALIZED_DATA;
+           COFF::IMAGE_SCN_CNT_UNINITIALIZED_DATA;
 
   auto *MachO = cast<MachOObjectFile>(Obj);
   unsigned SectionType = MachO->getSectionType(Section);
@@ -797,10 +792,9 @@ Error RuntimeDyldImpl::emitCommonSymbols(const ObjectFile &Obj,
   return Error::success();
 }
 
-Expected<unsigned>
-RuntimeDyldImpl::emitSection(const ObjectFile &Obj,
-                             const SectionRef &Section,
-                             bool IsCode) {
+Expected<unsigned> RuntimeDyldImpl::emitSection(const ObjectFile &Obj,
+                                                const SectionRef &Section,
+                                                bool IsCode) {
   StringRef data;
   Align Alignment = Section.getAlignment();
 
@@ -836,7 +830,8 @@ RuntimeDyldImpl::emitSection(const ObjectFile &Obj,
   // grab a reference to them.
   if (!IsVirtual && !IsZeroInit) {
     // In either case, set the location of the unrelocated section in memory,
-    // since we still process relocations for it even if we're not applying them.
+    // since we still process relocations for it even if we're not applying
+    // them.
     if (Expected<StringRef> E = Section.getContents())
       data = *E;
     else
@@ -925,8 +920,7 @@ RuntimeDyldImpl::emitSection(const ObjectFile &Obj,
 
 Expected<unsigned>
 RuntimeDyldImpl::findOrEmitSection(const ObjectFile &Obj,
-                                   const SectionRef &Section,
-                                   bool IsCode,
+                                   const SectionRef &Section, bool IsCode,
                                    ObjSectionToIDMap &LocalSections) {
 
   unsigned SectionID = 0;
@@ -975,11 +969,14 @@ uint8_t *RuntimeDyldImpl::createStubFunction(uint8_t *Addr,
     // since symbol lookup won't necessarily find a handy, in-range,
     // PLT stub for functions which could be anywhere.
     // Stub can use ip0 (== x16) to calculate address
-    writeBytesUnaligned(0xd2e00010, Addr,    4); // movz ip0, #:abs_g3:<addr>
-    writeBytesUnaligned(0xf2c00010, Addr+4,  4); // movk ip0, #:abs_g2_nc:<addr>
-    writeBytesUnaligned(0xf2a00010, Addr+8,  4); // movk ip0, #:abs_g1_nc:<addr>
-    writeBytesUnaligned(0xf2800010, Addr+12, 4); // movk ip0, #:abs_g0_nc:<addr>
-    writeBytesUnaligned(0xd61f0200, Addr+16, 4); // br ip0
+    writeBytesUnaligned(0xd2e00010, Addr, 4); // movz ip0, #:abs_g3:<addr>
+    writeBytesUnaligned(0xf2c00010, Addr + 4,
+                        4); // movk ip0, #:abs_g2_nc:<addr>
+    writeBytesUnaligned(0xf2a00010, Addr + 8,
+                        4); // movk ip0, #:abs_g1_nc:<addr>
+    writeBytesUnaligned(0xf2800010, Addr + 12,
+                        4); // movk ip0, #:abs_g0_nc:<addr>
+    writeBytesUnaligned(0xd61f0200, Addr + 16, 4); // br ip0
 
     return Addr;
   } else if (Arch == Triple::arm || Arch == Triple::armeb) {
@@ -1033,42 +1030,42 @@ uint8_t *RuntimeDyldImpl::createStubFunction(uint8_t *Addr,
     // Depending on which version of the ELF ABI is in use, we need to
     // generate one of two variants of the stub.  They both start with
     // the same sequence to load the target address into r12.
-    writeInt32BE(Addr,    0x3D800000); // lis   r12, highest(addr)
-    writeInt32BE(Addr+4,  0x618C0000); // ori   r12, higher(addr)
-    writeInt32BE(Addr+8,  0x798C07C6); // sldi  r12, r12, 32
-    writeInt32BE(Addr+12, 0x658C0000); // oris  r12, r12, h(addr)
-    writeInt32BE(Addr+16, 0x618C0000); // ori   r12, r12, l(addr)
+    writeInt32BE(Addr, 0x3D800000);      // lis   r12, highest(addr)
+    writeInt32BE(Addr + 4, 0x618C0000);  // ori   r12, higher(addr)
+    writeInt32BE(Addr + 8, 0x798C07C6);  // sldi  r12, r12, 32
+    writeInt32BE(Addr + 12, 0x658C0000); // oris  r12, r12, h(addr)
+    writeInt32BE(Addr + 16, 0x618C0000); // ori   r12, r12, l(addr)
     if (AbiVariant == 2) {
       // PowerPC64 stub ELFv2 ABI: The address points to the function itself.
       // The address is already in r12 as required by the ABI.  Branch to it.
-      writeInt32BE(Addr+20, 0xF8410018); // std   r2,  24(r1)
-      writeInt32BE(Addr+24, 0x7D8903A6); // mtctr r12
-      writeInt32BE(Addr+28, 0x4E800420); // bctr
+      writeInt32BE(Addr + 20, 0xF8410018); // std   r2,  24(r1)
+      writeInt32BE(Addr + 24, 0x7D8903A6); // mtctr r12
+      writeInt32BE(Addr + 28, 0x4E800420); // bctr
     } else {
       // PowerPC64 stub ELFv1 ABI: The address points to a function descriptor.
       // Load the function address on r11 and sets it to control register. Also
       // loads the function TOC in r2 and environment pointer to r11.
-      writeInt32BE(Addr+20, 0xF8410028); // std   r2,  40(r1)
-      writeInt32BE(Addr+24, 0xE96C0000); // ld    r11, 0(r12)
-      writeInt32BE(Addr+28, 0xE84C0008); // ld    r2,  0(r12)
-      writeInt32BE(Addr+32, 0x7D6903A6); // mtctr r11
-      writeInt32BE(Addr+36, 0xE96C0010); // ld    r11, 16(r2)
-      writeInt32BE(Addr+40, 0x4E800420); // bctr
+      writeInt32BE(Addr + 20, 0xF8410028); // std   r2,  40(r1)
+      writeInt32BE(Addr + 24, 0xE96C0000); // ld    r11, 0(r12)
+      writeInt32BE(Addr + 28, 0xE84C0008); // ld    r2,  0(r12)
+      writeInt32BE(Addr + 32, 0x7D6903A6); // mtctr r11
+      writeInt32BE(Addr + 36, 0xE96C0010); // ld    r11, 16(r2)
+      writeInt32BE(Addr + 40, 0x4E800420); // bctr
     }
     return Addr;
   } else if (Arch == Triple::systemz) {
-    writeInt16BE(Addr,    0xC418);     // lgrl %r1,.+8
-    writeInt16BE(Addr+2,  0x0000);
-    writeInt16BE(Addr+4,  0x0004);
-    writeInt16BE(Addr+6,  0x07F1);     // brc 15,%r1
+    writeInt16BE(Addr, 0xC418); // lgrl %r1,.+8
+    writeInt16BE(Addr + 2, 0x0000);
+    writeInt16BE(Addr + 4, 0x0004);
+    writeInt16BE(Addr + 6, 0x07F1); // brc 15,%r1
     // 8-byte address stored at Addr + 8
     return Addr;
   } else if (Arch == Triple::x86_64) {
-    *Addr      = 0xFF; // jmp
-    *(Addr+1)  = 0x25; // rip
+    *Addr = 0xFF;       // jmp
+    *(Addr + 1) = 0x25; // rip
     // 32-bit PC-relative address of the GOT entry will be stored at Addr+2
   } else if (Arch == Triple::x86) {
-    *Addr      = 0xE9; // 32-bit pc-relative jump.
+    *Addr = 0xE9; // 32-bit pc-relative jump.
   }
   return Addr;
 }
@@ -1130,8 +1127,8 @@ void RuntimeDyldImpl::applyExternalSymbolRelocations(
         // We found the symbol in our global table.  It was probably in a
         // Module that we loaded previously.
         const auto &SymInfo = Loc->second;
-        Addr = getSectionLoadAddress(SymInfo.getSectionID()) +
-               SymInfo.getOffset();
+        Addr =
+            getSectionLoadAddress(SymInfo.getSectionID()) + SymInfo.getOffset();
         Flags = SymInfo.getFlags();
       }
 
@@ -1145,8 +1142,8 @@ void RuntimeDyldImpl::applyExternalSymbolRelocations(
       if (Addr != UINT64_MAX) {
 
         // Tweak the address based on the symbol flags if necessary.
-        // For example, this is used by RuntimeDyldMachOARM to toggle the low bit
-        // if the target symbol is Thumb.
+        // For example, this is used by RuntimeDyldMachOARM to toggle the low
+        // bit if the target symbol is Thumb.
         Addr = modifyAddressBasedOnFlags(Addr, Flags);
 
         LLVM_DEBUG(dbgs() << "Resolving relocations Name: " << Name << "\t"
@@ -1271,7 +1268,7 @@ void RuntimeDyldImpl::finalizeAsync(
 // RuntimeDyld class implementation
 
 uint64_t RuntimeDyld::LoadedObjectInfo::getSectionLoadAddress(
-                                          const object::SectionRef &Sec) const {
+    const object::SectionRef &Sec) const {
 
   auto I = ObjSecToIDMap.find(Sec);
   if (I != ObjSecToIDMap.end())
@@ -1307,13 +1304,12 @@ RuntimeDyld::RuntimeDyld(RuntimeDyld::MemoryManager &MemMgr,
 
 RuntimeDyld::~RuntimeDyld() = default;
 
-static std::unique_ptr<RuntimeDyldCOFF>
-createRuntimeDyldCOFF(
-                     Triple::ArchType Arch, RuntimeDyld::MemoryManager &MM,
-                     JITSymbolResolver &Resolver, bool ProcessAllSections,
-                     RuntimeDyld::NotifyStubEmittedFunction NotifyStubEmitted) {
+static std::unique_ptr<RuntimeDyldCOFF> createRuntimeDyldCOFF(
+    Triple::ArchType Arch, RuntimeDyld::MemoryManager &MM,
+    JITSymbolResolver &Resolver, bool ProcessAllSections,
+    RuntimeDyld::NotifyStubEmittedFunction NotifyStubEmitted) {
   std::unique_ptr<RuntimeDyldCOFF> Dyld =
-    RuntimeDyldCOFF::create(Arch, MM, Resolver);
+      RuntimeDyldCOFF::create(Arch, MM, Resolver);
   Dyld->setProcessAllSections(ProcessAllSections);
   Dyld->setNotifyStubEmitted(std::move(NotifyStubEmitted));
   return Dyld;
@@ -1330,14 +1326,12 @@ createRuntimeDyldELF(Triple::ArchType Arch, RuntimeDyld::MemoryManager &MM,
   return Dyld;
 }
 
-static std::unique_ptr<RuntimeDyldMachO>
-createRuntimeDyldMachO(
-                     Triple::ArchType Arch, RuntimeDyld::MemoryManager &MM,
-                     JITSymbolResolver &Resolver,
-                     bool ProcessAllSections,
-                     RuntimeDyld::NotifyStubEmittedFunction NotifyStubEmitted) {
+static std::unique_ptr<RuntimeDyldMachO> createRuntimeDyldMachO(
+    Triple::ArchType Arch, RuntimeDyld::MemoryManager &MM,
+    JITSymbolResolver &Resolver, bool ProcessAllSections,
+    RuntimeDyld::NotifyStubEmittedFunction NotifyStubEmitted) {
   std::unique_ptr<RuntimeDyldMachO> Dyld =
-    RuntimeDyldMachO::create(Arch, MM, Resolver);
+      RuntimeDyldMachO::create(Arch, MM, Resolver);
   Dyld->setProcessAllSections(ProcessAllSections);
   Dyld->setNotifyStubEmitted(std::move(NotifyStubEmitted));
   return Dyld;
@@ -1347,18 +1341,17 @@ std::unique_ptr<RuntimeDyld::LoadedObjectInfo>
 RuntimeDyld::loadObject(const ObjectFile &Obj) {
   if (!Dyld) {
     if (Obj.isELF())
-      Dyld =
-          createRuntimeDyldELF(static_cast<Triple::ArchType>(Obj.getArch()),
-                               MemMgr, Resolver, ProcessAllSections,
-                               std::move(NotifyStubEmitted));
+      Dyld = createRuntimeDyldELF(static_cast<Triple::ArchType>(Obj.getArch()),
+                                  MemMgr, Resolver, ProcessAllSections,
+                                  std::move(NotifyStubEmitted));
     else if (Obj.isMachO())
       Dyld = createRuntimeDyldMachO(
-               static_cast<Triple::ArchType>(Obj.getArch()), MemMgr, Resolver,
-               ProcessAllSections, std::move(NotifyStubEmitted));
+          static_cast<Triple::ArchType>(Obj.getArch()), MemMgr, Resolver,
+          ProcessAllSections, std::move(NotifyStubEmitted));
     else if (Obj.isCOFF())
-      Dyld = createRuntimeDyldCOFF(
-               static_cast<Triple::ArchType>(Obj.getArch()), MemMgr, Resolver,
-               ProcessAllSections, std::move(NotifyStubEmitted));
+      Dyld = createRuntimeDyldCOFF(static_cast<Triple::ArchType>(Obj.getArch()),
+                                   MemMgr, Resolver, ProcessAllSections,
+                                   std::move(NotifyStubEmitted));
     else
       report_fatal_error("Incompatible object format!");
   }
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCOFF.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCOFF.cpp
index 9255311f992d032..26182fe8c8c1a0b 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCOFF.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCOFF.cpp
@@ -41,7 +41,7 @@ class LoadedCOFFObjectInfo final
     return OwningBinary<ObjectFile>();
   }
 };
-}
+} // namespace
 
 namespace llvm {
 
@@ -50,7 +50,8 @@ llvm::RuntimeDyldCOFF::create(Triple::ArchType Arch,
                               RuntimeDyld::MemoryManager &MemMgr,
                               JITSymbolResolver &Resolver) {
   switch (Arch) {
-  default: llvm_unreachable("Unsupported target for RuntimeDyldCOFF.");
+  default:
+    llvm_unreachable("Unsupported target for RuntimeDyldCOFF.");
   case Triple::x86:
     return std::make_unique<RuntimeDyldCOFFI386>(MemMgr, Resolver);
   case Triple::thumb:
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldChecker.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldChecker.cpp
index ab561ecd0057984..43f435a98cb4217 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldChecker.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldChecker.cpp
@@ -413,8 +413,8 @@ class RuntimeDyldCheckerExprEval {
 
     uint64_t StubAddr;
     std::string ErrorMsg;
-    std::tie(StubAddr, ErrorMsg) = Checker.getSectionAddr(
-        FileName, SectionName, PCtx.IsInsideLoad);
+    std::tie(StubAddr, ErrorMsg) =
+        Checker.getSectionAddr(FileName, SectionName, PCtx.IsInsideLoad);
 
     if (ErrorMsg != "")
       return std::make_pair(EvalResult(ErrorMsg), "");
@@ -586,7 +586,8 @@ class RuntimeDyldCheckerExprEval {
     else
       return std::make_pair(
           unexpectedToken(Expr, Expr,
-                          "expected '(', '*', identifier, or number"), "");
+                          "expected '(', '*', identifier, or number"),
+          "");
 
     if (SubExprResult.hasError())
       return std::make_pair(SubExprResult, RemainingExpr);
@@ -798,7 +799,7 @@ uint64_t RuntimeDyldCheckerImpl::readMemoryAtAddr(uint64_t SrcAddr,
                                                   unsigned Size) const {
   uintptr_t PtrSizedAddr = static_cast<uintptr_t>(SrcAddr);
   assert(PtrSizedAddr == SrcAddr && "Linker memory pointer out-of-range.");
-  void *Ptr = reinterpret_cast<void*>(PtrSizedAddr);
+  void *Ptr = reinterpret_cast<void *>(PtrSizedAddr);
 
   switch (Size) {
   case 1:
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCheckerImpl.h b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCheckerImpl.h
index f564b0035bffdec..cf042777502e86a 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCheckerImpl.h
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldCheckerImpl.h
@@ -17,8 +17,7 @@ class RuntimeDyldCheckerImpl {
   friend class RuntimeDyldChecker;
   friend class RuntimeDyldCheckerExprEval;
 
-  using IsSymbolValidFunction =
-    RuntimeDyldChecker::IsSymbolValidFunction;
+  using IsSymbolValidFunction = RuntimeDyldChecker::IsSymbolValidFunction;
   using GetSymbolInfoFunction = RuntimeDyldChecker::GetSymbolInfoFunction;
   using GetSectionInfoFunction = RuntimeDyldChecker::GetSectionInfoFunction;
   using GetStubInfoFunction = RuntimeDyldChecker::GetStubInfoFunction;
@@ -36,7 +35,6 @@ class RuntimeDyldCheckerImpl {
   bool checkAllRulesInBuffer(StringRef RulePrefix, MemoryBuffer *MemBuf) const;
 
 private:
-
   // StubMap typedefs.
 
   Expected<JITSymbolResolver::LookupResult>
@@ -69,6 +67,6 @@ class RuntimeDyldCheckerImpl {
   MCInstPrinter *InstPrinter;
   llvm::raw_ostream &ErrStream;
 };
-}
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp
index d439b1b4ebfbfbf..67a69ed6146a2c8 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.cpp
@@ -74,13 +74,9 @@ template <class ELFT> class DyldELFObject : public ELFObjectFile<ELFT> {
     return (isa<ELFObjectFile<ELFT>>(v) &&
             classof(cast<ELFObjectFile<ELFT>>(v)));
   }
-  static bool classof(const ELFObjectFile<ELFT> *v) {
-    return v->isDyldType();
-  }
+  static bool classof(const ELFObjectFile<ELFT> *v) { return v->isDyldType(); }
 };
 
-
-
 // The MemoryBuffer passed into this constructor is just a wrapper around the
 // actual memory.  Ultimately, the Binary parent class will take ownership of
 // this MemoryBuffer object but not the underlying memory.
@@ -161,8 +157,8 @@ createRTDyldELFObject(MemoryBufferRef Buffer, const ObjectFile &SourceObject,
 
     if (*NameOrErr != "") {
       DataRefImpl ShdrRef = Sec.getRawDataRefImpl();
-      Elf_Shdr *shdr = const_cast<Elf_Shdr *>(
-          reinterpret_cast<const Elf_Shdr *>(ShdrRef.p));
+      Elf_Shdr *shdr =
+          const_cast<Elf_Shdr *>(reinterpret_cast<const Elf_Shdr *>(ShdrRef.p));
 
       if (uint64_t SecLoadAddr = L.getSectionLoadAddress(*SI)) {
         // This assumes that the address passed in matches the target address
@@ -181,7 +177,7 @@ createELFDebugObject(const ObjectFile &Obj, const LoadedELFObjectInfo &L) {
   assert(Obj.isELF() && "Not an ELF object file.");
 
   std::unique_ptr<MemoryBuffer> Buffer =
-    MemoryBuffer::getMemBufferCopy(Obj.getData(), Obj.getFileName());
+      MemoryBuffer::getMemBufferCopy(Obj.getData(), Obj.getFileName());
 
   Expected<std::unique_ptr<ObjectFile>> DebugObj(nullptr);
   handleAllErrors(DebugObj.takeError());
@@ -681,12 +677,10 @@ Error RuntimeDyldELF::findPPC64TOCSection(const ELFObjectFileBase &Obj,
       return NameOrErr.takeError();
     StringRef SectionName = *NameOrErr;
 
-    if (SectionName == ".got"
-        || SectionName == ".toc"
-        || SectionName == ".tocbss"
-        || SectionName == ".plt") {
+    if (SectionName == ".got" || SectionName == ".toc" ||
+        SectionName == ".tocbss" || SectionName == ".plt") {
       if (auto SectionIDOrErr =
-            findOrEmitSection(Obj, Section, false, LocalSections))
+              findOrEmitSection(Obj, Section, false, LocalSections))
         Rel.SectionID = *SectionIDOrErr;
       else
         return SectionIDOrErr.takeError();
@@ -769,8 +763,8 @@ Error RuntimeDyldELF::findOPDEntrySection(const ELFObjectFileBase &Obj,
       assert(TSI != Obj.section_end() && "TSI should refer to a valid section");
 
       bool IsCode = TSI->isText();
-      if (auto SectionIDOrErr = findOrEmitSection(Obj, *TSI, IsCode,
-                                                  LocalSections))
+      if (auto SectionIDOrErr =
+              findOrEmitSection(Obj, *TSI, IsCode, LocalSections))
         Rel.SectionID = *SectionIDOrErr;
       else
         return SectionIDOrErr.takeError();
@@ -792,7 +786,7 @@ static inline uint16_t applyPPChi(uint64_t value) {
   return (value >> 16) & 0xffff;
 }
 
-static inline uint16_t applyPPCha (uint64_t value) {
+static inline uint16_t applyPPCha(uint64_t value) {
   return ((value + 0x8000) >> 16) & 0xffff;
 }
 
@@ -800,7 +794,7 @@ static inline uint16_t applyPPChigher(uint64_t value) {
   return (value >> 32) & 0xffff;
 }
 
-static inline uint16_t applyPPChighera (uint64_t value) {
+static inline uint16_t applyPPChighera(uint64_t value) {
   return ((value + 0x8000) >> 32) & 0xffff;
 }
 
@@ -808,7 +802,7 @@ static inline uint16_t applyPPChighest(uint64_t value) {
   return (value >> 48) & 0xffff;
 }
 
-static inline uint16_t applyPPChighesta (uint64_t value) {
+static inline uint16_t applyPPChighesta(uint64_t value) {
   return ((value + 0x8000) >> 48) & 0xffff;
 }
 
@@ -1003,7 +997,8 @@ void RuntimeDyldELF::resolveBPFRelocation(const SectionEntry &Section,
   case ELF::R_BPF_64_ABS32: {
     Value += Addend;
     assert(Value <= UINT32_MAX);
-    write(isBE, Section.getAddressWithOffset(Offset), static_cast<uint32_t>(Value));
+    write(isBE, Section.getAddressWithOffset(Offset),
+          static_cast<uint32_t>(Value));
     LLVM_DEBUG(dbgs() << "Writing " << format("%p", Value) << " at "
                       << format("%p\n", Section.getAddressWithOffset(Offset)));
     break;
@@ -1081,11 +1076,14 @@ void RuntimeDyldELF::resolveRelocation(const SectionEntry &Section,
   }
 }
 
-void *RuntimeDyldELF::computePlaceholderAddress(unsigned SectionID, uint64_t Offset) const {
+void *RuntimeDyldELF::computePlaceholderAddress(unsigned SectionID,
+                                                uint64_t Offset) const {
   return (void *)(Sections[SectionID].getObjAddress() + Offset);
 }
 
-void RuntimeDyldELF::processSimpleRelocation(unsigned SectionID, uint64_t Offset, unsigned RelType, RelocationValueRef Value) {
+void RuntimeDyldELF::processSimpleRelocation(unsigned SectionID,
+                                             uint64_t Offset, unsigned RelType,
+                                             RelocationValueRef Value) {
   RelocationEntry RE(SectionID, Offset, RelType, Value.Addend, Value.Offset);
   if (Value.SymbolName)
     addRelocationForSymbol(RE, Value.SymbolName);
@@ -1213,8 +1211,7 @@ void RuntimeDyldELF::resolveAArch64Branch(unsigned SectionID,
   }
 }
 
-Expected<relocation_iterator>
-RuntimeDyldELF::processRelocationRef(
+Expected<relocation_iterator> RuntimeDyldELF::processRelocationRef(
     unsigned SectionID, relocation_iterator RelI, const ObjectFile &O,
     ObjSectionToIDMap &ObjSectionToID, StubMap &Stubs) {
   const auto &Obj = cast<ELFObjectFileBase>(O);
@@ -1276,8 +1273,8 @@ RuntimeDyldELF::processRelocationRef(
         llvm_unreachable("Symbol section not found, bad object file format!");
       LLVM_DEBUG(dbgs() << "\t\tThis is section symbol\n");
       bool isCode = si->isText();
-      if (auto SectionIDOrErr = findOrEmitSection(Obj, (*si), isCode,
-                                                  ObjSectionToID))
+      if (auto SectionIDOrErr =
+              findOrEmitSection(Obj, (*si), isCode, ObjSectionToID))
         Value.SectionID = *SectionIDOrErr;
       else
         return SectionIDOrErr.takeError();
@@ -1330,7 +1327,7 @@ RuntimeDyldELF::processRelocationRef(
     }
   } else if (Arch == Triple::arm) {
     if (RelType == ELF::R_ARM_PC24 || RelType == ELF::R_ARM_CALL ||
-      RelType == ELF::R_ARM_JUMP24) {
+        RelType == ELF::R_ARM_JUMP24) {
       // This is an ARM branch relocation, need to use a stub function.
       LLVM_DEBUG(dbgs() << "\t\tThis is an ARM branch relocation.\n");
       SectionEntry &Section = Sections[SectionID];
@@ -1356,21 +1353,24 @@ RuntimeDyldELF::processRelocationRef(
         else
           addRelocationForSection(RE, Value.SectionID);
 
-        resolveRelocation(Section, Offset, reinterpret_cast<uint64_t>(
-                                               Section.getAddressWithOffset(
-                                                   Section.getStubOffset())),
-                          RelType, 0);
+        resolveRelocation(
+            Section, Offset,
+            reinterpret_cast<uint64_t>(
+                Section.getAddressWithOffset(Section.getStubOffset())),
+            RelType, 0);
         Section.advanceStubOffset(getMaxStubSize());
       }
     } else {
-      uint32_t *Placeholder =
-        reinterpret_cast<uint32_t*>(computePlaceholderAddress(SectionID, Offset));
+      uint32_t *Placeholder = reinterpret_cast<uint32_t *>(
+          computePlaceholderAddress(SectionID, Offset));
       if (RelType == ELF::R_ARM_PREL31 || RelType == ELF::R_ARM_TARGET1 ||
           RelType == ELF::R_ARM_ABS32) {
         Value.Addend += *Placeholder;
-      } else if (RelType == ELF::R_ARM_MOVW_ABS_NC || RelType == ELF::R_ARM_MOVT_ABS) {
+      } else if (RelType == ELF::R_ARM_MOVW_ABS_NC ||
+                 RelType == ELF::R_ARM_MOVT_ABS) {
         // See ELF for ARM documentation
-        Value.Addend += (int16_t)((*Placeholder & 0xFFF) | (((*Placeholder >> 16) & 0xF) << 12));
+        Value.Addend += (int16_t)((*Placeholder & 0xFFF) |
+                                  (((*Placeholder >> 16) & 0xF) << 12));
       }
       processSimpleRelocation(SectionID, Offset, RelType, Value);
     }
@@ -1467,8 +1467,8 @@ RuntimeDyldELF::processRelocationRef(
   } else if (IsMipsN32ABI || IsMipsN64ABI) {
     uint32_t r_type = RelType & 0xff;
     RelocationEntry RE(SectionID, Offset, RelType, Value.Addend);
-    if (r_type == ELF::R_MIPS_CALL16 || r_type == ELF::R_MIPS_GOT_PAGE
-        || r_type == ELF::R_MIPS_GOT_DISP) {
+    if (r_type == ELF::R_MIPS_CALL16 || r_type == ELF::R_MIPS_GOT_PAGE ||
+        r_type == ELF::R_MIPS_GOT_DISP) {
       StringMap<uint64_t>::iterator i = GOTSymbolOffsets.find(TargetName);
       if (i != GOTSymbolOffsets.end())
         RE.SymOffset = i->second;
@@ -1573,7 +1573,7 @@ RuntimeDyldELF::processRelocationRef(
         } else {
           // In the ELFv2 ABI, a function symbol may provide a local entry
           // point, which must be used for direct calls.
-          if (Value.SectionID == SectionID){
+          if (Value.SectionID == SectionID) {
             uint8_t SymOther = Symbol->getOther();
             Value.Addend += ELF::decodePPC64LocalEntryOffset(SymOther);
           }
@@ -1585,7 +1585,7 @@ RuntimeDyldELF::processRelocationRef(
         if (SignExtend64<26>(delta) != delta) {
           RangeOverflow = true;
         } else if ((AbiVariant != 2) ||
-                   (AbiVariant == 2  && Value.SectionID == SectionID)) {
+                   (AbiVariant == 2 && Value.SectionID == SectionID)) {
           RelocationEntry RE(SectionID, Offset, RelType, Value.Addend);
           addRelocationForSection(RE, Value.SectionID);
         }
@@ -1641,10 +1641,11 @@ RuntimeDyldELF::processRelocationRef(
             addRelocationForSection(REl, Value.SectionID);
           }
 
-          resolveRelocation(Section, Offset, reinterpret_cast<uint64_t>(
-                                                 Section.getAddressWithOffset(
-                                                     Section.getStubOffset())),
-                            RelType, 0);
+          resolveRelocation(
+              Section, Offset,
+              reinterpret_cast<uint64_t>(
+                  Section.getAddressWithOffset(Section.getStubOffset())),
+              RelType, 0);
           Section.advanceStubOffset(getMaxStubSize());
         }
         if (IsExtern || (AbiVariant == 2 && Value.SectionID != SectionID)) {
@@ -1672,13 +1673,26 @@ RuntimeDyldELF::processRelocationRef(
       // that the two sections are actually the same.  Thus they cancel out
       // and we can immediately resolve the relocation right now.
       switch (RelType) {
-      case ELF::R_PPC64_TOC16: RelType = ELF::R_PPC64_ADDR16; break;
-      case ELF::R_PPC64_TOC16_DS: RelType = ELF::R_PPC64_ADDR16_DS; break;
-      case ELF::R_PPC64_TOC16_LO: RelType = ELF::R_PPC64_ADDR16_LO; break;
-      case ELF::R_PPC64_TOC16_LO_DS: RelType = ELF::R_PPC64_ADDR16_LO_DS; break;
-      case ELF::R_PPC64_TOC16_HI: RelType = ELF::R_PPC64_ADDR16_HI; break;
-      case ELF::R_PPC64_TOC16_HA: RelType = ELF::R_PPC64_ADDR16_HA; break;
-      default: llvm_unreachable("Wrong relocation type.");
+      case ELF::R_PPC64_TOC16:
+        RelType = ELF::R_PPC64_ADDR16;
+        break;
+      case ELF::R_PPC64_TOC16_DS:
+        RelType = ELF::R_PPC64_ADDR16_DS;
+        break;
+      case ELF::R_PPC64_TOC16_LO:
+        RelType = ELF::R_PPC64_ADDR16_LO;
+        break;
+      case ELF::R_PPC64_TOC16_LO_DS:
+        RelType = ELF::R_PPC64_ADDR16_LO_DS;
+        break;
+      case ELF::R_PPC64_TOC16_HI:
+        RelType = ELF::R_PPC64_ADDR16_HI;
+        break;
+      case ELF::R_PPC64_TOC16_HA:
+        RelType = ELF::R_PPC64_ADDR16_HA;
+        break;
+      default:
+        llvm_unreachable("Wrong relocation type.");
       }
 
       RelocationValueRef TOCValue;
@@ -1762,17 +1776,17 @@ RuntimeDyldELF::processRelocationRef(
       // PLT and this relocation makes a PC-relative call into the PLT.  The PLT
       // entry will then jump to an address provided by the GOT.  On first call,
       // the
-      // GOT address will point back into PLT code that resolves the symbol. After
-      // the first call, the GOT entry points to the actual function.
+      // GOT address will point back into PLT code that resolves the symbol.
+      // After the first call, the GOT entry points to the actual function.
       //
       // For local functions we're ignoring all of that here and just replacing
-      // the PLT32 relocation type with PC32, which will translate the relocation
-      // into a PC-relative call directly to the function. For external symbols we
-      // can't be sure the function will be within 2^32 bytes of the call site, so
-      // we need to create a stub, which calls into the GOT.  This case is
-      // equivalent to the usual PLT implementation except that we use the stub
-      // mechanism in RuntimeDyld (which puts stubs at the end of the section)
-      // rather than allocating a PLT section.
+      // the PLT32 relocation type with PC32, which will translate the
+      // relocation into a PC-relative call directly to the function. For
+      // external symbols we can't be sure the function will be within 2^32
+      // bytes of the call site, so we need to create a stub, which calls into
+      // the GOT.  This case is equivalent to the usual PLT implementation
+      // except that we use the stub mechanism in RuntimeDyld (which puts stubs
+      // at the end of the section) rather than allocating a PLT section.
       if (Value.SymbolName && MemMgr.allowStubAllocation()) {
         // This is a call to an external function.
         // Look for an existing stub.
@@ -1861,10 +1875,12 @@ RuntimeDyldELF::processRelocationRef(
       (void)allocateGOTEntries(0);
       processSimpleRelocation(SectionID, Offset, RelType, Value);
     } else if (RelType == ELF::R_X86_64_PC32) {
-      Value.Addend += support::ulittle32_t::ref(computePlaceholderAddress(SectionID, Offset));
+      Value.Addend += support::ulittle32_t::ref(
+          computePlaceholderAddress(SectionID, Offset));
       processSimpleRelocation(SectionID, Offset, RelType, Value);
     } else if (RelType == ELF::R_X86_64_PC64) {
-      Value.Addend += support::ulittle64_t::ref(computePlaceholderAddress(SectionID, Offset));
+      Value.Addend += support::ulittle64_t::ref(
+          computePlaceholderAddress(SectionID, Offset));
       processSimpleRelocation(SectionID, Offset, RelType, Value);
     } else if (RelType == ELF::R_X86_64_GOTTPOFF) {
       processX86_64GOTTPOFFRelocation(SectionID, Offset, Value, Addend);
@@ -1880,7 +1896,8 @@ RuntimeDyldELF::processRelocationRef(
     }
   } else {
     if (Arch == Triple::x86) {
-      Value.Addend += support::ulittle32_t::ref(computePlaceholderAddress(SectionID, Offset));
+      Value.Addend += support::ulittle32_t::ref(
+          computePlaceholderAddress(SectionID, Offset));
     }
     processSimpleRelocation(SectionID, Offset, RelType, Value);
   }
@@ -2293,7 +2310,8 @@ RelocationEntry RuntimeDyldELF::computeGOTOffsetRE(uint64_t GOTOffset,
   return RelocationEntry(GOTSectionID, GOTOffset, Type, SymbolOffset);
 }
 
-void RuntimeDyldELF::processNewSymbol(const SymbolRef &ObjSymbol, SymbolTableEntry& Symbol) {
+void RuntimeDyldELF::processNewSymbol(const SymbolRef &ObjSymbol,
+                                      SymbolTableEntry &Symbol) {
   // This should never return an error as `processNewSymbol` wouldn't have been
   // called if getFlags() returned an error before.
   auto ObjSymbolFlags = cantFail(ObjSymbol.getFlags());
@@ -2319,7 +2337,7 @@ void RuntimeDyldELF::processNewSymbol(const SymbolRef &ObjSymbol, SymbolTableEnt
 }
 
 Error RuntimeDyldELF::finalizeLoad(const ObjectFile &Obj,
-                                  ObjSectionToIDMap &SectionMap) {
+                                   ObjSectionToIDMap &SectionMap) {
   if (IsMipsO32ABI)
     if (!PendingRelocs.empty())
       return make_error<RuntimeDyldError>("Can't find matching LO16 reloc");
@@ -2533,20 +2551,18 @@ bool RuntimeDyldELF::relocationNeedsGot(const RelocationRef &R) const {
 
   if (Arch == Triple::x86_64)
     return RelTy == ELF::R_X86_64_GOTPCREL ||
-           RelTy == ELF::R_X86_64_GOTPCRELX ||
-           RelTy == ELF::R_X86_64_GOT64 ||
+           RelTy == ELF::R_X86_64_GOTPCRELX || RelTy == ELF::R_X86_64_GOT64 ||
            RelTy == ELF::R_X86_64_REX_GOTPCRELX;
   return false;
 }
 
 bool RuntimeDyldELF::relocationNeedsStub(const RelocationRef &R) const {
   if (Arch != Triple::x86_64)
-    return true;  // Conservative answer
+    return true; // Conservative answer
 
   switch (R.getType()) {
   default:
-    return true;  // Conservative answer
-
+    return true; // Conservative answer
 
   case ELF::R_X86_64_GOTPCREL:
   case ELF::R_X86_64_GOTPCRELX:
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.h b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.h
index b73d2af8c0c4908..7356f8c5d6ba32d 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.h
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldELF.h
@@ -123,7 +123,8 @@ class RuntimeDyldELF : public RuntimeDyldImpl {
 
   // Split out common case for creating the RelocationEntry for when the
   // relocation requires no particular advanced processing.
-  void processSimpleRelocation(unsigned SectionID, uint64_t Offset, unsigned RelType, RelocationValueRef Value);
+  void processSimpleRelocation(unsigned SectionID, uint64_t Offset,
+                               unsigned RelType, RelocationValueRef Value);
 
   // Return matching *LO16 relocation (Mips specific)
   uint32_t getMatchingLoRelocation(uint32_t RelType,
@@ -222,8 +223,7 @@ class RuntimeDyldELF : public RuntimeDyldImpl {
   void resolveRelocation(const RelocationEntry &RE, uint64_t Value) override;
   Expected<relocation_iterator>
   processRelocationRef(unsigned SectionID, relocation_iterator RelI,
-                       const ObjectFile &Obj,
-                       ObjSectionToIDMap &ObjSectionToID,
+                       const ObjectFile &Obj, ObjSectionToIDMap &ObjSectionToID,
                        StubMap &Stubs) override;
   bool isCompatibleFile(const object::ObjectFile &Obj) const override;
   void registerEHFrames() override;
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldImpl.h b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldImpl.h
index 6435dc05cdc9254..60a15e552a4b836 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldImpl.h
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldImpl.h
@@ -36,8 +36,8 @@ using namespace llvm::object;
 
 namespace llvm {
 
-#define UNIMPLEMENTED_RELOC(RelType) \
-  case RelType: \
+#define UNIMPLEMENTED_RELOC(RelType)                                           \
+  case RelType:                                                                \
     return make_error<RuntimeDyldError>("Unimplemented relocation: " #RelType)
 
 /// SectionEntry - represents a section emitted into memory by the dynamic
@@ -130,8 +130,8 @@ class RelocationEntry {
   int64_t Addend;
 
   struct SectionPair {
-      uint32_t SectionA;
-      uint32_t SectionB;
+    uint32_t SectionA;
+    uint32_t SectionB;
   };
 
   /// SymOffset - Section offset of the relocation entry's symbol (used for GOT
@@ -237,6 +237,7 @@ typedef StringMap<SymbolTableEntry> RTDyldSymbolTable;
 
 class RuntimeDyldImpl {
   friend class RuntimeDyld::LoadedObjectInfo;
+
 protected:
   static const unsigned AbsoluteSymbolSection = ~0U;
 
@@ -282,7 +283,6 @@ class RuntimeDyldImpl {
   // modules.  This map is indexed by symbol name.
   StringMap<RelocationList> ExternalSymbolRelocations;
 
-
   typedef std::map<RelocationValueRef, uintptr_t> StubMap;
 
   Triple::ArchType Arch;
@@ -307,8 +307,7 @@ class RuntimeDyldImpl {
   // the end of the list while the list is being processed.
   sys::Mutex lock;
 
-  using NotifyStubEmittedFunction =
-    RuntimeDyld::NotifyStubEmittedFunction;
+  using NotifyStubEmittedFunction = RuntimeDyld::NotifyStubEmittedFunction;
   NotifyStubEmittedFunction NotifyStubEmitted;
 
   virtual unsigned getMaxStubSize() const = 0;
@@ -369,8 +368,7 @@ class RuntimeDyldImpl {
   ///        used for emits, else allocateDataSection() will be used.
   /// \return SectionID.
   Expected<unsigned> emitSection(const ObjectFile &Obj,
-                                 const SectionRef &Section,
-                                 bool IsCode);
+                                 const SectionRef &Section, bool IsCode);
 
   /// Find Section in LocalSections. If the section is not found - emit
   ///        it and store in LocalSections.
@@ -437,7 +435,8 @@ class RuntimeDyldImpl {
 
   // Hook for the subclasses to do further processing when a symbol is added to
   // the global symbol table. This function may modify the symbol table entry.
-  virtual void processNewSymbol(const SymbolRef &ObjSymbol, SymbolTableEntry& Entry) {}
+  virtual void processNewSymbol(const SymbolRef &ObjSymbol,
+                                SymbolTableEntry &Entry) {}
 
   // Return true if the relocation R may require allocating a GOT entry.
   virtual bool relocationNeedsGot(const RelocationRef &R) const {
@@ -446,15 +445,14 @@ class RuntimeDyldImpl {
 
   // Return true if the relocation R may require allocating a stub.
   virtual bool relocationNeedsStub(const RelocationRef &R) const {
-    return true;    // Conservative answer
+    return true; // Conservative answer
   }
 
 public:
   RuntimeDyldImpl(RuntimeDyld::MemoryManager &MemMgr,
                   JITSymbolResolver &Resolver)
-    : MemMgr(MemMgr), Resolver(Resolver),
-      ProcessAllSections(false), HasError(false) {
-  }
+      : MemMgr(MemMgr), Resolver(Resolver), ProcessAllSections(false),
+        HasError(false) {}
 
   virtual ~RuntimeDyldImpl();
 
@@ -488,7 +486,7 @@ class RuntimeDyldImpl {
           Sections[SectionID].getStubOffset() + getMaxStubSize());
   }
 
-  uint8_t* getSymbolLocalAddress(StringRef Name) const {
+  uint8_t *getSymbolLocalAddress(StringRef Name) const {
     // FIXME: Just look up as a function for now. Overly simple of course.
     // Work in progress.
     RTDyldSymbolTable::const_iterator pos = GlobalSymbolTable.find(Name);
@@ -534,8 +532,8 @@ class RuntimeDyldImpl {
     for (const auto &KV : GlobalSymbolTable) {
       auto SectionID = KV.second.getSectionID();
       uint64_t SectionAddr = getSectionLoadAddress(SectionID);
-      Result[KV.first()] =
-        JITEvaluatedSymbol(SectionAddr + KV.second.getOffset(), KV.second.getFlags());
+      Result[KV.first()] = JITEvaluatedSymbol(
+          SectionAddr + KV.second.getOffset(), KV.second.getFlags());
     }
 
     return Result;
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp
index 9ca76602ea18e05..f07636c4e35b6a3 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.cpp
@@ -39,7 +39,7 @@ class LoadedMachOObjectInfo final
   }
 };
 
-}
+} // namespace
 
 namespace llvm {
 
@@ -50,16 +50,12 @@ int64_t RuntimeDyldMachO::memcpyAddend(const RelocationEntry &RE) const {
   return static_cast<int64_t>(readBytesUnaligned(Src, NumBytes));
 }
 
-Expected<relocation_iterator>
-RuntimeDyldMachO::processScatteredVANILLA(
-                          unsigned SectionID, relocation_iterator RelI,
-                          const ObjectFile &BaseObjT,
-                          RuntimeDyldMachO::ObjSectionToIDMap &ObjSectionToID,
-                          bool TargetIsLocalThumbFunc) {
-  const MachOObjectFile &Obj =
-    static_cast<const MachOObjectFile&>(BaseObjT);
-  MachO::any_relocation_info RE =
-    Obj.getRelocation(RelI->getRawDataRefImpl());
+Expected<relocation_iterator> RuntimeDyldMachO::processScatteredVANILLA(
+    unsigned SectionID, relocation_iterator RelI, const ObjectFile &BaseObjT,
+    RuntimeDyldMachO::ObjSectionToIDMap &ObjSectionToID,
+    bool TargetIsLocalThumbFunc) {
+  const MachOObjectFile &Obj = static_cast<const MachOObjectFile &>(BaseObjT);
+  MachO::any_relocation_info RE = Obj.getRelocation(RelI->getRawDataRefImpl());
 
   SectionEntry &Section = Sections[SectionID];
   uint32_t RelocType = Obj.getAnyRelocationType(RE);
@@ -78,7 +74,7 @@ RuntimeDyldMachO::processScatteredVANILLA(
   bool IsCode = TargetSection.isText();
   uint32_t TargetSectionID = ~0U;
   if (auto TargetSectionIDOrErr =
-        findOrEmitSection(Obj, TargetSection, IsCode, ObjSectionToID))
+          findOrEmitSection(Obj, TargetSection, IsCode, ObjSectionToID))
     TargetSectionID = *TargetSectionIDOrErr;
   else
     return TargetSectionIDOrErr.takeError();
@@ -92,14 +88,11 @@ RuntimeDyldMachO::processScatteredVANILLA(
   return ++RelI;
 }
 
-
-Expected<RelocationValueRef>
-RuntimeDyldMachO::getRelocationValueRef(
+Expected<RelocationValueRef> RuntimeDyldMachO::getRelocationValueRef(
     const ObjectFile &BaseTObj, const relocation_iterator &RI,
     const RelocationEntry &RE, ObjSectionToIDMap &ObjSectionToID) {
 
-  const MachOObjectFile &Obj =
-      static_cast<const MachOObjectFile &>(BaseTObj);
+  const MachOObjectFile &Obj = static_cast<const MachOObjectFile &>(BaseTObj);
   MachO::any_relocation_info RelInfo =
       Obj.getRelocation(RI->getRawDataRefImpl());
   RelocationValueRef Value;
@@ -113,7 +106,7 @@ RuntimeDyldMachO::getRelocationValueRef(
     else
       return TargetNameOrErr.takeError();
     RTDyldSymbolTable::const_iterator SI =
-      GlobalSymbolTable.find(TargetName.data());
+        GlobalSymbolTable.find(TargetName.data());
     if (SI != GlobalSymbolTable.end()) {
       const auto &SymInfo = SI->second;
       Value.SectionID = SymInfo.getSectionID();
@@ -125,8 +118,8 @@ RuntimeDyldMachO::getRelocationValueRef(
   } else {
     SectionRef Sec = Obj.getAnyRelocationSection(RelInfo);
     bool IsCode = Sec.isText();
-    if (auto SectionIDOrErr = findOrEmitSection(Obj, Sec, IsCode,
-                                                ObjSectionToID))
+    if (auto SectionIDOrErr =
+            findOrEmitSection(Obj, Sec, IsCode, ObjSectionToID))
       Value.SectionID = *SectionIDOrErr;
     else
       return SectionIDOrErr.takeError();
@@ -154,9 +147,9 @@ void RuntimeDyldMachO::dumpRelocationToResolve(const RelocationEntry &RE,
   dbgs() << "resolveRelocation Section: " << RE.SectionID
          << " LocalAddress: " << format("%p", LocalAddress)
          << " FinalAddress: " << format("0x%016" PRIx64, FinalAddress)
-         << " Value: " << format("0x%016" PRIx64, Value) << " Addend: " << RE.Addend
-         << " isPCRel: " << RE.IsPCRel << " MachoType: " << RE.RelType
-         << " Size: " << (1 << RE.Size) << "\n";
+         << " Value: " << format("0x%016" PRIx64, Value)
+         << " Addend: " << RE.Addend << " isPCRel: " << RE.IsPCRel
+         << " MachoType: " << RE.RelType << " Size: " << (1 << RE.Size) << "\n";
 }
 
 section_iterator
@@ -175,12 +168,10 @@ RuntimeDyldMachO::getSectionByAddress(const MachOObjectFile &Obj,
   return SE;
 }
 
-
 // Populate __pointers section.
 Error RuntimeDyldMachO::populateIndirectSymbolPointersSection(
-                                                    const MachOObjectFile &Obj,
-                                                    const SectionRef &PTSection,
-                                                    unsigned PTSectionID) {
+    const MachOObjectFile &Obj, const SectionRef &PTSection,
+    unsigned PTSectionID) {
   assert(!Obj.is64Bit() &&
          "Pointer table section not supported in 64-bit MachO.");
 
@@ -202,7 +193,7 @@ Error RuntimeDyldMachO::populateIndirectSymbolPointersSection(
 
   for (unsigned i = 0; i < NumPTEntries; ++i) {
     unsigned SymbolIndex =
-      Obj.getIndirectSymbolTableEntry(DySymTabCmd, FirstIndirectSymbol + i);
+        Obj.getIndirectSymbolTableEntry(DySymTabCmd, FirstIndirectSymbol + i);
     symbol_iterator SI = Obj.getSymbolByIndex(SymbolIndex);
     StringRef IndirectSymbolName;
     if (auto IndirectSymbolNameOrErr = SI->getName())
@@ -211,8 +202,8 @@ Error RuntimeDyldMachO::populateIndirectSymbolPointersSection(
       return IndirectSymbolNameOrErr.takeError();
     LLVM_DEBUG(dbgs() << "  " << IndirectSymbolName << ": index " << SymbolIndex
                       << ", PT offset: " << PTEntryOffset << "\n");
-    RelocationEntry RE(PTSectionID, PTEntryOffset,
-                       MachO::GENERIC_RELOC_VANILLA, 0, false, 2);
+    RelocationEntry RE(PTSectionID, PTEntryOffset, MachO::GENERIC_RELOC_VANILLA,
+                       0, false, 2);
     addRelocationForSymbol(RE, IndirectSymbolName);
     PTEntryOffset += PTEntrySize;
   }
@@ -224,9 +215,8 @@ bool RuntimeDyldMachO::isCompatibleFile(const object::ObjectFile &Obj) const {
 }
 
 template <typename Impl>
-Error
-RuntimeDyldMachOCRTPBase<Impl>::finalizeLoad(const ObjectFile &Obj,
-                                             ObjSectionToIDMap &SectionMap) {
+Error RuntimeDyldMachOCRTPBase<Impl>::finalizeLoad(
+    const ObjectFile &Obj, ObjSectionToIDMap &SectionMap) {
   unsigned EHFrameSID = RTDYLD_INVALID_SECTION_ID;
   unsigned TextSID = RTDYLD_INVALID_SECTION_ID;
   unsigned ExceptTabSID = RTDYLD_INVALID_SECTION_ID;
@@ -247,14 +237,14 @@ RuntimeDyldMachOCRTPBase<Impl>::finalizeLoad(const ObjectFile &Obj,
       else
         return TextSIDOrErr.takeError();
     } else if (Name == "__eh_frame") {
-      if (auto EHFrameSIDOrErr = findOrEmitSection(Obj, Section, false,
-                                                   SectionMap))
+      if (auto EHFrameSIDOrErr =
+              findOrEmitSection(Obj, Section, false, SectionMap))
         EHFrameSID = *EHFrameSIDOrErr;
       else
         return EHFrameSIDOrErr.takeError();
     } else if (Name == "__gcc_except_tab") {
-      if (auto ExceptTabSIDOrErr = findOrEmitSection(Obj, Section, true,
-                                                     SectionMap))
+      if (auto ExceptTabSIDOrErr =
+              findOrEmitSection(Obj, Section, true, SectionMap))
         ExceptTabSID = *ExceptTabSIDOrErr;
       else
         return ExceptTabSIDOrErr.takeError();
@@ -266,7 +256,7 @@ RuntimeDyldMachOCRTPBase<Impl>::finalizeLoad(const ObjectFile &Obj,
     }
   }
   UnregisteredEHFrameSections.push_back(
-    EHFrameRelatedSections(EHFrameSID, TextSID, ExceptTabSID));
+      EHFrameRelatedSections(EHFrameSID, TextSID, ExceptTabSID));
 
   return Error::success();
 }
@@ -369,8 +359,7 @@ RuntimeDyldMachO::create(Triple::ArchType Arch,
 std::unique_ptr<RuntimeDyld::LoadedObjectInfo>
 RuntimeDyldMachO::loadObject(const object::ObjectFile &O) {
   if (auto ObjSectionToIDOrErr = loadObjectImpl(O))
-    return std::make_unique<LoadedMachOObjectInfo>(*this,
-                                                    *ObjSectionToIDOrErr);
+    return std::make_unique<LoadedMachOObjectInfo>(*this, *ObjSectionToIDOrErr);
   else {
     HasError = true;
     raw_string_ostream ErrStream(ErrorStr);
diff --git a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h
index 650e7b79fbb8e1b..d00422682c027af 100644
--- a/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h
+++ b/llvm/lib/ExecutionEngine/RuntimeDyld/RuntimeDyldMachO.h
@@ -64,16 +64,15 @@ class RuntimeDyldMachO : public RuntimeDyldImpl {
   RelocationEntry getRelocationEntry(unsigned SectionID,
                                      const ObjectFile &BaseTObj,
                                      const relocation_iterator &RI) const {
-    const MachOObjectFile &Obj =
-      static_cast<const MachOObjectFile &>(BaseTObj);
+    const MachOObjectFile &Obj = static_cast<const MachOObjectFile &>(BaseTObj);
     MachO::any_relocation_info RelInfo =
-      Obj.getRelocation(RI->getRawDataRefImpl());
+        Obj.getRelocation(RI->getRawDataRefImpl());
 
     bool IsPCRel = Obj.getAnyRelocationPCRel(RelInfo);
     unsigned Size = Obj.getAnyRelocationLength(RelInfo);
     uint64_t Offset = RI->getOffset();
-    MachO::RelocationInfoType RelType =
-      static_cast<MachO::RelocationInfoType>(Obj.getAnyRelocationType(RelInfo));
+    MachO::RelocationInfoType RelType = static_cast<MachO::RelocationInfoType>(
+        Obj.getAnyRelocationType(RelInfo));
 
     return RelocationEntry(SectionID, Offset, RelType, 0, IsPCRel, Size);
   }
@@ -94,11 +93,9 @@ class RuntimeDyldMachO : public RuntimeDyldImpl {
   /// In both cases the Addend field is *NOT* fixed up to be PC-relative. That
   /// should be done by the caller where appropriate by calling makePCRel on
   /// the RelocationValueRef.
-  Expected<RelocationValueRef>
-  getRelocationValueRef(const ObjectFile &BaseTObj,
-                        const relocation_iterator &RI,
-                        const RelocationEntry &RE,
-                        ObjSectionToIDMap &ObjSectionToID);
+  Expected<RelocationValueRef> getRelocationValueRef(
+      const ObjectFile &BaseTObj, const relocation_iterator &RI,
+      const RelocationEntry &RE, ObjSectionToIDMap &ObjSectionToID);
 
   /// Make the RelocationValueRef addend PC-relative.
   void makeValueAddendPCRel(RelocationValueRef &Value,
@@ -112,18 +109,15 @@ class RuntimeDyldMachO : public RuntimeDyldImpl {
   static section_iterator getSectionByAddress(const MachOObjectFile &Obj,
                                               uint64_t Addr);
 
-
   // Populate __pointers section.
   Error populateIndirectSymbolPointersSection(const MachOObjectFile &Obj,
                                               const SectionRef &PTSection,
                                               unsigned PTSectionID);
 
 public:
-
   /// Create a RuntimeDyldMachO instance for the given target architecture.
   static std::unique_ptr<RuntimeDyldMachO>
-  create(Triple::ArchType Arch,
-         RuntimeDyld::MemoryManager &MemMgr,
+  create(Triple::ArchType Arch, RuntimeDyld::MemoryManager &MemMgr,
          JITSymbolResolver &Resolver);
 
   std::unique_ptr<RuntimeDyld::LoadedObjectInfo>
@@ -153,7 +147,7 @@ class RuntimeDyldMachOCRTPBase : public RuntimeDyldMachO {
 public:
   RuntimeDyldMachOCRTPBase(RuntimeDyld::MemoryManager &MemMgr,
                            JITSymbolResolver &Resolver)
-    : RuntimeDyldMachO(MemMgr, Resolver) {}
+      : RuntimeDyldMachO(MemMgr, Resolver) {}
 
   Error finalizeLoad(const ObjectFile &Obj,
                      ObjSectionToIDMap &SectionMap) override;
diff --git a/llvm/lib/ExecutionEngine/TargetSelect.cpp b/llvm/lib/ExecutionEngine/TargetSelect.cpp
index 72fb16fbf203caa..08db6eb43c6aeea 100644
--- a/llvm/lib/ExecutionEngine/TargetSelect.cpp
+++ b/llvm/lib/ExecutionEngine/TargetSelect.cpp
@@ -36,10 +36,10 @@ TargetMachine *EngineBuilder::selectTarget() {
 
 /// selectTarget - Pick a target either via -march or by guessing the native
 /// arch.  Add any CPU features specified via -mcpu or -mattr.
-TargetMachine *EngineBuilder::selectTarget(const Triple &TargetTriple,
-                              StringRef MArch,
-                              StringRef MCPU,
-                              const SmallVectorImpl<std::string>& MAttrs) {
+TargetMachine *
+EngineBuilder::selectTarget(const Triple &TargetTriple, StringRef MArch,
+                            StringRef MCPU,
+                            const SmallVectorImpl<std::string> &MAttrs) {
   Triple TheTriple(TargetTriple);
   if (TheTriple.getTriple().empty())
     TheTriple.setTriple(sys::getProcessTriple());
@@ -87,7 +87,7 @@ TargetMachine *EngineBuilder::selectTarget(const Triple &TargetTriple,
   TargetMachine *Target =
       TheTarget->createTargetMachine(TheTriple.getTriple(), MCPU, FeaturesStr,
                                      Options, RelocModel, CMModel, OptLevel,
-				     /*JIT*/ true);
+                                     /*JIT*/ true);
   Target->Options.EmulatedTLS = EmulatedTLS;
 
   assert(Target && "Could not allocate target machine!");
diff --git a/llvm/lib/Extensions/CMakeLists.txt b/llvm/lib/Extensions/CMakeLists.txt
index c1007dfcde58caf..f12251cc5a7e6ca 100644
--- a/llvm/lib/Extensions/CMakeLists.txt
+++ b/llvm/lib/Extensions/CMakeLists.txt
@@ -1,6 +1,3 @@
-add_llvm_component_library(LLVMExtensions
-  Extensions.cpp
+add_llvm_component_library(LLVMExtensions Extensions.cpp
 
-  LINK_COMPONENTS
-  Support
-)
+  LINK_COMPONENTS Support)
diff --git a/llvm/lib/Extensions/Extensions.cpp b/llvm/lib/Extensions/Extensions.cpp
index 0d25cbda38e0045..dd63b78f604623d 100644
--- a/llvm/lib/Extensions/Extensions.cpp
+++ b/llvm/lib/Extensions/Extensions.cpp
@@ -1,15 +1,13 @@
 #include "llvm/Passes/PassPlugin.h"
 #define HANDLE_EXTENSION(Ext)                                                  \
-		llvm::PassPluginLibraryInfo get##Ext##PluginInfo();
+  llvm::PassPluginLibraryInfo get##Ext##PluginInfo();
 #include "llvm/Support/Extension.def"
 
-
 namespace llvm {
-	namespace details {
-		void extensions_anchor() {
-#define HANDLE_EXTENSION(Ext)                                                  \
-			get##Ext##PluginInfo();
+namespace details {
+void extensions_anchor() {
+#define HANDLE_EXTENSION(Ext) get##Ext##PluginInfo();
 #include "llvm/Support/Extension.def"
-		}
-	}
 }
+} // namespace details
+} // namespace llvm
diff --git a/llvm/lib/FileCheck/CMakeLists.txt b/llvm/lib/FileCheck/CMakeLists.txt
index 91c80e1482f1985..bbc323b1f5c0449 100644
--- a/llvm/lib/FileCheck/CMakeLists.txt
+++ b/llvm/lib/FileCheck/CMakeLists.txt
@@ -1,8 +1,6 @@
-add_llvm_component_library(LLVMFileCheck
-  FileCheck.cpp
+add_llvm_component_library(LLVMFileCheck FileCheck.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  "${LLVM_MAIN_INCLUDE_DIR}/llvm/FileCheck"
-)
+  ADDITIONAL_HEADER_DIRS
+  "${LLVM_MAIN_INCLUDE_DIR}/llvm/FileCheck")
 
-target_link_libraries(LLVMFileCheck LLVMSupport)
+target_link_libraries(LLVMFileCheck LLVMSupport)
diff --git a/llvm/lib/FileCheck/FileCheck.cpp b/llvm/lib/FileCheck/FileCheck.cpp
index 49fda8fb63cd36c..129699169d84014 100644
--- a/llvm/lib/FileCheck/FileCheck.cpp
+++ b/llvm/lib/FileCheck/FileCheck.cpp
@@ -9,8 +9,8 @@
 // FileCheck does a line-by-line check of a file that validates whether it
 // contains the expected content.  This is useful for regression tests etc.
 //
-// This file implements most of the API that will be used by the FileCheck utility
-// as well as various unittests.
+// This file implements most of the API that will be used by the FileCheck
+// utility as well as various unittests.
 //===----------------------------------------------------------------------===//
 
 #include "llvm/FileCheck/FileCheck.h"
@@ -307,7 +307,7 @@ Pattern::parseVariable(StringRef &Str, const SourceMgr &SM) {
 
   StringRef Name = Str.take_front(I);
   Str = Str.substr(I);
-  return VariableProperties {Name, IsPseudo};
+  return VariableProperties{Name, IsPseudo};
 }
 
 // StringRef holding all characters considered as horizontal whitespaces by
@@ -1847,20 +1847,22 @@ bool FileCheck::readCheckFile(
 
     // Verify that CHECK-LABEL lines do not define or use variables
     if ((CheckTy == Check::CheckLabel) && P.hasVariable()) {
-      SM.PrintMessage(
-          SMLoc::getFromPointer(UsedPrefixStart), SourceMgr::DK_Error,
-          "found '" + UsedPrefix + "-LABEL:'"
-                                   " with variable definition or use");
+      SM.PrintMessage(SMLoc::getFromPointer(UsedPrefixStart),
+                      SourceMgr::DK_Error,
+                      "found '" + UsedPrefix +
+                          "-LABEL:'"
+                          " with variable definition or use");
       return true;
     }
 
-    // Verify that CHECK-NEXT/SAME/EMPTY lines have at least one CHECK line before them.
+    // Verify that CHECK-NEXT/SAME/EMPTY lines have at least one CHECK line
+    // before them.
     if ((CheckTy == Check::CheckNext || CheckTy == Check::CheckSame ||
          CheckTy == Check::CheckEmpty) &&
         CheckStrings->empty()) {
-      StringRef Type = CheckTy == Check::CheckNext
-                           ? "NEXT"
-                           : CheckTy == Check::CheckEmpty ? "EMPTY" : "SAME";
+      StringRef Type = CheckTy == Check::CheckNext    ? "NEXT"
+                       : CheckTy == Check::CheckEmpty ? "EMPTY"
+                                                      : "SAME";
       SM.PrintMessage(SMLoc::getFromPointer(UsedPrefixStart),
                       SourceMgr::DK_Error,
                       "found '" + UsedPrefix + "-" + Type +
diff --git a/llvm/lib/Frontend/CMakeLists.txt b/llvm/lib/Frontend/CMakeLists.txt
index fa48c975a8b3e5a..231726c49590ee6 100644
--- a/llvm/lib/Frontend/CMakeLists.txt
+++ b/llvm/lib/Frontend/CMakeLists.txt
@@ -1,3 +1 @@
-add_subdirectory(HLSL)
-add_subdirectory(OpenACC)
-add_subdirectory(OpenMP)
+add_subdirectory(HLSL)
+add_subdirectory(OpenACC)
+add_subdirectory(OpenMP)
diff --git a/llvm/lib/Frontend/HLSL/CMakeLists.txt b/llvm/lib/Frontend/HLSL/CMakeLists.txt
index eda6cb8e69a4971..980d9118acdcf48 100644
--- a/llvm/lib/Frontend/HLSL/CMakeLists.txt
+++ b/llvm/lib/Frontend/HLSL/CMakeLists.txt
@@ -1,14 +1,10 @@
-add_llvm_component_library(LLVMFrontendHLSL
-  HLSLResource.cpp
+add_llvm_component_library(LLVMFrontendHLSL HLSLResource.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/HLSL
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/HLSL
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Core
-  Support
-  )
+  LINK_COMPONENTS Core Support)
diff --git a/llvm/lib/Frontend/OpenACC/CMakeLists.txt b/llvm/lib/Frontend/OpenACC/CMakeLists.txt
index f35201497869058..348d07347c79018 100644
--- a/llvm/lib/Frontend/OpenACC/CMakeLists.txt
+++ b/llvm/lib/Frontend/OpenACC/CMakeLists.txt
@@ -1,13 +1,10 @@
-add_llvm_component_library(LLVMFrontendOpenACC
-  ACC.cpp
+add_llvm_component_library(LLVMFrontendOpenACC ACC.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/OpenACC
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/OpenACC
 
-  DEPENDS
-  acc_gen
-)
-
-target_link_libraries(LLVMFrontendOpenACC LLVMSupport)
+  DEPENDS acc_gen)
 
+target_link_libraries(LLVMFrontendOpenACC LLVMSupport)
diff --git a/llvm/lib/Frontend/OpenMP/CMakeLists.txt b/llvm/lib/Frontend/OpenMP/CMakeLists.txt
index a2eedabc3ed6903..35f7192c5fe9553 100644
--- a/llvm/lib/Frontend/OpenMP/CMakeLists.txt
+++ b/llvm/lib/Frontend/OpenMP/CMakeLists.txt
@@ -1,23 +1,11 @@
-add_llvm_component_library(LLVMFrontendOpenMP
-  OMP.cpp
-  OMPContext.cpp
-  OMPIRBuilder.cpp
+add_llvm_component_library(LLVMFrontendOpenMP
+  OMP.cpp OMPContext.cpp OMPIRBuilder.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/OpenMP
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/Frontend/OpenMP
 
-  DEPENDS
-  intrinsics_gen
-  omp_gen
+  DEPENDS intrinsics_gen omp_gen
 
-  LINK_COMPONENTS
-  Core
-  Support
-  TargetParser
-  TransformUtils
-  Analysis
-  MC
-  Scalar
-  BitReader
-  )
+  LINK_COMPONENTS Core Support TargetParser TransformUtils Analysis MC
+  Scalar BitReader)
diff --git a/llvm/lib/FuzzMutate/CMakeLists.txt b/llvm/lib/FuzzMutate/CMakeLists.txt
index e162c1bbe8e1078..4fa1aa2bed6eb96 100644
--- a/llvm/lib/FuzzMutate/CMakeLists.txt
+++ b/llvm/lib/FuzzMutate/CMakeLists.txt
@@ -1,37 +1,23 @@
-# Generic helper for fuzzer binaries.
-# This should not depend on LLVM IR etc.
-add_llvm_component_library(LLVMFuzzerCLI
-  FuzzerCLI.cpp
-  PARTIAL_SOURCES_INTENDED
+# Generic helper for fuzzer binaries.
+# This should not depend on LLVM IR etc.
+add_llvm_component_library(LLVMFuzzerCLI FuzzerCLI.cpp PARTIAL_SOURCES_INTENDED
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/FuzzMutate
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/FuzzMutate
 
-  LINK_COMPONENTS
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS Support TargetParser)
 
-# Library for using LLVM IR together with fuzzers.
-add_llvm_component_library(LLVMFuzzMutate
-  IRMutator.cpp
-  OpDescriptor.cpp
-  Operations.cpp
-  RandomIRBuilder.cpp
+# Library for using LLVM IR together with fuzzers.
+add_llvm_component_library(LLVMFuzzMutate
+  IRMutator.cpp OpDescriptor.cpp Operations.cpp RandomIRBuilder.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/FuzzMutate
+  ADDITIONAL_HEADER_DIRS
+  ${LLVM_MAIN_INCLUDE_DIR}/llvm/FuzzMutate
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  BitReader
-  BitWriter
-  Core
-  Scalar
-  Support
-  TargetParser
-  TransformUtils
-  )
+  LINK_COMPONENTS Analysis BitReader BitWriter Core Scalar Support
+  TargetParser TransformUtils)
diff --git a/llvm/lib/Fuzzer/README.txt b/llvm/lib/Fuzzer/README.txt
index 53ac637638f64ea..de140b0406de1f2 100644
--- a/llvm/lib/Fuzzer/README.txt
+++ b/llvm/lib/Fuzzer/README.txt
@@ -1 +1 @@
-libFuzzer was moved to compiler-rt in  https://reviews.llvm.org/D36908.
+libFuzzer was moved to compiler-rt in https://reviews.llvm.org/D36908.
diff --git a/llvm/lib/IR/AbstractCallSite.cpp b/llvm/lib/IR/AbstractCallSite.cpp
index b7a10846a0d3d78..297fd58f2105779 100644
--- a/llvm/lib/IR/AbstractCallSite.cpp
+++ b/llvm/lib/IR/AbstractCallSite.cpp
@@ -118,7 +118,8 @@ AbstractCallSite::AbstractCallSite(const Use *U)
 
   NumCallbackCallSites++;
 
-  assert(CallbackEncMD->getNumOperands() >= 2 && "Incomplete !callback metadata");
+  assert(CallbackEncMD->getNumOperands() >= 2 &&
+         "Incomplete !callback metadata");
 
   unsigned NumCallOperands = CB->arg_size();
   // Skip the var-arg flag at the end when reading the metadata.
diff --git a/llvm/lib/IR/AsmWriter.cpp b/llvm/lib/IR/AsmWriter.cpp
index e190d82127908db..35aa84589b3c02d 100644
--- a/llvm/lib/IR/AsmWriter.cpp
+++ b/llvm/lib/IR/AsmWriter.cpp
@@ -287,27 +287,69 @@ static const Module *getModuleFromVal(const Value *V) {
 
 static void PrintCallingConv(unsigned cc, raw_ostream &Out) {
   switch (cc) {
-  default:                         Out << "cc" << cc; break;
-  case CallingConv::Fast:          Out << "fastcc"; break;
-  case CallingConv::Cold:          Out << "coldcc"; break;
-  case CallingConv::WebKit_JS:     Out << "webkit_jscc"; break;
-  case CallingConv::AnyReg:        Out << "anyregcc"; break;
-  case CallingConv::PreserveMost:  Out << "preserve_mostcc"; break;
-  case CallingConv::PreserveAll:   Out << "preserve_allcc"; break;
-  case CallingConv::CXX_FAST_TLS:  Out << "cxx_fast_tlscc"; break;
-  case CallingConv::GHC:           Out << "ghccc"; break;
-  case CallingConv::Tail:          Out << "tailcc"; break;
-  case CallingConv::CFGuard_Check: Out << "cfguard_checkcc"; break;
-  case CallingConv::X86_StdCall:   Out << "x86_stdcallcc"; break;
-  case CallingConv::X86_FastCall:  Out << "x86_fastcallcc"; break;
-  case CallingConv::X86_ThisCall:  Out << "x86_thiscallcc"; break;
-  case CallingConv::X86_RegCall:   Out << "x86_regcallcc"; break;
-  case CallingConv::X86_VectorCall:Out << "x86_vectorcallcc"; break;
-  case CallingConv::Intel_OCL_BI:  Out << "intel_ocl_bicc"; break;
-  case CallingConv::ARM_APCS:      Out << "arm_apcscc"; break;
-  case CallingConv::ARM_AAPCS:     Out << "arm_aapcscc"; break;
-  case CallingConv::ARM_AAPCS_VFP: Out << "arm_aapcs_vfpcc"; break;
-  case CallingConv::AArch64_VectorCall: Out << "aarch64_vector_pcs"; break;
+  default:
+    Out << "cc" << cc;
+    break;
+  case CallingConv::Fast:
+    Out << "fastcc";
+    break;
+  case CallingConv::Cold:
+    Out << "coldcc";
+    break;
+  case CallingConv::WebKit_JS:
+    Out << "webkit_jscc";
+    break;
+  case CallingConv::AnyReg:
+    Out << "anyregcc";
+    break;
+  case CallingConv::PreserveMost:
+    Out << "preserve_mostcc";
+    break;
+  case CallingConv::PreserveAll:
+    Out << "preserve_allcc";
+    break;
+  case CallingConv::CXX_FAST_TLS:
+    Out << "cxx_fast_tlscc";
+    break;
+  case CallingConv::GHC:
+    Out << "ghccc";
+    break;
+  case CallingConv::Tail:
+    Out << "tailcc";
+    break;
+  case CallingConv::CFGuard_Check:
+    Out << "cfguard_checkcc";
+    break;
+  case CallingConv::X86_StdCall:
+    Out << "x86_stdcallcc";
+    break;
+  case CallingConv::X86_FastCall:
+    Out << "x86_fastcallcc";
+    break;
+  case CallingConv::X86_ThisCall:
+    Out << "x86_thiscallcc";
+    break;
+  case CallingConv::X86_RegCall:
+    Out << "x86_regcallcc";
+    break;
+  case CallingConv::X86_VectorCall:
+    Out << "x86_vectorcallcc";
+    break;
+  case CallingConv::Intel_OCL_BI:
+    Out << "intel_ocl_bicc";
+    break;
+  case CallingConv::ARM_APCS:
+    Out << "arm_apcscc";
+    break;
+  case CallingConv::ARM_AAPCS:
+    Out << "arm_aapcscc";
+    break;
+  case CallingConv::ARM_AAPCS_VFP:
+    Out << "arm_aapcs_vfpcc";
+    break;
+  case CallingConv::AArch64_VectorCall:
+    Out << "aarch64_vector_pcs";
+    break;
   case CallingConv::AArch64_SVE_VectorCall:
     Out << "aarch64_sve_vector_pcs";
     break;
@@ -317,39 +359,81 @@ static void PrintCallingConv(unsigned cc, raw_ostream &Out) {
   case CallingConv::AArch64_SME_ABI_Support_Routines_PreserveMost_From_X2:
     Out << "aarch64_sme_preservemost_from_x2";
     break;
-  case CallingConv::MSP430_INTR:   Out << "msp430_intrcc"; break;
-  case CallingConv::AVR_INTR:      Out << "avr_intrcc "; break;
-  case CallingConv::AVR_SIGNAL:    Out << "avr_signalcc "; break;
-  case CallingConv::PTX_Kernel:    Out << "ptx_kernel"; break;
-  case CallingConv::PTX_Device:    Out << "ptx_device"; break;
-  case CallingConv::X86_64_SysV:   Out << "x86_64_sysvcc"; break;
-  case CallingConv::Win64:         Out << "win64cc"; break;
-  case CallingConv::SPIR_FUNC:     Out << "spir_func"; break;
-  case CallingConv::SPIR_KERNEL:   Out << "spir_kernel"; break;
-  case CallingConv::Swift:         Out << "swiftcc"; break;
-  case CallingConv::SwiftTail:     Out << "swifttailcc"; break;
-  case CallingConv::X86_INTR:      Out << "x86_intrcc"; break;
+  case CallingConv::MSP430_INTR:
+    Out << "msp430_intrcc";
+    break;
+  case CallingConv::AVR_INTR:
+    Out << "avr_intrcc ";
+    break;
+  case CallingConv::AVR_SIGNAL:
+    Out << "avr_signalcc ";
+    break;
+  case CallingConv::PTX_Kernel:
+    Out << "ptx_kernel";
+    break;
+  case CallingConv::PTX_Device:
+    Out << "ptx_device";
+    break;
+  case CallingConv::X86_64_SysV:
+    Out << "x86_64_sysvcc";
+    break;
+  case CallingConv::Win64:
+    Out << "win64cc";
+    break;
+  case CallingConv::SPIR_FUNC:
+    Out << "spir_func";
+    break;
+  case CallingConv::SPIR_KERNEL:
+    Out << "spir_kernel";
+    break;
+  case CallingConv::Swift:
+    Out << "swiftcc";
+    break;
+  case CallingConv::SwiftTail:
+    Out << "swifttailcc";
+    break;
+  case CallingConv::X86_INTR:
+    Out << "x86_intrcc";
+    break;
   case CallingConv::DUMMY_HHVM:
     Out << "hhvmcc";
     break;
   case CallingConv::DUMMY_HHVM_C:
     Out << "hhvm_ccc";
     break;
-  case CallingConv::AMDGPU_VS:     Out << "amdgpu_vs"; break;
-  case CallingConv::AMDGPU_LS:     Out << "amdgpu_ls"; break;
-  case CallingConv::AMDGPU_HS:     Out << "amdgpu_hs"; break;
-  case CallingConv::AMDGPU_ES:     Out << "amdgpu_es"; break;
-  case CallingConv::AMDGPU_GS:     Out << "amdgpu_gs"; break;
-  case CallingConv::AMDGPU_PS:     Out << "amdgpu_ps"; break;
-  case CallingConv::AMDGPU_CS:     Out << "amdgpu_cs"; break;
+  case CallingConv::AMDGPU_VS:
+    Out << "amdgpu_vs";
+    break;
+  case CallingConv::AMDGPU_LS:
+    Out << "amdgpu_ls";
+    break;
+  case CallingConv::AMDGPU_HS:
+    Out << "amdgpu_hs";
+    break;
+  case CallingConv::AMDGPU_ES:
+    Out << "amdgpu_es";
+    break;
+  case CallingConv::AMDGPU_GS:
+    Out << "amdgpu_gs";
+    break;
+  case CallingConv::AMDGPU_PS:
+    Out << "amdgpu_ps";
+    break;
+  case CallingConv::AMDGPU_CS:
+    Out << "amdgpu_cs";
+    break;
   case CallingConv::AMDGPU_CS_Chain:
     Out << "amdgpu_cs_chain";
     break;
   case CallingConv::AMDGPU_CS_ChainPreserve:
     Out << "amdgpu_cs_chain_preserve";
     break;
-  case CallingConv::AMDGPU_KERNEL: Out << "amdgpu_kernel"; break;
-  case CallingConv::AMDGPU_Gfx:    Out << "amdgpu_gfx"; break;
+  case CallingConv::AMDGPU_KERNEL:
+    Out << "amdgpu_kernel";
+    break;
+  case CallingConv::AMDGPU_Gfx:
+    Out << "amdgpu_gfx";
+    break;
   }
 }
 
@@ -545,19 +629,45 @@ void TypePrinting::incorporateTypes() {
 /// names or up references to shorten the type name where possible.
 void TypePrinting::print(Type *Ty, raw_ostream &OS) {
   switch (Ty->getTypeID()) {
-  case Type::VoidTyID:      OS << "void"; return;
-  case Type::HalfTyID:      OS << "half"; return;
-  case Type::BFloatTyID:    OS << "bfloat"; return;
-  case Type::FloatTyID:     OS << "float"; return;
-  case Type::DoubleTyID:    OS << "double"; return;
-  case Type::X86_FP80TyID:  OS << "x86_fp80"; return;
-  case Type::FP128TyID:     OS << "fp128"; return;
-  case Type::PPC_FP128TyID: OS << "ppc_fp128"; return;
-  case Type::LabelTyID:     OS << "label"; return;
-  case Type::MetadataTyID:  OS << "metadata"; return;
-  case Type::X86_MMXTyID:   OS << "x86_mmx"; return;
-  case Type::X86_AMXTyID:   OS << "x86_amx"; return;
-  case Type::TokenTyID:     OS << "token"; return;
+  case Type::VoidTyID:
+    OS << "void";
+    return;
+  case Type::HalfTyID:
+    OS << "half";
+    return;
+  case Type::BFloatTyID:
+    OS << "bfloat";
+    return;
+  case Type::FloatTyID:
+    OS << "float";
+    return;
+  case Type::DoubleTyID:
+    OS << "double";
+    return;
+  case Type::X86_FP80TyID:
+    OS << "x86_fp80";
+    return;
+  case Type::FP128TyID:
+    OS << "fp128";
+    return;
+  case Type::PPC_FP128TyID:
+    OS << "ppc_fp128";
+    return;
+  case Type::LabelTyID:
+    OS << "label";
+    return;
+  case Type::MetadataTyID:
+    OS << "metadata";
+    return;
+  case Type::X86_MMXTyID:
+    OS << "x86_mmx";
+    return;
+  case Type::X86_AMXTyID:
+    OS << "x86_amx";
+    return;
+  case Type::TokenTyID:
+    OS << "token";
+    return;
   case Type::IntegerTyID:
     OS << 'i' << cast<IntegerType>(Ty)->getBitWidth();
     return;
@@ -589,7 +699,7 @@ void TypePrinting::print(Type *Ty, raw_ostream &OS) {
     const auto I = Type2Number.find(STy);
     if (I != Type2Number.end())
       OS << '%' << I->second;
-    else  // Not enumerated, print the hex address.
+    else // Not enumerated, print the hex address.
       OS << "%\"type " << STy << '\"';
     return;
   }
@@ -681,10 +791,10 @@ class SlotTracker : public AbstractSlotTrackerStorage {
 
 private:
   /// TheModule - The module for which we are holding slot numbers.
-  const Module* TheModule;
+  const Module *TheModule;
 
   /// TheFunction - The function for which we are holding slot numbers.
-  const Function* TheFunction = nullptr;
+  const Function *TheFunction = nullptr;
   bool FunctionProcessed = false;
   bool ShouldInitializeAllMetadata;
 
@@ -705,7 +815,7 @@ class SlotTracker : public AbstractSlotTrackerStorage {
   unsigned fNext = 0;
 
   /// mdnMap - Map for MDNodes.
-  DenseMap<const MDNode*, unsigned> mdnMap;
+  DenseMap<const MDNode *, unsigned> mdnMap;
   unsigned mdnNext = 0;
 
   /// asMap - The slot map for attribute sets.
@@ -783,7 +893,7 @@ class SlotTracker : public AbstractSlotTrackerStorage {
   void purgeFunction();
 
   /// MDNode map iterators.
-  using mdn_iterator = DenseMap<const MDNode*, unsigned>::iterator;
+  using mdn_iterator = DenseMap<const MDNode *, unsigned>::iterator;
 
   mdn_iterator mdn_begin() { return mdnMap.begin(); }
   mdn_iterator mdn_end() { return mdnMap.end(); }
@@ -793,10 +903,10 @@ class SlotTracker : public AbstractSlotTrackerStorage {
   /// AttributeSet map iterators.
   using as_iterator = DenseMap<AttributeSet, unsigned>::iterator;
 
-  as_iterator as_begin()   { return asMap.begin(); }
-  as_iterator as_end()     { return asMap.end(); }
+  as_iterator as_begin() { return asMap.begin(); }
+  as_iterator as_end() { return asMap.end(); }
   unsigned as_size() const { return asMap.size(); }
-  bool as_empty() const    { return asMap.empty(); }
+  bool as_empty() const { return asMap.empty(); }
 
   /// GUID map iterators.
   using guid_iterator = DenseMap<GlobalValue::GUID, unsigned>::iterator;
@@ -1027,8 +1137,9 @@ void SlotTracker::processFunction() {
     processFunctionMetadata(*TheFunction);
 
   // Add all the function arguments with no names.
-  for(Function::const_arg_iterator AI = TheFunction->arg_begin(),
-      AE = TheFunction->arg_end(); AI != AE; ++AI)
+  for (Function::const_arg_iterator AI = TheFunction->arg_begin(),
+                                    AE = TheFunction->arg_end();
+       AI != AE; ++AI)
     if (!AI->hasName())
       CreateFunctionSlot(&*AI);
 
@@ -1229,13 +1340,17 @@ void SlotTracker::CreateModuleSlot(const GlobalValue *V) {
   unsigned DestSlot = mNext++;
   mMap[V] = DestSlot;
 
-  ST_DEBUG("  Inserting value [" << V->getType() << "] = " << V << " slot=" <<
-           DestSlot << " [");
+  ST_DEBUG("  Inserting value [" << V->getType() << "] = " << V
+                                 << " slot=" << DestSlot << " [");
   // G = Global, F = Function, A = Alias, I = IFunc, o = other
-  ST_DEBUG((isa<GlobalVariable>(V) ? 'G' :
-            (isa<Function>(V) ? 'F' :
-             (isa<GlobalAlias>(V) ? 'A' :
-              (isa<GlobalIFunc>(V) ? 'I' : 'o')))) << "]\n");
+  ST_DEBUG(
+      (isa<GlobalVariable>(V)
+           ? 'G'
+           : (isa<Function>(V)
+                  ? 'F'
+                  : (isa<GlobalAlias>(V) ? 'A'
+                                         : (isa<GlobalIFunc>(V) ? 'I' : 'o'))))
+      << "]\n");
 }
 
 /// CreateSlot - Create a new slot for the specified value if it has no name.
@@ -1246,8 +1361,8 @@ void SlotTracker::CreateFunctionSlot(const Value *V) {
   fMap[V] = DestSlot;
 
   // G = Global, F = Function, o = other
-  ST_DEBUG("  Inserting value [" << V->getType() << "] = " << V << " slot=" <<
-           DestSlot << " [o]\n");
+  ST_DEBUG("  Inserting value [" << V->getType() << "] = " << V
+                                 << " slot=" << DestSlot << " [o]\n");
 }
 
 /// CreateModuleSlot - Insert the specified MDNode* into the slot table.
@@ -1335,13 +1450,13 @@ static void WriteOptimizationInfo(raw_ostream &Out, const User *U) {
     Out << FPO->getFastMathFlags();
 
   if (const OverflowingBinaryOperator *OBO =
-        dyn_cast<OverflowingBinaryOperator>(U)) {
+          dyn_cast<OverflowingBinaryOperator>(U)) {
     if (OBO->hasNoUnsignedWrap())
       Out << " nuw";
     if (OBO->hasNoSignedWrap())
       Out << " nsw";
   } else if (const PossiblyExactOperator *Div =
-               dyn_cast<PossiblyExactOperator>(U)) {
+                 dyn_cast<PossiblyExactOperator>(U)) {
     if (Div->isExact())
       Out << " exact";
   } else if (const GEPOperator *GEP = dyn_cast<GEPOperator>(U)) {
@@ -1601,13 +1716,14 @@ static void WriteConstantInternal(raw_ostream &Out, const Constant *CV,
         ++*InRangeOp;
     }
 
-    for (User::const_op_iterator OI=CE->op_begin(); OI != CE->op_end(); ++OI) {
+    for (User::const_op_iterator OI = CE->op_begin(); OI != CE->op_end();
+         ++OI) {
       if (InRangeOp && unsigned(OI - CE->op_begin()) == *InRangeOp)
         Out << "inrange ";
       WriterCtx.TypePrinter->print((*OI)->getType(), Out);
       Out << ' ';
       WriteAsOperandInternal(Out, *OI, WriterCtx);
-      if (OI+1 != CE->op_end())
+      if (OI + 1 != CE->op_end())
         Out << ", ";
     }
 
@@ -2583,15 +2699,12 @@ class AssemblyWriter {
   void writeOperand(const Value *Op, bool PrintType);
   void writeParamOperand(const Value *Operand, AttributeSet Attrs);
   void writeOperandBundles(const CallBase *Call);
-  void writeSyncScope(const LLVMContext &Context,
-                      SyncScope::ID SSID);
-  void writeAtomic(const LLVMContext &Context,
-                   AtomicOrdering Ordering,
+  void writeSyncScope(const LLVMContext &Context, SyncScope::ID SSID);
+  void writeAtomic(const LLVMContext &Context, AtomicOrdering Ordering,
                    SyncScope::ID SSID);
   void writeAtomicCmpXchg(const LLVMContext &Context,
                           AtomicOrdering SuccessOrdering,
-                          AtomicOrdering FailureOrdering,
-                          SyncScope::ID SSID);
+                          AtomicOrdering FailureOrdering, SyncScope::ID SSID);
 
   void writeAllMDNodes();
   void writeMDNode(unsigned Slot, const MDNode *Node);
@@ -2700,8 +2813,7 @@ void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
 }
 
 void AssemblyWriter::writeAtomic(const LLVMContext &Context,
-                                 AtomicOrdering Ordering,
-                                 SyncScope::ID SSID) {
+                                 AtomicOrdering Ordering, SyncScope::ID SSID) {
   if (Ordering == AtomicOrdering::NotAtomic)
     return;
 
@@ -2836,18 +2948,22 @@ void AssemblyWriter::printModule(const Module *M) {
   }
 
   // Output all globals.
-  if (!M->global_empty()) Out << '\n';
+  if (!M->global_empty())
+    Out << '\n';
   for (const GlobalVariable &GV : M->globals()) {
-    printGlobal(&GV); Out << '\n';
+    printGlobal(&GV);
+    Out << '\n';
   }
 
   // Output all aliases.
-  if (!M->alias_empty()) Out << "\n";
+  if (!M->alias_empty())
+    Out << "\n";
   for (const GlobalAlias &GA : M->aliases())
     printAlias(&GA);
 
   // Output all ifuncs.
-  if (!M->ifunc_empty()) Out << "\n";
+  if (!M->ifunc_empty())
+    Out << "\n";
   for (const GlobalIFunc &GI : M->ifuncs())
     printIFunc(&GI);
 
@@ -2867,7 +2983,8 @@ void AssemblyWriter::printModule(const Module *M) {
   }
 
   // Output named metadata.
-  if (!M->named_metadata_empty()) Out << '\n';
+  if (!M->named_metadata_empty())
+    Out << '\n';
 
   for (const NamedMDNode &Node : M->named_metadata())
     printNamedMDNode(&Node);
@@ -3519,9 +3636,14 @@ void AssemblyWriter::printNamedMDNode(const NamedMDNode *NMD) {
 static void PrintVisibility(GlobalValue::VisibilityTypes Vis,
                             formatted_raw_ostream &Out) {
   switch (Vis) {
-  case GlobalValue::DefaultVisibility: break;
-  case GlobalValue::HiddenVisibility:    Out << "hidden "; break;
-  case GlobalValue::ProtectedVisibility: Out << "protected "; break;
+  case GlobalValue::DefaultVisibility:
+    break;
+  case GlobalValue::HiddenVisibility:
+    Out << "hidden ";
+    break;
+  case GlobalValue::ProtectedVisibility:
+    Out << "protected ";
+    break;
   }
 }
 
@@ -3534,29 +3656,34 @@ static void PrintDSOLocation(const GlobalValue &GV,
 static void PrintDLLStorageClass(GlobalValue::DLLStorageClassTypes SCT,
                                  formatted_raw_ostream &Out) {
   switch (SCT) {
-  case GlobalValue::DefaultStorageClass: break;
-  case GlobalValue::DLLImportStorageClass: Out << "dllimport "; break;
-  case GlobalValue::DLLExportStorageClass: Out << "dllexport "; break;
+  case GlobalValue::DefaultStorageClass:
+    break;
+  case GlobalValue::DLLImportStorageClass:
+    Out << "dllimport ";
+    break;
+  case GlobalValue::DLLExportStorageClass:
+    Out << "dllexport ";
+    break;
   }
 }
 
 static void PrintThreadLocalModel(GlobalVariable::ThreadLocalMode TLM,
                                   formatted_raw_ostream &Out) {
   switch (TLM) {
-    case GlobalVariable::NotThreadLocal:
-      break;
-    case GlobalVariable::GeneralDynamicTLSModel:
-      Out << "thread_local ";
-      break;
-    case GlobalVariable::LocalDynamicTLSModel:
-      Out << "thread_local(localdynamic) ";
-      break;
-    case GlobalVariable::InitialExecTLSModel:
-      Out << "thread_local(initialexec) ";
-      break;
-    case GlobalVariable::LocalExecTLSModel:
-      Out << "thread_local(localexec) ";
-      break;
+  case GlobalVariable::NotThreadLocal:
+    break;
+  case GlobalVariable::GeneralDynamicTLSModel:
+    Out << "thread_local ";
+    break;
+  case GlobalVariable::LocalDynamicTLSModel:
+    Out << "thread_local(localdynamic) ";
+    break;
+  case GlobalVariable::InitialExecTLSModel:
+    Out << "thread_local(initialexec) ";
+    break;
+  case GlobalVariable::LocalExecTLSModel:
+    Out << "thread_local(localexec) ";
+    break;
   }
 }
 
@@ -3608,11 +3735,12 @@ void AssemblyWriter::printGlobal(const GlobalVariable *GV) {
   PrintThreadLocalModel(GV->getThreadLocalMode(), Out);
   StringRef UA = getUnnamedAddrEncoding(GV->getUnnamedAddr());
   if (!UA.empty())
-      Out << UA << ' ';
+    Out << UA << ' ';
 
   if (unsigned AddressSpace = GV->getType()->getAddressSpace())
     Out << "addrspace(" << AddressSpace << ") ";
-  if (GV->isExternallyInitialized()) Out << "externally_initialized ";
+  if (GV->isExternallyInitialized())
+    Out << "externally_initialized ";
   Out << (GV->isConstant() ? "constant " : "global ");
   TypePrinter.print(GV->getValueType(), Out);
 
@@ -3675,7 +3803,7 @@ void AssemblyWriter::printAlias(const GlobalAlias *GA) {
   PrintThreadLocalModel(GA->getThreadLocalMode(), Out);
   StringRef UA = getUnnamedAddrEncoding(GA->getUnnamedAddr());
   if (!UA.empty())
-      Out << UA << ' ';
+    Out << UA << ' ';
 
   Out << "alias ";
 
@@ -3733,9 +3861,7 @@ void AssemblyWriter::printIFunc(const GlobalIFunc *GI) {
   Out << '\n';
 }
 
-void AssemblyWriter::printComdat(const Comdat *C) {
-  C->print(Out);
-}
+void AssemblyWriter::printComdat(const Comdat *C) { C->print(Out); }
 
 void AssemblyWriter::printTypeIdentities() {
   if (TypePrinter.empty())
@@ -3768,7 +3894,8 @@ void AssemblyWriter::printTypeIdentities() {
 
 /// printFunction - Print all aspects of a function.
 void AssemblyWriter::printFunction(const Function *F) {
-  if (AnnotationWriter) AnnotationWriter->emitFunctionAnnot(F, Out);
+  if (AnnotationWriter)
+    AnnotationWriter->emitFunctionAnnot(F, Out);
 
   if (F->isMaterializable())
     Out << "; Materializable\n";
@@ -3780,7 +3907,8 @@ void AssemblyWriter::printFunction(const Function *F) {
 
     for (const Attribute &Attr : AS) {
       if (!Attr.isStringAttribute()) {
-        if (!AttrStr.empty()) AttrStr += ' ';
+        if (!AttrStr.empty())
+          AttrStr += ' ';
         AttrStr += Attr.getAsString();
       }
     }
@@ -3848,8 +3976,9 @@ void AssemblyWriter::printFunction(const Function *F) {
 
   // Finish printing arguments...
   if (FT->isVarArg()) {
-    if (FT->getNumParams()) Out << ", ";
-    Out << "...";  // Output varargs portion of signature!
+    if (FT->getNumParams())
+      Out << ", ";
+    Out << "..."; // Output varargs portion of signature!
   }
   Out << ')';
   StringRef UA = getUnnamedAddrEncoding(F->getUnnamedAddr());
@@ -3939,7 +4068,7 @@ void AssemblyWriter::printArgument(const Argument *Arg, AttributeSet Attrs) {
 /// printBasicBlock - This member is called for each basic block in a method.
 void AssemblyWriter::printBasicBlock(const BasicBlock *BB) {
   bool IsEntryBlock = BB->getParent() && BB->isEntryBlock();
-  if (BB->hasName()) {              // Print out the label if it exists...
+  if (BB->hasName()) { // Print out the label if it exists...
     Out << "\n";
     PrintLLVMName(Out, BB->getName(), LabelPrefix);
     Out << ':';
@@ -3972,14 +4101,16 @@ void AssemblyWriter::printBasicBlock(const BasicBlock *BB) {
 
   Out << "\n";
 
-  if (AnnotationWriter) AnnotationWriter->emitBasicBlockStartAnnot(BB, Out);
+  if (AnnotationWriter)
+    AnnotationWriter->emitBasicBlockStartAnnot(BB, Out);
 
   // Output all of the instructions in the basic block...
   for (const Instruction &I : *BB) {
     printInstructionLine(I);
   }
 
-  if (AnnotationWriter) AnnotationWriter->emitBasicBlockEndAnnot(BB, Out);
+  if (AnnotationWriter)
+    AnnotationWriter->emitBasicBlockEndAnnot(BB, Out);
 }
 
 /// printInstructionLine - Print an instruction and a newline character.
@@ -4031,7 +4162,8 @@ static void maybePrintCallAddrSpace(const Value *Operand, const Instruction *I,
 
 // This member is called for each Instruction in a function..
 void AssemblyWriter::printInstruction(const Instruction &I) {
-  if (AnnotationWriter) AnnotationWriter->emitInstructionAnnot(&I, Out);
+  if (AnnotationWriter)
+    AnnotationWriter->emitInstructionAnnot(&I, Out);
 
   // Print out indentation for an instruction.
   Out << "  ";
@@ -4062,7 +4194,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
   Out << I.getOpcodeName();
 
   // If this is an atomic load or store, print out the atomic marker.
-  if ((isa<LoadInst>(I)  && cast<LoadInst>(I).isAtomic()) ||
+  if ((isa<LoadInst>(I) && cast<LoadInst>(I).isAtomic()) ||
       (isa<StoreInst>(I) && cast<StoreInst>(I).isAtomic()))
     Out << " atomic";
 
@@ -4070,7 +4202,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     Out << " weak";
 
   // If this is a volatile operation, print out the volatile marker.
-  if ((isa<LoadInst>(I)  && cast<LoadInst>(I).isVolatile()) ||
+  if ((isa<LoadInst>(I) && cast<LoadInst>(I).isVolatile()) ||
       (isa<StoreInst>(I) && cast<StoreInst>(I).isVolatile()) ||
       (isa<AtomicCmpXchgInst>(I) && cast<AtomicCmpXchgInst>(I).isVolatile()) ||
       (isa<AtomicRMWInst>(I) && cast<AtomicRMWInst>(I).isVolatile()))
@@ -4101,7 +4233,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     writeOperand(BI.getSuccessor(1), true);
 
   } else if (isa<SwitchInst>(I)) {
-    const SwitchInst& SI(cast<SwitchInst>(I));
+    const SwitchInst &SI(cast<SwitchInst>(I));
     // Special case switch instruction to get formatting nice and correct.
     Out << ' ';
     writeOperand(SI.getCondition(), true);
@@ -4133,10 +4265,13 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
     Out << ' ';
 
     for (unsigned op = 0, Eop = PN->getNumIncomingValues(); op < Eop; ++op) {
-      if (op) Out << ", ";
+      if (op)
+        Out << ", ";
       Out << "[ ";
-      writeOperand(PN->getIncomingValue(op), false); Out << ", ";
-      writeOperand(PN->getIncomingBlock(op), false); Out << " ]";
+      writeOperand(PN->getIncomingValue(op), false);
+      Out << ", ";
+      writeOperand(PN->getIncomingBlock(op), false);
+      Out << " ]";
     }
   } else if (const ExtractValueInst *EVI = dyn_cast<ExtractValueInst>(&I)) {
     Out << ' ';
@@ -4145,7 +4280,8 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
       Out << ", " << i;
   } else if (const InsertValueInst *IVI = dyn_cast<InsertValueInst>(&I)) {
     Out << ' ';
-    writeOperand(I.getOperand(0), true); Out << ", ";
+    writeOperand(I.getOperand(0), true);
+    Out << ", ";
     writeOperand(I.getOperand(1), true);
     for (unsigned i : IVI->indices())
       Out << ", " << i;
@@ -4159,7 +4295,8 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
       Out << "          cleanup";
 
     for (unsigned i = 0, e = LPI->getNumClauses(); i != e; ++i) {
-      if (i != 0 || LPI->isCleanup()) Out << "\n";
+      if (i != 0 || LPI->isCleanup())
+        Out << "\n";
       if (LPI->isCatch(i))
         Out << "          catch ";
       else
@@ -4373,18 +4510,18 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
   } else if (isa<CastInst>(I)) {
     if (Operand) {
       Out << ' ';
-      writeOperand(Operand, true);   // Work with broken code
+      writeOperand(Operand, true); // Work with broken code
     }
     Out << " to ";
     TypePrinter.print(I.getType(), Out);
   } else if (isa<VAArgInst>(I)) {
     if (Operand) {
       Out << ' ';
-      writeOperand(Operand, true);   // Work with broken code
+      writeOperand(Operand, true); // Work with broken code
     }
     Out << ", ";
     TypePrinter.print(I.getType(), Out);
-  } else if (Operand) {   // Print the normal way.
+  } else if (Operand) { // Print the normal way.
     if (const auto *GEP = dyn_cast<GetElementPtrInst>(&I)) {
       Out << ' ';
       TypePrinter.print(GEP->getSourceElementType(), Out);
@@ -4413,7 +4550,7 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
         // note that Operand shouldn't be null, but the test helps make dump()
         // more tolerant of malformed IR
         if (Operand && Operand->getType() != TheType) {
-          PrintAllTypes = true;    // We have differing types!  Print them all!
+          PrintAllTypes = true; // We have differing types!  Print them all!
           break;
         }
       }
@@ -4426,7 +4563,8 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
 
     Out << ' ';
     for (unsigned i = 0, E = I.getNumOperands(); i != E; ++i) {
-      if (i) Out << ", ";
+      if (i)
+        Out << ", ";
       writeOperand(I.getOperand(i), PrintAllTypes);
     }
   }
@@ -4543,8 +4681,8 @@ void AssemblyWriter::writeAllAttributeGroups() {
     asVec[I.second] = I;
 
   for (const auto &I : asVec)
-    Out << "attributes #" << I.second << " = { "
-        << I.first.getAsString(true) << " }\n";
+    Out << "attributes #" << I.second << " = { " << I.first.getAsString(true)
+        << " }\n";
 }
 
 void AssemblyWriter::printUseListOrder(const Value *V,
@@ -4587,23 +4725,19 @@ void AssemblyWriter::printUseLists(const Function *F) {
 //===----------------------------------------------------------------------===//
 
 void Function::print(raw_ostream &ROS, AssemblyAnnotationWriter *AAW,
-                     bool ShouldPreserveUseListOrder,
-                     bool IsForDebug) const {
+                     bool ShouldPreserveUseListOrder, bool IsForDebug) const {
   SlotTracker SlotTable(this->getParent());
   formatted_raw_ostream OS(ROS);
-  AssemblyWriter W(OS, SlotTable, this->getParent(), AAW,
-                   IsForDebug,
+  AssemblyWriter W(OS, SlotTable, this->getParent(), AAW, IsForDebug,
                    ShouldPreserveUseListOrder);
   W.printFunction(this);
 }
 
 void BasicBlock::print(raw_ostream &ROS, AssemblyAnnotationWriter *AAW,
-                     bool ShouldPreserveUseListOrder,
-                     bool IsForDebug) const {
+                       bool ShouldPreserveUseListOrder, bool IsForDebug) const {
   SlotTracker SlotTable(this->getParent());
   formatted_raw_ostream OS(ROS);
-  AssemblyWriter W(OS, SlotTable, this->getModule(), AAW,
-                   IsForDebug,
+  AssemblyWriter W(OS, SlotTable, this->getModule(), AAW, IsForDebug,
                    ShouldPreserveUseListOrder);
   W.printBasicBlock(this);
 }
@@ -4667,13 +4801,13 @@ void Comdat::print(raw_ostream &ROS, bool /*IsForDebug*/) const {
 
 void Type::print(raw_ostream &OS, bool /*IsForDebug*/, bool NoDetails) const {
   TypePrinting TP;
-  TP.print(const_cast<Type*>(this), OS);
+  TP.print(const_cast<Type *>(this), OS);
 
   if (NoDetails)
     return;
 
   // If the type is a named struct type, print the body as well.
-  if (StructType *STy = dyn_cast<StructType>(const_cast<Type*>(this)))
+  if (StructType *STy = dyn_cast<StructType>(const_cast<Type *>(this)))
     if (!STy->isLiteral()) {
       OS << " = type ";
       TP.printStructBody(STy, OS);
@@ -4896,8 +5030,8 @@ void Metadata::print(raw_ostream &OS, const Module *M,
   printMetadataImpl(OS, *this, MST, M, /* OnlyAsOperand */ false);
 }
 
-void Metadata::print(raw_ostream &OS, ModuleSlotTracker &MST,
-                     const Module *M, bool /*IsForDebug*/) const {
+void Metadata::print(raw_ostream &OS, ModuleSlotTracker &MST, const Module *M,
+                     bool /*IsForDebug*/) const {
   printMetadataImpl(OS, *this, MST, M, /* OnlyAsOperand */ false);
 }
 
@@ -4934,11 +5068,17 @@ void ModuleSlotTracker::collectMDNodes(MachineMDNodeListType &L, unsigned LB,
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 // Value::dump - allow easy printing of Values from the debugger.
 LLVM_DUMP_METHOD
-void Value::dump() const { print(dbgs(), /*IsForDebug=*/true); dbgs() << '\n'; }
+void Value::dump() const {
+  print(dbgs(), /*IsForDebug=*/true);
+  dbgs() << '\n';
+}
 
 // Type::dump - allow easy printing of Types from the debugger.
 LLVM_DUMP_METHOD
-void Type::dump() const { print(dbgs(), /*IsForDebug=*/true); dbgs() << '\n'; }
+void Type::dump() const {
+  print(dbgs(), /*IsForDebug=*/true);
+  dbgs() << '\n';
+}
 
 // Module::dump() - Allow printing of Modules from the debugger.
 LLVM_DUMP_METHOD
diff --git a/llvm/lib/IR/AttributeImpl.h b/llvm/lib/IR/AttributeImpl.h
index 78496786b0ae95b..a93c95944f96082 100644
--- a/llvm/lib/IR/AttributeImpl.h
+++ b/llvm/lib/IR/AttributeImpl.h
@@ -100,7 +100,8 @@ class AttributeImpl : public FoldingSetNode {
 
   static void Profile(FoldingSetNodeID &ID, StringRef Kind, StringRef Values) {
     ID.AddString(Kind);
-    if (!Values.empty()) ID.AddString(Values);
+    if (!Values.empty())
+      ID.AddString(Values);
   }
 
   static void Profile(FoldingSetNodeID &ID, Attribute::AttrKind Kind,
@@ -143,8 +144,7 @@ class IntAttributeImpl : public EnumAttributeImpl {
 public:
   IntAttributeImpl(Attribute::AttrKind Kind, uint64_t Val)
       : EnumAttributeImpl(IntAttrEntry, Kind), Val(Val) {
-    assert(Attribute::isIntAttrKind(Kind) &&
-           "Wrong kind for int attribute!");
+    assert(Attribute::isIntAttrKind(Kind) && "Wrong kind for int attribute!");
   }
 
   uint64_t getValue() const { return Val; }
@@ -221,7 +221,7 @@ class AttributeSetNode final
       private TrailingObjects<AttributeSetNode, Attribute> {
   friend TrailingObjects;
 
-  unsigned NumAttrs; ///< Number of attributes in this node.
+  unsigned NumAttrs;              ///< Number of attributes in this node.
   AttributeBitSet AvailableAttrs; ///< Available enum attributes.
 
   DenseMap<StringRef, Attribute> StringAttrs;
@@ -259,8 +259,8 @@ class AttributeSetNode final
   MaybeAlign getStackAlignment() const;
   uint64_t getDereferenceableBytes() const;
   uint64_t getDereferenceableOrNullBytes() const;
-  std::optional<std::pair<unsigned, std::optional<unsigned>>> getAllocSizeArgs()
-      const;
+  std::optional<std::pair<unsigned, std::optional<unsigned>>>
+  getAllocSizeArgs() const;
   unsigned getVScaleRangeMin() const;
   std::optional<unsigned> getVScaleRangeMax() const;
   UWTableKind getUWTableKind() const;
diff --git a/llvm/lib/IR/Attributes.cpp b/llvm/lib/IR/Attributes.cpp
index 3d89d18e5822bee..bc5f0dc25916c69 100644
--- a/llvm/lib/IR/Attributes.cpp
+++ b/llvm/lib/IR/Attributes.cpp
@@ -124,7 +124,8 @@ Attribute Attribute::get(LLVMContext &Context, StringRef Kind, StringRef Val) {
   LLVMContextImpl *pImpl = Context.pImpl;
   FoldingSetNodeID ID;
   ID.AddString(Kind);
-  if (!Val.empty()) ID.AddString(Val);
+  if (!Val.empty())
+    ID.AddString(Val);
 
   void *InsertPoint;
   AttributeImpl *PA = pImpl->AttrsSet.FindNodeOrInsertPos(ID, InsertPoint);
@@ -176,7 +177,7 @@ Attribute Attribute::getWithStackAlignment(LLVMContext &Context, Align A) {
 }
 
 Attribute Attribute::getWithDereferenceableBytes(LLVMContext &Context,
-                                                uint64_t Bytes) {
+                                                 uint64_t Bytes) {
   assert(Bytes && "Bytes must be non-zero.");
   return get(Context, Dereferenceable, Bytes);
 }
@@ -288,54 +289,60 @@ bool Attribute::isTypeAttribute() const {
 }
 
 Attribute::AttrKind Attribute::getKindAsEnum() const {
-  if (!pImpl) return None;
+  if (!pImpl)
+    return None;
   assert((isEnumAttribute() || isIntAttribute() || isTypeAttribute()) &&
          "Invalid attribute type to get the kind as an enum!");
   return pImpl->getKindAsEnum();
 }
 
 uint64_t Attribute::getValueAsInt() const {
-  if (!pImpl) return 0;
+  if (!pImpl)
+    return 0;
   assert(isIntAttribute() &&
          "Expected the attribute to be an integer attribute!");
   return pImpl->getValueAsInt();
 }
 
 bool Attribute::getValueAsBool() const {
-  if (!pImpl) return false;
+  if (!pImpl)
+    return false;
   assert(isStringAttribute() &&
          "Expected the attribute to be a string attribute!");
   return pImpl->getValueAsBool();
 }
 
 StringRef Attribute::getKindAsString() const {
-  if (!pImpl) return {};
+  if (!pImpl)
+    return {};
   assert(isStringAttribute() &&
          "Invalid attribute type to get the kind as a string!");
   return pImpl->getKindAsString();
 }
 
 StringRef Attribute::getValueAsString() const {
-  if (!pImpl) return {};
+  if (!pImpl)
+    return {};
   assert(isStringAttribute() &&
          "Invalid attribute type to get the value as a string!");
   return pImpl->getValueAsString();
 }
 
 Type *Attribute::getValueAsType() const {
-  if (!pImpl) return {};
+  if (!pImpl)
+    return {};
   assert(isTypeAttribute() &&
          "Invalid attribute type to get the value as a type!");
   return pImpl->getValueAsType();
 }
 
-
 bool Attribute::hasAttribute(AttrKind Kind) const {
   return (pImpl && pImpl->hasAttribute(Kind)) || (!pImpl && Kind == None);
 }
 
 bool Attribute::hasAttribute(StringRef Kind) const {
-  if (!isStringAttribute()) return false;
+  if (!isStringAttribute())
+    return false;
   return pImpl && pImpl->hasAttribute(Kind);
 }
 
@@ -423,7 +430,8 @@ static const char *getModRefStr(ModRefInfo MR) {
 }
 
 std::string Attribute::getAsString(bool InAttrGrp) const {
-  if (!pImpl) return {};
+  if (!pImpl)
+    return {};
 
   if (isEnumAttribute())
     return getNameFromAttrKind(getKindAsEnum()).str();
@@ -598,15 +606,16 @@ bool Attribute::hasParentContext(LLVMContext &C) const {
 }
 
 bool Attribute::operator<(Attribute A) const {
-  if (!pImpl && !A.pImpl) return false;
-  if (!pImpl) return true;
-  if (!A.pImpl) return false;
+  if (!pImpl && !A.pImpl)
+    return false;
+  if (!pImpl)
+    return true;
+  if (!A.pImpl)
+    return false;
   return *pImpl < *A.pImpl;
 }
 
-void Attribute::Profile(FoldingSetNodeID &ID) const {
-  ID.AddPointer(pImpl);
-}
+void Attribute::Profile(FoldingSetNodeID &ID) const { ID.AddPointer(pImpl); }
 
 enum AttributeProperty {
   FnAttr = (1 << 0),
@@ -641,12 +650,14 @@ bool Attribute::canUseAsRetAttr(AttrKind Kind) {
 //===----------------------------------------------------------------------===//
 
 bool AttributeImpl::hasAttribute(Attribute::AttrKind A) const {
-  if (isStringAttribute()) return false;
+  if (isStringAttribute())
+    return false;
   return getKindAsEnum() == A;
 }
 
 bool AttributeImpl::hasAttribute(StringRef Kind) const {
-  if (!isStringAttribute()) return false;
+  if (!isStringAttribute())
+    return false;
   return getKindAsString() == Kind;
 }
 
@@ -661,7 +672,8 @@ uint64_t AttributeImpl::getValueAsInt() const {
 }
 
 bool AttributeImpl::getValueAsBool() const {
-  assert(getValueAsString().empty() || getValueAsString() == "false" || getValueAsString() == "true");
+  assert(getValueAsString().empty() || getValueAsString() == "false" ||
+         getValueAsString() == "true");
   return getValueAsString() == "true";
 }
 
@@ -719,7 +731,8 @@ AttributeSet AttributeSet::get(LLVMContext &C, ArrayRef<Attribute> Attrs) {
 
 AttributeSet AttributeSet::addAttribute(LLVMContext &C,
                                         Attribute::AttrKind Kind) const {
-  if (hasAttribute(Kind)) return *this;
+  if (hasAttribute(Kind))
+    return *this;
   AttrBuilder B(C);
   B.addAttribute(Kind);
   return addAttributes(C, AttributeSet::get(C, B));
@@ -746,16 +759,18 @@ AttributeSet AttributeSet::addAttributes(LLVMContext &C,
 }
 
 AttributeSet AttributeSet::removeAttribute(LLVMContext &C,
-                                             Attribute::AttrKind Kind) const {
-  if (!hasAttribute(Kind)) return *this;
+                                           Attribute::AttrKind Kind) const {
+  if (!hasAttribute(Kind))
+    return *this;
   AttrBuilder B(C, *this);
   B.removeAttribute(Kind);
   return get(C, B);
 }
 
 AttributeSet AttributeSet::removeAttribute(LLVMContext &C,
-                                             StringRef Kind) const {
-  if (!hasAttribute(Kind)) return *this;
+                                           StringRef Kind) const {
+  if (!hasAttribute(Kind))
+    return *this;
   AttrBuilder B(C, *this);
   B.removeAttribute(Kind);
   return get(C, B);
@@ -886,8 +901,8 @@ AttributeSet::iterator AttributeSet::end() const {
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 LLVM_DUMP_METHOD void AttributeSet::dump() const {
   dbgs() << "AS =\n";
-    dbgs() << "  { ";
-    dbgs() << getAsString(true) << " }\n";
+  dbgs() << "  { ";
+  dbgs() << getAsString(true) << " }\n";
 }
 #endif
 
@@ -902,7 +917,7 @@ AttributeSetNode::AttributeSetNode(ArrayRef<Attribute> Attrs)
 
   for (const auto &I : *this) {
     if (I.isStringAttribute())
-      StringAttrs.insert({ I.getKindAsString(), I });
+      StringAttrs.insert({I.getKindAsString(), I});
     else
       AvailableAttrs.addAttribute(I.getKindAsEnum());
   }
@@ -930,7 +945,7 @@ AttributeSetNode *AttributeSetNode::getSorted(LLVMContext &C,
 
   void *InsertPoint;
   AttributeSetNode *PA =
-    pImpl->AttrsSetNodes.FindNodeOrInsertPos(ID, InsertPoint);
+      pImpl->AttrsSetNodes.FindNodeOrInsertPos(ID, InsertPoint);
 
   // If we didn't find any existing attributes of the same shape then create a
   // new one and insert it.
@@ -1069,9 +1084,7 @@ std::string AttributeSetNode::getAsString(bool InAttrGrp) const {
 
 /// Map from AttributeList index to the internal array index. Adding one happens
 /// to work, because -1 wraps around to 0.
-static unsigned attrIdxToArrayIdx(unsigned Index) {
-  return Index + 1;
-}
+static unsigned attrIdxToArrayIdx(unsigned Index) { return Index + 1; }
 
 AttributeListImpl::AttributeListImpl(ArrayRef<AttributeSet> Sets)
     : NumAttrSets(Sets.size()) {
@@ -1103,7 +1116,7 @@ void AttributeListImpl::Profile(FoldingSetNodeID &ID,
 }
 
 bool AttributeListImpl::hasAttrSomewhere(Attribute::AttrKind Kind,
-                                        unsigned *Index) const {
+                                         unsigned *Index) const {
   if (!AvailableSomewhereAttrs.hasAttribute(Kind))
     return false;
 
@@ -1119,7 +1132,6 @@ bool AttributeListImpl::hasAttrSomewhere(Attribute::AttrKind Kind,
   return true;
 }
 
-
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 LLVM_DUMP_METHOD void AttributeListImpl::dump() const {
   AttributeList(const_cast<AttributeListImpl *>(this)).dump();
@@ -1176,7 +1188,8 @@ AttributeList::get(LLVMContext &C,
   // list.
   SmallVector<std::pair<unsigned, AttributeSet>, 8> AttrPairVec;
   for (ArrayRef<std::pair<unsigned, Attribute>>::iterator I = Attrs.begin(),
-         E = Attrs.end(); I != E; ) {
+                                                          E = Attrs.end();
+       I != E;) {
     unsigned Index = I->first;
     SmallVector<Attribute, 4> AttrVec;
     while (I != E && I->first == Index) {
@@ -1541,7 +1554,7 @@ MaybeAlign AttributeList::getParamStackAlignment(unsigned ArgNo) const {
 }
 
 Type *AttributeList::getParamByValType(unsigned Index) const {
-  return getAttributes(Index+FirstArgIndex).getByValType();
+  return getAttributes(Index + FirstArgIndex).getByValType();
 }
 
 Type *AttributeList::getParamStructRetType(unsigned Index) const {
@@ -1793,7 +1806,8 @@ AttrBuilder &AttrBuilder::addStackAlignmentAttr(MaybeAlign Align) {
 }
 
 AttrBuilder &AttrBuilder::addDereferenceableAttr(uint64_t Bytes) {
-  if (Bytes == 0) return *this;
+  if (Bytes == 0)
+    return *this;
 
   return addRawIntAttr(Attribute::Dereferenceable, Bytes);
 }
@@ -1974,7 +1988,7 @@ AttributeMask AttributeFuncs::typeIncompatible(Type *Ty,
           .addAttribute(Attribute::AllocatedPointer);
   }
 
-    // Attributes that only apply to pointers or vectors of pointers.
+  // Attributes that only apply to pointers or vectors of pointers.
   if (!Ty->isPtrOrPtrVectorTy()) {
     if (ASK & ASK_SAFE_TO_DROP)
       Incompatible.addAttribute(Attribute::Alignment);
@@ -2037,7 +2051,7 @@ static bool checkDenormMode(const Function &Caller, const Function &Callee) {
   return false;
 }
 
-template<typename AttrClass>
+template <typename AttrClass>
 static bool isEqual(const Function &Caller, const Function &Callee) {
   return Caller.getFnAttribute(AttrClass::getKind()) ==
          Callee.getFnAttribute(AttrClass::getKind());
@@ -2048,7 +2062,7 @@ static bool isEqual(const Function &Caller, const Function &Callee) {
 ///
 /// This function sets the caller's attribute to false if the callee's attribute
 /// is false.
-template<typename AttrClass>
+template <typename AttrClass>
 static void setAND(Function &Caller, const Function &Callee) {
   if (AttrClass::isSet(Caller, AttrClass::getKind()) &&
       !AttrClass::isSet(Callee, AttrClass::getKind()))
@@ -2060,7 +2074,7 @@ static void setAND(Function &Caller, const Function &Callee) {
 ///
 /// This function sets the caller's attribute to true if the callee's attribute
 /// is true.
-template<typename AttrClass>
+template <typename AttrClass>
 static void setOR(Function &Caller, const Function &Callee) {
   if (!AttrClass::isSet(Caller, AttrClass::getKind()) &&
       AttrClass::isSet(Callee, AttrClass::getKind()))
@@ -2109,8 +2123,8 @@ static void adjustCallerStackProbes(Function &Caller, const Function &Callee) {
 /// If the inlined function defines the size of guard region
 /// on the stack, then ensure that the calling function defines a guard region
 /// that is no larger.
-static void
-adjustCallerStackProbeSize(Function &Caller, const Function &Callee) {
+static void adjustCallerStackProbeSize(Function &Caller,
+                                       const Function &Callee) {
   Attribute CalleeAttr = Callee.getFnAttribute("stack-probe-size");
   if (CalleeAttr.isValid()) {
     Attribute CallerAttr = Caller.getFnAttribute("stack-probe-size");
@@ -2137,8 +2151,8 @@ adjustCallerStackProbeSize(Function &Caller, const Function &Callee) {
 /// to merge the attribute this way. Heuristics that would use
 /// min-legal-vector-width to determine inline compatibility would need to be
 /// handled as part of inline cost analysis.
-static void
-adjustMinLegalVectorWidth(Function &Caller, const Function &Callee) {
+static void adjustMinLegalVectorWidth(Function &Caller,
+                                      const Function &Callee) {
   Attribute CallerAttr = Caller.getFnAttribute("min-legal-vector-width");
   if (CallerAttr.isValid()) {
     Attribute CalleeAttr = Callee.getFnAttribute("min-legal-vector-width");
@@ -2158,21 +2172,19 @@ adjustMinLegalVectorWidth(Function &Caller, const Function &Callee) {
 
 /// If the inlined function has null_pointer_is_valid attribute,
 /// set this attribute in the caller post inlining.
-static void
-adjustNullPointerValidAttr(Function &Caller, const Function &Callee) {
+static void adjustNullPointerValidAttr(Function &Caller,
+                                       const Function &Callee) {
   if (Callee.nullPointerIsDefined() && !Caller.nullPointerIsDefined()) {
     Caller.addFnAttr(Attribute::NullPointerIsValid);
   }
 }
 
 struct EnumAttr {
-  static bool isSet(const Function &Fn,
-                    Attribute::AttrKind Kind) {
+  static bool isSet(const Function &Fn, Attribute::AttrKind Kind) {
     return Fn.hasFnAttribute(Kind);
   }
 
-  static void set(Function &Fn,
-                  Attribute::AttrKind Kind, bool Val) {
+  static void set(Function &Fn, Attribute::AttrKind Kind, bool Val) {
     if (Val)
       Fn.addFnAttr(Kind);
     else
@@ -2181,14 +2193,12 @@ struct EnumAttr {
 };
 
 struct StrBoolAttr {
-  static bool isSet(const Function &Fn,
-                    StringRef Kind) {
+  static bool isSet(const Function &Fn, StringRef Kind) {
     auto A = Fn.getFnAttribute(Kind);
     return A.getValueAsString().equals("true");
   }
 
-  static void set(Function &Fn,
-                  StringRef Kind, bool Val) {
+  static void set(Function &Fn, StringRef Kind, bool Val) {
     Fn.addFnAttr(Kind, Val ? "true" : "false");
   }
 };
@@ -2225,7 +2235,7 @@ void AttributeFuncs::mergeAttributesForInlining(Function &Caller,
 }
 
 void AttributeFuncs::mergeAttributesForOutlining(Function &Base,
-                                                const Function &ToMerge) {
+                                                 const Function &ToMerge) {
 
   // We merge functions so that they meet the most general case.
   // For example, if the NoNansFPMathAttr is set in one function, but not in
diff --git a/llvm/lib/IR/AutoUpgrade.cpp b/llvm/lib/IR/AutoUpgrade.cpp
index 4e10c6ce5e2bd50..076d8ff0062f941 100644
--- a/llvm/lib/IR/AutoUpgrade.cpp
+++ b/llvm/lib/IR/AutoUpgrade.cpp
@@ -53,7 +53,7 @@ static void rename(GlobalValue *GV) { GV->setName(GV->getName() + ".old"); }
 
 // Upgrade the declarations of the SSE4.1 ptest intrinsics whose arguments have
 // changed their type from v4f32 to v2i64.
-static bool UpgradePTESTIntrinsic(Function* F, Intrinsic::ID IID,
+static bool UpgradePTESTIntrinsic(Function *F, Intrinsic::ID IID,
                                   Function *&NewFn) {
   // Check whether this is an old version of the function, which received
   // v4f32 arguments.
@@ -73,7 +73,7 @@ static bool UpgradeX86IntrinsicsWith8BitMask(Function *F, Intrinsic::ID IID,
                                              Function *&NewFn) {
   // Check that the last argument is an i32.
   Type *LastArgType = F->getFunctionType()->getParamType(
-     F->getFunctionType()->getNumParams() - 1);
+      F->getFunctionType()->getNumParams() - 1);
   if (!LastArgType->isIntegerTy(32))
     return false;
 
@@ -122,307 +122,307 @@ static bool ShouldUpgradeX86Intrinsic(Function *F, StringRef Name) {
   // like to use this information to remove upgrade code for some older
   // intrinsics. It is currently undecided how we will determine that future
   // point.
-  if (Name == "addcarryx.u32" || // Added in 8.0
-      Name == "addcarryx.u64" || // Added in 8.0
-      Name == "addcarry.u32" || // Added in 8.0
-      Name == "addcarry.u64" || // Added in 8.0
-      Name == "subborrow.u32" || // Added in 8.0
-      Name == "subborrow.u64" || // Added in 8.0
-      Name.startswith("sse2.padds.") || // Added in 8.0
-      Name.startswith("sse2.psubs.") || // Added in 8.0
-      Name.startswith("sse2.paddus.") || // Added in 8.0
-      Name.startswith("sse2.psubus.") || // Added in 8.0
-      Name.startswith("avx2.padds.") || // Added in 8.0
-      Name.startswith("avx2.psubs.") || // Added in 8.0
-      Name.startswith("avx2.paddus.") || // Added in 8.0
-      Name.startswith("avx2.psubus.") || // Added in 8.0
-      Name.startswith("avx512.padds.") || // Added in 8.0
-      Name.startswith("avx512.psubs.") || // Added in 8.0
-      Name.startswith("avx512.mask.padds.") || // Added in 8.0
-      Name.startswith("avx512.mask.psubs.") || // Added in 8.0
-      Name.startswith("avx512.mask.paddus.") || // Added in 8.0
-      Name.startswith("avx512.mask.psubus.") || // Added in 8.0
-      Name=="ssse3.pabs.b.128" || // Added in 6.0
-      Name=="ssse3.pabs.w.128" || // Added in 6.0
-      Name=="ssse3.pabs.d.128" || // Added in 6.0
-      Name.startswith("fma4.vfmadd.s") || // Added in 7.0
-      Name.startswith("fma.vfmadd.") || // Added in 7.0
-      Name.startswith("fma.vfmsub.") || // Added in 7.0
-      Name.startswith("fma.vfmsubadd.") || // Added in 7.0
-      Name.startswith("fma.vfnmadd.") || // Added in 7.0
-      Name.startswith("fma.vfnmsub.") || // Added in 7.0
-      Name.startswith("avx512.mask.vfmadd.") || // Added in 7.0
-      Name.startswith("avx512.mask.vfnmadd.") || // Added in 7.0
-      Name.startswith("avx512.mask.vfnmsub.") || // Added in 7.0
-      Name.startswith("avx512.mask3.vfmadd.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vfmadd.") || // Added in 7.0
-      Name.startswith("avx512.mask3.vfmsub.") || // Added in 7.0
-      Name.startswith("avx512.mask3.vfnmsub.") || // Added in 7.0
-      Name.startswith("avx512.mask.vfmaddsub.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vfmaddsub.") || // Added in 7.0
-      Name.startswith("avx512.mask3.vfmaddsub.") || // Added in 7.0
-      Name.startswith("avx512.mask3.vfmsubadd.") || // Added in 7.0
-      Name.startswith("avx512.mask.shuf.i") || // Added in 6.0
-      Name.startswith("avx512.mask.shuf.f") || // Added in 6.0
-      Name.startswith("avx512.kunpck") || //added in 6.0
-      Name.startswith("avx2.pabs.") || // Added in 6.0
-      Name.startswith("avx512.mask.pabs.") || // Added in 6.0
-      Name.startswith("avx512.broadcastm") || // Added in 6.0
-      Name == "sse.sqrt.ss" || // Added in 7.0
-      Name == "sse2.sqrt.sd" || // Added in 7.0
-      Name.startswith("avx512.mask.sqrt.p") || // Added in 7.0
-      Name.startswith("avx.sqrt.p") || // Added in 7.0
-      Name.startswith("sse2.sqrt.p") || // Added in 7.0
-      Name.startswith("sse.sqrt.p") || // Added in 7.0
-      Name.startswith("avx512.mask.pbroadcast") || // Added in 6.0
-      Name.startswith("sse2.pcmpeq.") || // Added in 3.1
-      Name.startswith("sse2.pcmpgt.") || // Added in 3.1
-      Name.startswith("avx2.pcmpeq.") || // Added in 3.1
-      Name.startswith("avx2.pcmpgt.") || // Added in 3.1
-      Name.startswith("avx512.mask.pcmpeq.") || // Added in 3.9
-      Name.startswith("avx512.mask.pcmpgt.") || // Added in 3.9
-      Name.startswith("avx.vperm2f128.") || // Added in 6.0
-      Name == "avx2.vperm2i128" || // Added in 6.0
-      Name == "sse.add.ss" || // Added in 4.0
-      Name == "sse2.add.sd" || // Added in 4.0
-      Name == "sse.sub.ss" || // Added in 4.0
-      Name == "sse2.sub.sd" || // Added in 4.0
-      Name == "sse.mul.ss" || // Added in 4.0
-      Name == "sse2.mul.sd" || // Added in 4.0
-      Name == "sse.div.ss" || // Added in 4.0
-      Name == "sse2.div.sd" || // Added in 4.0
-      Name == "sse41.pmaxsb" || // Added in 3.9
-      Name == "sse2.pmaxs.w" || // Added in 3.9
-      Name == "sse41.pmaxsd" || // Added in 3.9
-      Name == "sse2.pmaxu.b" || // Added in 3.9
-      Name == "sse41.pmaxuw" || // Added in 3.9
-      Name == "sse41.pmaxud" || // Added in 3.9
-      Name == "sse41.pminsb" || // Added in 3.9
-      Name == "sse2.pmins.w" || // Added in 3.9
-      Name == "sse41.pminsd" || // Added in 3.9
-      Name == "sse2.pminu.b" || // Added in 3.9
-      Name == "sse41.pminuw" || // Added in 3.9
-      Name == "sse41.pminud" || // Added in 3.9
-      Name == "avx512.kand.w" || // Added in 7.0
-      Name == "avx512.kandn.w" || // Added in 7.0
-      Name == "avx512.knot.w" || // Added in 7.0
-      Name == "avx512.kor.w" || // Added in 7.0
-      Name == "avx512.kxor.w" || // Added in 7.0
-      Name == "avx512.kxnor.w" || // Added in 7.0
-      Name == "avx512.kortestc.w" || // Added in 7.0
-      Name == "avx512.kortestz.w" || // Added in 7.0
-      Name.startswith("avx512.mask.pshuf.b.") || // Added in 4.0
-      Name.startswith("avx2.pmax") || // Added in 3.9
-      Name.startswith("avx2.pmin") || // Added in 3.9
-      Name.startswith("avx512.mask.pmax") || // Added in 4.0
-      Name.startswith("avx512.mask.pmin") || // Added in 4.0
-      Name.startswith("avx2.vbroadcast") || // Added in 3.8
-      Name.startswith("avx2.pbroadcast") || // Added in 3.8
-      Name.startswith("avx.vpermil.") || // Added in 3.1
-      Name.startswith("sse2.pshuf") || // Added in 3.9
-      Name.startswith("avx512.pbroadcast") || // Added in 3.9
-      Name.startswith("avx512.mask.broadcast.s") || // Added in 3.9
-      Name.startswith("avx512.mask.movddup") || // Added in 3.9
-      Name.startswith("avx512.mask.movshdup") || // Added in 3.9
-      Name.startswith("avx512.mask.movsldup") || // Added in 3.9
-      Name.startswith("avx512.mask.pshuf.d.") || // Added in 3.9
-      Name.startswith("avx512.mask.pshufl.w.") || // Added in 3.9
-      Name.startswith("avx512.mask.pshufh.w.") || // Added in 3.9
-      Name.startswith("avx512.mask.shuf.p") || // Added in 4.0
-      Name.startswith("avx512.mask.vpermil.p") || // Added in 3.9
-      Name.startswith("avx512.mask.perm.df.") || // Added in 3.9
-      Name.startswith("avx512.mask.perm.di.") || // Added in 3.9
-      Name.startswith("avx512.mask.punpckl") || // Added in 3.9
-      Name.startswith("avx512.mask.punpckh") || // Added in 3.9
-      Name.startswith("avx512.mask.unpckl.") || // Added in 3.9
-      Name.startswith("avx512.mask.unpckh.") || // Added in 3.9
-      Name.startswith("avx512.mask.pand.") || // Added in 3.9
-      Name.startswith("avx512.mask.pandn.") || // Added in 3.9
-      Name.startswith("avx512.mask.por.") || // Added in 3.9
-      Name.startswith("avx512.mask.pxor.") || // Added in 3.9
-      Name.startswith("avx512.mask.and.") || // Added in 3.9
-      Name.startswith("avx512.mask.andn.") || // Added in 3.9
-      Name.startswith("avx512.mask.or.") || // Added in 3.9
-      Name.startswith("avx512.mask.xor.") || // Added in 3.9
-      Name.startswith("avx512.mask.padd.") || // Added in 4.0
-      Name.startswith("avx512.mask.psub.") || // Added in 4.0
-      Name.startswith("avx512.mask.pmull.") || // Added in 4.0
-      Name.startswith("avx512.mask.cvtdq2pd.") || // Added in 4.0
-      Name.startswith("avx512.mask.cvtudq2pd.") || // Added in 4.0
-      Name.startswith("avx512.mask.cvtudq2ps.") || // Added in 7.0 updated 9.0
-      Name.startswith("avx512.mask.cvtqq2pd.") || // Added in 7.0 updated 9.0
-      Name.startswith("avx512.mask.cvtuqq2pd.") || // Added in 7.0 updated 9.0
-      Name.startswith("avx512.mask.cvtdq2ps.") || // Added in 7.0 updated 9.0
-      Name == "avx512.mask.vcvtph2ps.128" || // Added in 11.0
-      Name == "avx512.mask.vcvtph2ps.256" || // Added in 11.0
-      Name == "avx512.mask.cvtqq2ps.256" || // Added in 9.0
-      Name == "avx512.mask.cvtqq2ps.512" || // Added in 9.0
-      Name == "avx512.mask.cvtuqq2ps.256" || // Added in 9.0
-      Name == "avx512.mask.cvtuqq2ps.512" || // Added in 9.0
-      Name == "avx512.mask.cvtpd2dq.256" || // Added in 7.0
-      Name == "avx512.mask.cvtpd2ps.256" || // Added in 7.0
-      Name == "avx512.mask.cvttpd2dq.256" || // Added in 7.0
-      Name == "avx512.mask.cvttps2dq.128" || // Added in 7.0
-      Name == "avx512.mask.cvttps2dq.256" || // Added in 7.0
-      Name == "avx512.mask.cvtps2pd.128" || // Added in 7.0
-      Name == "avx512.mask.cvtps2pd.256" || // Added in 7.0
-      Name == "avx512.cvtusi2sd" || // Added in 7.0
-      Name.startswith("avx512.mask.permvar.") || // Added in 7.0
-      Name == "sse2.pmulu.dq" || // Added in 7.0
-      Name == "sse41.pmuldq" || // Added in 7.0
-      Name == "avx2.pmulu.dq" || // Added in 7.0
-      Name == "avx2.pmul.dq" || // Added in 7.0
-      Name == "avx512.pmulu.dq.512" || // Added in 7.0
-      Name == "avx512.pmul.dq.512" || // Added in 7.0
-      Name.startswith("avx512.mask.pmul.dq.") || // Added in 4.0
-      Name.startswith("avx512.mask.pmulu.dq.") || // Added in 4.0
-      Name.startswith("avx512.mask.pmul.hr.sw.") || // Added in 7.0
-      Name.startswith("avx512.mask.pmulh.w.") || // Added in 7.0
-      Name.startswith("avx512.mask.pmulhu.w.") || // Added in 7.0
-      Name.startswith("avx512.mask.pmaddw.d.") || // Added in 7.0
-      Name.startswith("avx512.mask.pmaddubs.w.") || // Added in 7.0
-      Name.startswith("avx512.mask.packsswb.") || // Added in 5.0
-      Name.startswith("avx512.mask.packssdw.") || // Added in 5.0
-      Name.startswith("avx512.mask.packuswb.") || // Added in 5.0
-      Name.startswith("avx512.mask.packusdw.") || // Added in 5.0
-      Name.startswith("avx512.mask.cmp.b") || // Added in 5.0
-      Name.startswith("avx512.mask.cmp.d") || // Added in 5.0
-      Name.startswith("avx512.mask.cmp.q") || // Added in 5.0
-      Name.startswith("avx512.mask.cmp.w") || // Added in 5.0
-      Name.startswith("avx512.cmp.p") || // Added in 12.0
-      Name.startswith("avx512.mask.ucmp.") || // Added in 5.0
-      Name.startswith("avx512.cvtb2mask.") || // Added in 7.0
-      Name.startswith("avx512.cvtw2mask.") || // Added in 7.0
-      Name.startswith("avx512.cvtd2mask.") || // Added in 7.0
-      Name.startswith("avx512.cvtq2mask.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpermilvar.") || // Added in 4.0
-      Name.startswith("avx512.mask.psll.d") || // Added in 4.0
-      Name.startswith("avx512.mask.psll.q") || // Added in 4.0
-      Name.startswith("avx512.mask.psll.w") || // Added in 4.0
-      Name.startswith("avx512.mask.psra.d") || // Added in 4.0
-      Name.startswith("avx512.mask.psra.q") || // Added in 4.0
-      Name.startswith("avx512.mask.psra.w") || // Added in 4.0
-      Name.startswith("avx512.mask.psrl.d") || // Added in 4.0
-      Name.startswith("avx512.mask.psrl.q") || // Added in 4.0
-      Name.startswith("avx512.mask.psrl.w") || // Added in 4.0
-      Name.startswith("avx512.mask.pslli") || // Added in 4.0
-      Name.startswith("avx512.mask.psrai") || // Added in 4.0
-      Name.startswith("avx512.mask.psrli") || // Added in 4.0
-      Name.startswith("avx512.mask.psllv") || // Added in 4.0
-      Name.startswith("avx512.mask.psrav") || // Added in 4.0
-      Name.startswith("avx512.mask.psrlv") || // Added in 4.0
-      Name.startswith("sse41.pmovsx") || // Added in 3.8
-      Name.startswith("sse41.pmovzx") || // Added in 3.9
-      Name.startswith("avx2.pmovsx") || // Added in 3.9
-      Name.startswith("avx2.pmovzx") || // Added in 3.9
-      Name.startswith("avx512.mask.pmovsx") || // Added in 4.0
-      Name.startswith("avx512.mask.pmovzx") || // Added in 4.0
-      Name.startswith("avx512.mask.lzcnt.") || // Added in 5.0
-      Name.startswith("avx512.mask.pternlog.") || // Added in 7.0
-      Name.startswith("avx512.maskz.pternlog.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpmadd52") || // Added in 7.0
-      Name.startswith("avx512.maskz.vpmadd52") || // Added in 7.0
-      Name.startswith("avx512.mask.vpermi2var.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpermt2var.") || // Added in 7.0
+  if (Name == "addcarryx.u32" ||                     // Added in 8.0
+      Name == "addcarryx.u64" ||                     // Added in 8.0
+      Name == "addcarry.u32" ||                      // Added in 8.0
+      Name == "addcarry.u64" ||                      // Added in 8.0
+      Name == "subborrow.u32" ||                     // Added in 8.0
+      Name == "subborrow.u64" ||                     // Added in 8.0
+      Name.startswith("sse2.padds.") ||              // Added in 8.0
+      Name.startswith("sse2.psubs.") ||              // Added in 8.0
+      Name.startswith("sse2.paddus.") ||             // Added in 8.0
+      Name.startswith("sse2.psubus.") ||             // Added in 8.0
+      Name.startswith("avx2.padds.") ||              // Added in 8.0
+      Name.startswith("avx2.psubs.") ||              // Added in 8.0
+      Name.startswith("avx2.paddus.") ||             // Added in 8.0
+      Name.startswith("avx2.psubus.") ||             // Added in 8.0
+      Name.startswith("avx512.padds.") ||            // Added in 8.0
+      Name.startswith("avx512.psubs.") ||            // Added in 8.0
+      Name.startswith("avx512.mask.padds.") ||       // Added in 8.0
+      Name.startswith("avx512.mask.psubs.") ||       // Added in 8.0
+      Name.startswith("avx512.mask.paddus.") ||      // Added in 8.0
+      Name.startswith("avx512.mask.psubus.") ||      // Added in 8.0
+      Name == "ssse3.pabs.b.128" ||                  // Added in 6.0
+      Name == "ssse3.pabs.w.128" ||                  // Added in 6.0
+      Name == "ssse3.pabs.d.128" ||                  // Added in 6.0
+      Name.startswith("fma4.vfmadd.s") ||            // Added in 7.0
+      Name.startswith("fma.vfmadd.") ||              // Added in 7.0
+      Name.startswith("fma.vfmsub.") ||              // Added in 7.0
+      Name.startswith("fma.vfmsubadd.") ||           // Added in 7.0
+      Name.startswith("fma.vfnmadd.") ||             // Added in 7.0
+      Name.startswith("fma.vfnmsub.") ||             // Added in 7.0
+      Name.startswith("avx512.mask.vfmadd.") ||      // Added in 7.0
+      Name.startswith("avx512.mask.vfnmadd.") ||     // Added in 7.0
+      Name.startswith("avx512.mask.vfnmsub.") ||     // Added in 7.0
+      Name.startswith("avx512.mask3.vfmadd.") ||     // Added in 7.0
+      Name.startswith("avx512.maskz.vfmadd.") ||     // Added in 7.0
+      Name.startswith("avx512.mask3.vfmsub.") ||     // Added in 7.0
+      Name.startswith("avx512.mask3.vfnmsub.") ||    // Added in 7.0
+      Name.startswith("avx512.mask.vfmaddsub.") ||   // Added in 7.0
+      Name.startswith("avx512.maskz.vfmaddsub.") ||  // Added in 7.0
+      Name.startswith("avx512.mask3.vfmaddsub.") ||  // Added in 7.0
+      Name.startswith("avx512.mask3.vfmsubadd.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.shuf.i") ||       // Added in 6.0
+      Name.startswith("avx512.mask.shuf.f") ||       // Added in 6.0
+      Name.startswith("avx512.kunpck") ||            // added in 6.0
+      Name.startswith("avx2.pabs.") ||               // Added in 6.0
+      Name.startswith("avx512.mask.pabs.") ||        // Added in 6.0
+      Name.startswith("avx512.broadcastm") ||        // Added in 6.0
+      Name == "sse.sqrt.ss" ||                       // Added in 7.0
+      Name == "sse2.sqrt.sd" ||                      // Added in 7.0
+      Name.startswith("avx512.mask.sqrt.p") ||       // Added in 7.0
+      Name.startswith("avx.sqrt.p") ||               // Added in 7.0
+      Name.startswith("sse2.sqrt.p") ||              // Added in 7.0
+      Name.startswith("sse.sqrt.p") ||               // Added in 7.0
+      Name.startswith("avx512.mask.pbroadcast") ||   // Added in 6.0
+      Name.startswith("sse2.pcmpeq.") ||             // Added in 3.1
+      Name.startswith("sse2.pcmpgt.") ||             // Added in 3.1
+      Name.startswith("avx2.pcmpeq.") ||             // Added in 3.1
+      Name.startswith("avx2.pcmpgt.") ||             // Added in 3.1
+      Name.startswith("avx512.mask.pcmpeq.") ||      // Added in 3.9
+      Name.startswith("avx512.mask.pcmpgt.") ||      // Added in 3.9
+      Name.startswith("avx.vperm2f128.") ||          // Added in 6.0
+      Name == "avx2.vperm2i128" ||                   // Added in 6.0
+      Name == "sse.add.ss" ||                        // Added in 4.0
+      Name == "sse2.add.sd" ||                       // Added in 4.0
+      Name == "sse.sub.ss" ||                        // Added in 4.0
+      Name == "sse2.sub.sd" ||                       // Added in 4.0
+      Name == "sse.mul.ss" ||                        // Added in 4.0
+      Name == "sse2.mul.sd" ||                       // Added in 4.0
+      Name == "sse.div.ss" ||                        // Added in 4.0
+      Name == "sse2.div.sd" ||                       // Added in 4.0
+      Name == "sse41.pmaxsb" ||                      // Added in 3.9
+      Name == "sse2.pmaxs.w" ||                      // Added in 3.9
+      Name == "sse41.pmaxsd" ||                      // Added in 3.9
+      Name == "sse2.pmaxu.b" ||                      // Added in 3.9
+      Name == "sse41.pmaxuw" ||                      // Added in 3.9
+      Name == "sse41.pmaxud" ||                      // Added in 3.9
+      Name == "sse41.pminsb" ||                      // Added in 3.9
+      Name == "sse2.pmins.w" ||                      // Added in 3.9
+      Name == "sse41.pminsd" ||                      // Added in 3.9
+      Name == "sse2.pminu.b" ||                      // Added in 3.9
+      Name == "sse41.pminuw" ||                      // Added in 3.9
+      Name == "sse41.pminud" ||                      // Added in 3.9
+      Name == "avx512.kand.w" ||                     // Added in 7.0
+      Name == "avx512.kandn.w" ||                    // Added in 7.0
+      Name == "avx512.knot.w" ||                     // Added in 7.0
+      Name == "avx512.kor.w" ||                      // Added in 7.0
+      Name == "avx512.kxor.w" ||                     // Added in 7.0
+      Name == "avx512.kxnor.w" ||                    // Added in 7.0
+      Name == "avx512.kortestc.w" ||                 // Added in 7.0
+      Name == "avx512.kortestz.w" ||                 // Added in 7.0
+      Name.startswith("avx512.mask.pshuf.b.") ||     // Added in 4.0
+      Name.startswith("avx2.pmax") ||                // Added in 3.9
+      Name.startswith("avx2.pmin") ||                // Added in 3.9
+      Name.startswith("avx512.mask.pmax") ||         // Added in 4.0
+      Name.startswith("avx512.mask.pmin") ||         // Added in 4.0
+      Name.startswith("avx2.vbroadcast") ||          // Added in 3.8
+      Name.startswith("avx2.pbroadcast") ||          // Added in 3.8
+      Name.startswith("avx.vpermil.") ||             // Added in 3.1
+      Name.startswith("sse2.pshuf") ||               // Added in 3.9
+      Name.startswith("avx512.pbroadcast") ||        // Added in 3.9
+      Name.startswith("avx512.mask.broadcast.s") ||  // Added in 3.9
+      Name.startswith("avx512.mask.movddup") ||      // Added in 3.9
+      Name.startswith("avx512.mask.movshdup") ||     // Added in 3.9
+      Name.startswith("avx512.mask.movsldup") ||     // Added in 3.9
+      Name.startswith("avx512.mask.pshuf.d.") ||     // Added in 3.9
+      Name.startswith("avx512.mask.pshufl.w.") ||    // Added in 3.9
+      Name.startswith("avx512.mask.pshufh.w.") ||    // Added in 3.9
+      Name.startswith("avx512.mask.shuf.p") ||       // Added in 4.0
+      Name.startswith("avx512.mask.vpermil.p") ||    // Added in 3.9
+      Name.startswith("avx512.mask.perm.df.") ||     // Added in 3.9
+      Name.startswith("avx512.mask.perm.di.") ||     // Added in 3.9
+      Name.startswith("avx512.mask.punpckl") ||      // Added in 3.9
+      Name.startswith("avx512.mask.punpckh") ||      // Added in 3.9
+      Name.startswith("avx512.mask.unpckl.") ||      // Added in 3.9
+      Name.startswith("avx512.mask.unpckh.") ||      // Added in 3.9
+      Name.startswith("avx512.mask.pand.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.pandn.") ||       // Added in 3.9
+      Name.startswith("avx512.mask.por.") ||         // Added in 3.9
+      Name.startswith("avx512.mask.pxor.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.and.") ||         // Added in 3.9
+      Name.startswith("avx512.mask.andn.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.or.") ||          // Added in 3.9
+      Name.startswith("avx512.mask.xor.") ||         // Added in 3.9
+      Name.startswith("avx512.mask.padd.") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psub.") ||        // Added in 4.0
+      Name.startswith("avx512.mask.pmull.") ||       // Added in 4.0
+      Name.startswith("avx512.mask.cvtdq2pd.") ||    // Added in 4.0
+      Name.startswith("avx512.mask.cvtudq2pd.") ||   // Added in 4.0
+      Name.startswith("avx512.mask.cvtudq2ps.") ||   // Added in 7.0 updated 9.0
+      Name.startswith("avx512.mask.cvtqq2pd.") ||    // Added in 7.0 updated 9.0
+      Name.startswith("avx512.mask.cvtuqq2pd.") ||   // Added in 7.0 updated 9.0
+      Name.startswith("avx512.mask.cvtdq2ps.") ||    // Added in 7.0 updated 9.0
+      Name == "avx512.mask.vcvtph2ps.128" ||         // Added in 11.0
+      Name == "avx512.mask.vcvtph2ps.256" ||         // Added in 11.0
+      Name == "avx512.mask.cvtqq2ps.256" ||          // Added in 9.0
+      Name == "avx512.mask.cvtqq2ps.512" ||          // Added in 9.0
+      Name == "avx512.mask.cvtuqq2ps.256" ||         // Added in 9.0
+      Name == "avx512.mask.cvtuqq2ps.512" ||         // Added in 9.0
+      Name == "avx512.mask.cvtpd2dq.256" ||          // Added in 7.0
+      Name == "avx512.mask.cvtpd2ps.256" ||          // Added in 7.0
+      Name == "avx512.mask.cvttpd2dq.256" ||         // Added in 7.0
+      Name == "avx512.mask.cvttps2dq.128" ||         // Added in 7.0
+      Name == "avx512.mask.cvttps2dq.256" ||         // Added in 7.0
+      Name == "avx512.mask.cvtps2pd.128" ||          // Added in 7.0
+      Name == "avx512.mask.cvtps2pd.256" ||          // Added in 7.0
+      Name == "avx512.cvtusi2sd" ||                  // Added in 7.0
+      Name.startswith("avx512.mask.permvar.") ||     // Added in 7.0
+      Name == "sse2.pmulu.dq" ||                     // Added in 7.0
+      Name == "sse41.pmuldq" ||                      // Added in 7.0
+      Name == "avx2.pmulu.dq" ||                     // Added in 7.0
+      Name == "avx2.pmul.dq" ||                      // Added in 7.0
+      Name == "avx512.pmulu.dq.512" ||               // Added in 7.0
+      Name == "avx512.pmul.dq.512" ||                // Added in 7.0
+      Name.startswith("avx512.mask.pmul.dq.") ||     // Added in 4.0
+      Name.startswith("avx512.mask.pmulu.dq.") ||    // Added in 4.0
+      Name.startswith("avx512.mask.pmul.hr.sw.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.pmulh.w.") ||     // Added in 7.0
+      Name.startswith("avx512.mask.pmulhu.w.") ||    // Added in 7.0
+      Name.startswith("avx512.mask.pmaddw.d.") ||    // Added in 7.0
+      Name.startswith("avx512.mask.pmaddubs.w.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.packsswb.") ||    // Added in 5.0
+      Name.startswith("avx512.mask.packssdw.") ||    // Added in 5.0
+      Name.startswith("avx512.mask.packuswb.") ||    // Added in 5.0
+      Name.startswith("avx512.mask.packusdw.") ||    // Added in 5.0
+      Name.startswith("avx512.mask.cmp.b") ||        // Added in 5.0
+      Name.startswith("avx512.mask.cmp.d") ||        // Added in 5.0
+      Name.startswith("avx512.mask.cmp.q") ||        // Added in 5.0
+      Name.startswith("avx512.mask.cmp.w") ||        // Added in 5.0
+      Name.startswith("avx512.cmp.p") ||             // Added in 12.0
+      Name.startswith("avx512.mask.ucmp.") ||        // Added in 5.0
+      Name.startswith("avx512.cvtb2mask.") ||        // Added in 7.0
+      Name.startswith("avx512.cvtw2mask.") ||        // Added in 7.0
+      Name.startswith("avx512.cvtd2mask.") ||        // Added in 7.0
+      Name.startswith("avx512.cvtq2mask.") ||        // Added in 7.0
+      Name.startswith("avx512.mask.vpermilvar.") ||  // Added in 4.0
+      Name.startswith("avx512.mask.psll.d") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psll.q") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psll.w") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psra.d") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psra.q") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psra.w") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psrl.d") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psrl.q") ||       // Added in 4.0
+      Name.startswith("avx512.mask.psrl.w") ||       // Added in 4.0
+      Name.startswith("avx512.mask.pslli") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psrai") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psrli") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psllv") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psrav") ||        // Added in 4.0
+      Name.startswith("avx512.mask.psrlv") ||        // Added in 4.0
+      Name.startswith("sse41.pmovsx") ||             // Added in 3.8
+      Name.startswith("sse41.pmovzx") ||             // Added in 3.9
+      Name.startswith("avx2.pmovsx") ||              // Added in 3.9
+      Name.startswith("avx2.pmovzx") ||              // Added in 3.9
+      Name.startswith("avx512.mask.pmovsx") ||       // Added in 4.0
+      Name.startswith("avx512.mask.pmovzx") ||       // Added in 4.0
+      Name.startswith("avx512.mask.lzcnt.") ||       // Added in 5.0
+      Name.startswith("avx512.mask.pternlog.") ||    // Added in 7.0
+      Name.startswith("avx512.maskz.pternlog.") ||   // Added in 7.0
+      Name.startswith("avx512.mask.vpmadd52") ||     // Added in 7.0
+      Name.startswith("avx512.maskz.vpmadd52") ||    // Added in 7.0
+      Name.startswith("avx512.mask.vpermi2var.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.vpermt2var.") ||  // Added in 7.0
       Name.startswith("avx512.maskz.vpermt2var.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpdpbusd.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vpdpbusd.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpdpbusds.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vpdpbusds.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpdpwssd.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vpdpwssd.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpdpwssds.") || // Added in 7.0
-      Name.startswith("avx512.maskz.vpdpwssds.") || // Added in 7.0
-      Name.startswith("avx512.mask.dbpsadbw.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpshld.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpshrd.") || // Added in 7.0
-      Name.startswith("avx512.mask.vpshldv.") || // Added in 8.0
-      Name.startswith("avx512.mask.vpshrdv.") || // Added in 8.0
-      Name.startswith("avx512.maskz.vpshldv.") || // Added in 8.0
-      Name.startswith("avx512.maskz.vpshrdv.") || // Added in 8.0
-      Name.startswith("avx512.vpshld.") || // Added in 8.0
-      Name.startswith("avx512.vpshrd.") || // Added in 8.0
+      Name.startswith("avx512.mask.vpdpbusd.") ||    // Added in 7.0
+      Name.startswith("avx512.maskz.vpdpbusd.") ||   // Added in 7.0
+      Name.startswith("avx512.mask.vpdpbusds.") ||   // Added in 7.0
+      Name.startswith("avx512.maskz.vpdpbusds.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.vpdpwssd.") ||    // Added in 7.0
+      Name.startswith("avx512.maskz.vpdpwssd.") ||   // Added in 7.0
+      Name.startswith("avx512.mask.vpdpwssds.") ||   // Added in 7.0
+      Name.startswith("avx512.maskz.vpdpwssds.") ||  // Added in 7.0
+      Name.startswith("avx512.mask.dbpsadbw.") ||    // Added in 7.0
+      Name.startswith("avx512.mask.vpshld.") ||      // Added in 7.0
+      Name.startswith("avx512.mask.vpshrd.") ||      // Added in 7.0
+      Name.startswith("avx512.mask.vpshldv.") ||     // Added in 8.0
+      Name.startswith("avx512.mask.vpshrdv.") ||     // Added in 8.0
+      Name.startswith("avx512.maskz.vpshldv.") ||    // Added in 8.0
+      Name.startswith("avx512.maskz.vpshrdv.") ||    // Added in 8.0
+      Name.startswith("avx512.vpshld.") ||           // Added in 8.0
+      Name.startswith("avx512.vpshrd.") ||           // Added in 8.0
       Name.startswith("avx512.mask.add.p") || // Added in 7.0. 128/256 in 4.0
       Name.startswith("avx512.mask.sub.p") || // Added in 7.0. 128/256 in 4.0
       Name.startswith("avx512.mask.mul.p") || // Added in 7.0. 128/256 in 4.0
       Name.startswith("avx512.mask.div.p") || // Added in 7.0. 128/256 in 4.0
       Name.startswith("avx512.mask.max.p") || // Added in 7.0. 128/256 in 5.0
       Name.startswith("avx512.mask.min.p") || // Added in 7.0. 128/256 in 5.0
-      Name.startswith("avx512.mask.fpclass.p") || // Added in 7.0
-      Name.startswith("avx512.mask.vpshufbitqmb.") || // Added in 8.0
+      Name.startswith("avx512.mask.fpclass.p") ||       // Added in 7.0
+      Name.startswith("avx512.mask.vpshufbitqmb.") ||   // Added in 8.0
       Name.startswith("avx512.mask.pmultishift.qb.") || // Added in 8.0
-      Name.startswith("avx512.mask.conflict.") || // Added in 9.0
-      Name == "avx512.mask.pmov.qd.256" || // Added in 9.0
-      Name == "avx512.mask.pmov.qd.512" || // Added in 9.0
-      Name == "avx512.mask.pmov.wb.256" || // Added in 9.0
-      Name == "avx512.mask.pmov.wb.512" || // Added in 9.0
-      Name == "sse.cvtsi2ss" || // Added in 7.0
-      Name == "sse.cvtsi642ss" || // Added in 7.0
-      Name == "sse2.cvtsi2sd" || // Added in 7.0
-      Name == "sse2.cvtsi642sd" || // Added in 7.0
-      Name == "sse2.cvtss2sd" || // Added in 7.0
-      Name == "sse2.cvtdq2pd" || // Added in 3.9
-      Name == "sse2.cvtdq2ps" || // Added in 7.0
-      Name == "sse2.cvtps2pd" || // Added in 3.9
-      Name == "avx.cvtdq2.pd.256" || // Added in 3.9
-      Name == "avx.cvtdq2.ps.256" || // Added in 7.0
-      Name == "avx.cvt.ps2.pd.256" || // Added in 3.9
-      Name.startswith("vcvtph2ps.") || // Added in 11.0
-      Name.startswith("avx.vinsertf128.") || // Added in 3.7
-      Name == "avx2.vinserti128" || // Added in 3.7
-      Name.startswith("avx512.mask.insert") || // Added in 4.0
-      Name.startswith("avx.vextractf128.") || // Added in 3.7
-      Name == "avx2.vextracti128" || // Added in 3.7
-      Name.startswith("avx512.mask.vextract") || // Added in 4.0
-      Name.startswith("sse4a.movnt.") || // Added in 3.9
-      Name.startswith("avx.movnt.") || // Added in 3.2
-      Name.startswith("avx512.storent.") || // Added in 3.9
-      Name == "sse41.movntdqa" || // Added in 5.0
-      Name == "avx2.movntdqa" || // Added in 5.0
-      Name == "avx512.movntdqa" || // Added in 5.0
-      Name == "sse2.storel.dq" || // Added in 3.9
-      Name.startswith("sse.storeu.") || // Added in 3.9
-      Name.startswith("sse2.storeu.") || // Added in 3.9
-      Name.startswith("avx.storeu.") || // Added in 3.9
-      Name.startswith("avx512.mask.storeu.") || // Added in 3.9
-      Name.startswith("avx512.mask.store.p") || // Added in 3.9
-      Name.startswith("avx512.mask.store.b.") || // Added in 3.9
-      Name.startswith("avx512.mask.store.w.") || // Added in 3.9
-      Name.startswith("avx512.mask.store.d.") || // Added in 3.9
-      Name.startswith("avx512.mask.store.q.") || // Added in 3.9
-      Name == "avx512.mask.store.ss" || // Added in 7.0
-      Name.startswith("avx512.mask.loadu.") || // Added in 3.9
-      Name.startswith("avx512.mask.load.") || // Added in 3.9
-      Name.startswith("avx512.mask.expand.load.") || // Added in 7.0
+      Name.startswith("avx512.mask.conflict.") ||       // Added in 9.0
+      Name == "avx512.mask.pmov.qd.256" ||              // Added in 9.0
+      Name == "avx512.mask.pmov.qd.512" ||              // Added in 9.0
+      Name == "avx512.mask.pmov.wb.256" ||              // Added in 9.0
+      Name == "avx512.mask.pmov.wb.512" ||              // Added in 9.0
+      Name == "sse.cvtsi2ss" ||                         // Added in 7.0
+      Name == "sse.cvtsi642ss" ||                       // Added in 7.0
+      Name == "sse2.cvtsi2sd" ||                        // Added in 7.0
+      Name == "sse2.cvtsi642sd" ||                      // Added in 7.0
+      Name == "sse2.cvtss2sd" ||                        // Added in 7.0
+      Name == "sse2.cvtdq2pd" ||                        // Added in 3.9
+      Name == "sse2.cvtdq2ps" ||                        // Added in 7.0
+      Name == "sse2.cvtps2pd" ||                        // Added in 3.9
+      Name == "avx.cvtdq2.pd.256" ||                    // Added in 3.9
+      Name == "avx.cvtdq2.ps.256" ||                    // Added in 7.0
+      Name == "avx.cvt.ps2.pd.256" ||                   // Added in 3.9
+      Name.startswith("vcvtph2ps.") ||                  // Added in 11.0
+      Name.startswith("avx.vinsertf128.") ||            // Added in 3.7
+      Name == "avx2.vinserti128" ||                     // Added in 3.7
+      Name.startswith("avx512.mask.insert") ||          // Added in 4.0
+      Name.startswith("avx.vextractf128.") ||           // Added in 3.7
+      Name == "avx2.vextracti128" ||                    // Added in 3.7
+      Name.startswith("avx512.mask.vextract") ||        // Added in 4.0
+      Name.startswith("sse4a.movnt.") ||                // Added in 3.9
+      Name.startswith("avx.movnt.") ||                  // Added in 3.2
+      Name.startswith("avx512.storent.") ||             // Added in 3.9
+      Name == "sse41.movntdqa" ||                       // Added in 5.0
+      Name == "avx2.movntdqa" ||                        // Added in 5.0
+      Name == "avx512.movntdqa" ||                      // Added in 5.0
+      Name == "sse2.storel.dq" ||                       // Added in 3.9
+      Name.startswith("sse.storeu.") ||                 // Added in 3.9
+      Name.startswith("sse2.storeu.") ||                // Added in 3.9
+      Name.startswith("avx.storeu.") ||                 // Added in 3.9
+      Name.startswith("avx512.mask.storeu.") ||         // Added in 3.9
+      Name.startswith("avx512.mask.store.p") ||         // Added in 3.9
+      Name.startswith("avx512.mask.store.b.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.store.w.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.store.d.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.store.q.") ||        // Added in 3.9
+      Name == "avx512.mask.store.ss" ||                 // Added in 7.0
+      Name.startswith("avx512.mask.loadu.") ||          // Added in 3.9
+      Name.startswith("avx512.mask.load.") ||           // Added in 3.9
+      Name.startswith("avx512.mask.expand.load.") ||    // Added in 7.0
       Name.startswith("avx512.mask.compress.store.") || // Added in 7.0
-      Name.startswith("avx512.mask.expand.b") || // Added in 9.0
-      Name.startswith("avx512.mask.expand.w") || // Added in 9.0
-      Name.startswith("avx512.mask.expand.d") || // Added in 9.0
-      Name.startswith("avx512.mask.expand.q") || // Added in 9.0
-      Name.startswith("avx512.mask.expand.p") || // Added in 9.0
-      Name.startswith("avx512.mask.compress.b") || // Added in 9.0
-      Name.startswith("avx512.mask.compress.w") || // Added in 9.0
-      Name.startswith("avx512.mask.compress.d") || // Added in 9.0
-      Name.startswith("avx512.mask.compress.q") || // Added in 9.0
-      Name.startswith("avx512.mask.compress.p") || // Added in 9.0
-      Name == "sse42.crc32.64.8" || // Added in 3.4
-      Name.startswith("avx.vbroadcast.s") || // Added in 3.5
-      Name.startswith("avx512.vbroadcast.s") || // Added in 7.0
-      Name.startswith("avx512.mask.palignr.") || // Added in 3.9
-      Name.startswith("avx512.mask.valign.") || // Added in 4.0
-      Name.startswith("sse2.psll.dq") || // Added in 3.7
-      Name.startswith("sse2.psrl.dq") || // Added in 3.7
-      Name.startswith("avx2.psll.dq") || // Added in 3.7
-      Name.startswith("avx2.psrl.dq") || // Added in 3.7
-      Name.startswith("avx512.psll.dq") || // Added in 3.9
-      Name.startswith("avx512.psrl.dq") || // Added in 3.9
-      Name == "sse41.pblendw" || // Added in 3.7
-      Name.startswith("sse41.blendp") || // Added in 3.7
-      Name.startswith("avx.blend.p") || // Added in 3.7
-      Name == "avx2.pblendw" || // Added in 3.7
-      Name.startswith("avx2.pblendd.") || // Added in 3.7
-      Name.startswith("avx.vbroadcastf128") || // Added in 4.0
-      Name == "avx2.vbroadcasti128" || // Added in 3.7
+      Name.startswith("avx512.mask.expand.b") ||        // Added in 9.0
+      Name.startswith("avx512.mask.expand.w") ||        // Added in 9.0
+      Name.startswith("avx512.mask.expand.d") ||        // Added in 9.0
+      Name.startswith("avx512.mask.expand.q") ||        // Added in 9.0
+      Name.startswith("avx512.mask.expand.p") ||        // Added in 9.0
+      Name.startswith("avx512.mask.compress.b") ||      // Added in 9.0
+      Name.startswith("avx512.mask.compress.w") ||      // Added in 9.0
+      Name.startswith("avx512.mask.compress.d") ||      // Added in 9.0
+      Name.startswith("avx512.mask.compress.q") ||      // Added in 9.0
+      Name.startswith("avx512.mask.compress.p") ||      // Added in 9.0
+      Name == "sse42.crc32.64.8" ||                     // Added in 3.4
+      Name.startswith("avx.vbroadcast.s") ||            // Added in 3.5
+      Name.startswith("avx512.vbroadcast.s") ||         // Added in 7.0
+      Name.startswith("avx512.mask.palignr.") ||        // Added in 3.9
+      Name.startswith("avx512.mask.valign.") ||         // Added in 4.0
+      Name.startswith("sse2.psll.dq") ||                // Added in 3.7
+      Name.startswith("sse2.psrl.dq") ||                // Added in 3.7
+      Name.startswith("avx2.psll.dq") ||                // Added in 3.7
+      Name.startswith("avx2.psrl.dq") ||                // Added in 3.7
+      Name.startswith("avx512.psll.dq") ||              // Added in 3.9
+      Name.startswith("avx512.psrl.dq") ||              // Added in 3.9
+      Name == "sse41.pblendw" ||                        // Added in 3.7
+      Name.startswith("sse41.blendp") ||                // Added in 3.7
+      Name.startswith("avx.blend.p") ||                 // Added in 3.7
+      Name == "avx2.pblendw" ||                         // Added in 3.7
+      Name.startswith("avx2.pblendd.") ||               // Added in 3.7
+      Name.startswith("avx.vbroadcastf128") ||          // Added in 4.0
+      Name == "avx2.vbroadcasti128" ||                  // Added in 3.7
       Name.startswith("avx512.mask.broadcastf32x4.") || // Added in 6.0
       Name.startswith("avx512.mask.broadcastf64x2.") || // Added in 6.0
       Name.startswith("avx512.mask.broadcastf32x8.") || // Added in 6.0
@@ -431,21 +431,21 @@ static bool ShouldUpgradeX86Intrinsic(Function *F, StringRef Name) {
       Name.startswith("avx512.mask.broadcasti64x2.") || // Added in 6.0
       Name.startswith("avx512.mask.broadcasti32x8.") || // Added in 6.0
       Name.startswith("avx512.mask.broadcasti64x4.") || // Added in 6.0
-      Name == "xop.vpcmov" || // Added in 3.8
-      Name == "xop.vpcmov.256" || // Added in 5.0
-      Name.startswith("avx512.mask.move.s") || // Added in 4.0
-      Name.startswith("avx512.cvtmask2") || // Added in 5.0
-      Name.startswith("xop.vpcom") || // Added in 3.2, Updated in 9.0
-      Name.startswith("xop.vprot") || // Added in 8.0
-      Name.startswith("avx512.prol") || // Added in 8.0
-      Name.startswith("avx512.pror") || // Added in 8.0
+      Name == "xop.vpcmov" ||                           // Added in 3.8
+      Name == "xop.vpcmov.256" ||                       // Added in 5.0
+      Name.startswith("avx512.mask.move.s") ||          // Added in 4.0
+      Name.startswith("avx512.cvtmask2") ||             // Added in 5.0
+      Name.startswith("xop.vpcom") ||          // Added in 3.2, Updated in 9.0
+      Name.startswith("xop.vprot") ||          // Added in 8.0
+      Name.startswith("avx512.prol") ||        // Added in 8.0
+      Name.startswith("avx512.pror") ||        // Added in 8.0
       Name.startswith("avx512.mask.prorv.") || // Added in 8.0
       Name.startswith("avx512.mask.pror.") ||  // Added in 8.0
       Name.startswith("avx512.mask.prolv.") || // Added in 8.0
       Name.startswith("avx512.mask.prol.") ||  // Added in 8.0
-      Name.startswith("avx512.ptestm") || //Added in 6.0
-      Name.startswith("avx512.ptestnm") || //Added in 6.0
-      Name.startswith("avx512.mask.pavg")) // Added in 6.0
+      Name.startswith("avx512.ptestm") ||      // Added in 6.0
+      Name.startswith("avx512.ptestnm") ||     // Added in 6.0
+      Name.startswith("avx512.mask.pavg"))     // Added in 6.0
     return true;
 
   return false;
@@ -470,8 +470,7 @@ static bool UpgradeX86IntrinsicFunction(Function *F, StringRef Name,
       return false;
 
     rename(F);
-    NewFn = Intrinsic::getDeclaration(F->getParent(),
-                                      Intrinsic::x86_rdtscp);
+    NewFn = Intrinsic::getDeclaration(F->getParent(), Intrinsic::x86_rdtscp);
     return true;
   }
 
@@ -541,26 +540,26 @@ static bool UpgradeX86IntrinsicFunction(Function *F, StringRef Name,
     return UpgradeX86BF16Intrinsic(
         F, Intrinsic::x86_avx512bf16_cvtneps2bf16_512, NewFn);
   if (Name == "avx512bf16.dpbf16ps.128") // Added in 9.0
-    return UpgradeX86BF16DPIntrinsic(
-        F, Intrinsic::x86_avx512bf16_dpbf16ps_128, NewFn);
+    return UpgradeX86BF16DPIntrinsic(F, Intrinsic::x86_avx512bf16_dpbf16ps_128,
+                                     NewFn);
   if (Name == "avx512bf16.dpbf16ps.256") // Added in 9.0
-    return UpgradeX86BF16DPIntrinsic(
-        F, Intrinsic::x86_avx512bf16_dpbf16ps_256, NewFn);
+    return UpgradeX86BF16DPIntrinsic(F, Intrinsic::x86_avx512bf16_dpbf16ps_256,
+                                     NewFn);
   if (Name == "avx512bf16.dpbf16ps.512") // Added in 9.0
-    return UpgradeX86BF16DPIntrinsic(
-        F, Intrinsic::x86_avx512bf16_dpbf16ps_512, NewFn);
+    return UpgradeX86BF16DPIntrinsic(F, Intrinsic::x86_avx512bf16_dpbf16ps_512,
+                                     NewFn);
 
   // frcz.ss/sd may need to have an argument dropped. Added in 3.2
   if (Name.startswith("xop.vfrcz.ss") && F->arg_size() == 2) {
     rename(F);
-    NewFn = Intrinsic::getDeclaration(F->getParent(),
-                                      Intrinsic::x86_xop_vfrcz_ss);
+    NewFn =
+        Intrinsic::getDeclaration(F->getParent(), Intrinsic::x86_xop_vfrcz_ss);
     return true;
   }
   if (Name.startswith("xop.vfrcz.sd") && F->arg_size() == 2) {
     rename(F);
-    NewFn = Intrinsic::getDeclaration(F->getParent(),
-                                      Intrinsic::x86_xop_vfrcz_sd);
+    NewFn =
+        Intrinsic::getDeclaration(F->getParent(), Intrinsic::x86_xop_vfrcz_sd);
     return true;
   }
   // Upgrade any XOP PERMIL2 index operand still using a float/double vector.
@@ -682,7 +681,8 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
   Name = Name.substr(5); // Strip off "llvm."
 
   switch (Name[0]) {
-  default: break;
+  default:
+    break;
   case 'a': {
     if (Name.startswith("arm.rbit") || Name.startswith("aarch64.rbit")) {
       NewFn = Intrinsic::getDeclaration(F->getParent(), Intrinsic::bitreverse,
@@ -753,14 +753,12 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
       return true;
     }
     if (Name.startswith("arm.neon.vclz")) {
-      Type* args[2] = {
-        F->arg_begin()->getType(),
-        Type::getInt1Ty(F->getContext())
-      };
+      Type *args[2] = {F->arg_begin()->getType(),
+                       Type::getInt1Ty(F->getContext())};
       // Can't use Intrinsic::getDeclaration here as it adds a ".i1" to
       // the end of the name. Change name from llvm.arm.neon.vclz.* to
       //  llvm.ctlz.*
-      FunctionType* fType = FunctionType::get(F->getReturnType(), args, false);
+      FunctionType *fType = FunctionType::get(F->getReturnType(), args, false);
       NewFn = Function::Create(fType, F->getLinkage(), F->getAddressSpace(),
                                "llvm.ctlz." + Name.substr(14), F->getParent());
       return true;
@@ -770,17 +768,16 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
                                         F->arg_begin()->getType());
       return true;
     }
-    static const Regex vstRegex("^arm\\.neon\\.vst([1234]|[234]lane)\\.v[a-z0-9]*$");
+    static const Regex vstRegex(
+        "^arm\\.neon\\.vst([1234]|[234]lane)\\.v[a-z0-9]*$");
     if (vstRegex.match(Name)) {
-      static const Intrinsic::ID StoreInts[] = {Intrinsic::arm_neon_vst1,
-                                                Intrinsic::arm_neon_vst2,
-                                                Intrinsic::arm_neon_vst3,
-                                                Intrinsic::arm_neon_vst4};
+      static const Intrinsic::ID StoreInts[] = {
+          Intrinsic::arm_neon_vst1, Intrinsic::arm_neon_vst2,
+          Intrinsic::arm_neon_vst3, Intrinsic::arm_neon_vst4};
 
       static const Intrinsic::ID StoreLaneInts[] = {
-        Intrinsic::arm_neon_vst2lane, Intrinsic::arm_neon_vst3lane,
-        Intrinsic::arm_neon_vst4lane
-      };
+          Intrinsic::arm_neon_vst2lane, Intrinsic::arm_neon_vst3lane,
+          Intrinsic::arm_neon_vst4lane};
 
       auto fArgs = F->getFunctionType()->params();
       Type *Tys[] = {fArgs[0], fArgs[1]};
@@ -793,7 +790,8 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
       return true;
     }
     if (Name == "aarch64.thread.pointer" || Name == "arm.thread.pointer") {
-      NewFn = Intrinsic::getDeclaration(F->getParent(), Intrinsic::thread_pointer);
+      NewFn =
+          Intrinsic::getDeclaration(F->getParent(), Intrinsic::thread_pointer);
       return true;
     }
     if (Name.startswith("arm.neon.vqadds.")) {
@@ -834,8 +832,7 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
         Name.endswith("i8")) {
       Intrinsic::ID IID =
           StringSwitch<Intrinsic::ID>(Name)
-              .Cases("arm.neon.bfdot.v2f32.v8i8",
-                     "arm.neon.bfdot.v4f32.v16i8",
+              .Cases("arm.neon.bfdot.v2f32.v8i8", "arm.neon.bfdot.v4f32.v16i8",
                      Intrinsic::arm_neon_bfdot)
               .Cases("aarch64.neon.bfdot.v2f32.v8i8",
                      "aarch64.neon.bfdot.v4f32.v16i8",
@@ -848,10 +845,9 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
       assert((OperandWidth == 64 || OperandWidth == 128) &&
              "Unexpected operand width");
       LLVMContext &Ctx = F->getParent()->getContext();
-      std::array<Type *, 2> Tys {{
-        F->getReturnType(),
-        FixedVectorType::get(Type::getBFloatTy(Ctx), OperandWidth / 16)
-      }};
+      std::array<Type *, 2> Tys{
+          {F->getReturnType(),
+           FixedVectorType::get(Type::getBFloatTy(Ctx), OperandWidth / 16)}};
       NewFn = Intrinsic::getDeclaration(F->getParent(), IID, Tys);
       return true;
     }
@@ -863,12 +859,9 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
         Name.endswith(".v4f32.v16i8")) {
       Intrinsic::ID IID =
           StringSwitch<Intrinsic::ID>(Name)
-              .Case("arm.neon.bfmmla.v4f32.v16i8",
-                    Intrinsic::arm_neon_bfmmla)
-              .Case("arm.neon.bfmlalb.v4f32.v16i8",
-                    Intrinsic::arm_neon_bfmlalb)
-              .Case("arm.neon.bfmlalt.v4f32.v16i8",
-                    Intrinsic::arm_neon_bfmlalt)
+              .Case("arm.neon.bfmmla.v4f32.v16i8", Intrinsic::arm_neon_bfmmla)
+              .Case("arm.neon.bfmlalb.v4f32.v16i8", Intrinsic::arm_neon_bfmlalb)
+              .Case("arm.neon.bfmlalt.v4f32.v16i8", Intrinsic::arm_neon_bfmlalt)
               .Case("aarch64.neon.bfmmla.v4f32.v16i8",
                     Intrinsic::aarch64_neon_bfmmla)
               .Case("aarch64.neon.bfmlalb.v4f32.v16i8",
@@ -921,8 +914,8 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
       }
 
       if (Name.startswith("atomic.inc") || Name.startswith("atomic.dec")) {
-        // This was replaced with atomicrmw uinc_wrap and udec_wrap, so there's no
-        // new declaration.
+        // This was replaced with atomicrmw uinc_wrap and udec_wrap, so there's
+        // no new declaration.
         NewFn = nullptr;
         return true;
       }
@@ -1020,7 +1013,8 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
   case 'f':
     if (Name.startswith("flt.rounds")) {
       rename(F);
-      NewFn = Intrinsic::getDeclaration(F->getParent(), Intrinsic::get_rounding);
+      NewFn =
+          Intrinsic::getDeclaration(F->getParent(), Intrinsic::get_rounding);
       return true;
     }
     break;
@@ -1028,10 +1022,10 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
     if (Name.startswith("invariant.group.barrier")) {
       // Rename invariant.group.barrier to launder.invariant.group
       auto Args = F->getFunctionType()->params();
-      Type* ObjectPtr[1] = {Args[0]};
+      Type *ObjectPtr[1] = {Args[0]};
       rename(F);
-      NewFn = Intrinsic::getDeclaration(F->getParent(),
-          Intrinsic::launder_invariant_group, ObjectPtr);
+      NewFn = Intrinsic::getDeclaration(
+          F->getParent(), Intrinsic::launder_invariant_group, ObjectPtr);
       return true;
     }
     break;
@@ -1123,7 +1117,7 @@ static bool UpgradeIntrinsicFunction1(Function *F, Function *&NewFn) {
     // We only need to change the name to match the mangling including the
     // address space.
     if (Name.startswith("objectsize.")) {
-      Type *Tys[2] = { F->getReturnType(), F->arg_begin()->getType() };
+      Type *Tys[2] = {F->getReturnType(), F->arg_begin()->getType()};
       if (F->arg_size() == 2 || F->arg_size() == 3 ||
           F->getName() !=
               Intrinsic::getName(Intrinsic::objectsize, Tys, F->getParent())) {
@@ -1336,8 +1330,8 @@ GlobalVariable *llvm::UpgradeGlobalVariable(GlobalVariable *GV) {
 
 // Handles upgrading SSE2/AVX2/AVX512BW PSLLDQ intrinsics by converting them
 // to byte shuffles.
-static Value *UpgradeX86PSLLDQIntrinsics(IRBuilder<> &Builder,
-                                         Value *Op, unsigned Shift) {
+static Value *UpgradeX86PSLLDQIntrinsics(IRBuilder<> &Builder, Value *Op,
+                                         unsigned Shift) {
   auto *ResultTy = cast<FixedVectorType>(Op->getType());
   unsigned NumElts = ResultTy->getNumElements() * 8;
 
@@ -1422,8 +1416,8 @@ static Value *getX86MaskVec(IRBuilder<> &Builder, Value *Mask,
   return Mask;
 }
 
-static Value *EmitX86Select(IRBuilder<> &Builder, Value *Mask,
-                            Value *Op0, Value *Op1) {
+static Value *EmitX86Select(IRBuilder<> &Builder, Value *Mask, Value *Op0,
+                            Value *Op1) {
   // If the mask is all ones just emit the first operation.
   if (const auto *C = dyn_cast<Constant>(Mask))
     if (C->isAllOnesValue())
@@ -1434,8 +1428,8 @@ static Value *EmitX86Select(IRBuilder<> &Builder, Value *Mask,
   return Builder.CreateSelect(Mask, Op0, Op1);
 }
 
-static Value *EmitX86ScalarSelect(IRBuilder<> &Builder, Value *Mask,
-                                  Value *Op0, Value *Op1) {
+static Value *EmitX86ScalarSelect(IRBuilder<> &Builder, Value *Mask, Value *Op0,
+                                  Value *Op1) {
   // If the mask is all ones just emit the first operation.
   if (const auto *C = dyn_cast<Constant>(Mask))
     if (C->isAllOnesValue())
@@ -1485,7 +1479,7 @@ static Value *UpgradeX86ALIGNIntrinsics(IRBuilder<> &Builder, Value *Op0,
     for (unsigned i = 0; i != 16; ++i) {
       unsigned Idx = ShiftVal + i;
       if (!IsVALIGN && Idx >= 16) // Disable wrap for VALIGN.
-        Idx += NumElts - 16; // End of lane, switch operand.
+        Idx += NumElts - 16;      // End of lane, switch operand.
       Indices[l + i] = Idx + l;
     }
   }
@@ -1542,18 +1536,17 @@ static Value *UpgradeX86VPERMT2Intrinsics(IRBuilder<> &Builder, CallBase &CI,
   else
     llvm_unreachable("Unexpected intrinsic");
 
-  Value *Args[] = { CI.getArgOperand(0) , CI.getArgOperand(1),
-                    CI.getArgOperand(2) };
+  Value *Args[] = {CI.getArgOperand(0), CI.getArgOperand(1),
+                   CI.getArgOperand(2)};
 
   // If this isn't index form we need to swap operand 0 and 1.
   if (!IndexForm)
     std::swap(Args[0], Args[1]);
 
-  Value *V = Builder.CreateCall(Intrinsic::getDeclaration(CI.getModule(), IID),
-                                Args);
+  Value *V =
+      Builder.CreateCall(Intrinsic::getDeclaration(CI.getModule(), IID), Args);
   Value *PassThru = ZeroMask ? ConstantAggregateZero::get(Ty)
-                             : Builder.CreateBitCast(CI.getArgOperand(1),
-                                                     Ty);
+                             : Builder.CreateBitCast(CI.getArgOperand(1), Ty);
   return EmitX86Select(Builder, CI.getArgOperand(3), V, PassThru);
 }
 
@@ -1664,21 +1657,20 @@ static Value *upgradeX86ConcatShift(IRBuilder<> &Builder, CallBase &CI,
 
   unsigned NumArgs = CI.arg_size();
   if (NumArgs >= 4) { // For masked intrinsics.
-    Value *VecSrc = NumArgs == 5 ? CI.getArgOperand(3) :
-                    ZeroMask     ? ConstantAggregateZero::get(CI.getType()) :
-                                   CI.getArgOperand(0);
+    Value *VecSrc = NumArgs == 5 ? CI.getArgOperand(3)
+                    : ZeroMask   ? ConstantAggregateZero::get(CI.getType())
+                                 : CI.getArgOperand(0);
     Value *Mask = CI.getOperand(NumArgs - 1);
     Res = EmitX86Select(Builder, Mask, Res, VecSrc);
   }
   return Res;
 }
 
-static Value *UpgradeMaskedStore(IRBuilder<> &Builder,
-                                 Value *Ptr, Value *Data, Value *Mask,
-                                 bool Aligned) {
+static Value *UpgradeMaskedStore(IRBuilder<> &Builder, Value *Ptr, Value *Data,
+                                 Value *Mask, bool Aligned) {
   // Cast the pointer to the right type.
-  Ptr = Builder.CreateBitCast(Ptr,
-                              llvm::PointerType::getUnqual(Data->getType()));
+  Ptr =
+      Builder.CreateBitCast(Ptr, llvm::PointerType::getUnqual(Data->getType()));
   const Align Alignment =
       Aligned
           ? Align(Data->getType()->getPrimitiveSizeInBits().getFixedValue() / 8)
@@ -1695,9 +1687,8 @@ static Value *UpgradeMaskedStore(IRBuilder<> &Builder,
   return Builder.CreateMaskedStore(Data, Ptr, Alignment, Mask);
 }
 
-static Value *UpgradeMaskedLoad(IRBuilder<> &Builder,
-                                Value *Ptr, Value *Passthru, Value *Mask,
-                                bool Aligned) {
+static Value *UpgradeMaskedLoad(IRBuilder<> &Builder, Value *Ptr,
+                                Value *Passthru, Value *Mask, bool Aligned) {
   Type *ValTy = Passthru->getType();
   // Cast the pointer to the right type.
   Ptr = Builder.CreateBitCast(Ptr, llvm::PointerType::getUnqual(ValTy));
@@ -1774,9 +1765,8 @@ static Value *ApplyX86MaskOn1BitsVec(IRBuilder<> &Builder, Value *Vec,
       Indices[i] = i;
     for (unsigned i = NumElts; i != 8; ++i)
       Indices[i] = NumElts + i % NumElts;
-    Vec = Builder.CreateShuffleVector(Vec,
-                                      Constant::getNullValue(Vec->getType()),
-                                      Indices);
+    Vec = Builder.CreateShuffleVector(
+        Vec, Constant::getNullValue(Vec->getType()), Indices);
   }
   return Builder.CreateBitCast(Vec, Builder.getIntNTy(std::max(NumElts, 8U)));
 }
@@ -1796,13 +1786,26 @@ static Value *upgradeMaskedCompare(IRBuilder<> &Builder, CallBase &CI,
   } else {
     ICmpInst::Predicate Pred;
     switch (CC) {
-    default: llvm_unreachable("Unknown condition code");
-    case 0: Pred = ICmpInst::ICMP_EQ;  break;
-    case 1: Pred = Signed ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT; break;
-    case 2: Pred = Signed ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE; break;
-    case 4: Pred = ICmpInst::ICMP_NE;  break;
-    case 5: Pred = Signed ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE; break;
-    case 6: Pred = Signed ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT; break;
+    default:
+      llvm_unreachable("Unknown condition code");
+    case 0:
+      Pred = ICmpInst::ICMP_EQ;
+      break;
+    case 1:
+      Pred = Signed ? ICmpInst::ICMP_SLT : ICmpInst::ICMP_ULT;
+      break;
+    case 2:
+      Pred = Signed ? ICmpInst::ICMP_SLE : ICmpInst::ICMP_ULE;
+      break;
+    case 4:
+      Pred = ICmpInst::ICMP_NE;
+      break;
+    case 5:
+      Pred = Signed ? ICmpInst::ICMP_SGE : ICmpInst::ICMP_UGE;
+      break;
+    case 6:
+      Pred = Signed ? ICmpInst::ICMP_SGT : ICmpInst::ICMP_UGT;
+      break;
     }
     Cmp = Builder.CreateICmp(Pred, Op0, CI.getArgOperand(1));
   }
@@ -1816,29 +1819,28 @@ static Value *upgradeMaskedCompare(IRBuilder<> &Builder, CallBase &CI,
 static Value *UpgradeX86MaskedShift(IRBuilder<> &Builder, CallBase &CI,
                                     Intrinsic::ID IID) {
   Function *Intrin = Intrinsic::getDeclaration(CI.getModule(), IID);
-  Value *Rep = Builder.CreateCall(Intrin,
-                                 { CI.getArgOperand(0), CI.getArgOperand(1) });
+  Value *Rep =
+      Builder.CreateCall(Intrin, {CI.getArgOperand(0), CI.getArgOperand(1)});
   return EmitX86Select(Builder, CI.getArgOperand(3), Rep, CI.getArgOperand(2));
 }
 
-static Value* upgradeMaskedMove(IRBuilder<> &Builder, CallBase &CI) {
-  Value* A = CI.getArgOperand(0);
-  Value* B = CI.getArgOperand(1);
-  Value* Src = CI.getArgOperand(2);
-  Value* Mask = CI.getArgOperand(3);
-
-  Value* AndNode = Builder.CreateAnd(Mask, APInt(8, 1));
-  Value* Cmp = Builder.CreateIsNotNull(AndNode);
-  Value* Extract1 = Builder.CreateExtractElement(B, (uint64_t)0);
-  Value* Extract2 = Builder.CreateExtractElement(Src, (uint64_t)0);
-  Value* Select = Builder.CreateSelect(Cmp, Extract1, Extract2);
+static Value *upgradeMaskedMove(IRBuilder<> &Builder, CallBase &CI) {
+  Value *A = CI.getArgOperand(0);
+  Value *B = CI.getArgOperand(1);
+  Value *Src = CI.getArgOperand(2);
+  Value *Mask = CI.getArgOperand(3);
+
+  Value *AndNode = Builder.CreateAnd(Mask, APInt(8, 1));
+  Value *Cmp = Builder.CreateIsNotNull(AndNode);
+  Value *Extract1 = Builder.CreateExtractElement(B, (uint64_t)0);
+  Value *Extract2 = Builder.CreateExtractElement(Src, (uint64_t)0);
+  Value *Select = Builder.CreateSelect(Cmp, Extract1, Extract2);
   return Builder.CreateInsertElement(A, Select, (uint64_t)0);
 }
 
-
-static Value* UpgradeMaskToInt(IRBuilder<> &Builder, CallBase &CI) {
-  Value* Op = CI.getArgOperand(0);
-  Type* ReturnOp = CI.getType();
+static Value *UpgradeMaskToInt(IRBuilder<> &Builder, CallBase &CI) {
+  Value *Op = CI.getArgOperand(0);
+  Type *ReturnOp = CI.getType();
   unsigned NumElts = cast<FixedVectorType>(CI.getType())->getNumElements();
   Value *Mask = getX86MaskVec(Builder, Op, NumElts);
   return Builder.CreateSExt(Mask, ReturnOp, "vpmovm2");
@@ -2075,8 +2077,8 @@ static bool upgradeAVX512MaskToSelect(StringRef Name, IRBuilder<> &Builder,
   SmallVector<Value *, 4> Args(CI.args());
   Args.pop_back();
   Args.pop_back();
-  Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI.getModule(), IID),
-                           Args);
+  Rep =
+      Builder.CreateCall(Intrinsic::getDeclaration(CI.getModule(), IID), Args);
   unsigned NumArgs = CI.arg_size();
   Rep = EmitX86Select(Builder, CI.getArgOperand(NumArgs - 1), Rep,
                       CI.getArgOperand(NumArgs - 2));
@@ -2283,8 +2285,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       return;
     }
 
-    if (IsX86 && (Name.startswith("avx.movnt.") ||
-                  Name.startswith("avx512.storent."))) {
+    if (IsX86 &&
+        (Name.startswith("avx.movnt.") || Name.startswith("avx512.storent."))) {
       SmallVector<Metadata *, 1> Elts;
       Elts.push_back(
           ConstantAsMetadata::get(ConstantInt::get(Type::getInt32Ty(C), 1)));
@@ -2294,9 +2296,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Value *Arg1 = CI->getArgOperand(1);
 
       // Convert the type of the pointer to a pointer to the stored type.
-      Value *BC = Builder.CreateBitCast(Arg0,
-                                        PointerType::getUnqual(Arg1->getType()),
-                                        "cast");
+      Value *BC = Builder.CreateBitCast(
+          Arg0, PointerType::getUnqual(Arg1->getType()), "cast");
       StoreInst *SI = Builder.CreateAlignedStore(
           Arg1, BC,
           Align(Arg1->getType()->getPrimitiveSizeInBits().getFixedValue() / 8));
@@ -2314,9 +2315,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       auto *NewVecTy = FixedVectorType::get(Type::getInt64Ty(C), 2);
       Value *BC0 = Builder.CreateBitCast(Arg1, NewVecTy, "cast");
       Value *Elt = Builder.CreateExtractElement(BC0, (uint64_t)0);
-      Value *BC = Builder.CreateBitCast(Arg0,
-                                        PointerType::getUnqual(Elt->getType()),
-                                        "cast");
+      Value *BC = Builder.CreateBitCast(
+          Arg0, PointerType::getUnqual(Elt->getType()), "cast");
       Builder.CreateAlignedStore(Elt, BC, Align(1));
 
       // Remove intrinsic.
@@ -2324,15 +2324,14 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       return;
     }
 
-    if (IsX86 && (Name.startswith("sse.storeu.") ||
-                  Name.startswith("sse2.storeu.") ||
-                  Name.startswith("avx.storeu."))) {
+    if (IsX86 &&
+        (Name.startswith("sse.storeu.") || Name.startswith("sse2.storeu.") ||
+         Name.startswith("avx.storeu."))) {
       Value *Arg0 = CI->getArgOperand(0);
       Value *Arg1 = CI->getArgOperand(1);
 
-      Arg0 = Builder.CreateBitCast(Arg0,
-                                   PointerType::getUnqual(Arg1->getType()),
-                                   "cast");
+      Arg0 = Builder.CreateBitCast(
+          Arg0, PointerType::getUnqual(Arg1->getType()), "cast");
       Builder.CreateAlignedStore(Arg1, Arg0, Align(1));
 
       // Remove intrinsic.
@@ -2363,8 +2362,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
 
     Value *Rep;
     // Upgrade packed integer vector compare intrinsics to compare instructions.
-    if (IsX86 && (Name.startswith("sse2.pcmp") ||
-                  Name.startswith("avx2.pcmp"))) {
+    if (IsX86 &&
+        (Name.startswith("sse2.pcmp") || Name.startswith("avx2.pcmp"))) {
       // "sse2.pcpmpeq." "sse2.pcmpgt." "avx2.pcmpeq." or "avx2.pcmpgt."
       bool CmpEq = Name[9] == 'e';
       Rep = Builder.CreateICmp(CmpEq ? ICmpInst::ICMP_EQ : ICmpInst::ICMP_SGT,
@@ -2378,12 +2377,11 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                          ExtTy->getPrimitiveSizeInBits();
       Rep = Builder.CreateZExt(CI->getArgOperand(0), ExtTy);
       Rep = Builder.CreateVectorSplat(NumElts, Rep);
-    } else if (IsX86 && (Name == "sse.sqrt.ss" ||
-                         Name == "sse2.sqrt.sd")) {
+    } else if (IsX86 && (Name == "sse.sqrt.ss" || Name == "sse2.sqrt.sd")) {
       Value *Vec = CI->getArgOperand(0);
       Value *Elt0 = Builder.CreateExtractElement(Vec, (uint64_t)0);
-      Function *Intr = Intrinsic::getDeclaration(F->getParent(),
-                                                 Intrinsic::sqrt, Elt0->getType());
+      Function *Intr = Intrinsic::getDeclaration(
+          F->getParent(), Intrinsic::sqrt, Elt0->getType());
       Elt0 = Builder.CreateCall(Intr, Elt0);
       Rep = Builder.CreateInsertElement(Vec, Elt0, (uint64_t)0);
     } else if (IsX86 && (Name.startswith("avx.sqrt.p") ||
@@ -2400,9 +2398,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         Intrinsic::ID IID = Name[18] == 's' ? Intrinsic::x86_avx512_sqrt_ps_512
                                             : Intrinsic::x86_avx512_sqrt_pd_512;
 
-        Value *Args[] = { CI->getArgOperand(0), CI->getArgOperand(3) };
-        Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI->getModule(),
-                                                           IID), Args);
+        Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(3)};
+        Rep = Builder.CreateCall(
+            Intrinsic::getDeclaration(CI->getModule(), IID), Args);
       } else {
         Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(),
                                                            Intrinsic::sqrt,
@@ -2419,11 +2417,12 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Rep = Builder.CreateAnd(Op0, Op1);
       llvm::Type *Ty = Op0->getType();
       Value *Zero = llvm::Constant::getNullValue(Ty);
-      ICmpInst::Predicate Pred =
-        Name.startswith("avx512.ptestm") ? ICmpInst::ICMP_NE : ICmpInst::ICMP_EQ;
+      ICmpInst::Predicate Pred = Name.startswith("avx512.ptestm")
+                                     ? ICmpInst::ICMP_NE
+                                     : ICmpInst::ICMP_EQ;
       Rep = Builder.CreateICmp(Pred, Rep, Zero);
       Rep = ApplyX86MaskOn1BitsVec(Builder, Rep, Mask);
-    } else if (IsX86 && (Name.startswith("avx512.mask.pbroadcast"))){
+    } else if (IsX86 && (Name.startswith("avx512.mask.pbroadcast"))) {
       unsigned NumElts = cast<FixedVectorType>(CI->getArgOperand(1)->getType())
                              ->getNumElements();
       Rep = Builder.CreateVectorSplat(NumElts, CI->getArgOperand(0));
@@ -2520,14 +2519,21 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       unsigned VecWidth = OpTy->getPrimitiveSizeInBits();
       Intrinsic::ID IID;
       switch (VecWidth) {
-      default: llvm_unreachable("Unexpected intrinsic");
-      case 128: IID = Intrinsic::x86_avx512_vpshufbitqmb_128; break;
-      case 256: IID = Intrinsic::x86_avx512_vpshufbitqmb_256; break;
-      case 512: IID = Intrinsic::x86_avx512_vpshufbitqmb_512; break;
+      default:
+        llvm_unreachable("Unexpected intrinsic");
+      case 128:
+        IID = Intrinsic::x86_avx512_vpshufbitqmb_128;
+        break;
+      case 256:
+        IID = Intrinsic::x86_avx512_vpshufbitqmb_256;
+        break;
+      case 512:
+        IID = Intrinsic::x86_avx512_vpshufbitqmb_512;
+        break;
       }
 
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                               { CI->getOperand(0), CI->getArgOperand(1) });
+                               {CI->getOperand(0), CI->getArgOperand(1)});
       Rep = ApplyX86MaskOn1BitsVec(Builder, Rep, CI->getArgOperand(2));
     } else if (IsX86 && Name.startswith("avx512.mask.fpclass.p")) {
       Type *OpTy = CI->getArgOperand(0)->getType();
@@ -2550,7 +2556,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         llvm_unreachable("Unexpected intrinsic");
 
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                               { CI->getOperand(0), CI->getArgOperand(1) });
+                               {CI->getOperand(0), CI->getArgOperand(1)});
       Rep = ApplyX86MaskOn1BitsVec(Builder, Rep, CI->getArgOperand(2));
     } else if (IsX86 && Name.startswith("avx512.cmp.p")) {
       SmallVector<Value *, 4> Args(CI->args());
@@ -2595,50 +2601,42 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Value *Zero = llvm::Constant::getNullValue(Op->getType());
       Rep = Builder.CreateICmp(ICmpInst::ICMP_SLT, Op, Zero);
       Rep = ApplyX86MaskOn1BitsVec(Builder, Rep, nullptr);
-    } else if(IsX86 && (Name == "ssse3.pabs.b.128" ||
-                        Name == "ssse3.pabs.w.128" ||
-                        Name == "ssse3.pabs.d.128" ||
-                        Name.startswith("avx2.pabs") ||
-                        Name.startswith("avx512.mask.pabs"))) {
+    } else if (IsX86 &&
+               (Name == "ssse3.pabs.b.128" || Name == "ssse3.pabs.w.128" ||
+                Name == "ssse3.pabs.d.128" || Name.startswith("avx2.pabs") ||
+                Name.startswith("avx512.mask.pabs"))) {
       Rep = upgradeAbs(Builder, *CI);
-    } else if (IsX86 && (Name == "sse41.pmaxsb" ||
-                         Name == "sse2.pmaxs.w" ||
-                         Name == "sse41.pmaxsd" ||
-                         Name.startswith("avx2.pmaxs") ||
-                         Name.startswith("avx512.mask.pmaxs"))) {
+    } else if (IsX86 &&
+               (Name == "sse41.pmaxsb" || Name == "sse2.pmaxs.w" ||
+                Name == "sse41.pmaxsd" || Name.startswith("avx2.pmaxs") ||
+                Name.startswith("avx512.mask.pmaxs"))) {
       Rep = UpgradeX86BinaryIntrinsics(Builder, *CI, Intrinsic::smax);
-    } else if (IsX86 && (Name == "sse2.pmaxu.b" ||
-                         Name == "sse41.pmaxuw" ||
-                         Name == "sse41.pmaxud" ||
-                         Name.startswith("avx2.pmaxu") ||
-                         Name.startswith("avx512.mask.pmaxu"))) {
+    } else if (IsX86 &&
+               (Name == "sse2.pmaxu.b" || Name == "sse41.pmaxuw" ||
+                Name == "sse41.pmaxud" || Name.startswith("avx2.pmaxu") ||
+                Name.startswith("avx512.mask.pmaxu"))) {
       Rep = UpgradeX86BinaryIntrinsics(Builder, *CI, Intrinsic::umax);
-    } else if (IsX86 && (Name == "sse41.pminsb" ||
-                         Name == "sse2.pmins.w" ||
-                         Name == "sse41.pminsd" ||
-                         Name.startswith("avx2.pmins") ||
-                         Name.startswith("avx512.mask.pmins"))) {
+    } else if (IsX86 &&
+               (Name == "sse41.pminsb" || Name == "sse2.pmins.w" ||
+                Name == "sse41.pminsd" || Name.startswith("avx2.pmins") ||
+                Name.startswith("avx512.mask.pmins"))) {
       Rep = UpgradeX86BinaryIntrinsics(Builder, *CI, Intrinsic::smin);
-    } else if (IsX86 && (Name == "sse2.pminu.b" ||
-                         Name == "sse41.pminuw" ||
-                         Name == "sse41.pminud" ||
-                         Name.startswith("avx2.pminu") ||
-                         Name.startswith("avx512.mask.pminu"))) {
+    } else if (IsX86 &&
+               (Name == "sse2.pminu.b" || Name == "sse41.pminuw" ||
+                Name == "sse41.pminud" || Name.startswith("avx2.pminu") ||
+                Name.startswith("avx512.mask.pminu"))) {
       Rep = UpgradeX86BinaryIntrinsics(Builder, *CI, Intrinsic::umin);
-    } else if (IsX86 && (Name == "sse2.pmulu.dq" ||
-                         Name == "avx2.pmulu.dq" ||
+    } else if (IsX86 && (Name == "sse2.pmulu.dq" || Name == "avx2.pmulu.dq" ||
                          Name == "avx512.pmulu.dq.512" ||
                          Name.startswith("avx512.mask.pmulu.dq."))) {
-      Rep = upgradePMULDQ(Builder, *CI, /*Signed*/false);
-    } else if (IsX86 && (Name == "sse41.pmuldq" ||
-                         Name == "avx2.pmul.dq" ||
+      Rep = upgradePMULDQ(Builder, *CI, /*Signed*/ false);
+    } else if (IsX86 && (Name == "sse41.pmuldq" || Name == "avx2.pmul.dq" ||
                          Name == "avx512.pmul.dq.512" ||
                          Name.startswith("avx512.mask.pmul.dq."))) {
-      Rep = upgradePMULDQ(Builder, *CI, /*Signed*/true);
-    } else if (IsX86 && (Name == "sse.cvtsi2ss" ||
-                         Name == "sse2.cvtsi2sd" ||
-                         Name == "sse.cvtsi642ss" ||
-                         Name == "sse2.cvtsi642sd")) {
+      Rep = upgradePMULDQ(Builder, *CI, /*Signed*/ true);
+    } else if (IsX86 &&
+               (Name == "sse.cvtsi2ss" || Name == "sse2.cvtsi2sd" ||
+                Name == "sse.cvtsi642ss" || Name == "sse2.cvtsi642sd")) {
       Rep = Builder.CreateSIToFP(
           CI->getArgOperand(1),
           cast<VectorType>(CI->getType())->getElementType());
@@ -2653,24 +2651,22 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Rep = Builder.CreateFPExt(
           Rep, cast<VectorType>(CI->getType())->getElementType());
       Rep = Builder.CreateInsertElement(CI->getArgOperand(0), Rep, (uint64_t)0);
-    } else if (IsX86 && (Name == "sse2.cvtdq2pd" ||
-                         Name == "sse2.cvtdq2ps" ||
-                         Name == "avx.cvtdq2.pd.256" ||
-                         Name == "avx.cvtdq2.ps.256" ||
-                         Name.startswith("avx512.mask.cvtdq2pd.") ||
-                         Name.startswith("avx512.mask.cvtudq2pd.") ||
-                         Name.startswith("avx512.mask.cvtdq2ps.") ||
-                         Name.startswith("avx512.mask.cvtudq2ps.") ||
-                         Name.startswith("avx512.mask.cvtqq2pd.") ||
-                         Name.startswith("avx512.mask.cvtuqq2pd.") ||
-                         Name == "avx512.mask.cvtqq2ps.256" ||
-                         Name == "avx512.mask.cvtqq2ps.512" ||
-                         Name == "avx512.mask.cvtuqq2ps.256" ||
-                         Name == "avx512.mask.cvtuqq2ps.512" ||
-                         Name == "sse2.cvtps2pd" ||
-                         Name == "avx.cvt.ps2.pd.256" ||
-                         Name == "avx512.mask.cvtps2pd.128" ||
-                         Name == "avx512.mask.cvtps2pd.256")) {
+    } else if (IsX86 &&
+               (Name == "sse2.cvtdq2pd" || Name == "sse2.cvtdq2ps" ||
+                Name == "avx.cvtdq2.pd.256" || Name == "avx.cvtdq2.ps.256" ||
+                Name.startswith("avx512.mask.cvtdq2pd.") ||
+                Name.startswith("avx512.mask.cvtudq2pd.") ||
+                Name.startswith("avx512.mask.cvtdq2ps.") ||
+                Name.startswith("avx512.mask.cvtudq2ps.") ||
+                Name.startswith("avx512.mask.cvtqq2pd.") ||
+                Name.startswith("avx512.mask.cvtuqq2pd.") ||
+                Name == "avx512.mask.cvtqq2ps.256" ||
+                Name == "avx512.mask.cvtqq2ps.512" ||
+                Name == "avx512.mask.cvtuqq2ps.256" ||
+                Name == "avx512.mask.cvtuqq2ps.512" ||
+                Name == "sse2.cvtps2pd" || Name == "avx.cvt.ps2.pd.256" ||
+                Name == "avx512.mask.cvtps2pd.128" ||
+                Name == "avx512.mask.cvtps2pd.256")) {
       auto *DstTy = cast<FixedVectorType>(CI->getType());
       Rep = CI->getArgOperand(0);
       auto *SrcTy = cast<FixedVectorType>(Rep->getType());
@@ -2690,9 +2686,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                 cast<ConstantInt>(CI->getArgOperand(3))->getZExtValue() != 4)) {
         Intrinsic::ID IID = IsUnsigned ? Intrinsic::x86_avx512_uitofp_round
                                        : Intrinsic::x86_avx512_sitofp_round;
-        Function *F = Intrinsic::getDeclaration(CI->getModule(), IID,
-                                                { DstTy, SrcTy });
-        Rep = Builder.CreateCall(F, { Rep, CI->getArgOperand(3) });
+        Function *F =
+            Intrinsic::getDeclaration(CI->getModule(), IID, {DstTy, SrcTy});
+        Rep = Builder.CreateCall(F, {Rep, CI->getArgOperand(3)});
       } else {
         Rep = IsUnsigned ? Builder.CreateUIToFP(Rep, DstTy, "cvt")
                          : Builder.CreateSIToFP(Rep, DstTy, "cvt");
@@ -2734,10 +2730,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Value *MaskVec = getX86MaskVec(Builder, CI->getArgOperand(2),
                                      ResultTy->getNumElements());
 
-      Function *ELd = Intrinsic::getDeclaration(F->getParent(),
-                                                Intrinsic::masked_expandload,
-                                                ResultTy);
-      Rep = Builder.CreateCall(ELd, { Ptr, MaskVec, CI->getOperand(1) });
+      Function *ELd = Intrinsic::getDeclaration(
+          F->getParent(), Intrinsic::masked_expandload, ResultTy);
+      Rep = Builder.CreateCall(ELd, {Ptr, MaskVec, CI->getOperand(1)});
     } else if (IsX86 && Name.startswith("avx512.mask.compress.store.")) {
       auto *ResultTy = cast<VectorType>(CI->getArgOperand(1)->getType());
       Type *PtrTy = ResultTy->getElementType();
@@ -2750,10 +2745,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           getX86MaskVec(Builder, CI->getArgOperand(2),
                         cast<FixedVectorType>(ResultTy)->getNumElements());
 
-      Function *CSt = Intrinsic::getDeclaration(F->getParent(),
-                                                Intrinsic::masked_compressstore,
-                                                ResultTy);
-      Rep = Builder.CreateCall(CSt, { CI->getArgOperand(1), Ptr, MaskVec });
+      Function *CSt = Intrinsic::getDeclaration(
+          F->getParent(), Intrinsic::masked_compressstore, ResultTy);
+      Rep = Builder.CreateCall(CSt, {CI->getArgOperand(1), Ptr, MaskVec});
     } else if (IsX86 && (Name.startswith("avx512.mask.compress.") ||
                          Name.startswith("avx512.mask.expand."))) {
       auto *ResultTy = cast<FixedVectorType>(CI->getType());
@@ -2765,8 +2759,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Intrinsic::ID IID = IsCompress ? Intrinsic::x86_avx512_mask_compress
                                      : Intrinsic::x86_avx512_mask_expand;
       Function *Intr = Intrinsic::getDeclaration(F->getParent(), IID, ResultTy);
-      Rep = Builder.CreateCall(Intr, { CI->getOperand(0), CI->getOperand(1),
-                                       MaskVec });
+      Rep = Builder.CreateCall(Intr,
+                               {CI->getOperand(0), CI->getOperand(1), MaskVec});
     } else if (IsX86 && Name.startswith("xop.vpcom")) {
       bool IsSigned;
       if (Name.endswith("ub") || Name.endswith("uw") || Name.endswith("ud") ||
@@ -2828,9 +2822,10 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       bool ZeroMask = Name[11] == 'z';
       Rep = upgradeX86ConcatShift(Builder, *CI, true, ZeroMask);
     } else if (IsX86 && Name == "sse42.crc32.64.8") {
-      Function *CRC32 = Intrinsic::getDeclaration(F->getParent(),
-                                               Intrinsic::x86_sse42_crc32_32_8);
-      Value *Trunc0 = Builder.CreateTrunc(CI->getArgOperand(0), Type::getInt32Ty(C));
+      Function *CRC32 = Intrinsic::getDeclaration(
+          F->getParent(), Intrinsic::x86_sse42_crc32_32_8);
+      Value *Trunc0 =
+          Builder.CreateTrunc(CI->getArgOperand(0), Type::getInt32Ty(C));
       Rep = Builder.CreateCall(CRC32, {Trunc0, CI->getArgOperand(1)});
       Rep = Builder.CreateZExt(Rep, CI->getType(), "");
     } else if (IsX86 && (Name.startswith("avx.vbroadcast.s") ||
@@ -2839,14 +2834,14 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       auto *VecTy = cast<FixedVectorType>(CI->getType());
       Type *EltTy = VecTy->getElementType();
       unsigned EltNum = VecTy->getNumElements();
-      Value *Cast = Builder.CreateBitCast(CI->getArgOperand(0),
-                                          EltTy->getPointerTo());
+      Value *Cast =
+          Builder.CreateBitCast(CI->getArgOperand(0), EltTy->getPointerTo());
       Value *Load = Builder.CreateLoad(EltTy, Cast);
       Type *I32Ty = Type::getInt32Ty(C);
       Rep = PoisonValue::get(VecTy);
       for (unsigned I = 0; I < EltNum; ++I)
-        Rep = Builder.CreateInsertElement(Rep, Load,
-                                          ConstantInt::get(I32Ty, I));
+        Rep =
+            Builder.CreateInsertElement(Rep, Load, ConstantInt::get(I32Ty, I));
     } else if (IsX86 && (Name.startswith("sse41.pmovsx") ||
                          Name.startswith("sse41.pmovzx") ||
                          Name.startswith("avx2.pmovsx") ||
@@ -2915,7 +2910,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                                         CI->getArgOperand(1), ShuffleMask);
       Rep = EmitX86Select(Builder, CI->getArgOperand(4), Rep,
                           CI->getArgOperand(3));
-    }else if (IsX86 && (Name.startswith("avx512.mask.broadcastf") ||
+    } else if (IsX86 && (Name.startswith("avx512.mask.broadcastf") ||
                          Name.startswith("avx512.mask.broadcasti"))) {
       unsigned NumSrcElts =
           cast<FixedVectorType>(CI->getArgOperand(0)->getType())
@@ -2928,8 +2923,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         ShuffleMask[i] = i % NumSrcElts;
 
       Rep = Builder.CreateShuffleVector(CI->getArgOperand(0),
-                                        CI->getArgOperand(0),
-                                        ShuffleMask);
+                                        CI->getArgOperand(0), ShuffleMask);
       Rep = EmitX86Select(Builder, CI->getArgOperand(2), Rep,
                           CI->getArgOperand(1));
     } else if (IsX86 && (Name.startswith("avx2.pbroadcast") ||
@@ -2966,57 +2960,50 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                          Name.startswith("avx512.mask.psubus."))) {
       Rep = UpgradeX86BinaryIntrinsics(Builder, *CI, Intrinsic::usub_sat);
     } else if (IsX86 && Name.startswith("avx512.mask.palignr.")) {
-      Rep = UpgradeX86ALIGNIntrinsics(Builder, CI->getArgOperand(0),
-                                      CI->getArgOperand(1),
-                                      CI->getArgOperand(2),
-                                      CI->getArgOperand(3),
-                                      CI->getArgOperand(4),
-                                      false);
+      Rep = UpgradeX86ALIGNIntrinsics(
+          Builder, CI->getArgOperand(0), CI->getArgOperand(1),
+          CI->getArgOperand(2), CI->getArgOperand(3), CI->getArgOperand(4),
+          false);
     } else if (IsX86 && Name.startswith("avx512.mask.valign.")) {
-      Rep = UpgradeX86ALIGNIntrinsics(Builder, CI->getArgOperand(0),
-                                      CI->getArgOperand(1),
-                                      CI->getArgOperand(2),
-                                      CI->getArgOperand(3),
-                                      CI->getArgOperand(4),
-                                      true);
-    } else if (IsX86 && (Name == "sse2.psll.dq" ||
-                         Name == "avx2.psll.dq")) {
+      Rep = UpgradeX86ALIGNIntrinsics(
+          Builder, CI->getArgOperand(0), CI->getArgOperand(1),
+          CI->getArgOperand(2), CI->getArgOperand(3), CI->getArgOperand(4),
+          true);
+    } else if (IsX86 && (Name == "sse2.psll.dq" || Name == "avx2.psll.dq")) {
       // 128/256-bit shift left specified in bits.
       unsigned Shift = cast<ConstantInt>(CI->getArgOperand(1))->getZExtValue();
       Rep = UpgradeX86PSLLDQIntrinsics(Builder, CI->getArgOperand(0),
                                        Shift / 8); // Shift is in bits.
-    } else if (IsX86 && (Name == "sse2.psrl.dq" ||
-                         Name == "avx2.psrl.dq")) {
+    } else if (IsX86 && (Name == "sse2.psrl.dq" || Name == "avx2.psrl.dq")) {
       // 128/256-bit shift right specified in bits.
       unsigned Shift = cast<ConstantInt>(CI->getArgOperand(1))->getZExtValue();
       Rep = UpgradeX86PSRLDQIntrinsics(Builder, CI->getArgOperand(0),
                                        Shift / 8); // Shift is in bits.
-    } else if (IsX86 && (Name == "sse2.psll.dq.bs" ||
-                         Name == "avx2.psll.dq.bs" ||
-                         Name == "avx512.psll.dq.512")) {
+    } else if (IsX86 &&
+               (Name == "sse2.psll.dq.bs" || Name == "avx2.psll.dq.bs" ||
+                Name == "avx512.psll.dq.512")) {
       // 128/256/512-bit shift left specified in bytes.
       unsigned Shift = cast<ConstantInt>(CI->getArgOperand(1))->getZExtValue();
       Rep = UpgradeX86PSLLDQIntrinsics(Builder, CI->getArgOperand(0), Shift);
-    } else if (IsX86 && (Name == "sse2.psrl.dq.bs" ||
-                         Name == "avx2.psrl.dq.bs" ||
-                         Name == "avx512.psrl.dq.512")) {
+    } else if (IsX86 &&
+               (Name == "sse2.psrl.dq.bs" || Name == "avx2.psrl.dq.bs" ||
+                Name == "avx512.psrl.dq.512")) {
       // 128/256/512-bit shift right specified in bytes.
       unsigned Shift = cast<ConstantInt>(CI->getArgOperand(1))->getZExtValue();
       Rep = UpgradeX86PSRLDQIntrinsics(Builder, CI->getArgOperand(0), Shift);
-    } else if (IsX86 && (Name == "sse41.pblendw" ||
-                         Name.startswith("sse41.blendp") ||
-                         Name.startswith("avx.blend.p") ||
-                         Name == "avx2.pblendw" ||
-                         Name.startswith("avx2.pblendd."))) {
+    } else if (IsX86 &&
+               (Name == "sse41.pblendw" || Name.startswith("sse41.blendp") ||
+                Name.startswith("avx.blend.p") || Name == "avx2.pblendw" ||
+                Name.startswith("avx2.pblendd."))) {
       Value *Op0 = CI->getArgOperand(0);
       Value *Op1 = CI->getArgOperand(1);
-      unsigned Imm = cast <ConstantInt>(CI->getArgOperand(2))->getZExtValue();
+      unsigned Imm = cast<ConstantInt>(CI->getArgOperand(2))->getZExtValue();
       auto *VecTy = cast<FixedVectorType>(CI->getType());
       unsigned NumElts = VecTy->getNumElements();
 
       SmallVector<int, 16> Idxs(NumElts);
       for (unsigned i = 0; i != NumElts; ++i)
-        Idxs[i] = ((Imm >> (i%8)) & 1) ? i + NumElts : i;
+        Idxs[i] = ((Imm >> (i % 8)) & 1) ? i + NumElts : i;
 
       Rep = Builder.CreateShuffleVector(Op0, Op1, Idxs);
     } else if (IsX86 && (Name.startswith("avx.vinsertf128.") ||
@@ -3144,10 +3131,10 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
 
       Rep = Builder.CreateShuffleVector(V0, V1, ShuffleMask);
 
-    } else if (IsX86 && (Name.startswith("avx.vpermil.") ||
-                         Name == "sse2.pshuf.d" ||
-                         Name.startswith("avx512.mask.vpermil.p") ||
-                         Name.startswith("avx512.mask.pshuf.d."))) {
+    } else if (IsX86 &&
+               (Name.startswith("avx.vpermil.") || Name == "sse2.pshuf.d" ||
+                Name.startswith("avx512.mask.vpermil.p") ||
+                Name.startswith("avx512.mask.pshuf.d."))) {
       Value *Op0 = CI->getArgOperand(0);
       unsigned Imm = cast<ConstantInt>(CI->getArgOperand(1))->getZExtValue();
       auto *VecTy = cast<FixedVectorType>(CI->getType());
@@ -3212,7 +3199,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       unsigned Imm = cast<ConstantInt>(CI->getArgOperand(2))->getZExtValue();
       unsigned NumElts = cast<FixedVectorType>(CI->getType())->getNumElements();
 
-      unsigned NumLaneElts = 128/CI->getType()->getScalarSizeInBits();
+      unsigned NumLaneElts = 128 / CI->getType()->getScalarSizeInBits();
       unsigned HalfLaneElts = NumLaneElts / 2;
 
       SmallVector<int, 16> Idxs(NumElts);
@@ -3224,7 +3211,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           Idxs[i] += NumElts;
         // Now select the specific element. By adding HalfLaneElts bits from
         // the immediate. Wrapping around the immediate every 8-bits.
-        Idxs[i] += (Imm >> ((i * HalfLaneElts) % 8)) & ((1 << HalfLaneElts) - 1);
+        Idxs[i] +=
+            (Imm >> ((i * HalfLaneElts) % 8)) & ((1 << HalfLaneElts) - 1);
       }
 
       Rep = Builder.CreateShuffleVector(Op0, Op1, Idxs);
@@ -3236,7 +3224,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                          Name.startswith("avx512.mask.movsldup"))) {
       Value *Op0 = CI->getArgOperand(0);
       unsigned NumElts = cast<FixedVectorType>(CI->getType())->getNumElements();
-      unsigned NumLaneElts = 128/CI->getType()->getScalarSizeInBits();
+      unsigned NumLaneElts = 128 / CI->getType()->getScalarSizeInBits();
 
       unsigned Offset = 0;
       if (Name.startswith("avx512.mask.movshdup."))
@@ -3258,7 +3246,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Value *Op0 = CI->getArgOperand(0);
       Value *Op1 = CI->getArgOperand(1);
       int NumElts = cast<FixedVectorType>(CI->getType())->getNumElements();
-      int NumLaneElts = 128/CI->getType()->getScalarSizeInBits();
+      int NumLaneElts = 128 / CI->getType()->getScalarSizeInBits();
 
       SmallVector<int, 64> Idxs(NumElts);
       for (int l = 0; l != NumElts; l += NumLaneElts)
@@ -3274,7 +3262,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       Value *Op0 = CI->getArgOperand(0);
       Value *Op1 = CI->getArgOperand(1);
       int NumElts = cast<FixedVectorType>(CI->getType())->getNumElements();
-      int NumLaneElts = 128/CI->getType()->getScalarSizeInBits();
+      int NumLaneElts = 128 / CI->getType()->getScalarSizeInBits();
 
       SmallVector<int, 64> Idxs(NumElts);
       for (int l = 0; l != NumElts; l += NumLaneElts)
@@ -3342,9 +3330,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         else
           IID = Intrinsic::x86_avx512_add_pd_512;
 
-        Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                                 { CI->getArgOperand(0), CI->getArgOperand(1),
-                                   CI->getArgOperand(4) });
+        Rep = Builder.CreateCall(
+            Intrinsic::getDeclaration(F->getParent(), IID),
+            {CI->getArgOperand(0), CI->getArgOperand(1), CI->getArgOperand(4)});
       } else {
         Rep = Builder.CreateFAdd(CI->getArgOperand(0), CI->getArgOperand(1));
       }
@@ -3358,9 +3346,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         else
           IID = Intrinsic::x86_avx512_div_pd_512;
 
-        Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                                 { CI->getArgOperand(0), CI->getArgOperand(1),
-                                   CI->getArgOperand(4) });
+        Rep = Builder.CreateCall(
+            Intrinsic::getDeclaration(F->getParent(), IID),
+            {CI->getArgOperand(0), CI->getArgOperand(1), CI->getArgOperand(4)});
       } else {
         Rep = Builder.CreateFDiv(CI->getArgOperand(0), CI->getArgOperand(1));
       }
@@ -3374,9 +3362,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         else
           IID = Intrinsic::x86_avx512_mul_pd_512;
 
-        Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                                 { CI->getArgOperand(0), CI->getArgOperand(1),
-                                   CI->getArgOperand(4) });
+        Rep = Builder.CreateCall(
+            Intrinsic::getDeclaration(F->getParent(), IID),
+            {CI->getArgOperand(0), CI->getArgOperand(1), CI->getArgOperand(4)});
       } else {
         Rep = Builder.CreateFMul(CI->getArgOperand(0), CI->getArgOperand(1));
       }
@@ -3390,45 +3378,45 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         else
           IID = Intrinsic::x86_avx512_sub_pd_512;
 
-        Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                                 { CI->getArgOperand(0), CI->getArgOperand(1),
-                                   CI->getArgOperand(4) });
+        Rep = Builder.CreateCall(
+            Intrinsic::getDeclaration(F->getParent(), IID),
+            {CI->getArgOperand(0), CI->getArgOperand(1), CI->getArgOperand(4)});
       } else {
         Rep = Builder.CreateFSub(CI->getArgOperand(0), CI->getArgOperand(1));
       }
       Rep = EmitX86Select(Builder, CI->getArgOperand(3), Rep,
                           CI->getArgOperand(2));
-    } else if (IsX86 && (Name.startswith("avx512.mask.max.p") ||
-                         Name.startswith("avx512.mask.min.p")) &&
+    } else if (IsX86 &&
+               (Name.startswith("avx512.mask.max.p") ||
+                Name.startswith("avx512.mask.min.p")) &&
                Name.drop_front(18) == ".512") {
       bool IsDouble = Name[17] == 'd';
       bool IsMin = Name[13] == 'i';
       static const Intrinsic::ID MinMaxTbl[2][2] = {
-        { Intrinsic::x86_avx512_max_ps_512, Intrinsic::x86_avx512_max_pd_512 },
-        { Intrinsic::x86_avx512_min_ps_512, Intrinsic::x86_avx512_min_pd_512 }
-      };
+          {Intrinsic::x86_avx512_max_ps_512, Intrinsic::x86_avx512_max_pd_512},
+          {Intrinsic::x86_avx512_min_ps_512, Intrinsic::x86_avx512_min_pd_512}};
       Intrinsic::ID IID = MinMaxTbl[IsMin][IsDouble];
 
-      Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                               { CI->getArgOperand(0), CI->getArgOperand(1),
-                                 CI->getArgOperand(4) });
+      Rep = Builder.CreateCall(
+          Intrinsic::getDeclaration(F->getParent(), IID),
+          {CI->getArgOperand(0), CI->getArgOperand(1), CI->getArgOperand(4)});
       Rep = EmitX86Select(Builder, CI->getArgOperand(3), Rep,
                           CI->getArgOperand(2));
     } else if (IsX86 && Name.startswith("avx512.mask.lzcnt.")) {
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(),
                                                          Intrinsic::ctlz,
                                                          CI->getType()),
-                               { CI->getArgOperand(0), Builder.getInt1(false) });
+                               {CI->getArgOperand(0), Builder.getInt1(false)});
       Rep = EmitX86Select(Builder, CI->getArgOperand(2), Rep,
                           CI->getArgOperand(1));
     } else if (IsX86 && Name.startswith("avx512.mask.psll")) {
-      bool IsImmediate = Name[16] == 'i' ||
-                         (Name.size() > 18 && Name[18] == 'i');
+      bool IsImmediate =
+          Name[16] == 'i' || (Name.size() > 18 && Name[18] == 'i');
       bool IsVariable = Name[16] == 'v';
-      char Size = Name[16] == '.' ? Name[17] :
-                  Name[17] == '.' ? Name[18] :
-                  Name[18] == '.' ? Name[19] :
-                                    Name[20];
+      char Size = Name[16] == '.'   ? Name[17]
+                  : Name[17] == '.' ? Name[18]
+                  : Name[18] == '.' ? Name[19]
+                                    : Name[20];
 
       Intrinsic::ID IID;
       if (IsVariable && Name[17] != '.') {
@@ -3474,13 +3462,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           llvm_unreachable("Unexpected size");
       } else {
         if (Size == 'd') // psll.di.512, pslli.d, psll.d, psllv.d.512
-          IID = IsImmediate ? Intrinsic::x86_avx512_pslli_d_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psllv_d_512 :
-                              Intrinsic::x86_avx512_psll_d_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_pslli_d_512
+                : IsVariable ? Intrinsic::x86_avx512_psllv_d_512
+                             : Intrinsic::x86_avx512_psll_d_512;
         else if (Size == 'q') // psll.qi.512, pslli.q, psll.q, psllv.q.512
-          IID = IsImmediate ? Intrinsic::x86_avx512_pslli_q_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psllv_q_512 :
-                              Intrinsic::x86_avx512_psll_q_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_pslli_q_512
+                : IsVariable ? Intrinsic::x86_avx512_psllv_q_512
+                             : Intrinsic::x86_avx512_psll_q_512;
         else if (Size == 'w') // psll.wi.512, pslli.w, psll.w
           IID = IsImmediate ? Intrinsic::x86_avx512_pslli_w_512
                             : Intrinsic::x86_avx512_psll_w_512;
@@ -3490,13 +3478,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
 
       Rep = UpgradeX86MaskedShift(Builder, *CI, IID);
     } else if (IsX86 && Name.startswith("avx512.mask.psrl")) {
-      bool IsImmediate = Name[16] == 'i' ||
-                         (Name.size() > 18 && Name[18] == 'i');
+      bool IsImmediate =
+          Name[16] == 'i' || (Name.size() > 18 && Name[18] == 'i');
       bool IsVariable = Name[16] == 'v';
-      char Size = Name[16] == '.' ? Name[17] :
-                  Name[17] == '.' ? Name[18] :
-                  Name[18] == '.' ? Name[19] :
-                                    Name[20];
+      char Size = Name[16] == '.'   ? Name[17]
+                  : Name[17] == '.' ? Name[18]
+                  : Name[18] == '.' ? Name[19]
+                                    : Name[20];
 
       Intrinsic::ID IID;
       if (IsVariable && Name[17] != '.') {
@@ -3542,13 +3530,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           llvm_unreachable("Unexpected size");
       } else {
         if (Size == 'd') // psrl.di.512, psrli.d, psrl.d, psrl.d.512
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrli_d_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psrlv_d_512 :
-                              Intrinsic::x86_avx512_psrl_d_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrli_d_512
+                : IsVariable ? Intrinsic::x86_avx512_psrlv_d_512
+                             : Intrinsic::x86_avx512_psrl_d_512;
         else if (Size == 'q') // psrl.qi.512, psrli.q, psrl.q, psrl.q.512
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrli_q_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psrlv_q_512 :
-                              Intrinsic::x86_avx512_psrl_q_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrli_q_512
+                : IsVariable ? Intrinsic::x86_avx512_psrlv_q_512
+                             : Intrinsic::x86_avx512_psrl_q_512;
         else if (Size == 'w') // psrl.wi.512, psrli.w, psrl.w)
           IID = IsImmediate ? Intrinsic::x86_avx512_psrli_w_512
                             : Intrinsic::x86_avx512_psrl_w_512;
@@ -3558,13 +3546,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
 
       Rep = UpgradeX86MaskedShift(Builder, *CI, IID);
     } else if (IsX86 && Name.startswith("avx512.mask.psra")) {
-      bool IsImmediate = Name[16] == 'i' ||
-                         (Name.size() > 18 && Name[18] == 'i');
+      bool IsImmediate =
+          Name[16] == 'i' || (Name.size() > 18 && Name[18] == 'i');
       bool IsVariable = Name[16] == 'v';
-      char Size = Name[16] == '.' ? Name[17] :
-                  Name[17] == '.' ? Name[18] :
-                  Name[18] == '.' ? Name[19] :
-                                    Name[20];
+      char Size = Name[16] == '.'   ? Name[17]
+                  : Name[17] == '.' ? Name[18]
+                  : Name[18] == '.' ? Name[19]
+                                    : Name[20];
 
       Intrinsic::ID IID;
       if (IsVariable && Name[17] != '.') {
@@ -3585,9 +3573,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           IID = IsImmediate ? Intrinsic::x86_sse2_psrai_d
                             : Intrinsic::x86_sse2_psra_d;
         else if (Size == 'q') // avx512.mask.psra.q.128, avx512.mask.psra.qi.128
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrai_q_128 :
-                IsVariable  ? Intrinsic::x86_avx512_psrav_q_128 :
-                              Intrinsic::x86_avx512_psra_q_128;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrai_q_128
+                : IsVariable ? Intrinsic::x86_avx512_psrav_q_128
+                             : Intrinsic::x86_avx512_psra_q_128;
         else if (Size == 'w') // avx512.mask.psra.w.128, avx512.mask.psra.wi.128
           IID = IsImmediate ? Intrinsic::x86_sse2_psrai_w
                             : Intrinsic::x86_sse2_psra_w;
@@ -3598,9 +3586,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           IID = IsImmediate ? Intrinsic::x86_avx2_psrai_d
                             : Intrinsic::x86_avx2_psra_d;
         else if (Size == 'q') // avx512.mask.psra.q.256, avx512.mask.psra.qi.256
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrai_q_256 :
-                IsVariable  ? Intrinsic::x86_avx512_psrav_q_256 :
-                              Intrinsic::x86_avx512_psra_q_256;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrai_q_256
+                : IsVariable ? Intrinsic::x86_avx512_psrav_q_256
+                             : Intrinsic::x86_avx512_psra_q_256;
         else if (Size == 'w') // avx512.mask.psra.w.256, avx512.mask.psra.wi.256
           IID = IsImmediate ? Intrinsic::x86_avx2_psrai_w
                             : Intrinsic::x86_avx2_psra_w;
@@ -3608,13 +3596,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
           llvm_unreachable("Unexpected size");
       } else {
         if (Size == 'd') // psra.di.512, psrai.d, psra.d, psrav.d.512
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrai_d_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psrav_d_512 :
-                              Intrinsic::x86_avx512_psra_d_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrai_d_512
+                : IsVariable ? Intrinsic::x86_avx512_psrav_d_512
+                             : Intrinsic::x86_avx512_psra_d_512;
         else if (Size == 'q') // psra.qi.512, psrai.q, psra.q
-          IID = IsImmediate ? Intrinsic::x86_avx512_psrai_q_512 :
-                IsVariable  ? Intrinsic::x86_avx512_psrav_q_512 :
-                              Intrinsic::x86_avx512_psra_q_512;
+          IID = IsImmediate  ? Intrinsic::x86_avx512_psrai_q_512
+                : IsVariable ? Intrinsic::x86_avx512_psrav_q_512
+                             : Intrinsic::x86_avx512_psra_q_512;
         else if (Size == 'w') // psra.wi.512, psrai.w, psra.w
           IID = IsImmediate ? Intrinsic::x86_avx512_psrai_w_512
                             : Intrinsic::x86_avx512_psra_w_512;
@@ -3649,8 +3637,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       bool NegAcc = NegMul ? Name[8] == 's' : Name[7] == 's';
       bool IsScalar = NegMul ? Name[12] == 's' : Name[11] == 's';
 
-      Value *Ops[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                       CI->getArgOperand(2) };
+      Value *Ops[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                      CI->getArgOperand(2)};
 
       if (IsScalar) {
         Ops[0] = Builder.CreateExtractElement(Ops[0], (uint64_t)0);
@@ -3671,11 +3659,11 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
                                Ops);
 
       if (IsScalar)
-        Rep = Builder.CreateInsertElement(CI->getArgOperand(0), Rep,
-                                          (uint64_t)0);
+        Rep =
+            Builder.CreateInsertElement(CI->getArgOperand(0), Rep, (uint64_t)0);
     } else if (IsX86 && Name.startswith("fma4.vfmadd.s")) {
-      Value *Ops[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                       CI->getArgOperand(2) };
+      Value *Ops[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                      CI->getArgOperand(2)};
 
       Ops[0] = Builder.CreateExtractElement(Ops[0], (uint64_t)0);
       Ops[1] = Builder.CreateExtractElement(Ops[1], (uint64_t)0);
@@ -3717,7 +3705,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
 
       if (!isa<ConstantInt>(CI->getArgOperand(4)) ||
           cast<ConstantInt>(CI->getArgOperand(4))->getZExtValue() != 4) {
-        Value *Ops[] = { A, B, C, CI->getArgOperand(4) };
+        Value *Ops[] = {A, B, C, CI->getArgOperand(4)};
 
         Intrinsic::ID IID;
         if (Name.back() == 'd')
@@ -3728,24 +3716,23 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         Rep = Builder.CreateCall(FMA, Ops);
       } else {
         Function *FMA = Intrinsic::getDeclaration(CI->getModule(),
-                                                  Intrinsic::fma,
-                                                  A->getType());
-        Rep = Builder.CreateCall(FMA, { A, B, C });
+                                                  Intrinsic::fma, A->getType());
+        Rep = Builder.CreateCall(FMA, {A, B, C});
       }
 
-      Value *PassThru = IsMaskZ ? Constant::getNullValue(Rep->getType()) :
-                        IsMask3 ? C : A;
+      Value *PassThru = IsMaskZ   ? Constant::getNullValue(Rep->getType())
+                        : IsMask3 ? C
+                                  : A;
 
       // For Mask3 with NegAcc, we need to create a new extractelement that
       // avoids the negation above.
       if (NegAcc && IsMask3)
-        PassThru = Builder.CreateExtractElement(CI->getArgOperand(2),
-                                                (uint64_t)0);
+        PassThru =
+            Builder.CreateExtractElement(CI->getArgOperand(2), (uint64_t)0);
 
-      Rep = EmitX86ScalarSelect(Builder, CI->getArgOperand(3),
-                                Rep, PassThru);
-      Rep = Builder.CreateInsertElement(CI->getArgOperand(IsMask3 ? 2 : 0),
-                                        Rep, (uint64_t)0);
+      Rep = EmitX86ScalarSelect(Builder, CI->getArgOperand(3), Rep, PassThru);
+      Rep = Builder.CreateInsertElement(CI->getArgOperand(IsMask3 ? 2 : 0), Rep,
+                                        (uint64_t)0);
     } else if (IsX86 && (Name.startswith("avx512.mask.vfmadd.p") ||
                          Name.startswith("avx512.mask.vfnmadd.p") ||
                          Name.startswith("avx512.mask.vfnmsub.p") ||
@@ -3776,26 +3763,25 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
            cast<ConstantInt>(CI->getArgOperand(4))->getZExtValue() != 4)) {
         Intrinsic::ID IID;
         // Check the character before ".512" in string.
-        if (Name[Name.size()-5] == 's')
+        if (Name[Name.size() - 5] == 's')
           IID = Intrinsic::x86_avx512_vfmadd_ps_512;
         else
           IID = Intrinsic::x86_avx512_vfmadd_pd_512;
 
         Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
-                                 { A, B, C, CI->getArgOperand(4) });
+                                 {A, B, C, CI->getArgOperand(4)});
       } else {
         Function *FMA = Intrinsic::getDeclaration(CI->getModule(),
-                                                  Intrinsic::fma,
-                                                  A->getType());
-        Rep = Builder.CreateCall(FMA, { A, B, C });
+                                                  Intrinsic::fma, A->getType());
+        Rep = Builder.CreateCall(FMA, {A, B, C});
       }
 
-      Value *PassThru = IsMaskZ ? llvm::Constant::getNullValue(CI->getType()) :
-                        IsMask3 ? CI->getArgOperand(2) :
-                                  CI->getArgOperand(0);
+      Value *PassThru = IsMaskZ   ? llvm::Constant::getNullValue(CI->getType())
+                        : IsMask3 ? CI->getArgOperand(2)
+                                  : CI->getArgOperand(0);
 
       Rep = EmitX86Select(Builder, CI->getArgOperand(3), Rep, PassThru);
-    } else if (IsX86 &&  Name.startswith("fma.vfmsubadd.p")) {
+    } else if (IsX86 && Name.startswith("fma.vfmsubadd.p")) {
       unsigned VecWidth = CI->getType()->getPrimitiveSizeInBits();
       unsigned EltWidth = CI->getType()->getScalarSizeInBits();
       Intrinsic::ID IID;
@@ -3810,8 +3796,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       else
         llvm_unreachable("Unexpected intrinsic");
 
-      Value *Ops[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                       CI->getArgOperand(2) };
+      Value *Ops[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                      CI->getArgOperand(2)};
       Ops[2] = Builder.CreateFNeg(Ops[2]);
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(F->getParent(), IID),
                                Ops);
@@ -3827,13 +3813,13 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       if (CI->arg_size() == 5) {
         Intrinsic::ID IID;
         // Check the character before ".512" in string.
-        if (Name[Name.size()-5] == 's')
+        if (Name[Name.size() - 5] == 's')
           IID = Intrinsic::x86_avx512_vfmaddsub_ps_512;
         else
           IID = Intrinsic::x86_avx512_vfmaddsub_pd_512;
 
-        Value *Ops[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                         CI->getArgOperand(2), CI->getArgOperand(4) };
+        Value *Ops[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                        CI->getArgOperand(2), CI->getArgOperand(4)};
         if (IsSubAdd)
           Ops[2] = Builder.CreateFNeg(Ops[2]);
 
@@ -3842,11 +3828,11 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       } else {
         int NumElts = cast<FixedVectorType>(CI->getType())->getNumElements();
 
-        Value *Ops[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                         CI->getArgOperand(2) };
+        Value *Ops[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                        CI->getArgOperand(2)};
 
-        Function *FMA = Intrinsic::getDeclaration(CI->getModule(), Intrinsic::fma,
-                                                  Ops[0]->getType());
+        Function *FMA = Intrinsic::getDeclaration(
+            CI->getModule(), Intrinsic::fma, Ops[0]->getType());
         Value *Odd = Builder.CreateCall(FMA, Ops);
         Ops[2] = Builder.CreateFNeg(Ops[2]);
         Value *Even = Builder.CreateCall(FMA, Ops);
@@ -3861,9 +3847,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         Rep = Builder.CreateShuffleVector(Even, Odd, Idxs);
       }
 
-      Value *PassThru = IsMaskZ ? llvm::Constant::getNullValue(CI->getType()) :
-                        IsMask3 ? CI->getArgOperand(2) :
-                                  CI->getArgOperand(0);
+      Value *PassThru = IsMaskZ   ? llvm::Constant::getNullValue(CI->getType())
+                        : IsMask3 ? CI->getArgOperand(2)
+                                  : CI->getArgOperand(0);
 
       Rep = EmitX86Select(Builder, CI->getArgOperand(3), Rep, PassThru);
     } else if (IsX86 && (Name.startswith("avx512.mask.pternlog.") ||
@@ -3887,8 +3873,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       else
         llvm_unreachable("Unexpected intrinsic");
 
-      Value *Args[] = { CI->getArgOperand(0) , CI->getArgOperand(1),
-                        CI->getArgOperand(2), CI->getArgOperand(3) };
+      Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                       CI->getArgOperand(2), CI->getArgOperand(3)};
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI->getModule(), IID),
                                Args);
       Value *PassThru = ZeroMask ? ConstantAggregateZero::get(CI->getType())
@@ -3915,8 +3901,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       else
         llvm_unreachable("Unexpected intrinsic");
 
-      Value *Args[] = { CI->getArgOperand(0) , CI->getArgOperand(1),
-                        CI->getArgOperand(2) };
+      Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                       CI->getArgOperand(2)};
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI->getModule(), IID),
                                Args);
       Value *PassThru = ZeroMask ? ConstantAggregateZero::get(CI->getType())
@@ -3951,8 +3937,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       else
         llvm_unreachable("Unexpected intrinsic");
 
-      Value *Args[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                        CI->getArgOperand(2)  };
+      Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                       CI->getArgOperand(2)};
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI->getModule(), IID),
                                Args);
       Value *PassThru = ZeroMask ? ConstantAggregateZero::get(CI->getType())
@@ -3981,8 +3967,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
       else
         llvm_unreachable("Unexpected intrinsic");
 
-      Value *Args[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                        CI->getArgOperand(2)  };
+      Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                       CI->getArgOperand(2)};
       Rep = Builder.CreateCall(Intrinsic::getDeclaration(CI->getModule(), IID),
                                Args);
       Value *PassThru = ZeroMask ? ConstantAggregateZero::get(CI->getType())
@@ -4004,17 +3990,16 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         llvm_unreachable("Unexpected intrinsic");
 
       // Make a call with 3 operands.
-      Value *Args[] = { CI->getArgOperand(0), CI->getArgOperand(1),
-                        CI->getArgOperand(2)};
+      Value *Args[] = {CI->getArgOperand(0), CI->getArgOperand(1),
+                       CI->getArgOperand(2)};
       Value *NewCall = Builder.CreateCall(
-                                Intrinsic::getDeclaration(CI->getModule(), IID),
-                                Args);
+          Intrinsic::getDeclaration(CI->getModule(), IID), Args);
 
       // Extract the second result and store it.
       Value *Data = Builder.CreateExtractValue(NewCall, 1);
       // Cast the pointer to the right type.
-      Value *Ptr = Builder.CreateBitCast(CI->getArgOperand(3),
-                                 llvm::PointerType::getUnqual(Data->getType()));
+      Value *Ptr = Builder.CreateBitCast(
+          CI->getArgOperand(3), llvm::PointerType::getUnqual(Data->getType()));
       Builder.CreateAlignedStore(Data, Ptr, Align(1));
       // Replace the original call result with the first result of the new call.
       Value *CF = Builder.CreateExtractValue(NewCall, 0);
@@ -4294,8 +4279,9 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
         CI->arg_size() == 2 ? Builder.getFalse() : CI->getArgOperand(2);
     Value *Dynamic =
         CI->arg_size() < 4 ? Builder.getFalse() : CI->getArgOperand(3);
-    NewCall = Builder.CreateCall(
-        NewFn, {CI->getArgOperand(0), CI->getArgOperand(1), NullIsUnknownSize, Dynamic});
+    NewCall =
+        Builder.CreateCall(NewFn, {CI->getArgOperand(0), CI->getArgOperand(1),
+                                   NullIsUnknownSize, Dynamic});
     break;
   }
 
@@ -4472,8 +4458,8 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
     // Extract the second result and store it.
     Value *Data = Builder.CreateExtractValue(NewCall, 1);
     // Cast the pointer to the right type.
-    Value *Ptr = Builder.CreateBitCast(CI->getArgOperand(0),
-                                 llvm::PointerType::getUnqual(Data->getType()));
+    Value *Ptr = Builder.CreateBitCast(
+        CI->getArgOperand(0), llvm::PointerType::getUnqual(Data->getType()));
     Builder.CreateAlignedStore(Data, Ptr, Align(1));
     // Replace the original call result with the first result of the new call.
     Value *TSC = Builder.CreateExtractValue(NewCall, 0);
@@ -4544,7 +4530,7 @@ void llvm::UpgradeIntrinsicCall(CallBase *CI, Function *NewFn) {
   }
   case Intrinsic::x86_avx512bf16_dpbf16ps_128:
   case Intrinsic::x86_avx512bf16_dpbf16ps_256:
-  case Intrinsic::x86_avx512bf16_dpbf16ps_512:{
+  case Intrinsic::x86_avx512bf16_dpbf16ps_512: {
     SmallVector<Value *, 4> Args(CI->args());
     unsigned NumElts =
         cast<FixedVectorType>(CI->getType())->getNumElements() * 2;
@@ -4643,8 +4629,9 @@ MDNode *llvm::UpgradeTBAANode(MDNode &MD) {
     return MDNode::get(Context, Elts2);
   }
   // Create a MDNode <MD, MD, offset 0>
-  Metadata *Elts[] = {&MD, &MD, ConstantAsMetadata::get(Constant::getNullValue(
-                                    Type::getInt64Ty(Context)))};
+  Metadata *Elts[] = {&MD, &MD,
+                      ConstantAsMetadata::get(
+                          Constant::getNullValue(Type::getInt64Ty(Context)))};
   return MDNode::get(Context, Elts);
 }
 
@@ -4781,8 +4768,8 @@ void llvm::UpgradeARCRuntime(Module &M) {
         // Bitcast argument to the parameter type of the new function if it's
         // not a variadic argument.
         if (I < NewFuncTy->getNumParams()) {
-          // Don't upgrade the intrinsic if it's not valid to bitcast the argument
-          // to the parameter type of the new function.
+          // Don't upgrade the intrinsic if it's not valid to bitcast the
+          // argument to the parameter type of the new function.
           if (!CastInst::castIsValid(Instruction::BitCast, Arg,
                                      NewFuncTy->getParamType(I))) {
             InvalidCast = true;
@@ -4950,8 +4937,9 @@ bool llvm::UpgradeModuleFlags(Module &M) {
       }
     }
 
-    // IRUpgrader turns a i32 type "Objective-C Garbage Collection" into i8 value.
-    // If the higher bits are set, it adds new module flag for swift info.
+    // IRUpgrader turns an i32-typed "Objective-C Garbage Collection" module
+    // flag into an i8 value. If the higher bits are set, it adds a new
+    // module flag for Swift info.
     if (ID->getString() == "Objective-C Garbage Collection") {
       auto Md = dyn_cast<ConstantAsMetadata>(Op->getOperand(2));
       if (Md) {
@@ -4967,9 +4955,9 @@ bool llvm::UpgradeModuleFlags(Module &M) {
           SwiftMinorVersion = (Val & 0xff0000) >> 16;
         }
         Metadata *Ops[3] = {
-          ConstantAsMetadata::get(ConstantInt::get(Int32Ty,Module::Error)),
-          Op->getOperand(1),
-          ConstantAsMetadata::get(ConstantInt::get(Int8Ty,Val & 0xff))};
+            ConstantAsMetadata::get(ConstantInt::get(Int32Ty, Module::Error)),
+            Op->getOperand(1),
+            ConstantAsMetadata::get(ConstantInt::get(Int8Ty, Val & 0xff))};
         ModFlags->setOperand(I, MDNode::get(M.getContext(), Ops));
         Changed = true;
       }
@@ -4988,8 +4976,7 @@ bool llvm::UpgradeModuleFlags(Module &M) {
   }
 
   if (HasSwiftVersionFlag) {
-    M.addModuleFlag(Module::Error, "Swift ABI Version",
-                    SwiftABIVersion);
+    M.addModuleFlag(Module::Error, "Swift ABI Version", SwiftABIVersion);
     M.addModuleFlag(Module::Error, "Swift Major Version",
                     ConstantInt::get(Int8Ty, SwiftMajorVersion));
     M.addModuleFlag(Module::Error, "Swift Minor Version",
@@ -5235,7 +5222,6 @@ void llvm::UpgradeOperandBundles(std::vector<OperandBundleDef> &Bundles) {
   // the "attachedcall" is meaningful and required, but without an operand,
   // it's just a marker NOP.  Dropping it merely prevents an optimization.
   erase_if(Bundles, [&](OperandBundleDef &OBD) {
-    return OBD.getTag() == "clang.arc.attachedcall" &&
-           OBD.inputs().empty();
+    return OBD.getTag() == "clang.arc.attachedcall" && OBD.inputs().empty();
   });
 }
diff --git a/llvm/lib/IR/BasicBlock.cpp b/llvm/lib/IR/BasicBlock.cpp
index 1081e03e0016f06..4af86bf45ca3fd9 100644
--- a/llvm/lib/IR/BasicBlock.cpp
+++ b/llvm/lib/IR/BasicBlock.cpp
@@ -32,9 +32,7 @@ ValueSymbolTable *BasicBlock::getValueSymbolTable() {
   return nullptr;
 }
 
-LLVMContext &BasicBlock::getContext() const {
-  return getType()->getContext();
-}
+LLVMContext &BasicBlock::getContext() const { return getType()->getContext(); }
 
 template <> void llvm::invalidateParentIListOrdering(BasicBlock *BB) {
   BB->invalidateOrders();
@@ -46,7 +44,7 @@ template class llvm::SymbolTableListTraits<Instruction>;
 
 BasicBlock::BasicBlock(LLVMContext &C, const Twine &Name, Function *NewParent,
                        BasicBlock *InsertBefore)
-  : Value(Type::getLabelTy(C), Value::BasicBlockVal), Parent(nullptr) {
+    : Value(Type::getLabelTy(C), Value::BasicBlockVal), Parent(nullptr) {
 
   if (NewParent)
     insertInto(NewParent, InsertBefore);
@@ -79,11 +77,11 @@ BasicBlock::~BasicBlock() {
   if (hasAddressTaken()) {
     assert(!use_empty() && "There should be at least one blockaddress!");
     Constant *Replacement =
-      ConstantInt::get(llvm::Type::getInt32Ty(getContext()), 1);
+        ConstantInt::get(llvm::Type::getInt32Ty(getContext()), 1);
     while (!use_empty()) {
       BlockAddress *BA = cast<BlockAddress>(user_back());
-      BA->replaceAllUsesWith(ConstantExpr::getIntToPtr(Replacement,
-                                                       BA->getType()));
+      BA->replaceAllUsesWith(
+          ConstantExpr::getIntToPtr(Replacement, BA->getType()));
       BA->destroyConstant();
     }
   }
@@ -142,9 +140,7 @@ void BasicBlock::moveAfter(BasicBlock *MovePos) {
                                getIterator());
 }
 
-const Module *BasicBlock::getModule() const {
-  return getParent()->getParent();
-}
+const Module *BasicBlock::getModule() const { return getParent()->getParent(); }
 
 const CallInst *BasicBlock::getTerminatingMustTailCall() const {
   if (InstList.empty())
@@ -193,7 +189,7 @@ const CallInst *BasicBlock::getTerminatingDeoptimizeCall() const {
 }
 
 const CallInst *BasicBlock::getPostdominatingDeoptimizeCall() const {
-  const BasicBlock* BB = this;
+  const BasicBlock *BB = this;
   SmallPtrSet<const BasicBlock *, 8> Visited;
   Visited.insert(BB);
   while (auto *Succ = BB->getUniqueSuccessor()) {
@@ -213,7 +209,7 @@ const Instruction *BasicBlock::getFirstMayFaultInst() const {
   return nullptr;
 }
 
-const Instruction* BasicBlock::getFirstNonPHI() const {
+const Instruction *BasicBlock::getFirstNonPHI() const {
   for (const Instruction &I : *this)
     if (!isa<PHINode>(I))
       return &I;
@@ -256,7 +252,8 @@ BasicBlock::const_iterator BasicBlock::getFirstInsertionPt() const {
     return end();
 
   const_iterator InsertPt = FirstNonPHI->getIterator();
-  if (InsertPt->isEHPad()) ++InsertPt;
+  if (InsertPt->isEHPad())
+    ++InsertPt;
   return InsertPt;
 }
 
@@ -291,7 +288,8 @@ void BasicBlock::dropAllReferences() {
 
 const BasicBlock *BasicBlock::getSinglePredecessor() const {
   const_pred_iterator PI = pred_begin(this), E = pred_end(this);
-  if (PI == E) return nullptr;         // No preds.
+  if (PI == E)
+    return nullptr; // No preds.
   const BasicBlock *ThePred = *PI;
   ++PI;
   return (PI == E) ? ThePred : nullptr /*multiple preds*/;
@@ -299,10 +297,11 @@ const BasicBlock *BasicBlock::getSinglePredecessor() const {
 
 const BasicBlock *BasicBlock::getUniquePredecessor() const {
   const_pred_iterator PI = pred_begin(this), E = pred_end(this);
-  if (PI == E) return nullptr; // No preds.
+  if (PI == E)
+    return nullptr; // No preds.
   const BasicBlock *PredBB = *PI;
   ++PI;
-  for (;PI != E; ++PI) {
+  for (; PI != E; ++PI) {
     if (*PI != PredBB)
       return nullptr;
     // The same predecessor appears multiple times in the predecessor list.
@@ -321,7 +320,8 @@ bool BasicBlock::hasNPredecessorsOrMore(unsigned N) const {
 
 const BasicBlock *BasicBlock::getSingleSuccessor() const {
   const_succ_iterator SI = succ_begin(this), E = succ_end(this);
-  if (SI == E) return nullptr; // no successors
+  if (SI == E)
+    return nullptr; // no successors
   const BasicBlock *TheSucc = *SI;
   ++SI;
   return (SI == E) ? TheSucc : nullptr /* multiple successors */;
@@ -329,10 +329,11 @@ const BasicBlock *BasicBlock::getSingleSuccessor() const {
 
 const BasicBlock *BasicBlock::getUniqueSuccessor() const {
   const_succ_iterator SI = succ_begin(this), E = succ_end(this);
-  if (SI == E) return nullptr; // No successors
+  if (SI == E)
+    return nullptr; // No successors
   const BasicBlock *SuccBB = *SI;
   ++SI;
-  for (;SI != E; ++SI) {
+  for (; SI != E; ++SI) {
     if (*SI != SuccBB)
       return nullptr;
     // The same successor appears multiple times in the successor list.
@@ -346,8 +347,7 @@ iterator_range<BasicBlock::phi_iterator> BasicBlock::phis() {
   return make_range<phi_iterator>(P, nullptr);
 }
 
-void BasicBlock::removePredecessor(BasicBlock *Pred,
-                                   bool KeepOneInputPHIs) {
+void BasicBlock::removePredecessor(BasicBlock *Pred, bool KeepOneInputPHIs) {
   // Use hasNUsesOrMore to bound the cost of this assertion for complex CFGs.
   assert((hasNUsesOrMore(16) || llvm::is_contained(predecessors(this), Pred)) &&
          "Pred is not a predecessor!");
@@ -529,8 +529,7 @@ const LandingPadInst *BasicBlock::getLandingPadInst() const {
 
 std::optional<uint64_t> BasicBlock::getIrrLoopHeaderWeight() const {
   const Instruction *TI = getTerminator();
-  if (MDNode *MDIrrLoopHeader =
-      TI->getMetadata(LLVMContext::MD_irr_loop)) {
+  if (MDNode *MDIrrLoopHeader = TI->getMetadata(LLVMContext::MD_irr_loop)) {
     MDString *MDName = cast<MDString>(MDIrrLoopHeader->getOperand(0));
     if (MDName->getString().equals("loop_header_weight")) {
       auto *CI = mdconst::extract<ConstantInt>(MDIrrLoopHeader->getOperand(1));
diff --git a/llvm/lib/IR/BuiltinGCs.cpp b/llvm/lib/IR/BuiltinGCs.cpp
index 163b0383e22c271..6115fd0b86f5201 100644
--- a/llvm/lib/IR/BuiltinGCs.cpp
+++ b/llvm/lib/IR/BuiltinGCs.cpp
@@ -12,8 +12,8 @@
 //===----------------------------------------------------------------------===//
 
 #include "llvm/IR/BuiltinGCs.h"
-#include "llvm/IR/GCStrategy.h"
 #include "llvm/IR/DerivedTypes.h"
+#include "llvm/IR/GCStrategy.h"
 #include "llvm/Support/Casting.h"
 
 using namespace llvm;
diff --git a/llvm/lib/IR/CMakeLists.txt b/llvm/lib/IR/CMakeLists.txt
index d9656a24d0ed3fb..151f07533c66c6c 100644
--- a/llvm/lib/IR/CMakeLists.txt
+++ b/llvm/lib/IR/CMakeLists.txt
@@ -1,87 +1,32 @@
-add_llvm_component_library(LLVMCore
-  AbstractCallSite.cpp
-  AsmWriter.cpp
-  Assumptions.cpp
-  Attributes.cpp
-  AutoUpgrade.cpp
-  BasicBlock.cpp
-  BuiltinGCs.cpp
-  Comdat.cpp
-  ConstantFold.cpp
-  ConstantRange.cpp
-  Constants.cpp
-  ConvergenceVerifier.cpp
-  Core.cpp
-  CycleInfo.cpp
-  DIBuilder.cpp
-  DataLayout.cpp
-  DebugInfo.cpp
-  DebugInfoMetadata.cpp
-  DebugLoc.cpp
-  DiagnosticHandler.cpp
-  DiagnosticInfo.cpp
-  DiagnosticPrinter.cpp
-  Dominators.cpp
-  EHPersonalities.cpp
-  FPEnv.cpp
-  Function.cpp
-  GCStrategy.cpp
-  GVMaterializer.cpp
-  Globals.cpp
-  IRBuilder.cpp
-  IRPrintingPasses.cpp
-  SSAContext.cpp
-  InlineAsm.cpp
-  Instruction.cpp
-  Instructions.cpp
-  IntrinsicInst.cpp
-  LLVMContext.cpp
-  LLVMContextImpl.cpp
-  LLVMRemarkStreamer.cpp
-  LegacyPassManager.cpp
-  MDBuilder.cpp
-  Mangler.cpp
-  Metadata.cpp
-  Module.cpp
-  ModuleSummaryIndex.cpp
-  Operator.cpp
-  OptBisect.cpp
-  Pass.cpp
-  PassInstrumentation.cpp
-  PassManager.cpp
-  PassRegistry.cpp
-  PassTimingInfo.cpp
-  PrintPasses.cpp
-  ProfDataUtils.cpp
-  SafepointIRVerifier.cpp
-  ProfileSummary.cpp
-  PseudoProbe.cpp
-  ReplaceConstant.cpp
-  Statepoint.cpp
-  StructuralHash.cpp
-  Type.cpp
-  TypedPointerType.cpp
-  TypeFinder.cpp
-  Use.cpp
-  User.cpp
-  Value.cpp
-  ValueSymbolTable.cpp
-  VectorBuilder.cpp
-  Verifier.cpp
+add_llvm_component_library(LLVMCore
+  AbstractCallSite.cpp AsmWriter.cpp Assumptions.cpp Attributes.cpp
+  AutoUpgrade.cpp BasicBlock.cpp BuiltinGCs.cpp Comdat.cpp
+  ConstantFold.cpp ConstantRange.cpp Constants.cpp ConvergenceVerifier.cpp
+  Core.cpp CycleInfo.cpp DIBuilder.cpp DataLayout.cpp
+  DebugInfo.cpp DebugInfoMetadata.cpp DebugLoc.cpp DiagnosticHandler.cpp
+  DiagnosticInfo.cpp DiagnosticPrinter.cpp Dominators.cpp EHPersonalities.cpp
+  FPEnv.cpp Function.cpp GCStrategy.cpp GVMaterializer.cpp
+  Globals.cpp IRBuilder.cpp IRPrintingPasses.cpp SSAContext.cpp
+  InlineAsm.cpp Instruction.cpp Instructions.cpp IntrinsicInst.cpp
+  LLVMContext.cpp LLVMContextImpl.cpp LLVMRemarkStreamer.cpp LegacyPassManager.cpp
+  MDBuilder.cpp Mangler.cpp Metadata.cpp Module.cpp
+  ModuleSummaryIndex.cpp Operator.cpp OptBisect.cpp Pass.cpp
+  PassInstrumentation.cpp PassManager.cpp PassRegistry.cpp PassTimingInfo.cpp
+  PrintPasses.cpp ProfDataUtils.cpp SafepointIRVerifier.cpp ProfileSummary.cpp
+  PseudoProbe.cpp ReplaceConstant.cpp Statepoint.cpp StructuralHash.cpp
+  Type.cpp TypedPointerType.cpp TypeFinder.cpp Use.cpp
+  User.cpp Value.cpp ValueSymbolTable.cpp VectorBuilder.cpp
+  Verifier.cpp
 
   ADDITIONAL_HEADER_DIRS
   ${LLVM_MAIN_INCLUDE_DIR}/llvm/IR
 
   LINK_LIBS
   ${LLVM_PTHREAD_LIB}
 
   DEPENDS
   intrinsics_gen
 
-  LINK_COMPONENTS
-  BinaryFormat
-  Demangle
-  Remarks
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS
+  BinaryFormat Demangle Remarks Support TargetParser
+)
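
For reference, CMake treats every whitespace-separated token as a separate argument, so a source list may be wrapped freely but a filename must never be split across tokens (a token sequence like `Attributes` `.cpp` is two unrelated arguments and breaks the build). A minimal sketch with illustrative target and file names:

```cmake
# Each token is one argument; wrapping across lines is free, but
# "Foo.cpp" must stay a single token.
add_library(ExampleCore
  Foo.cpp Bar.cpp
  Baz.cpp
)
target_link_libraries(ExampleCore PRIVATE Threads::Threads)
```

This is why C/C++ formatters such as clang-format should not be run on `CMakeLists.txt` files.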
diff --git a/llvm/lib/IR/ConstantFold.cpp b/llvm/lib/IR/ConstantFold.cpp
index f9644079f2d2b36..1430483210529dd 100644
--- a/llvm/lib/IR/ConstantFold.cpp
+++ b/llvm/lib/IR/ConstantFold.cpp
@@ -42,8 +42,10 @@ using namespace llvm::PatternMatch;
 /// all simple integer or FP values.
 static Constant *BitCastConstantVector(Constant *CV, VectorType *DstTy) {
 
-  if (CV->isAllOnesValue()) return Constant::getAllOnesValue(DstTy);
-  if (CV->isNullValue()) return Constant::getNullValue(DstTy);
+  if (CV->isAllOnesValue())
+    return Constant::getAllOnesValue(DstTy);
+  if (CV->isNullValue())
+    return Constant::getNullValue(DstTy);
 
   // Do not iterate on scalable vector. The num of elements is unknown at
   // compile-time.
@@ -64,11 +66,10 @@ static Constant *BitCastConstantVector(Constant *CV, VectorType *DstTy) {
                                     ConstantExpr::getBitCast(Splat, DstEltTy));
   }
 
-  SmallVector<Constant*, 16> Result;
+  SmallVector<Constant *, 16> Result;
   Type *Ty = IntegerType::get(CV->getContext(), 32);
   for (unsigned i = 0; i != NumElts; ++i) {
-    Constant *C =
-      ConstantExpr::getExtractElement(CV, ConstantInt::get(Ty, i));
+    Constant *C = ConstantExpr::getExtractElement(CV, ConstantInt::get(Ty, i));
     C = ConstantExpr::getBitCast(C, DstEltTy);
     Result.push_back(C);
   }
@@ -80,11 +81,10 @@ static Constant *BitCastConstantVector(Constant *CV, VectorType *DstTy) {
 /// expressions together. It uses CastInst::isEliminableCastPair to determine
 /// the opcode. Consequently, it's just a wrapper around that function.
 /// Determine if it is valid to fold a cast of a cast
-static unsigned
-foldConstantCastPair(
-  unsigned opc,          ///< opcode of the second cast constant expression
-  ConstantExpr *Op,      ///< the first cast constant expression
-  Type *DstTy            ///< destination type of the first cast
+static unsigned foldConstantCastPair(
+    unsigned opc,     ///< opcode of the second cast constant expression
+    ConstantExpr *Op, ///< the first cast constant expression
+    Type *DstTy       ///< destination type of the first cast
 ) {
   assert(Op && Op->isCast() && "Can't fold cast of cast without a cast!");
   assert(DstTy && DstTy->isFirstClassType() && "Invalid cast destination type");
@@ -136,7 +136,7 @@ static Constant *FoldBitCast(Constant *V, Type *DestTy) {
 
   // Finally, implement bitcast folding now.   The code below doesn't handle
   // bitcast right.
-  if (isa<ConstantPointerNull>(V))  // ptr->ptr cast.
+  if (isa<ConstantPointerNull>(V)) // ptr->ptr cast.
     return ConstantPointerNull::get(cast<PointerType>(DestTy));
 
   // Handle integral constant input.
@@ -148,9 +148,9 @@ static Constant *FoldBitCast(Constant *V, Type *DestTy) {
 
     // See note below regarding the PPC_FP128 restriction.
     if (DestTy->isFloatingPointTy() && !DestTy->isPPC_FP128Ty())
-      return ConstantFP::get(DestTy->getContext(),
-                             APFloat(DestTy->getFltSemantics(),
-                                     CI->getValue()));
+      return ConstantFP::get(
+          DestTy->getContext(),
+          APFloat(DestTy->getFltSemantics(), CI->getValue()));
 
     // Otherwise, can't fold this (vector?)
     return nullptr;
@@ -178,7 +178,6 @@ static Constant *FoldBitCast(Constant *V, Type *DestTy) {
   return nullptr;
 }
 
-
 /// V is an integer constant which only has a subset of its bytes used.
 /// The bytes used are indicated by ByteStart (which is the first byte used,
 /// counting from the least significant byte) and ByteSize, which is the number
@@ -192,27 +191,30 @@ static Constant *ExtractConstantBytes(Constant *C, unsigned ByteStart,
   assert(C->getType()->isIntegerTy() &&
          (cast<IntegerType>(C->getType())->getBitWidth() & 7) == 0 &&
          "Non-byte sized integer input");
-  unsigned CSize = cast<IntegerType>(C->getType())->getBitWidth()/8;
+  unsigned CSize = cast<IntegerType>(C->getType())->getBitWidth() / 8;
   assert(ByteSize && "Must be accessing some piece");
-  assert(ByteStart+ByteSize <= CSize && "Extracting invalid piece from input");
+  assert(ByteStart + ByteSize <= CSize &&
+         "Extracting invalid piece from input");
   assert(ByteSize != CSize && "Should not extract everything");
 
   // Constant Integers are simple.
   if (ConstantInt *CI = dyn_cast<ConstantInt>(C)) {
     APInt V = CI->getValue();
     if (ByteStart)
-      V.lshrInPlace(ByteStart*8);
-    V = V.trunc(ByteSize*8);
+      V.lshrInPlace(ByteStart * 8);
+    V = V.trunc(ByteSize * 8);
     return ConstantInt::get(CI->getContext(), V);
   }
 
   // If the input is a constant expr, we might be able to recursively simplify.
   // If not, we definitely can't do anything.
   ConstantExpr *CE = dyn_cast<ConstantExpr>(C);
-  if (!CE) return nullptr;
+  if (!CE)
+    return nullptr;
 
   switch (CE->getOpcode()) {
-  default: return nullptr;
+  default:
+    return nullptr;
   case Instruction::LShr: {
     ConstantInt *Amt = dyn_cast<ConstantInt>(CE->getOperand(1));
     if (!Amt)
@@ -261,32 +263,32 @@ static Constant *ExtractConstantBytes(Constant *C, unsigned ByteStart,
 
   case Instruction::ZExt: {
     unsigned SrcBitSize =
-      cast<IntegerType>(CE->getOperand(0)->getType())->getBitWidth();
+        cast<IntegerType>(CE->getOperand(0)->getType())->getBitWidth();
 
     // If extracting something that is completely zero, return 0.
-    if (ByteStart*8 >= SrcBitSize)
-      return Constant::getNullValue(IntegerType::get(CE->getContext(),
-                                                     ByteSize*8));
+    if (ByteStart * 8 >= SrcBitSize)
+      return Constant::getNullValue(
+          IntegerType::get(CE->getContext(), ByteSize * 8));
 
     // If exactly extracting the input, return it.
-    if (ByteStart == 0 && ByteSize*8 == SrcBitSize)
+    if (ByteStart == 0 && ByteSize * 8 == SrcBitSize)
       return CE->getOperand(0);
 
     // If extracting something completely in the input, if the input is a
     // multiple of 8 bits, recurse.
-    if ((SrcBitSize&7) == 0 && (ByteStart+ByteSize)*8 <= SrcBitSize)
+    if ((SrcBitSize & 7) == 0 && (ByteStart + ByteSize) * 8 <= SrcBitSize)
       return ExtractConstantBytes(CE->getOperand(0), ByteStart, ByteSize);
 
     // Otherwise, if extracting a subset of the input, which is not multiple of
     // 8 bits, do a shift and trunc to get the bits.
-    if ((ByteStart+ByteSize)*8 < SrcBitSize) {
-      assert((SrcBitSize&7) && "Shouldn't get byte sized case here");
+    if ((ByteStart + ByteSize) * 8 < SrcBitSize) {
+      assert((SrcBitSize & 7) && "Shouldn't get byte sized case here");
       Constant *Res = CE->getOperand(0);
       if (ByteStart)
-        Res = ConstantExpr::getLShr(Res,
-                                 ConstantInt::get(Res->getType(), ByteStart*8));
-      return ConstantExpr::getTrunc(Res, IntegerType::get(C->getContext(),
-                                                          ByteSize*8));
+        Res = ConstantExpr::getLShr(
+            Res, ConstantInt::get(Res->getType(), ByteStart * 8));
+      return ConstantExpr::getTrunc(
+          Res, IntegerType::get(C->getContext(), ByteSize * 8));
     }
 
     // TODO: Handle the 'partially zero' case.
@@ -366,8 +368,7 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
     for (unsigned i = 0,
                   e = cast<FixedVectorType>(V->getType())->getNumElements();
          i != e; ++i) {
-      Constant *C =
-        ConstantExpr::getExtractElement(V, ConstantInt::get(Ty, i));
+      Constant *C = ConstantExpr::getExtractElement(V, ConstantInt::get(Ty, i));
       res.push_back(ConstantExpr::getCast(opc, C, DstEltTy));
     }
     return ConstantVector::get(res);
@@ -403,12 +404,12 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
       }
       return ConstantInt::get(FPC->getContext(), IntVal);
     }
-    return nullptr; // Can't fold.
-  case Instruction::IntToPtr:   //always treated as unsigned
-    if (V->isNullValue())       // Is it an integral null value?
+    return nullptr;           // Can't fold.
+  case Instruction::IntToPtr: // always treated as unsigned
+    if (V->isNullValue())     // Is it an integral null value?
       return ConstantPointerNull::get(cast<PointerType>(DestTy));
-    return nullptr;                   // Other pointer types cannot be casted
-  case Instruction::PtrToInt:   // always treated as unsigned
+    return nullptr;           // Other pointer types cannot be casted
+  case Instruction::PtrToInt: // always treated as unsigned
     // Is it a null pointer value?
     if (V->isNullValue())
       return ConstantInt::get(DestTy, 0);
@@ -420,7 +421,7 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
       const APInt &api = CI->getValue();
       APFloat apf(DestTy->getFltSemantics(),
                   APInt::getZero(DestTy->getPrimitiveSizeInBits()));
-      apf.convertFromAPInt(api, opc==Instruction::SIToFP,
+      apf.convertFromAPInt(api, opc == Instruction::SIToFP,
                            APFloat::rmNearestTiesToEven);
       return ConstantFP::get(V->getContext(), apf);
     }
@@ -428,15 +429,13 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
   case Instruction::ZExt:
     if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
       uint32_t BitWidth = cast<IntegerType>(DestTy)->getBitWidth();
-      return ConstantInt::get(V->getContext(),
-                              CI->getValue().zext(BitWidth));
+      return ConstantInt::get(V->getContext(), CI->getValue().zext(BitWidth));
     }
     return nullptr;
   case Instruction::SExt:
     if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
       uint32_t BitWidth = cast<IntegerType>(DestTy)->getBitWidth();
-      return ConstantInt::get(V->getContext(),
-                              CI->getValue().sext(BitWidth));
+      return ConstantInt::get(V->getContext(), CI->getValue().sext(BitWidth));
     }
     return nullptr;
   case Instruction::Trunc: {
@@ -466,23 +465,25 @@ Constant *llvm::ConstantFoldCastInstruction(unsigned opc, Constant *V,
   }
 }
 
-Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond,
-                                              Constant *V1, Constant *V2) {
+Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond, Constant *V1,
+                                              Constant *V2) {
   // Check for i1 and vector true/false conditions.
-  if (Cond->isNullValue()) return V2;
-  if (Cond->isAllOnesValue()) return V1;
+  if (Cond->isNullValue())
+    return V2;
+  if (Cond->isAllOnesValue())
+    return V1;
 
   // If the condition is a vector constant, fold the result elementwise.
   if (ConstantVector *CondV = dyn_cast<ConstantVector>(Cond)) {
     auto *V1VTy = CondV->getType();
-    SmallVector<Constant*, 16> Result;
+    SmallVector<Constant *, 16> Result;
     Type *Ty = IntegerType::get(CondV->getContext(), 32);
     for (unsigned i = 0, e = V1VTy->getNumElements(); i != e; ++i) {
       Constant *V;
-      Constant *V1Element = ConstantExpr::getExtractElement(V1,
-                                                    ConstantInt::get(Ty, i));
-      Constant *V2Element = ConstantExpr::getExtractElement(V2,
-                                                    ConstantInt::get(Ty, i));
+      Constant *V1Element =
+          ConstantExpr::getExtractElement(V1, ConstantInt::get(Ty, i));
+      Constant *V2Element =
+          ConstantExpr::getExtractElement(V2, ConstantInt::get(Ty, i));
       auto *Cond = cast<Constant>(CondV->getOperand(i));
       if (isa<PoisonValue>(Cond)) {
         V = PoisonValue::get(V1Element->getType());
@@ -491,7 +492,8 @@ Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond,
       } else if (isa<UndefValue>(Cond)) {
         V = isa<UndefValue>(V1Element) ? V1Element : V2Element;
       } else {
-        if (!isa<ConstantInt>(Cond)) break;
+        if (!isa<ConstantInt>(Cond))
+          break;
         V = Cond->isNullValue() ? V2Element : V1Element;
       }
       Result.push_back(V);
@@ -506,11 +508,13 @@ Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond,
     return PoisonValue::get(V1->getType());
 
   if (isa<UndefValue>(Cond)) {
-    if (isa<UndefValue>(V1)) return V1;
+    if (isa<UndefValue>(V1))
+      return V1;
     return V2;
   }
 
-  if (V1 == V2) return V1;
+  if (V1 == V2)
+    return V1;
 
   if (isa<PoisonValue>(V1))
     return V2;
@@ -538,8 +542,10 @@ Constant *llvm::ConstantFoldSelectInstruction(Constant *Cond,
     // TODO: Recursively analyze aggregates or other constants.
     return false;
   };
-  if (isa<UndefValue>(V1) && NotPoison(V2)) return V2;
-  if (isa<UndefValue>(V2) && NotPoison(V1)) return V1;
+  if (isa<UndefValue>(V1) && NotPoison(V2))
+    return V2;
+  if (isa<UndefValue>(V2) && NotPoison(V1))
+    return V1;
 
   return nullptr;
 }
@@ -620,7 +626,8 @@ Constant *llvm::ConstantFoldInsertElementInstruction(Constant *Val,
     return Val;
 
   ConstantInt *CIdx = dyn_cast<ConstantInt>(Idx);
-  if (!CIdx) return nullptr;
+  if (!CIdx)
+    return nullptr;
 
   // Do not iterate on scalable vector. The num of elements is unknown at
   // compile-time.
@@ -633,7 +640,7 @@ Constant *llvm::ConstantFoldInsertElementInstruction(Constant *Val,
   if (CIdx->uge(NumElts))
     return PoisonValue::get(Val->getType());
 
-  SmallVector<Constant*, 16> Result;
+  SmallVector<Constant *, 16> Result;
   Result.reserve(NumElts);
   auto *Ty = Type::getInt32Ty(Val->getContext());
   uint64_t IdxVal = CIdx->getZExtValue();
@@ -684,7 +691,7 @@ Constant *llvm::ConstantFoldShuffleVectorInstruction(Constant *V1, Constant *V2,
   unsigned SrcNumElts = V1VTy->getElementCount().getKnownMinValue();
 
   // Loop over the shuffle mask, evaluating each element.
-  SmallVector<Constant*, 32> Result;
+  SmallVector<Constant *, 32> Result;
   for (unsigned i = 0; i != MaskNumElts; ++i) {
     int Elt = Mask[i];
     if (Elt == -1) {
@@ -692,13 +699,12 @@ Constant *llvm::ConstantFoldShuffleVectorInstruction(Constant *V1, Constant *V2,
       continue;
     }
     Constant *InElt;
-    if (unsigned(Elt) >= SrcNumElts*2)
+    if (unsigned(Elt) >= SrcNumElts * 2)
       InElt = UndefValue::get(EltTy);
     else if (unsigned(Elt) >= SrcNumElts) {
       Type *Ty = IntegerType::get(V2->getContext(), 32);
-      InElt =
-        ConstantExpr::getExtractElement(V2,
-                                        ConstantInt::get(Ty, Elt - SrcNumElts));
+      InElt = ConstantExpr::getExtractElement(
+          V2, ConstantInt::get(Ty, Elt - SrcNumElts));
     } else {
       Type *Ty = IntegerType::get(V1->getContext(), 32);
       InElt = ConstantExpr::getExtractElement(V1, ConstantInt::get(Ty, Elt));
@@ -721,8 +727,7 @@ Constant *llvm::ConstantFoldExtractValueInstruction(Constant *Agg,
   return nullptr;
 }
 
-Constant *llvm::ConstantFoldInsertValueInstruction(Constant *Agg,
-                                                   Constant *Val,
+Constant *llvm::ConstantFoldInsertValueInstruction(Constant *Agg, Constant *Val,
                                                    ArrayRef<unsigned> Idxs) {
   // Base case: no indices, so replace the entire value.
   if (Idxs.empty())
@@ -734,10 +739,11 @@ Constant *llvm::ConstantFoldInsertValueInstruction(Constant *Agg,
   else
     NumElts = cast<ArrayType>(Agg->getType())->getNumElements();
 
-  SmallVector<Constant*, 32> Result;
+  SmallVector<Constant *, 32> Result;
   for (unsigned i = 0; i != NumElts; ++i) {
     Constant *C = Agg->getAggregateElement(i);
-    if (!C) return nullptr;
+    if (!C)
+      return nullptr;
 
     if (Idxs[0] == i)
       C = ConstantFoldInsertValueInstruction(C, Val, Idxs.slice(1));
@@ -848,7 +854,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     case Instruction::And:
       if (isa<UndefValue>(C1) && isa<UndefValue>(C2)) // undef & undef -> undef
         return C1;
-      return Constant::getNullValue(C1->getType());   // undef & X -> 0
+      return Constant::getNullValue(C1->getType()); // undef & X -> 0
     case Instruction::Mul: {
       // undef * undef -> undef
       if (isa<UndefValue>(C1) && isa<UndefValue>(C2))
@@ -881,7 +887,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
         return PoisonValue::get(C2->getType());
       // undef % X -> 0       otherwise
       return Constant::getNullValue(C1->getType());
-    case Instruction::Or:                          // X | undef -> -1
+    case Instruction::Or:                             // X | undef -> -1
       if (isa<UndefValue>(C1) && isa<UndefValue>(C2)) // undef | undef -> undef
         return C1;
       return Constant::getAllOnesValue(C1->getType()); // undef | X -> ~0
@@ -945,41 +951,45 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
   if (ConstantInt *CI2 = dyn_cast<ConstantInt>(C2)) {
     switch (Opcode) {
     case Instruction::Add:
-      if (CI2->isZero()) return C1;                             // X + 0 == X
+      if (CI2->isZero())
+        return C1; // X + 0 == X
       break;
     case Instruction::Sub:
-      if (CI2->isZero()) return C1;                             // X - 0 == X
+      if (CI2->isZero())
+        return C1; // X - 0 == X
       break;
     case Instruction::Mul:
-      if (CI2->isZero()) return C2;                             // X * 0 == 0
+      if (CI2->isZero())
+        return C2; // X * 0 == 0
       if (CI2->isOne())
-        return C1;                                              // X * 1 == X
+        return C1; // X * 1 == X
       break;
     case Instruction::UDiv:
     case Instruction::SDiv:
       if (CI2->isOne())
-        return C1;                                            // X / 1 == X
+        return C1; // X / 1 == X
       if (CI2->isZero())
-        return PoisonValue::get(CI2->getType());              // X / 0 == poison
+        return PoisonValue::get(CI2->getType()); // X / 0 == poison
       break;
     case Instruction::URem:
     case Instruction::SRem:
       if (CI2->isOne())
-        return Constant::getNullValue(CI2->getType());        // X % 1 == 0
+        return Constant::getNullValue(CI2->getType()); // X % 1 == 0
       if (CI2->isZero())
-        return PoisonValue::get(CI2->getType());              // X % 0 == poison
+        return PoisonValue::get(CI2->getType()); // X % 0 == poison
       break;
     case Instruction::And:
-      if (CI2->isZero()) return C2;                           // X & 0 == 0
+      if (CI2->isZero())
+        return C2; // X & 0 == 0
       if (CI2->isMinusOne())
-        return C1;                                            // X & -1 == X
+        return C1; // X & -1 == X
 
       if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(C1)) {
         // (zext i32 to i64) & 4294967295 -> (zext i32 to i64)
         if (CE1->getOpcode() == Instruction::ZExt) {
           unsigned DstWidth = CI2->getType()->getBitWidth();
           unsigned SrcWidth =
-            CE1->getOperand(0)->getType()->getPrimitiveSizeInBits();
+              CE1->getOperand(0)->getType()->getPrimitiveSizeInBits();
           APInt PossiblySetBits(APInt::getLowBitsSet(DstWidth, SrcWidth));
           if ((PossiblySetBits & CI2->getValue()) == PossiblySetBits)
             return C1;
@@ -1024,16 +1034,19 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
       }
       break;
     case Instruction::Or:
-      if (CI2->isZero()) return C1;        // X | 0 == X
+      if (CI2->isZero())
+        return C1; // X | 0 == X
       if (CI2->isMinusOne())
-        return C2;                         // X | -1 == -1
+        return C2; // X | -1 == -1
       break;
     case Instruction::Xor:
-      if (CI2->isZero()) return C1;        // X ^ 0 == X
+      if (CI2->isZero())
+        return C1; // X ^ 0 == X
 
       if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(C1)) {
         switch (CE1->getOpcode()) {
-        default: break;
+        default:
+          break;
         case Instruction::ICmp:
         case Instruction::FCmp:
           // cmp pred ^ true -> cmp !pred
@@ -1048,7 +1061,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     case Instruction::AShr:
       // ashr (zext C to Ty), C2 -> lshr (zext C, CSA), C2
       if (ConstantExpr *CE1 = dyn_cast<ConstantExpr>(C1))
-        if (CE1->getOpcode() == Instruction::ZExt)  // Top bits known zero.
+        if (CE1->getOpcode() == Instruction::ZExt) // Top bits known zero.
           return ConstantExpr::getLShr(C1, C2);
       break;
     }
@@ -1079,7 +1092,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
       case Instruction::SDiv:
         assert(!CI2->isZero() && "Div by zero handled above");
         if (C2V.isAllOnes() && C1V.isMinSignedValue())
-          return PoisonValue::get(CI1->getType());   // MIN_INT / -1 -> poison
+          return PoisonValue::get(CI1->getType()); // MIN_INT / -1 -> poison
         return ConstantInt::get(CI1->getContext(), C1V.sdiv(C2V));
       case Instruction::URem:
         assert(!CI2->isZero() && "Div by zero handled above");
@@ -1087,7 +1100,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
       case Instruction::SRem:
         assert(!CI2->isZero() && "Div by zero handled above");
         if (C2V.isAllOnes() && C1V.isMinSignedValue())
-          return PoisonValue::get(CI1->getType());   // MIN_INT % -1 -> poison
+          return PoisonValue::get(CI1->getType()); // MIN_INT % -1 -> poison
         return ConstantInt::get(CI1->getContext(), C1V.srem(C2V));
       case Instruction::And:
         return ConstantInt::get(CI1->getContext(), C1V & C2V);
@@ -1118,7 +1131,8 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     case Instruction::LShr:
     case Instruction::AShr:
     case Instruction::Shl:
-      if (CI1->isZero()) return C1;
+      if (CI1->isZero())
+        return C1;
       break;
     default:
       break;
@@ -1127,7 +1141,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
     if (ConstantFP *CFP2 = dyn_cast<ConstantFP>(C2)) {
       const APFloat &C1V = CFP1->getValueAPF();
       const APFloat &C2V = CFP2->getValueAPF();
-      APFloat C3V = C1V;  // copy for modification
+      APFloat C3V = C1V; // copy for modification
       switch (Opcode) {
       default:
         break;
@@ -1166,7 +1180,7 @@ Constant *llvm::ConstantFoldBinaryInstruction(unsigned Opcode, Constant *C1,
 
     if (auto *FVTy = dyn_cast<FixedVectorType>(VTy)) {
       // Fold each element and create a vector constant from those constants.
-      SmallVector<Constant*, 16> Result;
+      SmallVector<Constant *, 16> Result;
       Type *Ty = IntegerType::get(FVTy->getContext(), 32);
       for (unsigned i = 0, e = FVTy->getNumElements(); i != e; ++i) {
         Constant *ExtractIdx = ConstantInt::get(Ty, i);
@@ -1256,22 +1270,23 @@ static FCmpInst::Predicate evaluateFCmpRelation(Constant *V1, Constant *V2) {
 
   // We do not know if a constant expression will evaluate to a number or NaN.
   // Therefore, we can only say that the relation is unordered or equal.
-  if (V1 == V2) return FCmpInst::FCMP_UEQ;
+  if (V1 == V2)
+    return FCmpInst::FCMP_UEQ;
 
   if (!isa<ConstantExpr>(V1)) {
     if (!isa<ConstantExpr>(V2)) {
       // Simple case, use the standard constant folder.
       ConstantInt *R = nullptr;
       R = dyn_cast<ConstantInt>(
-                      ConstantExpr::getFCmp(FCmpInst::FCMP_OEQ, V1, V2));
+          ConstantExpr::getFCmp(FCmpInst::FCMP_OEQ, V1, V2));
       if (R && !R->isZero())
         return FCmpInst::FCMP_OEQ;
       R = dyn_cast<ConstantInt>(
-                      ConstantExpr::getFCmp(FCmpInst::FCMP_OLT, V1, V2));
+          ConstantExpr::getFCmp(FCmpInst::FCMP_OLT, V1, V2));
       if (R && !R->isZero())
         return FCmpInst::FCMP_OLT;
       R = dyn_cast<ConstantInt>(
-                      ConstantExpr::getFCmp(FCmpInst::FCMP_OGT, V1, V2));
+          ConstantExpr::getFCmp(FCmpInst::FCMP_OGT, V1, V2));
       if (R && !R->isZero())
         return FCmpInst::FCMP_OGT;
 
@@ -1343,7 +1358,8 @@ static ICmpInst::Predicate evaluateICmpRelation(Constant *V1, Constant *V2,
                                                 bool isSigned) {
   assert(V1->getType() == V2->getType() &&
          "Cannot compare different types of values!");
-  if (V1 == V2) return ICmpInst::ICMP_EQ;
+  if (V1 == V2)
+    return ICmpInst::ICMP_EQ;
 
   if (!isa<ConstantExpr>(V1) && !isa<GlobalValue>(V1) &&
       !isa<BlockAddress>(V1)) {
@@ -1371,14 +1387,14 @@ static ICmpInst::Predicate evaluateICmpRelation(Constant *V1, Constant *V2,
 
     // If the first operand is simple, swap operands.
     ICmpInst::Predicate SwappedRelation =
-      evaluateICmpRelation(V2, V1, isSigned);
+        evaluateICmpRelation(V2, V1, isSigned);
     if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
       return ICmpInst::getSwappedPredicate(SwappedRelation);
 
   } else if (const GlobalValue *GV = dyn_cast<GlobalValue>(V1)) {
-    if (isa<ConstantExpr>(V2)) {  // Swap as necessary.
+    if (isa<ConstantExpr>(V2)) { // Swap as necessary.
       ICmpInst::Predicate SwappedRelation =
-        evaluateICmpRelation(V2, V1, isSigned);
+          evaluateICmpRelation(V2, V1, isSigned);
       if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
         return ICmpInst::getSwappedPredicate(SwappedRelation);
       return ICmpInst::BAD_ICMP_PREDICATE;
@@ -1404,9 +1420,9 @@ static ICmpInst::Predicate evaluateICmpRelation(Constant *V1, Constant *V2,
         return ICmpInst::ICMP_UGT;
     }
   } else if (const BlockAddress *BA = dyn_cast<BlockAddress>(V1)) {
-    if (isa<ConstantExpr>(V2)) {  // Swap as necessary.
+    if (isa<ConstantExpr>(V2)) { // Swap as necessary.
       ICmpInst::Predicate SwappedRelation =
-        evaluateICmpRelation(V2, V1, isSigned);
+          evaluateICmpRelation(V2, V1, isSigned);
       if (SwappedRelation != ICmpInst::BAD_ICMP_PREDICATE)
         return ICmpInst::getSwappedPredicate(SwappedRelation);
       return ICmpInst::BAD_ICMP_PREDICATE;
@@ -1459,11 +1475,12 @@ static ICmpInst::Predicate evaluateICmpRelation(Constant *V1, Constant *V2,
       // If the cast is not actually changing bits, and the second operand is a
       // null pointer, do the comparison with the pre-casted value.
       if (V2->isNullValue() && CE1->getType()->isIntOrPtrTy()) {
-        if (CE1->getOpcode() == Instruction::ZExt) isSigned = false;
-        if (CE1->getOpcode() == Instruction::SExt) isSigned = true;
-        return evaluateICmpRelation(CE1Op0,
-                                    Constant::getNullValue(CE1Op0->getType()),
-                                    isSigned);
+        if (CE1->getOpcode() == Instruction::ZExt)
+          isSigned = false;
+        if (CE1->getOpcode() == Instruction::SExt)
+          isSigned = true;
+        return evaluateICmpRelation(
+            CE1Op0, Constant::getNullValue(CE1Op0->getType()), isSigned);
       }
       break;
 
@@ -1626,7 +1643,7 @@ Constant *llvm::ConstantFoldCompareInstruction(CmpInst::Predicate Predicate,
 
     // If we can constant fold the comparison of each element, constant fold
     // the whole vector comparison.
-    SmallVector<Constant*, 4> ResElts;
+    SmallVector<Constant *, 4> ResElts;
     Type *Ty = IntegerType::get(C1->getContext(), 32);
     // Compare the elements, producing an i1 result or constant expr.
     for (unsigned I = 0, E = C1VTy->getElementCount().getKnownMinValue();
@@ -1646,9 +1663,10 @@ Constant *llvm::ConstantFoldCompareInstruction(CmpInst::Predicate Predicate,
       // Only call evaluateFCmpRelation if we have a constant expr to avoid
       // infinite recursive loop
       (isa<ConstantExpr>(C1) || isa<ConstantExpr>(C2))) {
-    int Result = -1;  // -1 = unknown, 0 = known false, 1 = known true.
+    int Result = -1; // -1 = unknown, 0 = known false, 1 = known true.
     switch (evaluateFCmpRelation(C1, C2)) {
-    default: llvm_unreachable("Unknown relation!");
+    default:
+      llvm_unreachable("Unknown relation!");
     case FCmpInst::FCMP_UNO:
     case FCmpInst::FCMP_ORD:
     case FCmpInst::FCMP_UNE:
@@ -1717,52 +1735,77 @@ Constant *llvm::ConstantFoldCompareInstruction(CmpInst::Predicate Predicate,
 
   } else {
     // Evaluate the relation between the two constants, per the predicate.
-    int Result = -1;  // -1 = unknown, 0 = known false, 1 = known true.
+    int Result = -1; // -1 = unknown, 0 = known false, 1 = known true.
     switch (evaluateICmpRelation(C1, C2, CmpInst::isSigned(Predicate))) {
-    default: llvm_unreachable("Unknown relational!");
+    default:
+      llvm_unreachable("Unknown relational!");
     case ICmpInst::BAD_ICMP_PREDICATE:
-      break;  // Couldn't determine anything about these constants.
-    case ICmpInst::ICMP_EQ:   // We know the constants are equal!
+      break; // Couldn't determine anything about these constants.
+    case ICmpInst::ICMP_EQ: // We know the constants are equal!
       // If we know the constants are equal, we can decide the result of this
       // computation precisely.
       Result = ICmpInst::isTrueWhenEqual(Predicate);
       break;
     case ICmpInst::ICMP_ULT:
       switch (Predicate) {
-      case ICmpInst::ICMP_ULT: case ICmpInst::ICMP_NE: case ICmpInst::ICMP_ULE:
-        Result = 1; break;
-      case ICmpInst::ICMP_UGT: case ICmpInst::ICMP_EQ: case ICmpInst::ICMP_UGE:
-        Result = 0; break;
+      case ICmpInst::ICMP_ULT:
+      case ICmpInst::ICMP_NE:
+      case ICmpInst::ICMP_ULE:
+        Result = 1;
+        break;
+      case ICmpInst::ICMP_UGT:
+      case ICmpInst::ICMP_EQ:
+      case ICmpInst::ICMP_UGE:
+        Result = 0;
+        break;
       default:
         break;
       }
       break;
     case ICmpInst::ICMP_SLT:
       switch (Predicate) {
-      case ICmpInst::ICMP_SLT: case ICmpInst::ICMP_NE: case ICmpInst::ICMP_SLE:
-        Result = 1; break;
-      case ICmpInst::ICMP_SGT: case ICmpInst::ICMP_EQ: case ICmpInst::ICMP_SGE:
-        Result = 0; break;
+      case ICmpInst::ICMP_SLT:
+      case ICmpInst::ICMP_NE:
+      case ICmpInst::ICMP_SLE:
+        Result = 1;
+        break;
+      case ICmpInst::ICMP_SGT:
+      case ICmpInst::ICMP_EQ:
+      case ICmpInst::ICMP_SGE:
+        Result = 0;
+        break;
       default:
         break;
       }
       break;
     case ICmpInst::ICMP_UGT:
       switch (Predicate) {
-      case ICmpInst::ICMP_UGT: case ICmpInst::ICMP_NE: case ICmpInst::ICMP_UGE:
-        Result = 1; break;
-      case ICmpInst::ICMP_ULT: case ICmpInst::ICMP_EQ: case ICmpInst::ICMP_ULE:
-        Result = 0; break;
+      case ICmpInst::ICMP_UGT:
+      case ICmpInst::ICMP_NE:
+      case ICmpInst::ICMP_UGE:
+        Result = 1;
+        break;
+      case ICmpInst::ICMP_ULT:
+      case ICmpInst::ICMP_EQ:
+      case ICmpInst::ICMP_ULE:
+        Result = 0;
+        break;
       default:
         break;
       }
       break;
     case ICmpInst::ICMP_SGT:
       switch (Predicate) {
-      case ICmpInst::ICMP_SGT: case ICmpInst::ICMP_NE: case ICmpInst::ICMP_SGE:
-        Result = 1; break;
-      case ICmpInst::ICMP_SLT: case ICmpInst::ICMP_EQ: case ICmpInst::ICMP_SLE:
-        Result = 0; break;
+      case ICmpInst::ICMP_SGT:
+      case ICmpInst::ICMP_NE:
+      case ICmpInst::ICMP_SGE:
+        Result = 1;
+        break;
+      case ICmpInst::ICMP_SLT:
+      case ICmpInst::ICMP_EQ:
+      case ICmpInst::ICMP_SLE:
+        Result = 0;
+        break;
       default:
         break;
       }
@@ -1848,13 +1891,15 @@ Constant *llvm::ConstantFoldCompareInstruction(CmpInst::Predicate Predicate,
 }
 
 /// Test whether the given sequence of *normalized* indices is "inbounds".
-template<typename IndexTy>
+template <typename IndexTy>
 static bool isInBoundsIndices(ArrayRef<IndexTy> Idxs) {
   // No indices means nothing that could be out of bounds.
-  if (Idxs.empty()) return true;
+  if (Idxs.empty())
+    return true;
 
   // If the first index is zero, it's in bounds.
-  if (cast<Constant>(Idxs[0])->isNullValue()) return true;
+  if (cast<Constant>(Idxs[0])->isNullValue())
+    return true;
 
   // If the first index is one and all the rest are zero, it's in bounds,
   // by the one-past-the-end rule.
@@ -1902,7 +1947,7 @@ static Constant *foldGEPOfGEP(GEPOperator *GEP, Type *PointeeTy, bool InBounds,
   Constant *Idx0 = cast<Constant>(Idxs[0]);
   if (Idx0->isNullValue()) {
     // Handle the simple case of a zero index.
-    SmallVector<Value*, 16> NewIndices;
+    SmallVector<Value *, 16> NewIndices;
     NewIndices.reserve(Idxs.size() + GEP->getNumIndices());
     NewIndices.append(GEP->idx_begin(), GEP->idx_end());
     NewIndices.append(Idxs.begin() + 1, Idxs.end());
@@ -1912,8 +1957,8 @@ static Constant *foldGEPOfGEP(GEPOperator *GEP, Type *PointeeTy, bool InBounds,
   }
 
   gep_type_iterator LastI = gep_type_end(GEP);
-  for (gep_type_iterator I = gep_type_begin(GEP), E = gep_type_end(GEP);
-       I != E; ++I)
+  for (gep_type_iterator I = gep_type_begin(GEP), E = gep_type_end(GEP); I != E;
+       ++I)
     LastI = I;
 
   // We can't combine GEPs if the last index is a struct type.
@@ -1926,21 +1971,20 @@ static Constant *foldGEPOfGEP(GEPOperator *GEP, Type *PointeeTy, bool InBounds,
     return nullptr;
 
   // TODO: This code may be extended to handle vectors as well.
-  auto *LastIdx = cast<Constant>(GEP->getOperand(GEP->getNumOperands()-1));
+  auto *LastIdx = cast<Constant>(GEP->getOperand(GEP->getNumOperands() - 1));
   Type *LastIdxTy = LastIdx->getType();
   if (LastIdxTy->isVectorTy())
     return nullptr;
 
-  SmallVector<Value*, 16> NewIndices;
+  SmallVector<Value *, 16> NewIndices;
   NewIndices.reserve(Idxs.size() + GEP->getNumIndices());
   NewIndices.append(GEP->idx_begin(), GEP->idx_end() - 1);
 
   // Add the last index of the source with the first index of the new GEP.
   // Make sure to handle the case when they are actually different types.
   if (LastIdxTy != Idx0->getType()) {
-    unsigned CommonExtendedWidth =
-        std::max(LastIdxTy->getIntegerBitWidth(),
-                 Idx0->getType()->getIntegerBitWidth());
+    unsigned CommonExtendedWidth = std::max(
+        LastIdxTy->getIntegerBitWidth(), Idx0->getType()->getIntegerBitWidth());
     CommonExtendedWidth = std::max(CommonExtendedWidth, 64U);
 
     Type *CommonTy =
@@ -1968,7 +2012,8 @@ Constant *llvm::ConstantFoldGetElementPtr(Type *PointeeTy, Constant *C,
                                           bool InBounds,
                                           std::optional<unsigned> InRangeIndex,
                                           ArrayRef<Value *> Idxs) {
-  if (Idxs.empty()) return C;
+  if (Idxs.empty())
+    return C;
 
   Type *GEPTy = GetElementPtrInst::getGEPReturnType(
       C, ArrayRef((Value *const *)Idxs.data(), Idxs.size()));
@@ -2175,7 +2220,8 @@ Constant *llvm::ConstantFoldGetElementPtr(Type *PointeeTy, Constant *C,
   // If we did any factoring, start over with the adjusted indices.
   if (!NewIdxs.empty()) {
     for (unsigned i = 0, e = Idxs.size(); i != e; ++i)
-      if (!NewIdxs[i]) NewIdxs[i] = cast<Constant>(Idxs[i]);
+      if (!NewIdxs[i])
+        NewIdxs[i] = cast<Constant>(Idxs[i]);
     return ConstantExpr::getGetElementPtr(PointeeTy, C, NewIdxs, InBounds,
                                           InRangeIndex);
   }
diff --git a/llvm/lib/IR/ConstantRange.cpp b/llvm/lib/IR/ConstantRange.cpp
index d22d56e40c08f20..5ad61356a3a6ae5 100644
--- a/llvm/lib/IR/ConstantRange.cpp
+++ b/llvm/lib/IR/ConstantRange.cpp
@@ -20,9 +20,9 @@
 //
 //===----------------------------------------------------------------------===//
 
+#include "llvm/IR/ConstantRange.h"
 #include "llvm/ADT/APInt.h"
 #include "llvm/Config/llvm-config.h"
-#include "llvm/IR/ConstantRange.h"
 #include "llvm/IR/Constants.h"
 #include "llvm/IR/InstrTypes.h"
 #include "llvm/IR/Instruction.h"
@@ -45,8 +45,7 @@ ConstantRange::ConstantRange(uint32_t BitWidth, bool Full)
     : Lower(Full ? APInt::getMaxValue(BitWidth) : APInt::getMinValue(BitWidth)),
       Upper(Lower) {}
 
-ConstantRange::ConstantRange(APInt V)
-    : Lower(std::move(V)), Upper(Lower + 1) {}
+ConstantRange::ConstantRange(APInt V) : Lower(std::move(V)), Upper(Lower + 1) {}
 
 ConstantRange::ConstantRange(APInt L, APInt U)
     : Lower(std::move(L)), Upper(std::move(U)) {
@@ -202,8 +201,8 @@ CmpInst::Predicate ConstantRange::getEquivalentPredWithFlippedSignedness(
   return CmpInst::Predicate::BAD_ICMP_PREDICATE;
 }
 
-void ConstantRange::getEquivalentICmp(CmpInst::Predicate &Pred,
-                                      APInt &RHS, APInt &Offset) const {
+void ConstantRange::getEquivalentICmp(CmpInst::Predicate &Pred, APInt &RHS,
+                                      APInt &Offset) const {
   Offset = APInt(getBitWidth(), 0);
   if (isFullSet() || isEmptySet()) {
     Pred = isEmptySet() ? CmpInst::ICMP_ULT : CmpInst::ICMP_UGE;
@@ -254,7 +253,8 @@ static ConstantRange makeExactMulNUWRegion(const APInt &V) {
       APIntOps::RoundingUDiv(APInt::getMinValue(BitWidth), V,
                              APInt::Rounding::UP),
       APIntOps::RoundingUDiv(APInt::getMaxValue(BitWidth), V,
-                             APInt::Rounding::DOWN) + 1);
+                             APInt::Rounding::DOWN) +
+          1);
 }
 
 /// Exact mul nsw region for single element RHS.
@@ -289,9 +289,9 @@ ConstantRange::makeGuaranteedNoWrapRegion(Instruction::BinaryOps BinOp,
 
   assert(Instruction::isBinaryOp(BinOp) && "Binary operators only!");
 
-  assert((NoWrapKind == OBO::NoSignedWrap ||
-          NoWrapKind == OBO::NoUnsignedWrap) &&
-         "NoWrapKind invalid!");
+  assert(
+      (NoWrapKind == OBO::NoSignedWrap || NoWrapKind == OBO::NoUnsignedWrap) &&
+      "NoWrapKind invalid!");
 
   bool Unsigned = NoWrapKind == OBO::NoUnsignedWrap;
   unsigned BitWidth = Other.getBitWidth();
@@ -306,9 +306,9 @@ ConstantRange::makeGuaranteedNoWrapRegion(Instruction::BinaryOps BinOp,
 
     APInt SignedMinVal = APInt::getSignedMinValue(BitWidth);
     APInt SMin = Other.getSignedMin(), SMax = Other.getSignedMax();
-    return getNonEmpty(
-        SMin.isNegative() ? SignedMinVal - SMin : SignedMinVal,
-        SMax.isStrictlyPositive() ? SignedMinVal - SMax : SignedMinVal);
+    return getNonEmpty(SMin.isNegative() ? SignedMinVal - SMin : SignedMinVal,
+                       SMax.isStrictlyPositive() ? SignedMinVal - SMax
+                                                 : SignedMinVal);
   }
 
   case Instruction::Sub: {
@@ -317,9 +317,9 @@ ConstantRange::makeGuaranteedNoWrapRegion(Instruction::BinaryOps BinOp,
 
     APInt SignedMinVal = APInt::getSignedMinValue(BitWidth);
     APInt SMin = Other.getSignedMin(), SMax = Other.getSignedMax();
-    return getNonEmpty(
-        SMax.isStrictlyPositive() ? SignedMinVal + SMax : SignedMinVal,
-        SMin.isNegative() ? SignedMinVal + SMin : SignedMinVal);
+    return getNonEmpty(SMax.isStrictlyPositive() ? SignedMinVal + SMax
+                                                 : SignedMinVal,
+                       SMin.isNegative() ? SignedMinVal + SMin : SignedMinVal);
   }
 
   case Instruction::Mul:
@@ -372,20 +372,16 @@ bool ConstantRange::isWrappedSet() const {
   return Lower.ugt(Upper) && !Upper.isZero();
 }
 
-bool ConstantRange::isUpperWrapped() const {
-  return Lower.ugt(Upper);
-}
+bool ConstantRange::isUpperWrapped() const { return Lower.ugt(Upper); }
 
 bool ConstantRange::isSignWrappedSet() const {
   return Lower.sgt(Upper) && !Upper.isMinSignedValue();
 }
 
-bool ConstantRange::isUpperSignWrapped() const {
-  return Lower.sgt(Upper);
-}
+bool ConstantRange::isUpperSignWrapped() const { return Lower.sgt(Upper); }
 
-bool
-ConstantRange::isSizeStrictlySmallerThan(const ConstantRange &Other) const {
+bool ConstantRange::isSizeStrictlySmallerThan(
+    const ConstantRange &Other) const {
   assert(getBitWidth() == Other.getBitWidth());
   if (isFullSet())
     return false;
@@ -394,8 +390,7 @@ ConstantRange::isSizeStrictlySmallerThan(const ConstantRange &Other) const {
   return (Upper - Lower).ult(Other.Upper - Other.Lower);
 }
 
-bool
-ConstantRange::isSizeLargerThan(uint64_t MaxSize) const {
+bool ConstantRange::isSizeLargerThan(uint64_t MaxSize) const {
   // If this a full set, we need special handling to avoid needing an extra bit
   // to represent the size.
   if (isFullSet())
@@ -453,8 +448,10 @@ bool ConstantRange::contains(const APInt &V) const {
 }
 
 bool ConstantRange::contains(const ConstantRange &Other) const {
-  if (isFullSet() || Other.isEmptySet()) return true;
-  if (isEmptySet() || Other.isFullSet()) return false;
+  if (isFullSet() || Other.isEmptySet())
+    return true;
+  if (isEmptySet() || Other.isFullSet())
+    return false;
 
   if (!isUpperWrapped()) {
     if (Other.isUpperWrapped())
@@ -464,8 +461,7 @@ bool ConstantRange::contains(const ConstantRange &Other) const {
   }
 
   if (!Other.isUpperWrapped())
-    return Other.getUpper().ule(Upper) ||
-           Lower.ule(Other.getLower());
+    return Other.getUpper().ule(Upper) || Lower.ule(Other.getLower());
 
   return Other.getUpper().ule(Upper) && Lower.ule(Other.getLower());
 }
@@ -497,9 +493,9 @@ ConstantRange ConstantRange::difference(const ConstantRange &CR) const {
   return intersectWith(CR.inverse());
 }
 
-static ConstantRange getPreferredRange(
-    const ConstantRange &CR1, const ConstantRange &CR2,
-    ConstantRange::PreferredRangeType Type) {
+static ConstantRange getPreferredRange(const ConstantRange &CR1,
+                                       const ConstantRange &CR2,
+                                       ConstantRange::PreferredRangeType Type) {
   if (Type == ConstantRange::Unsigned) {
     if (!CR1.isWrappedSet() && CR2.isWrappedSet())
       return CR1;
@@ -523,8 +519,10 @@ ConstantRange ConstantRange::intersectWith(const ConstantRange &CR,
          "ConstantRange types don't agree!");
 
   // Handle common cases.
-  if (   isEmptySet() || CR.isFullSet()) return *this;
-  if (CR.isEmptySet() ||    isFullSet()) return CR;
+  if (isEmptySet() || CR.isFullSet())
+    return *this;
+  if (CR.isEmptySet() || isFullSet())
+    return CR;
 
   if (!isUpperWrapped() && CR.isUpperWrapped())
     return CR.intersectWith(*this, Type);
@@ -628,8 +626,10 @@ ConstantRange ConstantRange::unionWith(const ConstantRange &CR,
   assert(getBitWidth() == CR.getBitWidth() &&
          "ConstantRange types don't agree!");
 
-  if (   isFullSet() || CR.isEmptySet()) return *this;
-  if (CR.isFullSet() ||    isEmptySet()) return CR;
+  if (isFullSet() || CR.isEmptySet())
+    return *this;
+  if (CR.isFullSet() || isEmptySet())
+    return CR;
 
   if (!isUpperWrapped() && CR.isUpperWrapped())
     return CR.unionWith(*this, Type);
@@ -641,8 +641,8 @@ ConstantRange ConstantRange::unionWith(const ConstantRange &CR,
     //  L---------U
     // -----U L-----
     if (CR.Upper.ult(Lower) || Upper.ult(CR.Lower))
-      return getPreferredRange(
-          ConstantRange(Lower, CR.Upper), ConstantRange(CR.Lower, Upper), Type);
+      return getPreferredRange(ConstantRange(Lower, CR.Upper),
+                               ConstantRange(CR.Lower, Upper), Type);
 
     APInt L = CR.Lower.ult(Lower) ? CR.Lower : Lower;
     APInt U = (CR.Upper - 1).ugt(Upper - 1) ? CR.Upper : Upper;
@@ -670,8 +670,8 @@ ConstantRange ConstantRange::unionWith(const ConstantRange &CR,
     // ----------U L----
     // ----U L----------
     if (Upper.ult(CR.Lower) && CR.Upper.ult(Lower))
-      return getPreferredRange(
-          ConstantRange(Lower, CR.Upper), ConstantRange(CR.Lower, Upper), Type);
+      return getPreferredRange(ConstantRange(Lower, CR.Upper),
+                               ConstantRange(CR.Lower, Upper), Type);
 
     // ----U     L----- : this
     //        L----U    : CR
@@ -766,7 +766,8 @@ ConstantRange ConstantRange::castOp(Instruction::CastOps CastOp,
 }
 
 ConstantRange ConstantRange::zeroExtend(uint32_t DstTySize) const {
-  if (isEmptySet()) return getEmpty(DstTySize);
+  if (isEmptySet())
+    return getEmpty(DstTySize);
 
   unsigned SrcTySize = getBitWidth();
   assert(SrcTySize < DstTySize && "Not a value extension");
@@ -783,7 +784,8 @@ ConstantRange ConstantRange::zeroExtend(uint32_t DstTySize) const {
 }
 
 ConstantRange ConstantRange::signExtend(uint32_t DstTySize) const {
-  if (isEmptySet()) return getEmpty(DstTySize);
+  if (isEmptySet())
+    return getEmpty(DstTySize);
 
   unsigned SrcTySize = getBitWidth();
   assert(SrcTySize < DstTySize && "Not a value extension");
@@ -793,8 +795,9 @@ ConstantRange ConstantRange::signExtend(uint32_t DstTySize) const {
     return ConstantRange(Lower.sext(DstTySize), Upper.zext(DstTySize));
 
   if (isFullSet() || isSignWrappedSet()) {
-    return ConstantRange(APInt::getHighBitsSet(DstTySize,DstTySize-SrcTySize+1),
-                         APInt::getLowBitsSet(DstTySize, SrcTySize-1) + 1);
+    return ConstantRange(
+        APInt::getHighBitsSet(DstTySize, DstTySize - SrcTySize + 1),
+        APInt::getLowBitsSet(DstTySize, SrcTySize - 1) + 1);
   }
 
   return ConstantRange(Lower.sext(DstTySize), Upper.sext(DstTySize));
@@ -819,7 +822,8 @@ ConstantRange ConstantRange::truncate(uint32_t DstTySize) const {
     if (Upper.getActiveBits() > DstTySize || Upper.countr_one() == DstTySize)
       return getFull(DstTySize);
 
-    Union = ConstantRange(APInt::getMaxValue(DstTySize),Upper.trunc(DstTySize));
+    Union =
+        ConstantRange(APInt::getMaxValue(DstTySize), Upper.trunc(DstTySize));
     UpperDiv.setAllBits();
 
     // Union covers the MaxValue case, so return if the remaining range is just
@@ -838,16 +842,16 @@ ConstantRange ConstantRange::truncate(uint32_t DstTySize) const {
 
   unsigned UpperDivWidth = UpperDiv.getActiveBits();
   if (UpperDivWidth <= DstTySize)
-    return ConstantRange(LowerDiv.trunc(DstTySize),
-                         UpperDiv.trunc(DstTySize)).unionWith(Union);
+    return ConstantRange(LowerDiv.trunc(DstTySize), UpperDiv.trunc(DstTySize))
+        .unionWith(Union);
 
   // The truncated value wraps around. Check if we can do better than fullset.
   if (UpperDivWidth == DstTySize + 1) {
     // Clear the MSB so that UpperDiv wraps around.
     UpperDiv.clearBit(DstTySize);
     if (UpperDiv.ult(LowerDiv))
-      return ConstantRange(LowerDiv.trunc(DstTySize),
-                           UpperDiv.trunc(DstTySize)).unionWith(Union);
+      return ConstantRange(LowerDiv.trunc(DstTySize), UpperDiv.trunc(DstTySize))
+          .unionWith(Union);
   }
 
   return getFull(DstTySize);
@@ -988,8 +992,7 @@ ConstantRange ConstantRange::intrinsic(Intrinsic::ID IntrinsicID,
   }
 }
 
-ConstantRange
-ConstantRange::add(const ConstantRange &Other) const {
+ConstantRange ConstantRange::add(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return getEmpty();
   if (isFullSet() || Other.isFullSet())
@@ -1001,8 +1004,7 @@ ConstantRange::add(const ConstantRange &Other) const {
     return getFull();
 
   ConstantRange X = ConstantRange(std::move(NewLower), std::move(NewUpper));
-  if (X.isSizeStrictlySmallerThan(*this) ||
-      X.isSizeStrictlySmallerThan(Other))
+  if (X.isSizeStrictlySmallerThan(*this) || X.isSizeStrictlySmallerThan(Other))
     // We've wrapped, therefore, full set.
     return getFull();
   return X;
@@ -1035,8 +1037,7 @@ ConstantRange ConstantRange::addWithNoWrap(const ConstantRange &Other,
   return Result;
 }
 
-ConstantRange
-ConstantRange::sub(const ConstantRange &Other) const {
+ConstantRange ConstantRange::sub(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return getEmpty();
   if (isFullSet() || Other.isFullSet())
@@ -1048,8 +1049,7 @@ ConstantRange::sub(const ConstantRange &Other) const {
     return getFull();
 
   ConstantRange X = ConstantRange(std::move(NewLower), std::move(NewUpper));
-  if (X.isSizeStrictlySmallerThan(*this) ||
-      X.isSizeStrictlySmallerThan(Other))
+  if (X.isSizeStrictlySmallerThan(*this) || X.isSizeStrictlySmallerThan(Other))
     // We've wrapped, therefore, full set.
     return getFull();
   return X;
@@ -1085,8 +1085,7 @@ ConstantRange ConstantRange::subWithNoWrap(const ConstantRange &Other,
   return Result;
 }
 
-ConstantRange
-ConstantRange::multiply(const ConstantRange &Other) const {
+ConstantRange ConstantRange::multiply(const ConstantRange &Other) const {
   // TODO: If either operand is a single element and the multiply is known to
   // be non-wrapping, round the result min and max value to the appropriate
   // multiple of that element. If wrapping is possible, at least adjust the
@@ -1121,8 +1120,8 @@ ConstantRange::multiply(const ConstantRange &Other) const {
   APInt Other_min = Other.getUnsignedMin().zext(getBitWidth() * 2);
   APInt Other_max = Other.getUnsignedMax().zext(getBitWidth() * 2);
 
-  ConstantRange Result_zext = ConstantRange(this_min * Other_min,
-                                            this_max * Other_max + 1);
+  ConstantRange Result_zext =
+      ConstantRange(this_min * Other_min, this_max * Other_max + 1);
   ConstantRange UR = Result_zext.truncate(getBitWidth());
 
   // If the unsigned range doesn't wrap, and isn't negative then it's a range
@@ -1144,8 +1143,8 @@ ConstantRange::multiply(const ConstantRange &Other) const {
   Other_min = Other.getSignedMin().sext(getBitWidth() * 2);
   Other_max = Other.getSignedMax().sext(getBitWidth() * 2);
 
-  auto L = {this_min * Other_min, this_min * Other_max,
-            this_max * Other_min, this_max * Other_max};
+  auto L = {this_min * Other_min, this_min * Other_max, this_max * Other_min,
+            this_max * Other_max};
   auto Compare = [](const APInt &A, const APInt &B) { return A.slt(B); };
   ConstantRange Result_sext(std::min(L, Compare), std::max(L, Compare) + 1);
   ConstantRange SR = Result_sext.truncate(getBitWidth());
@@ -1172,8 +1171,7 @@ ConstantRange ConstantRange::smul_fast(const ConstantRange &Other) const {
   return getNonEmpty(std::min(Muls, Compare), std::max(Muls, Compare) + 1);
 }
 
-ConstantRange
-ConstantRange::smax(const ConstantRange &Other) const {
+ConstantRange ConstantRange::smax(const ConstantRange &Other) const {
   // X smax Y is: range(smax(X_smin, Y_smin),
   //                    smax(X_smax, Y_smax))
   if (isEmptySet() || Other.isEmptySet())
@@ -1186,8 +1184,7 @@ ConstantRange::smax(const ConstantRange &Other) const {
   return Res;
 }
 
-ConstantRange
-ConstantRange::umax(const ConstantRange &Other) const {
+ConstantRange ConstantRange::umax(const ConstantRange &Other) const {
   // X umax Y is: range(umax(X_umin, Y_umin),
   //                    umax(X_umax, Y_umax))
   if (isEmptySet() || Other.isEmptySet())
@@ -1200,8 +1197,7 @@ ConstantRange::umax(const ConstantRange &Other) const {
   return Res;
 }
 
-ConstantRange
-ConstantRange::smin(const ConstantRange &Other) const {
+ConstantRange ConstantRange::smin(const ConstantRange &Other) const {
   // X smin Y is: range(smin(X_smin, Y_smin),
   //                    smin(X_smax, Y_smax))
   if (isEmptySet() || Other.isEmptySet())
@@ -1214,8 +1210,7 @@ ConstantRange::smin(const ConstantRange &Other) const {
   return Res;
 }
 
-ConstantRange
-ConstantRange::umin(const ConstantRange &Other) const {
+ConstantRange ConstantRange::umin(const ConstantRange &Other) const {
   // X umin Y is: range(umin(X_umin, Y_umin),
   //                    umin(X_umax, Y_umax))
   if (isEmptySet() || Other.isEmptySet())
@@ -1228,8 +1223,7 @@ ConstantRange::umin(const ConstantRange &Other) const {
   return Res;
 }
 
-ConstantRange
-ConstantRange::udiv(const ConstantRange &RHS) const {
+ConstantRange ConstantRange::udiv(const ConstantRange &RHS) const {
   if (isEmptySet() || RHS.isEmptySet() || RHS.getUnsignedMax().isZero())
     return getEmpty();
 
@@ -1306,9 +1300,8 @@ ConstantRange ConstantRange::sdiv(const ConstantRange &RHS) const {
           // [SignedMin, X] without SignedMin is [SignedMin + 1, X].
           AdjNegLLower = NegL.Lower + 1;
 
-        PosRes = PosRes.unionWith(
-            ConstantRange(std::move(Lo),
-                          AdjNegLLower.sdiv(NegR.Upper - 1) + 1));
+        PosRes = PosRes.unionWith(ConstantRange(
+            std::move(Lo), AdjNegLLower.sdiv(NegR.Upper - 1) + 1));
       }
     } else {
       PosRes = PosRes.unionWith(
@@ -1453,11 +1446,10 @@ ConstantRange ConstantRange::binaryXor(const ConstantRange &Other) const {
   if (isSingleElement() && getSingleElement()->isAllOnes())
     return Other.binaryNot();
 
-  return fromKnownBits(toKnownBits() ^ Other.toKnownBits(), /*IsSigned*/false);
+  return fromKnownBits(toKnownBits() ^ Other.toKnownBits(), /*IsSigned*/ false);
 }
 
-ConstantRange
-ConstantRange::shl(const ConstantRange &Other) const {
+ConstantRange ConstantRange::shl(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return getEmpty();
 
@@ -1497,8 +1489,7 @@ ConstantRange::shl(const ConstantRange &Other) const {
   return ConstantRange::getNonEmpty(std::move(Min), std::move(Max) + 1);
 }
 
-ConstantRange
-ConstantRange::lshr(const ConstantRange &Other) const {
+ConstantRange ConstantRange::lshr(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return getEmpty();
 
@@ -1507,8 +1498,7 @@ ConstantRange::lshr(const ConstantRange &Other) const {
   return getNonEmpty(std::move(min), std::move(max));
 }
 
-ConstantRange
-ConstantRange::ashr(const ConstantRange &Other) const {
+ConstantRange ConstantRange::ashr(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return getEmpty();
 
@@ -1732,8 +1722,8 @@ ConstantRange ConstantRange::ctlz(bool ZeroIsPoison) const {
                      APInt(getBitWidth(), getUnsignedMin().countl_zero() + 1));
 }
 
-ConstantRange::OverflowResult ConstantRange::unsignedAddMayOverflow(
-    const ConstantRange &Other) const {
+ConstantRange::OverflowResult
+ConstantRange::unsignedAddMayOverflow(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return OverflowResult::MayOverflow;
 
@@ -1748,8 +1738,8 @@ ConstantRange::OverflowResult ConstantRange::unsignedAddMayOverflow(
   return OverflowResult::NeverOverflows;
 }
 
-ConstantRange::OverflowResult ConstantRange::signedAddMayOverflow(
-    const ConstantRange &Other) const {
+ConstantRange::OverflowResult
+ConstantRange::signedAddMayOverflow(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return OverflowResult::MayOverflow;
 
@@ -1778,8 +1768,8 @@ ConstantRange::OverflowResult ConstantRange::signedAddMayOverflow(
   return OverflowResult::NeverOverflows;
 }
 
-ConstantRange::OverflowResult ConstantRange::unsignedSubMayOverflow(
-    const ConstantRange &Other) const {
+ConstantRange::OverflowResult
+ConstantRange::unsignedSubMayOverflow(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return OverflowResult::MayOverflow;
 
@@ -1794,8 +1784,8 @@ ConstantRange::OverflowResult ConstantRange::unsignedSubMayOverflow(
   return OverflowResult::NeverOverflows;
 }
 
-ConstantRange::OverflowResult ConstantRange::signedSubMayOverflow(
-    const ConstantRange &Other) const {
+ConstantRange::OverflowResult
+ConstantRange::signedSubMayOverflow(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return OverflowResult::MayOverflow;
 
@@ -1824,8 +1814,8 @@ ConstantRange::OverflowResult ConstantRange::signedSubMayOverflow(
   return OverflowResult::NeverOverflows;
 }
 
-ConstantRange::OverflowResult ConstantRange::unsignedMulMayOverflow(
-    const ConstantRange &Other) const {
+ConstantRange::OverflowResult
+ConstantRange::unsignedMulMayOverflow(const ConstantRange &Other) const {
   if (isEmptySet() || Other.isEmptySet())
     return OverflowResult::MayOverflow;
 
@@ -1833,11 +1823,11 @@ ConstantRange::OverflowResult ConstantRange::unsignedMulMayOverflow(
   APInt OtherMin = Other.getUnsignedMin(), OtherMax = Other.getUnsignedMax();
   bool Overflow;
 
-  (void) Min.umul_ov(OtherMin, Overflow);
+  (void)Min.umul_ov(OtherMin, Overflow);
   if (Overflow)
     return OverflowResult::AlwaysOverflowsHigh;
 
-  (void) Max.umul_ov(OtherMax, Overflow);
+  (void)Max.umul_ov(OtherMax, Overflow);
   if (Overflow)
     return OverflowResult::MayOverflow;
 
@@ -1854,9 +1844,7 @@ void ConstantRange::print(raw_ostream &OS) const {
 }
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
-LLVM_DUMP_METHOD void ConstantRange::dump() const {
-  print(dbgs());
-}
+LLVM_DUMP_METHOD void ConstantRange::dump() const { print(dbgs()); }
 #endif
 
 ConstantRange llvm::getConstantRangeFromMetadata(const MDNode &Ranges) {
diff --git a/llvm/lib/IR/Constants.cpp b/llvm/lib/IR/Constants.cpp
index 6589bd33be7619c..5e1b2376babe92c 100644
--- a/llvm/lib/IR/Constants.cpp
+++ b/llvm/lib/IR/Constants.cpp
@@ -49,7 +49,8 @@ bool Constant::isNegativeZeroValue() const {
     if (const auto *SplatCFP = dyn_cast_or_null<ConstantFP>(getSplatValue()))
       return SplatCFP->isNegativeZeroValue();
 
-  // We've already handled true FP case; any other FP vectors can't represent -0.0.
+  // We've already handled true FP case; any other FP vectors can't represent
+  // -0.0.
   if (getType()->isFPOrFPVectorTy())
     return false;
 
@@ -677,12 +678,14 @@ Constant::PossibleRelocationsTy Constant::getRelocationInfo() const {
 /// recursively traversing users of the constantexpr.
 /// If RemoveDeadUsers is true, also remove dead users at the same time.
 static bool constantIsDead(const Constant *C, bool RemoveDeadUsers) {
-  if (isa<GlobalValue>(C)) return false; // Cannot remove this
+  if (isa<GlobalValue>(C))
+    return false; // Cannot remove this
 
   Value::const_user_iterator I = C->user_begin(), E = C->user_end();
   while (I != E) {
     const Constant *User = dyn_cast<Constant>(*I);
-    if (!User) return false; // Non-constant usage;
+    if (!User)
+      return false; // Non-constant usage;
     if (!constantIsDead(User, RemoveDeadUsers))
       return false; // Constant wasn't dead
 
@@ -701,7 +704,7 @@ static bool constantIsDead(const Constant *C, bool RemoveDeadUsers) {
     ReplaceableMetadataImpl::SalvageDebugInfo(*C);
     const_cast<Constant *>(C)->destroyConstant();
   }
-  
+
   return true;
 }
 
@@ -899,7 +902,7 @@ ConstantInt *ConstantInt::get(IntegerType *Ty, uint64_t V, bool isSigned) {
   return get(Ty->getContext(), APInt(Ty->getBitWidth(), V, isSigned));
 }
 
-Constant *ConstantInt::get(Type *Ty, const APInt& V) {
+Constant *ConstantInt::get(Type *Ty, const APInt &V) {
   ConstantInt *C = get(Ty->getContext(), V);
   assert(C->getType() == Ty->getScalarType() &&
          "ConstantInt type doesn't match the type implied by its value!");
@@ -911,7 +914,7 @@ Constant *ConstantInt::get(Type *Ty, const APInt& V) {
   return C;
 }
 
-ConstantInt *ConstantInt::get(IntegerType* Ty, StringRef Str, uint8_t radix) {
+ConstantInt *ConstantInt::get(IntegerType *Ty, StringRef Str, uint8_t radix) {
   return get(Ty->getContext(), APInt(Ty->getBitWidth(), Str, radix));
 }
 
@@ -1009,10 +1012,9 @@ Constant *ConstantFP::getZero(Type *Ty, bool Negative) {
   return C;
 }
 
-
 // ConstantFP accessors.
-ConstantFP* ConstantFP::get(LLVMContext &Context, const APFloat& V) {
-  LLVMContextImpl* pImpl = Context.pImpl;
+ConstantFP *ConstantFP::get(LLVMContext &Context, const APFloat &V) {
+  LLVMContextImpl *pImpl = Context.pImpl;
 
   std::unique_ptr<ConstantFP> &Slot = pImpl->FPConstants[V];
 
@@ -1036,8 +1038,7 @@ Constant *ConstantFP::getInfinity(Type *Ty, bool Negative) {
 
 ConstantFP::ConstantFP(Type *Ty, const APFloat &V)
     : ConstantData(Ty, ConstantFPVal), Val(V) {
-  assert(&V.getSemantics() == &Ty->getFltSemantics() &&
-         "FP type Mismatch");
+  assert(&V.getSemantics() == &Ty->getFltSemantics() && "FP type Mismatch");
 }
 
 bool ConstantFP::isExactlyValue(const APFloat &V) const {
@@ -1232,13 +1233,13 @@ ConstantArray::ConstantArray(ArrayType *T, ArrayRef<Constant *> V)
          "Invalid initializer for constant array");
 }
 
-Constant *ConstantArray::get(ArrayType *Ty, ArrayRef<Constant*> V) {
+Constant *ConstantArray::get(ArrayType *Ty, ArrayRef<Constant *> V) {
   if (Constant *C = getImpl(Ty, V))
     return C;
   return Ty->getContext().pImpl->ArrayConstants.getOrCreate(Ty, V);
 }
 
-Constant *ConstantArray::getImpl(ArrayType *Ty, ArrayRef<Constant*> V) {
+Constant *ConstantArray::getImpl(ArrayType *Ty, ArrayRef<Constant *> V) {
   // Empty arrays are canonicalized to ConstantAggregateZero.
   if (V.empty())
     return ConstantAggregateZero::get(Ty);
@@ -1272,18 +1273,17 @@ Constant *ConstantArray::getImpl(ArrayType *Ty, ArrayRef<Constant*> V) {
 }
 
 StructType *ConstantStruct::getTypeForElements(LLVMContext &Context,
-                                               ArrayRef<Constant*> V,
+                                               ArrayRef<Constant *> V,
                                                bool Packed) {
   unsigned VecSize = V.size();
-  SmallVector<Type*, 16> EltTypes(VecSize);
+  SmallVector<Type *, 16> EltTypes(VecSize);
   for (unsigned i = 0; i != VecSize; ++i)
     EltTypes[i] = V[i]->getType();
 
   return StructType::get(Context, EltTypes, Packed);
 }
 
-
-StructType *ConstantStruct::getTypeForElements(ArrayRef<Constant*> V,
+StructType *ConstantStruct::getTypeForElements(ArrayRef<Constant *> V,
                                                bool Packed) {
   assert(!V.empty() &&
          "ConstantStruct::getTypeForElements cannot be called on empty list");
@@ -1297,7 +1297,7 @@ ConstantStruct::ConstantStruct(StructType *T, ArrayRef<Constant *> V)
 }
 
 // ConstantStruct accessors.
-Constant *ConstantStruct::get(StructType *ST, ArrayRef<Constant*> V) {
+Constant *ConstantStruct::get(StructType *ST, ArrayRef<Constant *> V) {
   assert((ST->isOpaque() || ST->getNumElements() == V.size()) &&
          "Incorrect # elements specified to ConstantStruct::get");
 
@@ -1339,14 +1339,14 @@ ConstantVector::ConstantVector(VectorType *T, ArrayRef<Constant *> V)
 }
 
 // ConstantVector accessors.
-Constant *ConstantVector::get(ArrayRef<Constant*> V) {
+Constant *ConstantVector::get(ArrayRef<Constant *> V) {
   if (Constant *C = getImpl(V))
     return C;
   auto *Ty = FixedVectorType::get(V.front()->getType(), V.size());
   return Ty->getContext().pImpl->VectorConstants.getOrCreate(Ty, V);
 }
 
-Constant *ConstantVector::getImpl(ArrayRef<Constant*> V) {
+Constant *ConstantVector::getImpl(ArrayRef<Constant *> V) {
   assert(!V.empty() && "Vectors can't be empty");
   auto *T = FixedVectorType::get(V.front()->getType(), V.size());
 
@@ -1427,9 +1427,7 @@ void ConstantTokenNone::destroyConstantImpl() {
 // Utility function for determining if a ConstantExpr is a CastOp or not. This
 // can't be inline because we don't want to #include Instruction.h into
 // Constant.h
-bool ConstantExpr::isCast() const {
-  return Instruction::isCast(getOpcode());
-}
+bool ConstantExpr::isCast() const { return Instruction::isCast(getOpcode()); }
 
 bool ConstantExpr::isCompare() const {
   return getOpcode() == Instruction::ICmp || getOpcode() == Instruction::FCmp;
@@ -1453,7 +1451,7 @@ Constant *ConstantExpr::getWithOperands(ArrayRef<Constant *> Ops, Type *Ty,
 
   // If no operands changed return self.
   if (Ty == getType() && std::equal(Ops.begin(), Ops.end(), op_begin()))
-    return const_cast<ConstantExpr*>(this);
+    return const_cast<ConstantExpr *>(this);
 
   Type *OnlyIfReducedTy = OnlyIfReduced ? Ty : nullptr;
   switch (getOpcode()) {
@@ -1497,7 +1495,6 @@ Constant *ConstantExpr::getWithOperands(ArrayRef<Constant *> Ops, Type *Ty,
   }
 }
 
-
 //===----------------------------------------------------------------------===//
 //                      isValueValidForType implementations
 
@@ -1515,13 +1512,13 @@ bool ConstantInt::isValueValidForType(Type *Ty, int64_t Val) {
   return isIntN(NumBits, Val);
 }
 
-bool ConstantFP::isValueValidForType(Type *Ty, const APFloat& Val) {
+bool ConstantFP::isValueValidForType(Type *Ty, const APFloat &Val) {
   // convert modifies in place, so make a copy.
   APFloat Val2 = APFloat(Val);
   bool losesInfo;
   switch (Ty->getTypeID()) {
   default:
-    return false;         // These can't be represented as floating point!
+    return false; // These can't be represented as floating point!
 
   // FIXME rounding mode needs to be more flexible
   case Type::HalfTyID: {
@@ -1539,7 +1536,8 @@ bool ConstantFP::isValueValidForType(Type *Ty, const APFloat& Val) {
   case Type::FloatTyID: {
     if (&Val2.getSemantics() == &APFloat::IEEEsingle())
       return true;
-    Val2.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven, &losesInfo);
+    Val2.convert(APFloat::IEEEsingle(), APFloat::rmNearestTiesToEven,
+                 &losesInfo);
     return !losesInfo;
   }
   case Type::DoubleTyID: {
@@ -1548,7 +1546,8 @@ bool ConstantFP::isValueValidForType(Type *Ty, const APFloat& Val) {
         &Val2.getSemantics() == &APFloat::IEEEsingle() ||
         &Val2.getSemantics() == &APFloat::IEEEdouble())
       return true;
-    Val2.convert(APFloat::IEEEdouble(), APFloat::rmNearestTiesToEven, &losesInfo);
+    Val2.convert(APFloat::IEEEdouble(), APFloat::rmNearestTiesToEven,
+                 &losesInfo);
     return !losesInfo;
   }
   case Type::X86_FP80TyID:
@@ -1572,7 +1571,6 @@ bool ConstantFP::isValueValidForType(Type *Ty, const APFloat& Val) {
   }
 }
 
-
 //===----------------------------------------------------------------------===//
 //                      Factory Function Implementation
 
@@ -1598,7 +1596,6 @@ void ConstantArray::destroyConstantImpl() {
   getType()->getContext().pImpl->ArrayConstants.remove(this);
 }
 
-
 //---- ConstantStruct::get() implementation...
 //
 
@@ -1761,7 +1758,7 @@ BlockAddress *BlockAddress::get(BasicBlock *BB) {
 
 BlockAddress *BlockAddress::get(Function *F, BasicBlock *BB) {
   BlockAddress *&BA =
-    F->getContext().pImpl->BlockAddresses[std::make_pair(F, BB)];
+      F->getContext().pImpl->BlockAddresses[std::make_pair(F, BB)];
   if (!BA)
     BA = new BlockAddress(F, BB);
 
@@ -1791,8 +1788,8 @@ BlockAddress *BlockAddress::lookup(const BasicBlock *BB) {
 
 /// Remove the constant from the constant table.
 void BlockAddress::destroyConstantImpl() {
-  getFunction()->getType()->getContext().pImpl
-    ->BlockAddresses.erase(std::make_pair(getFunction(), getBasicBlock()));
+  getFunction()->getType()->getContext().pImpl->BlockAddresses.erase(
+      std::make_pair(getFunction(), getBasicBlock()));
   getBasicBlock()->AdjustBlockAddressRefCount(-1);
 }
 
@@ -1812,7 +1809,7 @@ Value *BlockAddress::handleOperandChangeImpl(Value *From, Value *To) {
   // See if the 'new' entry already exists, if not, just update this in place
   // and return early.
   BlockAddress *&NewBA =
-    getContext().pImpl->BlockAddresses[std::make_pair(NewF, NewBB)];
+      getContext().pImpl->BlockAddresses[std::make_pair(NewF, NewBB)];
   if (NewBA)
     return NewBA;
 
@@ -1820,8 +1817,8 @@ Value *BlockAddress::handleOperandChangeImpl(Value *From, Value *To) {
 
   // Remove the old entry, this can't cause the map to rehash (just a
   // tombstone will get added).
-  getContext().pImpl->BlockAddresses.erase(std::make_pair(getFunction(),
-                                                          getBasicBlock()));
+  getContext().pImpl->BlockAddresses.erase(
+      std::make_pair(getFunction(), getBasicBlock()));
   NewBA = this;
   setOperand(0, NewF);
   setOperand(1, NewBB);
@@ -2025,7 +2022,7 @@ Constant *ConstantExpr::getSExtOrTrunc(Constant *C, Type *Ty) {
 Constant *ConstantExpr::getPointerCast(Constant *S, Type *Ty) {
   assert(S->getType()->isPtrOrPtrVectorTy() && "Invalid cast");
   assert((Ty->isIntOrIntVectorTy() || Ty->isPtrOrPtrVectorTy()) &&
-          "Invalid cast");
+         "Invalid cast");
 
   if (Ty->isIntOrIntVectorTy())
     return getPtrToInt(S, Ty);
@@ -2049,14 +2046,16 @@ Constant *ConstantExpr::getPointerBitCastOrAddrSpaceCast(Constant *S,
 }
 
 Constant *ConstantExpr::getIntegerCast(Constant *C, Type *Ty, bool isSigned) {
-  assert(C->getType()->isIntOrIntVectorTy() &&
-         Ty->isIntOrIntVectorTy() && "Invalid cast");
+  assert(C->getType()->isIntOrIntVectorTy() && Ty->isIntOrIntVectorTy() &&
+         "Invalid cast");
   unsigned SrcBits = C->getType()->getScalarSizeInBits();
   unsigned DstBits = Ty->getScalarSizeInBits();
   Instruction::CastOps opcode =
-    (SrcBits == DstBits ? Instruction::BitCast :
-     (SrcBits > DstBits ? Instruction::Trunc :
-      (isSigned ? Instruction::SExt : Instruction::ZExt)));
+      (SrcBits == DstBits
+           ? Instruction::BitCast
+           : (SrcBits > DstBits
+                  ? Instruction::Trunc
+                  : (isSigned ? Instruction::SExt : Instruction::ZExt)));
   return getCast(opcode, C, Ty);
 }
 
@@ -2068,7 +2067,7 @@ Constant *ConstantExpr::getFPCast(Constant *C, Type *Ty) {
   if (SrcBits == DstBits)
     return C; // Avoid a useless cast
   Instruction::CastOps opcode =
-    (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt);
+      (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt);
   return getCast(opcode, C, Ty);
 }
 
@@ -2080,7 +2079,7 @@ Constant *ConstantExpr::getTrunc(Constant *C, Type *Ty, bool OnlyIfReduced) {
   assert((fromVec == toVec) && "Cannot convert from scalar to/from vector");
   assert(C->getType()->isIntOrIntVectorTy() && "Trunc operand must be integer");
   assert(Ty->isIntOrIntVectorTy() && "Trunc produces only integral");
-  assert(C->getType()->getScalarSizeInBits() > Ty->getScalarSizeInBits()&&
+  assert(C->getType()->getScalarSizeInBits() > Ty->getScalarSizeInBits() &&
          "SrcTy must be larger than DestTy for Trunc!");
 
   return getFoldedCast(Instruction::Trunc, C, Ty, OnlyIfReduced);
@@ -2094,7 +2093,7 @@ Constant *ConstantExpr::getSExt(Constant *C, Type *Ty, bool OnlyIfReduced) {
   assert((fromVec == toVec) && "Cannot convert from scalar to/from vector");
   assert(C->getType()->isIntOrIntVectorTy() && "SExt operand must be integral");
   assert(Ty->isIntOrIntVectorTy() && "SExt produces only integer");
-  assert(C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits()&&
+  assert(C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits() &&
          "SrcTy must be smaller than DestTy for SExt!");
 
   return getFoldedCast(Instruction::SExt, C, Ty, OnlyIfReduced);
@@ -2108,7 +2107,7 @@ Constant *ConstantExpr::getZExt(Constant *C, Type *Ty, bool OnlyIfReduced) {
   assert((fromVec == toVec) && "Cannot convert from scalar to/from vector");
   assert(C->getType()->isIntOrIntVectorTy() && "ZEXt operand must be integral");
   assert(Ty->isIntOrIntVectorTy() && "ZExt produces only integer");
-  assert(C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits()&&
+  assert(C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits() &&
          "SrcTy must be smaller than DestTy for ZExt!");
 
   return getFoldedCast(Instruction::ZExt, C, Ty, OnlyIfReduced);
@@ -2121,7 +2120,7 @@ Constant *ConstantExpr::getFPTrunc(Constant *C, Type *Ty, bool OnlyIfReduced) {
 #endif
   assert((fromVec == toVec) && "Cannot convert from scalar to/from vector");
   assert(C->getType()->isFPOrFPVectorTy() && Ty->isFPOrFPVectorTy() &&
-         C->getType()->getScalarSizeInBits() > Ty->getScalarSizeInBits()&&
+         C->getType()->getScalarSizeInBits() > Ty->getScalarSizeInBits() &&
          "This is an illegal floating point truncation!");
   return getFoldedCast(Instruction::FPTrunc, C, Ty, OnlyIfReduced);
 }
@@ -2133,7 +2132,7 @@ Constant *ConstantExpr::getFPExtend(Constant *C, Type *Ty, bool OnlyIfReduced) {
 #endif
   assert((fromVec == toVec) && "Cannot convert from scalar to/from vector");
   assert(C->getType()->isFPOrFPVectorTy() && Ty->isFPOrFPVectorTy() &&
-         C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits()&&
+         C->getType()->getScalarSizeInBits() < Ty->getScalarSizeInBits() &&
          "This is an illegal floating point extension!");
   return getFoldedCast(Instruction::FPExt, C, Ty, OnlyIfReduced);
 }
@@ -2217,7 +2216,8 @@ Constant *ConstantExpr::getBitCast(Constant *C, Type *DstTy,
 
   // It is common to ask for a bitcast of a value to its own type, handle this
   // speedily.
-  if (C->getType() == DstTy) return C;
+  if (C->getType() == DstTy)
+    return C;
 
   return getFoldedCast(Instruction::BitCast, C, DstTy, OnlyIfReduced);
 }
@@ -2270,7 +2270,7 @@ Constant *ConstantExpr::get(unsigned Opcode, Constant *C1, Constant *C2,
   if (OnlyIfReducedTy == C1->getType())
     return nullptr;
 
-  Constant *ArgVec[] = { C1, C2 };
+  Constant *ArgVec[] = {C1, C2};
   ConstantExprKeyType Key(Opcode, ArgVec, 0, Flags);
 
   LLVMContextImpl *pImpl = C1->getContext().pImpl;
@@ -2331,27 +2331,25 @@ bool ConstantExpr::isSupportedBinOp(unsigned Opcode) {
   }
 }
 
-Constant *ConstantExpr::getSizeOf(Type* Ty) {
+Constant *ConstantExpr::getSizeOf(Type *Ty) {
   // sizeof is implemented as: (i64) gep (Ty*)null, 1
   // Note that a non-inbounds gep is used, as null isn't within any object.
   Constant *GEPIdx = ConstantInt::get(Type::getInt32Ty(Ty->getContext()), 1);
   Constant *GEP = getGetElementPtr(
       Ty, Constant::getNullValue(PointerType::getUnqual(Ty)), GEPIdx);
-  return getPtrToInt(GEP,
-                     Type::getInt64Ty(Ty->getContext()));
+  return getPtrToInt(GEP, Type::getInt64Ty(Ty->getContext()));
 }
 
-Constant *ConstantExpr::getAlignOf(Type* Ty) {
+Constant *ConstantExpr::getAlignOf(Type *Ty) {
   // alignof is implemented as: (i64) gep ({i1,Ty}*)null, 0, 1
   // Note that a non-inbounds gep is used, as null isn't within any object.
   Type *AligningTy = StructType::get(Type::getInt1Ty(Ty->getContext()), Ty);
   Constant *NullPtr = Constant::getNullValue(AligningTy->getPointerTo(0));
   Constant *Zero = ConstantInt::get(Type::getInt64Ty(Ty->getContext()), 0);
   Constant *One = ConstantInt::get(Type::getInt32Ty(Ty->getContext()), 1);
-  Constant *Indices[2] = { Zero, One };
+  Constant *Indices[2] = {Zero, One};
   Constant *GEP = getGetElementPtr(AligningTy, NullPtr, Indices);
-  return getPtrToInt(GEP,
-                     Type::getInt64Ty(Ty->getContext()));
+  return getPtrToInt(GEP, Type::getInt64Ty(Ty->getContext()));
 }
 
 Constant *ConstantExpr::getCompare(unsigned short Predicate, Constant *C1,
@@ -2359,18 +2357,35 @@ Constant *ConstantExpr::getCompare(unsigned short Predicate, Constant *C1,
   assert(C1->getType() == C2->getType() && "Op types should be identical!");
 
   switch (Predicate) {
-  default: llvm_unreachable("Invalid CmpInst predicate");
-  case CmpInst::FCMP_FALSE: case CmpInst::FCMP_OEQ: case CmpInst::FCMP_OGT:
-  case CmpInst::FCMP_OGE:   case CmpInst::FCMP_OLT: case CmpInst::FCMP_OLE:
-  case CmpInst::FCMP_ONE:   case CmpInst::FCMP_ORD: case CmpInst::FCMP_UNO:
-  case CmpInst::FCMP_UEQ:   case CmpInst::FCMP_UGT: case CmpInst::FCMP_UGE:
-  case CmpInst::FCMP_ULT:   case CmpInst::FCMP_ULE: case CmpInst::FCMP_UNE:
+  default:
+    llvm_unreachable("Invalid CmpInst predicate");
+  case CmpInst::FCMP_FALSE:
+  case CmpInst::FCMP_OEQ:
+  case CmpInst::FCMP_OGT:
+  case CmpInst::FCMP_OGE:
+  case CmpInst::FCMP_OLT:
+  case CmpInst::FCMP_OLE:
+  case CmpInst::FCMP_ONE:
+  case CmpInst::FCMP_ORD:
+  case CmpInst::FCMP_UNO:
+  case CmpInst::FCMP_UEQ:
+  case CmpInst::FCMP_UGT:
+  case CmpInst::FCMP_UGE:
+  case CmpInst::FCMP_ULT:
+  case CmpInst::FCMP_ULE:
+  case CmpInst::FCMP_UNE:
   case CmpInst::FCMP_TRUE:
     return getFCmp(Predicate, C1, C2, OnlyIfReduced);
 
-  case CmpInst::ICMP_EQ:  case CmpInst::ICMP_NE:  case CmpInst::ICMP_UGT:
-  case CmpInst::ICMP_UGE: case CmpInst::ICMP_ULT: case CmpInst::ICMP_ULE:
-  case CmpInst::ICMP_SGT: case CmpInst::ICMP_SGE: case CmpInst::ICMP_SLT:
+  case CmpInst::ICMP_EQ:
+  case CmpInst::ICMP_NE:
+  case CmpInst::ICMP_UGT:
+  case CmpInst::ICMP_UGE:
+  case CmpInst::ICMP_ULT:
+  case CmpInst::ICMP_ULE:
+  case CmpInst::ICMP_SGT:
+  case CmpInst::ICMP_SGE:
+  case CmpInst::ICMP_SLT:
   case CmpInst::ICMP_SLE:
     return getICmp(Predicate, C1, C2, OnlyIfReduced);
   }
@@ -2385,10 +2400,10 @@ Constant *ConstantExpr::getGetElementPtr(Type *Ty, Constant *C,
 
   if (Constant *FC =
           ConstantFoldGetElementPtr(Ty, C, InBounds, InRangeIndex, Idxs))
-    return FC;          // Fold a few common cases.
+    return FC; // Fold a few common cases.
 
-  assert(GetElementPtrInst::getIndexedType(Ty, Idxs) &&
-         "GEP indices invalid!");;
+  assert(GetElementPtrInst::getIndexedType(Ty, Idxs) && "GEP indices invalid!");
+  ;
 
   // Get the result type of the getelementptr!
   Type *ReqTy = GetElementPtrInst::getGEPReturnType(C, Idxs);
@@ -2400,16 +2415,15 @@ Constant *ConstantExpr::getGetElementPtr(Type *Ty, Constant *C,
     EltCount = VecTy->getElementCount();
 
   // Look up the constant in the table first to ensure uniqueness
-  std::vector<Constant*> ArgVec;
+  std::vector<Constant *> ArgVec;
   ArgVec.reserve(1 + Idxs.size());
   ArgVec.push_back(C);
   auto GTI = gep_type_begin(Ty, Idxs), GTE = gep_type_end(Ty, Idxs);
   for (; GTI != GTE; ++GTI) {
     auto *Idx = cast<Constant>(GTI.getOperand());
-    assert(
-        (!isa<VectorType>(Idx->getType()) ||
-         cast<VectorType>(Idx->getType())->getElementCount() == EltCount) &&
-        "getelementptr index type missmatch");
+    assert((!isa<VectorType>(Idx->getType()) ||
+            cast<VectorType>(Idx->getType())->getElementCount() == EltCount) &&
+           "getelementptr index type missmatch");
 
     if (GTI.isStruct() && Idx->getType()->isVectorTy()) {
       Idx = Idx->getSplatValue();
@@ -2437,13 +2451,13 @@ Constant *ConstantExpr::getICmp(unsigned short pred, Constant *LHS,
   assert(CmpInst::isIntPredicate(Predicate) && "Invalid ICmp Predicate");
 
   if (Constant *FC = ConstantFoldCompareInstruction(Predicate, LHS, RHS))
-    return FC;          // Fold a few common cases...
+    return FC; // Fold a few common cases...
 
   if (OnlyIfReduced)
     return nullptr;
 
   // Look up the constant in the table first to ensure uniqueness
-  Constant *ArgVec[] = { LHS, RHS };
+  Constant *ArgVec[] = {LHS, RHS};
   // Get the key type with both the opcode and predicate
   const ConstantExprKeyType Key(Instruction::ICmp, ArgVec, Predicate);
 
@@ -2462,13 +2476,13 @@ Constant *ConstantExpr::getFCmp(unsigned short pred, Constant *LHS,
   assert(CmpInst::isFPPredicate(Predicate) && "Invalid FCmp Predicate");
 
   if (Constant *FC = ConstantFoldCompareInstruction(Predicate, LHS, RHS))
-    return FC;          // Fold a few common cases...
+    return FC; // Fold a few common cases...
 
   if (OnlyIfReduced)
     return nullptr;
 
   // Look up the constant in the table first to ensure uniqueness
-  Constant *ArgVec[] = { LHS, RHS };
+  Constant *ArgVec[] = {LHS, RHS};
   // Get the key type with both the opcode and predicate
   const ConstantExprKeyType Key(Instruction::FCmp, ArgVec, Predicate);
 
@@ -2488,14 +2502,14 @@ Constant *ConstantExpr::getExtractElement(Constant *Val, Constant *Idx,
          "Extractelement index must be an integer type!");
 
   if (Constant *FC = ConstantFoldExtractElementInstruction(Val, Idx))
-    return FC;          // Fold a few common cases.
+    return FC; // Fold a few common cases.
 
   Type *ReqTy = cast<VectorType>(Val->getType())->getElementType();
   if (OnlyIfReducedTy == ReqTy)
     return nullptr;
 
   // Look up the constant in the table first to ensure uniqueness
-  Constant *ArgVec[] = { Val, Idx };
+  Constant *ArgVec[] = {Val, Idx};
   const ConstantExprKeyType Key(Instruction::ExtractElement, ArgVec);
 
   LLVMContextImpl *pImpl = Val->getContext().pImpl;
@@ -2512,13 +2526,13 @@ Constant *ConstantExpr::getInsertElement(Constant *Val, Constant *Elt,
          "Insertelement index must be i32 type!");
 
   if (Constant *FC = ConstantFoldInsertElementInstruction(Val, Elt, Idx))
-    return FC;          // Fold a few common cases.
+    return FC; // Fold a few common cases.
 
   if (OnlyIfReducedTy == Val->getType())
     return nullptr;
 
   // Look up the constant in the table first to ensure uniqueness
-  Constant *ArgVec[] = { Val, Elt, Idx };
+  Constant *ArgVec[] = {Val, Elt, Idx};
   const ConstantExprKeyType Key(Instruction::InsertElement, ArgVec);
 
   LLVMContextImpl *pImpl = Val->getContext().pImpl;
@@ -2532,7 +2546,7 @@ Constant *ConstantExpr::getShuffleVector(Constant *V1, Constant *V2,
          "Invalid shuffle vector constant expr operands!");
 
   if (Constant *FC = ConstantFoldShuffleVectorInstruction(V1, V2, Mask))
-    return FC;          // Fold a few common cases.
+    return FC; // Fold a few common cases.
 
   unsigned NElts = Mask.size();
   auto V1VTy = cast<VectorType>(V1->getType());
@@ -2563,24 +2577,24 @@ Constant *ConstantExpr::getNot(Constant *C) {
   return get(Instruction::Xor, C, Constant::getAllOnesValue(C->getType()));
 }
 
-Constant *ConstantExpr::getAdd(Constant *C1, Constant *C2,
-                               bool HasNUW, bool HasNSW) {
+Constant *ConstantExpr::getAdd(Constant *C1, Constant *C2, bool HasNUW,
+                               bool HasNSW) {
   unsigned Flags = (HasNUW ? OverflowingBinaryOperator::NoUnsignedWrap : 0) |
-                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap   : 0);
+                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap : 0);
   return get(Instruction::Add, C1, C2, Flags);
 }
 
-Constant *ConstantExpr::getSub(Constant *C1, Constant *C2,
-                               bool HasNUW, bool HasNSW) {
+Constant *ConstantExpr::getSub(Constant *C1, Constant *C2, bool HasNUW,
+                               bool HasNSW) {
   unsigned Flags = (HasNUW ? OverflowingBinaryOperator::NoUnsignedWrap : 0) |
-                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap   : 0);
+                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap : 0);
   return get(Instruction::Sub, C1, C2, Flags);
 }
 
-Constant *ConstantExpr::getMul(Constant *C1, Constant *C2,
-                               bool HasNUW, bool HasNSW) {
+Constant *ConstantExpr::getMul(Constant *C1, Constant *C2, bool HasNUW,
+                               bool HasNSW) {
   unsigned Flags = (HasNUW ? OverflowingBinaryOperator::NoUnsignedWrap : 0) |
-                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap   : 0);
+                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap : 0);
   return get(Instruction::Mul, C1, C2, Flags);
 }
 
@@ -2588,10 +2602,10 @@ Constant *ConstantExpr::getXor(Constant *C1, Constant *C2) {
   return get(Instruction::Xor, C1, C2);
 }
 
-Constant *ConstantExpr::getShl(Constant *C1, Constant *C2,
-                               bool HasNUW, bool HasNSW) {
+Constant *ConstantExpr::getShl(Constant *C1, Constant *C2, bool HasNUW,
+                               bool HasNSW) {
   unsigned Flags = (HasNUW ? OverflowingBinaryOperator::NoUnsignedWrap : 0) |
-                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap   : 0);
+                   (HasNSW ? OverflowingBinaryOperator::NoSignedWrap : 0);
   return get(Instruction::Shl, C1, C2, Flags);
 }
 
@@ -2641,20 +2655,20 @@ Constant *ConstantExpr::getBinOpIdentity(unsigned Opcode, Type *Ty,
   // Commutative opcodes: it does not matter if AllowRHSConstant is set.
   if (Instruction::isCommutative(Opcode)) {
     switch (Opcode) {
-      case Instruction::Add: // X + 0 = X
-      case Instruction::Or:  // X | 0 = X
-      case Instruction::Xor: // X ^ 0 = X
-        return Constant::getNullValue(Ty);
-      case Instruction::Mul: // X * 1 = X
-        return ConstantInt::get(Ty, 1);
-      case Instruction::And: // X & -1 = X
-        return Constant::getAllOnesValue(Ty);
-      case Instruction::FAdd: // X + -0.0 = X
-        return ConstantFP::getZero(Ty, !NSZ);
-      case Instruction::FMul: // X * 1.0 = X
-        return ConstantFP::get(Ty, 1.0);
-      default:
-        llvm_unreachable("Every commutative binop has an identity constant");
+    case Instruction::Add: // X + 0 = X
+    case Instruction::Or:  // X | 0 = X
+    case Instruction::Xor: // X ^ 0 = X
+      return Constant::getNullValue(Ty);
+    case Instruction::Mul: // X * 1 = X
+      return ConstantInt::get(Ty, 1);
+    case Instruction::And: // X & -1 = X
+      return Constant::getAllOnesValue(Ty);
+    case Instruction::FAdd: // X + -0.0 = X
+      return ConstantFP::getZero(Ty, !NSZ);
+    case Instruction::FMul: // X * 1.0 = X
+      return ConstantFP::get(Ty, 1.0);
+    default:
+      llvm_unreachable("Every commutative binop has an identity constant");
     }
   }
 
@@ -2663,19 +2677,19 @@ Constant *ConstantExpr::getBinOpIdentity(unsigned Opcode, Type *Ty,
     return nullptr;
 
   switch (Opcode) {
-    case Instruction::Sub:  // X - 0 = X
-    case Instruction::Shl:  // X << 0 = X
-    case Instruction::LShr: // X >>u 0 = X
-    case Instruction::AShr: // X >> 0 = X
-    case Instruction::FSub: // X - 0.0 = X
-      return Constant::getNullValue(Ty);
-    case Instruction::SDiv: // X / 1 = X
-    case Instruction::UDiv: // X /u 1 = X
-      return ConstantInt::get(Ty, 1);
-    case Instruction::FDiv: // X / 1.0 = X
-      return ConstantFP::get(Ty, 1.0);
-    default:
-      return nullptr;
+  case Instruction::Sub:  // X - 0 = X
+  case Instruction::Shl:  // X << 0 = X
+  case Instruction::LShr: // X >>u 0 = X
+  case Instruction::AShr: // X >> 0 = X
+  case Instruction::FSub: // X - 0.0 = X
+    return Constant::getNullValue(Ty);
+  case Instruction::SDiv: // X / 1 = X
+  case Instruction::UDiv: // X /u 1 = X
+    return ConstantInt::get(Ty, 1);
+  case Instruction::FDiv: // X / 1.0 = X
+    return ConstantFP::get(Ty, 1.0);
+  default:
+    return nullptr;
   }
 }
 
@@ -2714,7 +2728,7 @@ GetElementPtrConstantExpr::GetElementPtrConstantExpr(
   Op<0>() = C;
   Use *OperandList = getOperandList();
   for (unsigned i = 0, E = IdxList.size(); i != E; ++i)
-    OperandList[i+1] = IdxList[i];
+    OperandList[i + 1] = IdxList[i];
 }
 
 Type *GetElementPtrConstantExpr::getSourceElementType() const {
@@ -2735,7 +2749,7 @@ Type *ConstantDataSequential::getElementType() const {
 }
 
 StringRef ConstantDataSequential::getRawDataValues() const {
-  return StringRef(DataElements, getNumElements()*getElementByteSize());
+  return StringRef(DataElements, getNumElements() * getElementByteSize());
 }
 
 bool ConstantDataSequential::isElementTypeCompatible(Type *Ty) {
@@ -2748,7 +2762,8 @@ bool ConstantDataSequential::isElementTypeCompatible(Type *Ty) {
     case 32:
     case 64:
       return true;
-    default: break;
+    default:
+      break;
     }
   }
   return false;
@@ -2760,18 +2775,16 @@ unsigned ConstantDataSequential::getNumElements() const {
   return cast<FixedVectorType>(getType())->getNumElements();
 }
 
-
 uint64_t ConstantDataSequential::getElementByteSize() const {
-  return getElementType()->getPrimitiveSizeInBits()/8;
+  return getElementType()->getPrimitiveSizeInBits() / 8;
 }
 
 /// Return the start of the specified element.
 const char *ConstantDataSequential::getElementPointer(unsigned Elt) const {
   assert(Elt < getNumElements() && "Invalid Elt");
-  return DataElements+Elt*getElementByteSize();
+  return DataElements + Elt * getElementByteSize();
 }
 
-
 /// Return true if the array is empty or all zeros.
 static bool isAllZeros(StringRef Arr) {
   for (char I : Arr)
@@ -2887,8 +2900,8 @@ Constant *ConstantDataArray::getFP(Type *ElementType, ArrayRef<uint64_t> Elts) {
   return getImpl(StringRef(Data, Elts.size() * 8), Ty);
 }
 
-Constant *ConstantDataArray::getString(LLVMContext &Context,
-                                       StringRef Str, bool AddNull) {
+Constant *ConstantDataArray::getString(LLVMContext &Context, StringRef Str,
+                                       bool AddNull) {
   if (!AddNull) {
     const uint8_t *Data = Str.bytes_begin();
     return get(Context, ArrayRef(Data, Str.size()));
@@ -2903,22 +2916,26 @@ Constant *ConstantDataArray::getString(LLVMContext &Context,
 /// get() constructors - Return a constant with vector type with an element
 /// count and element type matching the ArrayRef passed in.  Note that this
 /// can return a ConstantAggregateZero object.
-Constant *ConstantDataVector::get(LLVMContext &Context, ArrayRef<uint8_t> Elts){
+Constant *ConstantDataVector::get(LLVMContext &Context,
+                                  ArrayRef<uint8_t> Elts) {
   auto *Ty = FixedVectorType::get(Type::getInt8Ty(Context), Elts.size());
   const char *Data = reinterpret_cast<const char *>(Elts.data());
   return getImpl(StringRef(Data, Elts.size() * 1), Ty);
 }
-Constant *ConstantDataVector::get(LLVMContext &Context, ArrayRef<uint16_t> Elts){
+Constant *ConstantDataVector::get(LLVMContext &Context,
+                                  ArrayRef<uint16_t> Elts) {
   auto *Ty = FixedVectorType::get(Type::getInt16Ty(Context), Elts.size());
   const char *Data = reinterpret_cast<const char *>(Elts.data());
   return getImpl(StringRef(Data, Elts.size() * 2), Ty);
 }
-Constant *ConstantDataVector::get(LLVMContext &Context, ArrayRef<uint32_t> Elts){
+Constant *ConstantDataVector::get(LLVMContext &Context,
+                                  ArrayRef<uint32_t> Elts) {
   auto *Ty = FixedVectorType::get(Type::getInt32Ty(Context), Elts.size());
   const char *Data = reinterpret_cast<const char *>(Elts.data());
   return getImpl(StringRef(Data, Elts.size() * 4), Ty);
 }
-Constant *ConstantDataVector::get(LLVMContext &Context, ArrayRef<uint64_t> Elts){
+Constant *ConstantDataVector::get(LLVMContext &Context,
+                                  ArrayRef<uint64_t> Elts) {
   auto *Ty = FixedVectorType::get(Type::getInt64Ty(Context), Elts.size());
   const char *Data = reinterpret_cast<const char *>(Elts.data());
   return getImpl(StringRef(Data, Elts.size() * 8), Ty);
@@ -3010,7 +3027,6 @@ Constant *ConstantDataVector::getSplat(unsigned NumElts, Constant *V) {
   return ConstantVector::getSplat(ElementCount::getFixed(NumElts), V);
 }
 
-
 uint64_t ConstantDataSequential::getElementAsInteger(unsigned Elt) const {
   assert(isa<IntegerType>(getElementType()) &&
          "Accessor can only be used when element is an integer");
@@ -3019,7 +3035,8 @@ uint64_t ConstantDataSequential::getElementAsInteger(unsigned Elt) const {
   // The data is stored in host byte order, make sure to cast back to the right
   // type to load with the right endianness.
   switch (getElementType()->getIntegerBitWidth()) {
-  default: llvm_unreachable("Invalid bitwidth for CDS");
+  default:
+    llvm_unreachable("Invalid bitwidth for CDS");
   case 8:
     return *reinterpret_cast<const uint8_t *>(EltPtr);
   case 16:
@@ -3039,7 +3056,8 @@ APInt ConstantDataSequential::getElementAsAPInt(unsigned Elt) const {
   // The data is stored in host byte order, make sure to cast back to the right
   // type to load with the right endianness.
   switch (getElementType()->getIntegerBitWidth()) {
-  default: llvm_unreachable("Invalid bitwidth for CDS");
+  default:
+    llvm_unreachable("Invalid bitwidth for CDS");
   case 8: {
     auto EltVal = *reinterpret_cast<const uint8_t *>(EltPtr);
     return APInt(8, EltVal);
@@ -3115,7 +3133,8 @@ bool ConstantDataSequential::isCString() const {
   StringRef Str = getAsString();
 
   // The last value must be nul.
-  if (Str.back() != 0) return false;
+  if (Str.back() != 0)
+    return false;
 
   // Other elements must be non-nul.
   return !Str.drop_back().contains(0);
@@ -3127,7 +3146,7 @@ bool ConstantDataVector::isSplatData() const {
   // Compare elements 1+ to the 0'th element.
   unsigned EltSize = getElementByteSize();
   for (unsigned i = 1, e = getNumElements(); i != e; ++i)
-    if (memcmp(Base, Base+i*EltSize, EltSize))
+    if (memcmp(Base, Base + i * EltSize, EltSize))
       return false;
 
   return true;
@@ -3191,8 +3210,8 @@ Value *ConstantArray::handleOperandChangeImpl(Value *From, Value *To) {
   assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
   Constant *ToC = cast<Constant>(To);
 
-  SmallVector<Constant*, 8> Values;
-  Values.reserve(getNumOperands());  // Build replacement array.
+  SmallVector<Constant *, 8> Values;
+  Values.reserve(getNumOperands()); // Build replacement array.
 
   // Fill values with the modified operands of the constant array.  Also,
   // compute whether this turns into an all-zeros array.
@@ -3202,7 +3221,7 @@ Value *ConstantArray::handleOperandChangeImpl(Value *From, Value *To) {
   bool AllSame = true;
   Use *OperandList = getOperandList();
   unsigned OperandNo = 0;
-  for (Use *O = OperandList, *E = OperandList+getNumOperands(); O != E; ++O) {
+  for (Use *O = OperandList, *E = OperandList + getNumOperands(); O != E; ++O) {
     Constant *Val = cast<Constant>(O->get());
     if (Val == From) {
       OperandNo = (O - OperandList);
@@ -3234,8 +3253,8 @@ Value *ConstantStruct::handleOperandChangeImpl(Value *From, Value *To) {
 
   Use *OperandList = getOperandList();
 
-  SmallVector<Constant*, 8> Values;
-  Values.reserve(getNumOperands());  // Build replacement struct.
+  SmallVector<Constant *, 8> Values;
+  Values.reserve(getNumOperands()); // Build replacement struct.
 
   // Fill values with the modified operands of the constant struct.  Also,
   // compute whether this turns into an all-zeros struct.
@@ -3268,8 +3287,8 @@ Value *ConstantVector::handleOperandChangeImpl(Value *From, Value *To) {
   assert(isa<Constant>(To) && "Cannot make Constant refer to non-constant!");
   Constant *ToC = cast<Constant>(To);
 
-  SmallVector<Constant*, 8> Values;
-  Values.reserve(getNumOperands());  // Build replacement array...
+  SmallVector<Constant *, 8> Values;
+  Values.reserve(getNumOperands()); // Build replacement array...
   unsigned NumUpdated = 0;
   unsigned OperandNo = 0;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
@@ -3294,7 +3313,7 @@ Value *ConstantExpr::handleOperandChangeImpl(Value *From, Value *ToV) {
   assert(isa<Constant>(ToV) && "Cannot make Constant refer to non-constant!");
   Constant *To = cast<Constant>(ToV);
 
-  SmallVector<Constant*, 8> NewOps;
+  SmallVector<Constant *, 8> NewOps;
   unsigned NumUpdated = 0;
   unsigned OperandNo = 0;
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i) {
@@ -3318,7 +3337,7 @@ Value *ConstantExpr::handleOperandChangeImpl(Value *From, Value *ToV) {
 
 Instruction *ConstantExpr::getAsInstruction(Instruction *InsertBefore) const {
   SmallVector<Value *, 4> ValueOperands(operands());
-  ArrayRef<Value*> Ops(ValueOperands);
+  ArrayRef<Value *> Ops(ValueOperands);
 
   switch (getOpcode()) {
   case Instruction::Trunc:
diff --git a/llvm/lib/IR/ConstantsContext.h b/llvm/lib/IR/ConstantsContext.h
index 6023216a5070856..a6e5bfbb40f950c 100644
--- a/llvm/lib/IR/ConstantsContext.h
+++ b/llvm/lib/IR/ConstantsContext.h
@@ -45,7 +45,7 @@ namespace llvm {
 class CastConstantExpr final : public ConstantExpr {
 public:
   CastConstantExpr(unsigned Opcode, Constant *C, Type *Ty)
-    : ConstantExpr(Ty, Opcode, &Op<0>(), 1) {
+      : ConstantExpr(Ty, Opcode, &Op<0>(), 1) {
     Op<0>() = C;
   }
 
@@ -69,7 +69,7 @@ class BinaryConstantExpr final : public ConstantExpr {
 public:
   BinaryConstantExpr(unsigned Opcode, Constant *C1, Constant *C2,
                      unsigned Flags)
-    : ConstantExpr(C1->getType(), Opcode, &Op<0>(), 2) {
+      : ConstantExpr(C1->getType(), Opcode, &Op<0>(), 2) {
     Op<0>() = C1;
     Op<1>() = C2;
     SubclassOptionalData = Flags;
@@ -96,8 +96,8 @@ class BinaryConstantExpr final : public ConstantExpr {
 class ExtractElementConstantExpr final : public ConstantExpr {
 public:
   ExtractElementConstantExpr(Constant *C1, Constant *C2)
-    : ConstantExpr(cast<VectorType>(C1->getType())->getElementType(),
-                   Instruction::ExtractElement, &Op<0>(), 2) {
+      : ConstantExpr(cast<VectorType>(C1->getType())->getElementType(),
+                     Instruction::ExtractElement, &Op<0>(), 2) {
     Op<0>() = C1;
     Op<1>() = C2;
   }
@@ -123,8 +123,7 @@ class ExtractElementConstantExpr final : public ConstantExpr {
 class InsertElementConstantExpr final : public ConstantExpr {
 public:
   InsertElementConstantExpr(Constant *C1, Constant *C2, Constant *C3)
-    : ConstantExpr(C1->getType(), Instruction::InsertElement,
-                   &Op<0>(), 3) {
+      : ConstantExpr(C1->getType(), Instruction::InsertElement, &Op<0>(), 3) {
     Op<0>() = C1;
     Op<1>() = C2;
     Op<2>() = C3;
@@ -220,9 +219,9 @@ class GetElementPtrConstantExpr final : public ConstantExpr {
 class CompareConstantExpr final : public ConstantExpr {
 public:
   unsigned short predicate;
-  CompareConstantExpr(Type *ty, Instruction::OtherOps opc,
-                      unsigned short pred,  Constant* LHS, Constant* RHS)
-    : ConstantExpr(ty, opc, &Op<0>(), 2), predicate(pred) {
+  CompareConstantExpr(Type *ty, Instruction::OtherOps opc, unsigned short pred,
+                      Constant *LHS, Constant *RHS)
+      : ConstantExpr(ty, opc, &Op<0>(), 2), predicate(pred) {
     Op<0>() = LHS;
     Op<1>() = RHS;
   }
diff --git a/llvm/lib/IR/Core.cpp b/llvm/lib/IR/Core.cpp
index 17093fa0ac4ee1e..ac07463cc748676 100644
--- a/llvm/lib/IR/Core.cpp
+++ b/llvm/lib/IR/Core.cpp
@@ -53,31 +53,24 @@ void llvm::initializeCore(PassRegistry &Registry) {
   initializeVerifierLegacyPassPass(Registry);
 }
 
-void LLVMShutdown() {
-  llvm_shutdown();
-}
+void LLVMShutdown() { llvm_shutdown(); }
 
 /*===-- Version query -----------------------------------------------------===*/
 
 void LLVMGetVersion(unsigned *Major, unsigned *Minor, unsigned *Patch) {
-    if (Major)
-        *Major = LLVM_VERSION_MAJOR;
-    if (Minor)
-        *Minor = LLVM_VERSION_MINOR;
-    if (Patch)
-        *Patch = LLVM_VERSION_PATCH;
+  if (Major)
+    *Major = LLVM_VERSION_MAJOR;
+  if (Minor)
+    *Minor = LLVM_VERSION_MINOR;
+  if (Patch)
+    *Patch = LLVM_VERSION_PATCH;
 }
 
 /*===-- Error handling ----------------------------------------------------===*/
 
-char *LLVMCreateMessage(const char *Message) {
-  return strdup(Message);
-}
-
-void LLVMDisposeMessage(char *Message) {
-  free(Message);
-}
+char *LLVMCreateMessage(const char *Message) { return strdup(Message); }
 
+void LLVMDisposeMessage(char *Message) { free(Message); }
 
 /*===-- Operations on contexts --------------------------------------------===*/
 
@@ -86,9 +79,7 @@ static LLVMContext &getGlobalContext() {
   return GlobalContext;
 }
 
-LLVMContextRef LLVMContextCreate() {
-  return wrap(new LLVMContext());
-}
+LLVMContextRef LLVMContextCreate() { return wrap(new LLVMContext()); }
 
 LLVMContextRef LLVMGetGlobalContext() { return wrap(&getGlobalContext()); }
 
@@ -113,7 +104,7 @@ void *LLVMContextGetDiagnosticContext(LLVMContextRef C) {
 void LLVMContextSetYieldCallback(LLVMContextRef C, LLVMYieldCallback Callback,
                                  void *OpaqueHandle) {
   auto YieldCallback =
-    LLVM_EXTENSION reinterpret_cast<LLVMContext::YieldCallbackTy>(Callback);
+      LLVM_EXTENSION reinterpret_cast<LLVMContext::YieldCallbackTy>(Callback);
   unwrap(C)->setYieldCallback(YieldCallback, OpaqueHandle);
 }
 
@@ -125,9 +116,7 @@ void LLVMContextSetDiscardValueNames(LLVMContextRef C, LLVMBool Discard) {
   unwrap(C)->setDiscardValueNames(Discard);
 }
 
-void LLVMContextDispose(LLVMContextRef C) {
-  delete unwrap(C);
-}
+void LLVMContextDispose(LLVMContextRef C) { delete unwrap(C); }
 
 unsigned LLVMGetMDKindIDInContext(LLVMContextRef C, const char *Name,
                                   unsigned SLen) {
@@ -176,22 +165,20 @@ LLVMTypeRef LLVMGetTypeAttributeValue(LLVMAttributeRef A) {
   return wrap(Attr.getValueAsType());
 }
 
-LLVMAttributeRef LLVMCreateStringAttribute(LLVMContextRef C,
-                                           const char *K, unsigned KLength,
-                                           const char *V, unsigned VLength) {
-  return wrap(Attribute::get(*unwrap(C), StringRef(K, KLength),
-                             StringRef(V, VLength)));
+LLVMAttributeRef LLVMCreateStringAttribute(LLVMContextRef C, const char *K,
+                                           unsigned KLength, const char *V,
+                                           unsigned VLength) {
+  return wrap(
+      Attribute::get(*unwrap(C), StringRef(K, KLength), StringRef(V, VLength)));
 }
 
-const char *LLVMGetStringAttributeKind(LLVMAttributeRef A,
-                                       unsigned *Length) {
+const char *LLVMGetStringAttributeKind(LLVMAttributeRef A, unsigned *Length) {
   auto S = unwrap(A).getKindAsString();
   *Length = S.size();
   return S.data();
 }
 
-const char *LLVMGetStringAttributeValue(LLVMAttributeRef A,
-                                        unsigned *Length) {
+const char *LLVMGetStringAttributeValue(LLVMAttributeRef A, unsigned *Length) {
   auto S = unwrap(A).getValueAsString();
   *Length = S.size();
   return S.data();
@@ -222,24 +209,24 @@ char *LLVMGetDiagInfoDescription(LLVMDiagnosticInfoRef DI) {
 }
 
 LLVMDiagnosticSeverity LLVMGetDiagInfoSeverity(LLVMDiagnosticInfoRef DI) {
-    LLVMDiagnosticSeverity severity;
-
-    switch(unwrap(DI)->getSeverity()) {
-    default:
-      severity = LLVMDSError;
-      break;
-    case DS_Warning:
-      severity = LLVMDSWarning;
-      break;
-    case DS_Remark:
-      severity = LLVMDSRemark;
-      break;
-    case DS_Note:
-      severity = LLVMDSNote;
-      break;
-    }
+  LLVMDiagnosticSeverity severity;
+
+  switch (unwrap(DI)->getSeverity()) {
+  default:
+    severity = LLVMDSError;
+    break;
+  case DS_Warning:
+    severity = LLVMDSWarning;
+    break;
+  case DS_Remark:
+    severity = LLVMDSRemark;
+    break;
+  case DS_Note:
+    severity = LLVMDSNote;
+    break;
+  }
 
-    return severity;
+  return severity;
 }
 
 /*===-- Operations on modules ---------------------------------------------===*/
@@ -253,9 +240,7 @@ LLVMModuleRef LLVMModuleCreateWithNameInContext(const char *ModuleID,
   return wrap(new Module(ModuleID, *unwrap(C)));
 }
 
-void LLVMDisposeModule(LLVMModuleRef M) {
-  delete unwrap(M);
-}
+void LLVMDisposeModule(LLVMModuleRef M) { delete unwrap(M); }
 
 const char *LLVMGetModuleIdentifier(LLVMModuleRef M, size_t *Len) {
   auto &Str = unwrap(M)->getModuleIdentifier();
@@ -291,7 +276,7 @@ void LLVMSetDataLayout(LLVMModuleRef M, const char *DataLayoutStr) {
 }
 
 /*--.. Target triple .......................................................--*/
-const char * LLVMGetTarget(LLVMModuleRef M) {
+const char *LLVMGetTarget(LLVMModuleRef M) {
   return unwrap(M)->getTargetTriple().c_str();
 }
 
@@ -390,16 +375,15 @@ LLVMMetadataRef LLVMModuleFlagEntriesGetMetadata(LLVMModuleFlagEntry *Entries,
   return MFE.Metadata;
 }
 
-LLVMMetadataRef LLVMGetModuleFlag(LLVMModuleRef M,
-                                  const char *Key, size_t KeyLen) {
+LLVMMetadataRef LLVMGetModuleFlag(LLVMModuleRef M, const char *Key,
+                                  size_t KeyLen) {
   return wrap(unwrap(M)->getModuleFlag({Key, KeyLen}));
 }
 
 void LLVMAddModuleFlag(LLVMModuleRef M, LLVMModuleFlagBehavior Behavior,
-                       const char *Key, size_t KeyLen,
-                       LLVMMetadataRef Val) {
-  unwrap(M)->addModuleFlag(map_to_llvmModFlagBehavior(Behavior),
-                           {Key, KeyLen}, unwrap(Val));
+                       const char *Key, size_t KeyLen, LLVMMetadataRef Val) {
+  unwrap(M)->addModuleFlag(map_to_llvmModFlagBehavior(Behavior), {Key, KeyLen},
+                           unwrap(Val));
 }
 
 /*--.. Printing modules ....................................................--*/
@@ -540,7 +524,6 @@ LLVMContextRef LLVMGetModuleContext(LLVMModuleRef M) {
   return wrap(&unwrap(M)->getContext());
 }
 
-
 /*===-- Operations on types -----------------------------------------------===*/
 
 /*--.. Operations on all types (mostly) ....................................--*/
@@ -595,10 +578,7 @@ LLVMTypeKind LLVMGetTypeKind(LLVMTypeRef Ty) {
   llvm_unreachable("Unhandled TypeID.");
 }
 
-LLVMBool LLVMTypeIsSized(LLVMTypeRef Ty)
-{
-    return unwrap(Ty)->isSized();
-}
+LLVMBool LLVMTypeIsSized(LLVMTypeRef Ty) { return unwrap(Ty)->isSized(); }
 
 LLVMContextRef LLVMGetTypeContext(LLVMTypeRef Ty) {
   return wrap(&unwrap(Ty)->getContext());
@@ -624,32 +604,32 @@ char *LLVMPrintTypeToString(LLVMTypeRef Ty) {
 
 /*--.. Operations on integer types .........................................--*/
 
-LLVMTypeRef LLVMInt1TypeInContext(LLVMContextRef C)  {
-  return (LLVMTypeRef) Type::getInt1Ty(*unwrap(C));
+LLVMTypeRef LLVMInt1TypeInContext(LLVMContextRef C) {
+  return (LLVMTypeRef)Type::getInt1Ty(*unwrap(C));
 }
-LLVMTypeRef LLVMInt8TypeInContext(LLVMContextRef C)  {
-  return (LLVMTypeRef) Type::getInt8Ty(*unwrap(C));
+LLVMTypeRef LLVMInt8TypeInContext(LLVMContextRef C) {
+  return (LLVMTypeRef)Type::getInt8Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMInt16TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getInt16Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getInt16Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMInt32TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getInt32Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getInt32Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMInt64TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getInt64Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getInt64Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMInt128TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getInt128Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getInt128Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMIntTypeInContext(LLVMContextRef C, unsigned NumBits) {
   return wrap(IntegerType::get(*unwrap(C), NumBits));
 }
 
-LLVMTypeRef LLVMInt1Type(void)  {
+LLVMTypeRef LLVMInt1Type(void) {
   return LLVMInt1TypeInContext(LLVMGetGlobalContext());
 }
-LLVMTypeRef LLVMInt8Type(void)  {
+LLVMTypeRef LLVMInt8Type(void) {
   return LLVMInt8TypeInContext(LLVMGetGlobalContext());
 }
 LLVMTypeRef LLVMInt16Type(void) {
@@ -675,31 +655,31 @@ unsigned LLVMGetIntTypeWidth(LLVMTypeRef IntegerTy) {
 /*--.. Operations on real types ............................................--*/
 
 LLVMTypeRef LLVMHalfTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getHalfTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getHalfTy(*unwrap(C));
 }
 LLVMTypeRef LLVMBFloatTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getBFloatTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getBFloatTy(*unwrap(C));
 }
 LLVMTypeRef LLVMFloatTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getFloatTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getFloatTy(*unwrap(C));
 }
 LLVMTypeRef LLVMDoubleTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getDoubleTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getDoubleTy(*unwrap(C));
 }
 LLVMTypeRef LLVMX86FP80TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getX86_FP80Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getX86_FP80Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMFP128TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getFP128Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getFP128Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMPPCFP128TypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getPPC_FP128Ty(*unwrap(C));
+  return (LLVMTypeRef)Type::getPPC_FP128Ty(*unwrap(C));
 }
 LLVMTypeRef LLVMX86MMXTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getX86_MMXTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getX86_MMXTy(*unwrap(C));
 }
 LLVMTypeRef LLVMX86AMXTypeInContext(LLVMContextRef C) {
-  return (LLVMTypeRef) Type::getX86_AMXTy(*unwrap(C));
+  return (LLVMTypeRef)Type::getX86_AMXTy(*unwrap(C));
 }
 
 LLVMTypeRef LLVMHalfType(void) {
@@ -732,10 +712,9 @@ LLVMTypeRef LLVMX86AMXType(void) {
 
 /*--.. Operations on function types ........................................--*/
 
-LLVMTypeRef LLVMFunctionType(LLVMTypeRef ReturnType,
-                             LLVMTypeRef *ParamTypes, unsigned ParamCount,
-                             LLVMBool IsVarArg) {
-  ArrayRef<Type*> Tys(unwrap(ParamTypes), ParamCount);
+LLVMTypeRef LLVMFunctionType(LLVMTypeRef ReturnType, LLVMTypeRef *ParamTypes,
+                             unsigned ParamCount, LLVMBool IsVarArg) {
+  ArrayRef<Type *> Tys(unwrap(ParamTypes), ParamCount);
   return wrap(FunctionType::get(unwrap(ReturnType), Tys, IsVarArg != 0));
 }
 
@@ -760,24 +739,22 @@ void LLVMGetParamTypes(LLVMTypeRef FunctionTy, LLVMTypeRef *Dest) {
 /*--.. Operations on struct types ..........................................--*/
 
 LLVMTypeRef LLVMStructTypeInContext(LLVMContextRef C, LLVMTypeRef *ElementTypes,
-                           unsigned ElementCount, LLVMBool Packed) {
-  ArrayRef<Type*> Tys(unwrap(ElementTypes), ElementCount);
+                                    unsigned ElementCount, LLVMBool Packed) {
+  ArrayRef<Type *> Tys(unwrap(ElementTypes), ElementCount);
   return wrap(StructType::get(*unwrap(C), Tys, Packed != 0));
 }
 
-LLVMTypeRef LLVMStructType(LLVMTypeRef *ElementTypes,
-                           unsigned ElementCount, LLVMBool Packed) {
+LLVMTypeRef LLVMStructType(LLVMTypeRef *ElementTypes, unsigned ElementCount,
+                           LLVMBool Packed) {
   return LLVMStructTypeInContext(LLVMGetGlobalContext(), ElementTypes,
                                  ElementCount, Packed);
 }
 
-LLVMTypeRef LLVMStructCreateNamed(LLVMContextRef C, const char *Name)
-{
+LLVMTypeRef LLVMStructCreateNamed(LLVMContextRef C, const char *Name) {
   return wrap(StructType::create(*unwrap(C), Name));
 }
 
-const char *LLVMGetStructName(LLVMTypeRef Ty)
-{
+const char *LLVMGetStructName(LLVMTypeRef Ty) {
   StructType *Type = unwrap<StructType>(Ty);
   if (!Type->hasName())
     return nullptr;
@@ -786,7 +763,7 @@ const char *LLVMGetStructName(LLVMTypeRef Ty)
 
 void LLVMStructSetBody(LLVMTypeRef StructTy, LLVMTypeRef *ElementTypes,
                        unsigned ElementCount, LLVMBool Packed) {
-  ArrayRef<Type*> Tys(unwrap(ElementTypes), ElementCount);
+  ArrayRef<Type *> Tys(unwrap(ElementTypes), ElementCount);
   unwrap<StructType>(StructTy)->setBody(Tys, Packed != 0);
 }
 
@@ -828,11 +805,11 @@ LLVMTypeRef LLVMGetTypeByName2(LLVMContextRef C, const char *Name) {
 /*--.. Operations on array, pointer, and vector types (sequence types) .....--*/
 
 void LLVMGetSubtypes(LLVMTypeRef Tp, LLVMTypeRef *Arr) {
-    int i = 0;
-    for (auto *T : unwrap(Tp)->subtypes()) {
-        Arr[i] = wrap(T);
-        i++;
-    }
+  int i = 0;
+  for (auto *T : unwrap(Tp)->subtypes()) {
+    Arr[i] = wrap(T);
+    i++;
+  }
 }
 
 LLVMTypeRef LLVMArrayType(LLVMTypeRef ElementType, unsigned ElementCount) {
@@ -847,9 +824,7 @@ LLVMTypeRef LLVMPointerType(LLVMTypeRef ElementType, unsigned AddressSpace) {
   return wrap(PointerType::get(unwrap(ElementType), AddressSpace));
 }
 
-LLVMBool LLVMPointerTypeIsOpaque(LLVMTypeRef Ty) {
-  return true;
-}
+LLVMBool LLVMPointerTypeIsOpaque(LLVMTypeRef Ty) { return true; }
 
 LLVMTypeRef LLVMVectorType(LLVMTypeRef ElementType, unsigned ElementCount) {
   return wrap(FixedVectorType::get(unwrap(ElementType), ElementCount));
@@ -868,7 +843,7 @@ LLVMTypeRef LLVMGetElementType(LLVMTypeRef WrappedTy) {
 }
 
 unsigned LLVMGetNumContainedTypes(LLVMTypeRef Tp) {
-    return unwrap(Tp)->getNumContainedTypes();
+  return unwrap(Tp)->getNumContainedTypes();
 }
 
 unsigned LLVMGetArrayLength(LLVMTypeRef ArrayTy) {
@@ -893,7 +868,7 @@ LLVMTypeRef LLVMPointerTypeInContext(LLVMContextRef C, unsigned AddressSpace) {
   return wrap(PointerType::get(*unwrap(C), AddressSpace));
 }
 
-LLVMTypeRef LLVMVoidTypeInContext(LLVMContextRef C)  {
+LLVMTypeRef LLVMVoidTypeInContext(LLVMContextRef C) {
   return wrap(Type::getVoidTy(*unwrap(C)));
 }
 LLVMTypeRef LLVMLabelTypeInContext(LLVMContextRef C) {
@@ -906,7 +881,7 @@ LLVMTypeRef LLVMMetadataTypeInContext(LLVMContextRef C) {
   return wrap(Type::getMetadataTy(*unwrap(C)));
 }
 
-LLVMTypeRef LLVMVoidType(void)  {
+LLVMTypeRef LLVMVoidType(void) {
   return LLVMVoidTypeInContext(LLVMGetGlobalContext());
 }
 LLVMTypeRef LLVMLabelType(void) {
@@ -933,10 +908,10 @@ LLVMTypeRef LLVMTypeOf(LLVMValueRef Val) {
 }
 
 LLVMValueKind LLVMGetValueKind(LLVMValueRef Val) {
-    switch(unwrap(Val)->getValueID()) {
+  switch (unwrap(Val)->getValueID()) {
 #define LLVM_C_API 1
-#define HANDLE_VALUE(Name) \
-  case Value::Name##Val: \
+#define HANDLE_VALUE(Name)                                                     \
+  case Value::Name##Val:                                                       \
     return LLVM##Name##ValueKind;
 #include "llvm/IR/Value.def"
   default:
@@ -966,7 +941,7 @@ void LLVMDumpValue(LLVMValueRef Val) {
   unwrap(Val)->print(errs(), /*IsForDebug=*/true);
 }
 
-char* LLVMPrintValueToString(LLVMValueRef Val) {
+char *LLVMPrintValueToString(LLVMValueRef Val) {
   std::string buf;
   raw_string_ostream os(buf);
 
@@ -1002,7 +977,7 @@ LLVMValueRef LLVMGetMetadata(LLVMValueRef Inst, unsigned KindID) {
 static MDNode *extractMDNode(MetadataAsValue *MAV) {
   Metadata *MD = MAV->getMetadata();
   assert((isa<MDNode>(MD) || isa<ConstantAsMetadata>(MD)) &&
-      "Expected a metadata node or a canonicalized constant");
+         "Expected a metadata node or a canonicalized constant");
 
   if (MDNode *N = dyn_cast<MDNode>(MD))
     return N;
@@ -1029,8 +1004,8 @@ llvm_getMetadata(size_t *NumEntries,
   AccessMD(MVEs);
 
   LLVMOpaqueValueMetadataEntry *Result =
-  static_cast<LLVMOpaqueValueMetadataEntry *>(
-                                              safe_malloc(MVEs.size() * sizeof(LLVMOpaqueValueMetadataEntry)));
+      static_cast<LLVMOpaqueValueMetadataEntry *>(
+          safe_malloc(MVEs.size() * sizeof(LLVMOpaqueValueMetadataEntry)));
   for (unsigned i = 0; i < MVEs.size(); ++i) {
     const auto &ModuleFlag = MVEs[i];
     Result[i].Kind = ModuleFlag.first;
@@ -1051,9 +1026,9 @@ LLVMInstructionGetAllMetadataOtherThanDebugLoc(LLVMValueRef Value,
 
 /*--.. Conversion functions ................................................--*/
 
-#define LLVM_DEFINE_VALUE_CAST(name)                                       \
-  LLVMValueRef LLVMIsA##name(LLVMValueRef Val) {                           \
-    return wrap(static_cast<Value*>(dyn_cast_or_null<name>(unwrap(Val)))); \
+#define LLVM_DEFINE_VALUE_CAST(name)                                           \
+  LLVMValueRef LLVMIsA##name(LLVMValueRef Val) {                               \
+    return wrap(static_cast<Value *>(dyn_cast_or_null<name>(unwrap(Val))));    \
   }
 
 LLVM_FOR_EACH_VALUE_SUBCLASS(LLVM_DEFINE_VALUE_CAST)
@@ -1096,13 +1071,9 @@ LLVMUseRef LLVMGetNextUse(LLVMUseRef U) {
   return nullptr;
 }
 
-LLVMValueRef LLVMGetUser(LLVMUseRef U) {
-  return wrap(unwrap(U)->getUser());
-}
+LLVMValueRef LLVMGetUser(LLVMUseRef U) { return wrap(unwrap(U)->getUser()); }
 
-LLVMValueRef LLVMGetUsedValue(LLVMUseRef U) {
-  return wrap(unwrap(U)->get());
-}
+LLVMValueRef LLVMGetUsedValue(LLVMUseRef U) { return wrap(unwrap(U)->get()); }
 
 /*--.. Operations on Users .................................................--*/
 
@@ -1165,9 +1136,7 @@ LLVMValueRef LLVMGetPoison(LLVMTypeRef Ty) {
   return wrap(PoisonValue::get(unwrap(Ty)));
 }
 
-LLVMBool LLVMIsConstant(LLVMValueRef Ty) {
-  return isa<Constant>(unwrap(Ty));
-}
+LLVMBool LLVMIsConstant(LLVMValueRef Ty) { return isa<Constant>(unwrap(Ty)); }
 
 LLVMBool LLVMIsNull(LLVMValueRef Val) {
   if (Constant *C = dyn_cast<Constant>(unwrap(Val)))
@@ -1175,9 +1144,7 @@ LLVMBool LLVMIsNull(LLVMValueRef Val) {
   return false;
 }
 
-LLVMBool LLVMIsUndef(LLVMValueRef Val) {
-  return isa<UndefValue>(unwrap(Val));
-}
+LLVMBool LLVMIsUndef(LLVMValueRef Val) { return isa<UndefValue>(unwrap(Val)); }
 
 LLVMBool LLVMIsPoison(LLVMValueRef Val) {
   return isa<PoisonValue>(unwrap(Val));
@@ -1196,7 +1163,8 @@ LLVMMetadataRef LLVMMDStringInContext2(LLVMContextRef C, const char *Str,
 
 LLVMMetadataRef LLVMMDNodeInContext2(LLVMContextRef C, LLVMMetadataRef *MDs,
                                      size_t Count) {
-  return wrap(MDNode::get(*unwrap(C), ArrayRef<Metadata*>(unwrap(MDs), Count)));
+  return wrap(
+      MDNode::get(*unwrap(C), ArrayRef<Metadata *>(unwrap(MDs), Count)));
 }
 
 LLVMValueRef LLVMMDStringInContext(LLVMContextRef C, const char *Str,
@@ -1303,13 +1271,14 @@ LLVMNamedMDNodeRef LLVMGetPreviousNamedMetadata(LLVMNamedMDNodeRef NMD) {
   return wrap(&*--I);
 }
 
-LLVMNamedMDNodeRef LLVMGetNamedMetadata(LLVMModuleRef M,
-                                        const char *Name, size_t NameLen) {
+LLVMNamedMDNodeRef LLVMGetNamedMetadata(LLVMModuleRef M, const char *Name,
+                                        size_t NameLen) {
   return wrap(unwrap(M)->getNamedMetadata(StringRef(Name, NameLen)));
 }
 
 LLVMNamedMDNodeRef LLVMGetOrInsertNamedMetadata(LLVMModuleRef M,
-                                                const char *Name, size_t NameLen) {
+                                                const char *Name,
+                                                size_t NameLen) {
   return wrap(unwrap(M)->getOrInsertNamedMetadata({Name, NameLen}));
 }
 
@@ -1352,7 +1321,7 @@ void LLVMGetNamedMetadataOperands(LLVMModuleRef M, const char *Name,
   if (!N)
     return;
   LLVMContext &Context = unwrap(M)->getContext();
-  for (unsigned i=0;i<N->getNumOperands();i++)
+  for (unsigned i = 0; i < N->getNumOperands(); i++)
     Dest[i] = wrap(MetadataAsValue::get(Context, N->getOperand(i)));
 }
 
@@ -1367,7 +1336,8 @@ void LLVMAddNamedMetadataOperand(LLVMModuleRef M, const char *Name,
 }
 
 const char *LLVMGetDebugLocDirectory(LLVMValueRef Val, unsigned *Length) {
-  if (!Length) return nullptr;
+  if (!Length)
+    return nullptr;
   StringRef S;
   if (const auto *I = dyn_cast<Instruction>(unwrap(Val))) {
     if (const auto &DL = I->getDebugLoc()) {
@@ -1391,7 +1361,8 @@ const char *LLVMGetDebugLocDirectory(LLVMValueRef Val, unsigned *Length) {
 }
 
 const char *LLVMGetDebugLocFilename(LLVMValueRef Val, unsigned *Length) {
-  if (!Length) return nullptr;
+  if (!Length)
+    return nullptr;
   StringRef S;
   if (const auto *I = dyn_cast<Instruction>(unwrap(Val))) {
     if (const auto &DL = I->getDebugLoc()) {
@@ -1454,15 +1425,15 @@ LLVMValueRef LLVMConstInt(LLVMTypeRef IntTy, unsigned long long N,
 LLVMValueRef LLVMConstIntOfArbitraryPrecision(LLVMTypeRef IntTy,
                                               unsigned NumWords,
                                               const uint64_t Words[]) {
-    IntegerType *Ty = unwrap<IntegerType>(IntTy);
-    return wrap(ConstantInt::get(
-        Ty->getContext(), APInt(Ty->getBitWidth(), ArrayRef(Words, NumWords))));
+  IntegerType *Ty = unwrap<IntegerType>(IntTy);
+  return wrap(ConstantInt::get(
+      Ty->getContext(), APInt(Ty->getBitWidth(), ArrayRef(Words, NumWords))));
 }
 
 LLVMValueRef LLVMConstIntOfString(LLVMTypeRef IntTy, const char Str[],
                                   uint8_t Radix) {
-  return wrap(ConstantInt::get(unwrap<IntegerType>(IntTy), StringRef(Str),
-                               Radix));
+  return wrap(
+      ConstantInt::get(unwrap<IntegerType>(IntTy), StringRef(Str), Radix));
 }
 
 LLVMValueRef LLVMConstIntOfStringAndSize(LLVMTypeRef IntTy, const char Str[],
@@ -1493,7 +1464,7 @@ long long LLVMConstIntGetSExtValue(LLVMValueRef ConstantVal) {
 }
 
 double LLVMConstRealGetDouble(LLVMValueRef ConstantVal, LLVMBool *LosesInfo) {
-  ConstantFP *cFP = unwrap<ConstantFP>(ConstantVal) ;
+  ConstantFP *cFP = unwrap<ConstantFP>(ConstantVal);
   Type *Ty = cFP->getType();
 
   if (Ty->isHalfTy() || Ty->isBFloatTy() || Ty->isFloatTy() ||
@@ -1504,7 +1475,8 @@ double LLVMConstRealGetDouble(LLVMValueRef ConstantVal, LLVMBool *LosesInfo) {
 
   bool APFLosesInfo;
   APFloat APF = cFP->getValueAPF();
-  APF.convert(APFloat::IEEEdouble(), APFloat::rmNearestTiesToEven, &APFLosesInfo);
+  APF.convert(APFloat::IEEEdouble(), APFloat::rmNearestTiesToEven,
+              &APFLosesInfo);
   *LosesInfo = APFLosesInfo;
   return APF.convertToDouble();
 }
@@ -1544,9 +1516,9 @@ const char *LLVMGetAsString(LLVMValueRef C, size_t *Length) {
   return Str.data();
 }
 
-LLVMValueRef LLVMConstArray(LLVMTypeRef ElementTy,
-                            LLVMValueRef *ConstantVals, unsigned Length) {
-  ArrayRef<Constant*> V(unwrap<Constant>(ConstantVals, Length), Length);
+LLVMValueRef LLVMConstArray(LLVMTypeRef ElementTy, LLVMValueRef *ConstantVals,
+                            unsigned Length) {
+  ArrayRef<Constant *> V(unwrap<Constant>(ConstantVals, Length), Length);
   return wrap(ConstantArray::get(ArrayType::get(unwrap(ElementTy), Length), V));
 }
 
@@ -1571,8 +1543,7 @@ LLVMValueRef LLVMConstStruct(LLVMValueRef *ConstantVals, unsigned Count,
 }
 
 LLVMValueRef LLVMConstNamedStruct(LLVMTypeRef StructTy,
-                                  LLVMValueRef *ConstantVals,
-                                  unsigned Count) {
+                                  LLVMValueRef *ConstantVals, unsigned Count) {
   Constant **Elements = unwrap<Constant>(ConstantVals, Count);
   StructType *Ty = unwrap<StructType>(StructTy);
 
@@ -1586,24 +1557,27 @@ LLVMValueRef LLVMConstVector(LLVMValueRef *ScalarConstantVals, unsigned Size) {
 
 /*-- Opcode mapping */
 
-static LLVMOpcode map_to_llvmopcode(int opcode)
-{
-    switch (opcode) {
-      default: llvm_unreachable("Unhandled Opcode.");
-#define HANDLE_INST(num, opc, clas) case num: return LLVM##opc;
+static LLVMOpcode map_to_llvmopcode(int opcode) {
+  switch (opcode) {
+  default:
+    llvm_unreachable("Unhandled Opcode.");
+#define HANDLE_INST(num, opc, clas)                                            \
+  case num:                                                                    \
+    return LLVM##opc;
 #include "llvm/IR/Instruction.def"
 #undef HANDLE_INST
-    }
+  }
 }
 
-static int map_from_llvmopcode(LLVMOpcode code)
-{
-    switch (code) {
-#define HANDLE_INST(num, opc, clas) case LLVM##opc: return num;
+static int map_from_llvmopcode(LLVMOpcode code) {
+  switch (code) {
+#define HANDLE_INST(num, opc, clas)                                            \
+  case LLVM##opc:                                                              \
+    return num;
 #include "llvm/IR/Instruction.def"
 #undef HANDLE_INST
-    }
-    llvm_unreachable("Unhandled Opcode.");
+  }
+  llvm_unreachable("Unhandled Opcode.");
 }
 
 /*--.. Constant expressions ................................................--*/
@@ -1632,7 +1606,6 @@ LLVMValueRef LLVMConstNUWNeg(LLVMValueRef ConstantVal) {
   return wrap(ConstantExpr::getNUWNeg(unwrap<Constant>(ConstantVal)));
 }
 
-
 LLVMValueRef LLVMConstNot(LLVMValueRef ConstantVal) {
   return wrap(ConstantExpr::getNot(unwrap<Constant>(ConstantVal)));
 }
@@ -1693,17 +1666,15 @@ LLVMValueRef LLVMConstXor(LLVMValueRef LHSConstant, LLVMValueRef RHSConstant) {
                                    unwrap<Constant>(RHSConstant)));
 }
 
-LLVMValueRef LLVMConstICmp(LLVMIntPredicate Predicate,
-                           LLVMValueRef LHSConstant, LLVMValueRef RHSConstant) {
-  return wrap(ConstantExpr::getICmp(Predicate,
-                                    unwrap<Constant>(LHSConstant),
+LLVMValueRef LLVMConstICmp(LLVMIntPredicate Predicate, LLVMValueRef LHSConstant,
+                           LLVMValueRef RHSConstant) {
+  return wrap(ConstantExpr::getICmp(Predicate, unwrap<Constant>(LHSConstant),
                                     unwrap<Constant>(RHSConstant)));
 }
 
 LLVMValueRef LLVMConstFCmp(LLVMRealPredicate Predicate,
                            LLVMValueRef LHSConstant, LLVMValueRef RHSConstant) {
-  return wrap(ConstantExpr::getFCmp(Predicate,
-                                    unwrap<Constant>(LHSConstant),
+  return wrap(ConstantExpr::getFCmp(Predicate, unwrap<Constant>(LHSConstant),
                                     unwrap<Constant>(RHSConstant)));
 }
 
@@ -1740,63 +1711,63 @@ LLVMValueRef LLVMConstInBoundsGEP2(LLVMTypeRef Ty, LLVMValueRef ConstantVal,
 }
 
 LLVMValueRef LLVMConstTrunc(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getTrunc(unwrap<Constant>(ConstantVal),
-                                     unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getTrunc(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstSExt(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getSExt(unwrap<Constant>(ConstantVal),
-                                    unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getSExt(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstZExt(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getZExt(unwrap<Constant>(ConstantVal),
-                                    unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getZExt(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstFPTrunc(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getFPTrunc(unwrap<Constant>(ConstantVal),
-                                       unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getFPTrunc(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstFPExt(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getFPExtend(unwrap<Constant>(ConstantVal),
-                                        unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getFPExtend(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstUIToFP(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getUIToFP(unwrap<Constant>(ConstantVal),
-                                      unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getUIToFP(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstSIToFP(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getSIToFP(unwrap<Constant>(ConstantVal),
-                                      unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getSIToFP(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstFPToUI(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getFPToUI(unwrap<Constant>(ConstantVal),
-                                      unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getFPToUI(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstFPToSI(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getFPToSI(unwrap<Constant>(ConstantVal),
-                                      unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getFPToSI(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstPtrToInt(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getPtrToInt(unwrap<Constant>(ConstantVal),
-                                        unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getPtrToInt(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstIntToPtr(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getIntToPtr(unwrap<Constant>(ConstantVal),
-                                        unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getIntToPtr(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstBitCast(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getBitCast(unwrap<Constant>(ConstantVal),
-                                       unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getBitCast(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstAddrSpaceCast(LLVMValueRef ConstantVal,
@@ -1836,8 +1807,8 @@ LLVMValueRef LLVMConstIntCast(LLVMValueRef ConstantVal, LLVMTypeRef ToType,
 }
 
 LLVMValueRef LLVMConstFPCast(LLVMValueRef ConstantVal, LLVMTypeRef ToType) {
-  return wrap(ConstantExpr::getFPCast(unwrap<Constant>(ConstantVal),
-                                      unwrap(ToType)));
+  return wrap(
+      ConstantExpr::getFPCast(unwrap<Constant>(ConstantVal), unwrap(ToType)));
 }
 
 LLVMValueRef LLVMConstExtractElement(LLVMValueRef VectorConstant,
@@ -1849,9 +1820,9 @@ LLVMValueRef LLVMConstExtractElement(LLVMValueRef VectorConstant,
 LLVMValueRef LLVMConstInsertElement(LLVMValueRef VectorConstant,
                                     LLVMValueRef ElementValueConstant,
                                     LLVMValueRef IndexConstant) {
-  return wrap(ConstantExpr::getInsertElement(unwrap<Constant>(VectorConstant),
-                                         unwrap<Constant>(ElementValueConstant),
-                                             unwrap<Constant>(IndexConstant)));
+  return wrap(ConstantExpr::getInsertElement(
+      unwrap<Constant>(VectorConstant), unwrap<Constant>(ElementValueConstant),
+      unwrap<Constant>(IndexConstant)));
 }
 
 LLVMValueRef LLVMConstShuffleVector(LLVMValueRef VectorAConstant,
@@ -1992,12 +1963,12 @@ void LLVMSetSection(LLVMValueRef Global, const char *Section) {
 
 LLVMVisibility LLVMGetVisibility(LLVMValueRef Global) {
   return static_cast<LLVMVisibility>(
-    unwrap<GlobalValue>(Global)->getVisibility());
+      unwrap<GlobalValue>(Global)->getVisibility());
 }
 
 void LLVMSetVisibility(LLVMValueRef Global, LLVMVisibility Viz) {
-  unwrap<GlobalValue>(Global)
-    ->setVisibility(static_cast<GlobalValue::VisibilityTypes>(Viz));
+  unwrap<GlobalValue>(Global)->setVisibility(
+      static_cast<GlobalValue::VisibilityTypes>(Viz));
 }
 
 LLVMDLLStorageClass LLVMGetDLLStorageClass(LLVMValueRef Global) {
@@ -2145,10 +2116,9 @@ LLVMValueRef LLVMAddGlobal(LLVMModuleRef M, LLVMTypeRef Ty, const char *Name) {
 LLVMValueRef LLVMAddGlobalInAddressSpace(LLVMModuleRef M, LLVMTypeRef Ty,
                                          const char *Name,
                                          unsigned AddressSpace) {
-  return wrap(new GlobalVariable(*unwrap(M), unwrap(Ty), false,
-                                 GlobalValue::ExternalLinkage, nullptr, Name,
-                                 nullptr, GlobalVariable::NotThreadLocal,
-                                 AddressSpace));
+  return wrap(new GlobalVariable(
+      *unwrap(M), unwrap(Ty), false, GlobalValue::ExternalLinkage, nullptr,
+      Name, nullptr, GlobalVariable::NotThreadLocal, AddressSpace));
 }
 
 LLVMValueRef LLVMGetNamedGlobal(LLVMModuleRef M, const char *Name) {
@@ -2192,15 +2162,15 @@ void LLVMDeleteGlobal(LLVMValueRef GlobalVar) {
 }
 
 LLVMValueRef LLVMGetInitializer(LLVMValueRef GlobalVar) {
-  GlobalVariable* GV = unwrap<GlobalVariable>(GlobalVar);
-  if ( !GV->hasInitializer() )
+  GlobalVariable *GV = unwrap<GlobalVariable>(GlobalVar);
+  if (!GV->hasInitializer())
     return nullptr;
   return wrap(GV->getInitializer());
 }
 
 void LLVMSetInitializer(LLVMValueRef GlobalVar, LLVMValueRef ConstantVal) {
-  unwrap<GlobalVariable>(GlobalVar)
-    ->setInitializer(unwrap<Constant>(ConstantVal));
+  unwrap<GlobalVariable>(GlobalVar)->setInitializer(
+      unwrap<Constant>(ConstantVal));
 }
 
 LLVMBool LLVMIsThreadLocal(LLVMValueRef GlobalVar) {
@@ -2276,8 +2246,8 @@ LLVMValueRef LLVMAddAlias2(LLVMModuleRef M, LLVMTypeRef ValueTy,
                                   unwrap<Constant>(Aliasee), unwrap(M)));
 }
 
-LLVMValueRef LLVMGetNamedGlobalAlias(LLVMModuleRef M,
-                                     const char *Name, size_t NameLen) {
+LLVMValueRef LLVMGetNamedGlobalAlias(LLVMModuleRef M, const char *Name,
+                                     size_t NameLen) {
   return wrap(unwrap(M)->getNamedAlias(StringRef(Name, NameLen)));
 }
 
@@ -2392,11 +2362,10 @@ static Intrinsic::ID llvm_map_to_intrinsic_id(unsigned ID) {
   return llvm::Intrinsic::ID(ID);
 }
 
-LLVMValueRef LLVMGetIntrinsicDeclaration(LLVMModuleRef Mod,
-                                         unsigned ID,
+LLVMValueRef LLVMGetIntrinsicDeclaration(LLVMModuleRef Mod, unsigned ID,
                                          LLVMTypeRef *ParamTypes,
                                          size_t ParamCount) {
-  ArrayRef<Type*> Tys(unwrap(ParamTypes), ParamCount);
+  ArrayRef<Type *> Tys(unwrap(ParamTypes), ParamCount);
   auto IID = llvm_map_to_intrinsic_id(ID);
   return wrap(llvm::Intrinsic::getDeclaration(unwrap(Mod), IID, Tys));
 }
@@ -2411,7 +2380,7 @@ const char *LLVMIntrinsicGetName(unsigned ID, size_t *NameLength) {
 LLVMTypeRef LLVMIntrinsicGetType(LLVMContextRef Ctx, unsigned ID,
                                  LLVMTypeRef *ParamTypes, size_t ParamCount) {
   auto IID = llvm_map_to_intrinsic_id(ID);
-  ArrayRef<Type*> Tys(unwrap(ParamTypes), ParamCount);
+  ArrayRef<Type *> Tys(unwrap(ParamTypes), ParamCount);
   return wrap(llvm::Intrinsic::getType(*unwrap(Ctx), IID, Tys));
 }
 
@@ -2420,7 +2389,7 @@ const char *LLVMIntrinsicCopyOverloadedName(unsigned ID,
                                             size_t ParamCount,
                                             size_t *NameLength) {
   auto IID = llvm_map_to_intrinsic_id(ID);
-  ArrayRef<Type*> Tys(unwrap(ParamTypes), ParamCount);
+  ArrayRef<Type *> Tys(unwrap(ParamTypes), ParamCount);
   auto Str = llvm::Intrinsic::getNameNoUnnamedTypes(IID, Tys);
   *NameLength = Str.length();
   return strdup(Str.c_str());
@@ -2451,13 +2420,12 @@ unsigned LLVMGetFunctionCallConv(LLVMValueRef Fn) {
 }
 
 void LLVMSetFunctionCallConv(LLVMValueRef Fn, unsigned CC) {
-  return unwrap<Function>(Fn)->setCallingConv(
-    static_cast<CallingConv::ID>(CC));
+  return unwrap<Function>(Fn)->setCallingConv(static_cast<CallingConv::ID>(CC));
 }
 
 const char *LLVMGetGC(LLVMValueRef Fn) {
   Function *F = unwrap<Function>(Fn);
-  return F->hasGC()? F->getGC().c_str() : nullptr;
+  return F->hasGC() ? F->getGC().c_str() : nullptr;
 }
 
 void LLVMSetGC(LLVMValueRef Fn, const char *GC) {
@@ -2577,18 +2545,16 @@ void LLVMSetParamAlignment(LLVMValueRef Arg, unsigned align) {
 
 /*--.. Operations on ifuncs ................................................--*/
 
-LLVMValueRef LLVMAddGlobalIFunc(LLVMModuleRef M,
-                                const char *Name, size_t NameLen,
-                                LLVMTypeRef Ty, unsigned AddrSpace,
-                                LLVMValueRef Resolver) {
-  return wrap(GlobalIFunc::create(unwrap(Ty), AddrSpace,
-                                  GlobalValue::ExternalLinkage,
-                                  StringRef(Name, NameLen),
-                                  unwrap<Constant>(Resolver), unwrap(M)));
+LLVMValueRef LLVMAddGlobalIFunc(LLVMModuleRef M, const char *Name,
+                                size_t NameLen, LLVMTypeRef Ty,
+                                unsigned AddrSpace, LLVMValueRef Resolver) {
+  return wrap(GlobalIFunc::create(
+      unwrap(Ty), AddrSpace, GlobalValue::ExternalLinkage,
+      StringRef(Name, NameLen), unwrap<Constant>(Resolver), unwrap(M)));
 }
 
-LLVMValueRef LLVMGetNamedGlobalIFunc(LLVMModuleRef M,
-                                     const char *Name, size_t NameLen) {
+LLVMValueRef LLVMGetNamedGlobalIFunc(LLVMModuleRef M, const char *Name,
+                                     size_t NameLen) {
   return wrap(unwrap(M)->getNamedIFunc(StringRef(Name, NameLen)));
 }
 
@@ -2643,7 +2609,7 @@ void LLVMRemoveGlobalIFunc(LLVMValueRef IFunc) {
 /*--.. Operations on basic blocks ..........................................--*/
 
 LLVMValueRef LLVMBasicBlockAsValue(LLVMBasicBlockRef BB) {
-  return wrap(static_cast<Value*>(unwrap(BB)));
+  return wrap(static_cast<Value *>(unwrap(BB)));
 }
 
 LLVMBool LLVMValueIsBasicBlock(LLVMValueRef Val) {
@@ -2670,7 +2636,8 @@ unsigned LLVMCountBasicBlocks(LLVMValueRef FnRef) {
   return unwrap<Function>(FnRef)->size();
 }
 
-void LLVMGetBasicBlocks(LLVMValueRef FnRef, LLVMBasicBlockRef *BasicBlocksRefs){
+void LLVMGetBasicBlocks(LLVMValueRef FnRef,
+                        LLVMBasicBlockRef *BasicBlocksRefs) {
   Function *Fn = unwrap<Function>(FnRef);
   for (BasicBlock &BB : *Fn)
     *BasicBlocksRefs++ = wrap(&BB);
@@ -2725,8 +2692,7 @@ void LLVMInsertExistingBasicBlockAfterInsertBlock(LLVMBuilderRef Builder,
   CurBB->getParent()->insert(std::next(CurBB->getIterator()), ToInsert);
 }
 
-void LLVMAppendExistingBasicBlock(LLVMValueRef Fn,
-                                  LLVMBasicBlockRef BB) {
+void LLVMAppendExistingBasicBlock(LLVMValueRef Fn, LLVMBasicBlockRef BB) {
   unwrap<Function>(Fn)->insert(unwrap<Function>(Fn)->end(), unwrap(BB));
 }
 
@@ -2884,8 +2850,7 @@ void LLVMAddCallSiteAttribute(LLVMValueRef C, LLVMAttributeIndex Idx,
   unwrap<CallBase>(C)->addAttributeAtIndex(Idx, unwrap(A));
 }
 
-unsigned LLVMGetCallSiteAttributeCount(LLVMValueRef C,
-                                       LLVMAttributeIndex Idx) {
+unsigned LLVMGetCallSiteAttributeCount(LLVMValueRef C, LLVMAttributeIndex Idx) {
   auto *Call = unwrap<CallBase>(C);
   auto AS = Call->getAttributes().getAttributes(Idx);
   return AS.getNumAttributes();
@@ -3063,7 +3028,7 @@ unsigned LLVMGetNumIndices(LLVMValueRef Inst) {
   if (auto *IV = dyn_cast<InsertValueInst>(I))
     return IV->getNumIndices();
   llvm_unreachable(
-    "LLVMGetNumIndices applies only to extractvalue and insertvalue!");
+      "LLVMGetNumIndices applies only to extractvalue and insertvalue!");
 }
 
 const unsigned *LLVMGetIndices(LLVMValueRef Inst) {
@@ -3073,10 +3038,9 @@ const unsigned *LLVMGetIndices(LLVMValueRef Inst) {
   if (auto *IV = dyn_cast<InsertValueInst>(I))
     return IV->getIndices().data();
   llvm_unreachable(
-    "LLVMGetIndices applies only to extractvalue and insertvalue!");
+      "LLVMGetIndices applies only to extractvalue and insertvalue!");
 }
 
-
 /*===-- Instruction builders ----------------------------------------------===*/
 
 LLVMBuilderRef LLVMCreateBuilderInContext(LLVMContextRef C) {
@@ -3105,7 +3069,7 @@ void LLVMPositionBuilderAtEnd(LLVMBuilderRef Builder, LLVMBasicBlockRef Block) {
 }
 
 LLVMBasicBlockRef LLVMGetInsertBlock(LLVMBuilderRef Builder) {
-   return wrap(unwrap(Builder)->GetInsertBlock());
+  return wrap(unwrap(Builder)->GetInsertBlock());
 }
 
 void LLVMClearInsertionPosition(LLVMBuilderRef Builder) {
@@ -3121,9 +3085,7 @@ void LLVMInsertIntoBuilderWithName(LLVMBuilderRef Builder, LLVMValueRef Instr,
   unwrap(Builder)->Insert(unwrap<Instruction>(Instr), Name);
 }
 
-void LLVMDisposeBuilder(LLVMBuilderRef Builder) {
-  delete unwrap(Builder);
-}
+void LLVMDisposeBuilder(LLVMBuilderRef Builder) { delete unwrap(Builder); }
 
 /*--.. Metadata builders ...................................................--*/
 
@@ -3161,9 +3123,8 @@ void LLVMAddMetadataToInst(LLVMBuilderRef Builder, LLVMValueRef Inst) {
 void LLVMBuilderSetDefaultFPMathTag(LLVMBuilderRef Builder,
                                     LLVMMetadataRef FPMathTag) {
 
-  unwrap(Builder)->setDefaultFPMathTag(FPMathTag
-                                       ? unwrap<MDNode>(FPMathTag)
-                                       : nullptr);
+  unwrap(Builder)->setDefaultFPMathTag(FPMathTag ? unwrap<MDNode>(FPMathTag)
+                                                 : nullptr);
 }
 
 LLVMMetadataRef LLVMBuilderGetDefaultFPMathTag(LLVMBuilderRef Builder) {
@@ -3260,8 +3221,8 @@ LLVMValueRef LLVMBuildCatchSwitch(LLVMBuilderRef B, LLVMValueRef ParentPad,
 
 LLVMValueRef LLVMBuildCatchRet(LLVMBuilderRef B, LLVMValueRef CatchPad,
                                LLVMBasicBlockRef BB) {
-  return wrap(unwrap(B)->CreateCatchRet(unwrap<CatchPadInst>(CatchPad),
-                                        unwrap(BB)));
+  return wrap(
+      unwrap(B)->CreateCatchRet(unwrap<CatchPadInst>(CatchPad), unwrap(BB)));
 }
 
 LLVMValueRef LLVMBuildCleanupRet(LLVMBuilderRef B, LLVMValueRef CatchPad,
@@ -3322,8 +3283,8 @@ LLVMValueRef LLVMGetParentCatchSwitch(LLVMValueRef CatchPad) {
 }
 
 void LLVMSetParentCatchSwitch(LLVMValueRef CatchPad, LLVMValueRef CatchSwitch) {
-  unwrap<CatchPadInst>(CatchPad)
-    ->setCatchSwitch(unwrap<CatchSwitchInst>(CatchSwitch));
+  unwrap<CatchPadInst>(CatchPad)->setCatchSwitch(
+      unwrap<CatchSwitchInst>(CatchSwitch));
 }
 
 /*--.. Funclets ...........................................................--*/
@@ -3343,18 +3304,18 @@ LLVMValueRef LLVMBuildAdd(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
   return wrap(unwrap(B)->CreateAdd(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNSWAdd(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNSWAdd(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNSWAdd(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNUWAdd(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNUWAdd(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNUWAdd(unwrap(LHS), unwrap(RHS), Name));
 }
 
 LLVMValueRef LLVMBuildFAdd(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+                           const char *Name) {
   return wrap(unwrap(B)->CreateFAdd(unwrap(LHS), unwrap(RHS), Name));
 }
 
@@ -3363,18 +3324,18 @@ LLVMValueRef LLVMBuildSub(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
   return wrap(unwrap(B)->CreateSub(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNSWSub(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNSWSub(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNSWSub(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNUWSub(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNUWSub(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNUWSub(unwrap(LHS), unwrap(RHS), Name));
 }
 
 LLVMValueRef LLVMBuildFSub(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+                           const char *Name) {
   return wrap(unwrap(B)->CreateFSub(unwrap(LHS), unwrap(RHS), Name));
 }
 
@@ -3383,18 +3344,18 @@ LLVMValueRef LLVMBuildMul(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
   return wrap(unwrap(B)->CreateMul(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNSWMul(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNSWMul(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNSWMul(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildNUWMul(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+LLVMValueRef LLVMBuildNUWMul(LLVMBuilderRef B, LLVMValueRef LHS,
+                             LLVMValueRef RHS, const char *Name) {
   return wrap(unwrap(B)->CreateNUWMul(unwrap(LHS), unwrap(RHS), Name));
 }
 
 LLVMValueRef LLVMBuildFMul(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
-                          const char *Name) {
+                           const char *Name) {
   return wrap(unwrap(B)->CreateFMul(unwrap(LHS), unwrap(RHS), Name));
 }
 
@@ -3468,11 +3429,11 @@ LLVMValueRef LLVMBuildXor(LLVMBuilderRef B, LLVMValueRef LHS, LLVMValueRef RHS,
   return wrap(unwrap(B)->CreateXor(unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildBinOp(LLVMBuilderRef B, LLVMOpcode Op,
-                            LLVMValueRef LHS, LLVMValueRef RHS,
-                            const char *Name) {
-  return wrap(unwrap(B)->CreateBinOp(Instruction::BinaryOps(map_from_llvmopcode(Op)), unwrap(LHS),
-                                     unwrap(RHS), Name));
+LLVMValueRef LLVMBuildBinOp(LLVMBuilderRef B, LLVMOpcode Op, LLVMValueRef LHS,
+                            LLVMValueRef RHS, const char *Name) {
+  return wrap(
+      unwrap(B)->CreateBinOp(Instruction::BinaryOps(map_from_llvmopcode(Op)),
+                             unwrap(LHS), unwrap(RHS), Name));
 }
 
 LLVMValueRef LLVMBuildNeg(LLVMBuilderRef B, LLVMValueRef V, const char *Name) {
@@ -3531,23 +3492,23 @@ void LLVMSetExact(LLVMValueRef DivOrShrInst, LLVMBool IsExact) {
 
 LLVMValueRef LLVMBuildMalloc(LLVMBuilderRef B, LLVMTypeRef Ty,
                              const char *Name) {
-  Type* ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
-  Constant* AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
+  Type *ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
+  Constant *AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
   AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, ITy);
-  Instruction* Malloc = CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(),
-                                               ITy, unwrap(Ty), AllocSize,
-                                               nullptr, nullptr, "");
+  Instruction *Malloc =
+      CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(), ITy, unwrap(Ty),
+                             AllocSize, nullptr, nullptr, "");
   return wrap(unwrap(B)->Insert(Malloc, Twine(Name)));
 }
 
 LLVMValueRef LLVMBuildArrayMalloc(LLVMBuilderRef B, LLVMTypeRef Ty,
                                   LLVMValueRef Val, const char *Name) {
-  Type* ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
-  Constant* AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
+  Type *ITy = Type::getInt32Ty(unwrap(B)->GetInsertBlock()->getContext());
+  Constant *AllocSize = ConstantExpr::getSizeOf(unwrap(Ty));
   AllocSize = ConstantExpr::getTruncOrBitCast(AllocSize, ITy);
-  Instruction* Malloc = CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(),
-                                               ITy, unwrap(Ty), AllocSize,
-                                               unwrap(Val), nullptr, "");
+  Instruction *Malloc =
+      CallInst::CreateMalloc(unwrap(B)->GetInsertBlock(), ITy, unwrap(Ty),
+                             AllocSize, unwrap(Val), nullptr, "");
   return wrap(unwrap(B)->Insert(Malloc, Twine(Name)));
 }
 
@@ -3558,19 +3519,17 @@ LLVMValueRef LLVMBuildMemSet(LLVMBuilderRef B, LLVMValueRef Ptr,
                                       MaybeAlign(Align)));
 }
 
-LLVMValueRef LLVMBuildMemCpy(LLVMBuilderRef B,
-                             LLVMValueRef Dst, unsigned DstAlign,
-                             LLVMValueRef Src, unsigned SrcAlign,
-                             LLVMValueRef Size) {
+LLVMValueRef LLVMBuildMemCpy(LLVMBuilderRef B, LLVMValueRef Dst,
+                             unsigned DstAlign, LLVMValueRef Src,
+                             unsigned SrcAlign, LLVMValueRef Size) {
   return wrap(unwrap(B)->CreateMemCpy(unwrap(Dst), MaybeAlign(DstAlign),
                                       unwrap(Src), MaybeAlign(SrcAlign),
                                       unwrap(Size)));
 }
 
-LLVMValueRef LLVMBuildMemMove(LLVMBuilderRef B,
-                              LLVMValueRef Dst, unsigned DstAlign,
-                              LLVMValueRef Src, unsigned SrcAlign,
-                              LLVMValueRef Size) {
+LLVMValueRef LLVMBuildMemMove(LLVMBuilderRef B, LLVMValueRef Dst,
+                              unsigned DstAlign, LLVMValueRef Src,
+                              unsigned SrcAlign, LLVMValueRef Size) {
   return wrap(unwrap(B)->CreateMemMove(unwrap(Dst), MaybeAlign(DstAlign),
                                        unwrap(Src), MaybeAlign(SrcAlign),
                                        unwrap(Size)));
@@ -3588,7 +3547,7 @@ LLVMValueRef LLVMBuildArrayAlloca(LLVMBuilderRef B, LLVMTypeRef Ty,
 
 LLVMValueRef LLVMBuildFree(LLVMBuilderRef B, LLVMValueRef PointerVal) {
   return wrap(unwrap(B)->Insert(
-     CallInst::CreateFree(unwrap(PointerVal), unwrap(B)->GetInsertBlock())));
+      CallInst::CreateFree(unwrap(PointerVal), unwrap(B)->GetInsertBlock())));
 }
 
 LLVMValueRef LLVMBuildLoad2(LLVMBuilderRef B, LLVMTypeRef Ty,
@@ -3603,15 +3562,20 @@ LLVMValueRef LLVMBuildStore(LLVMBuilderRef B, LLVMValueRef Val,
 
 static AtomicOrdering mapFromLLVMOrdering(LLVMAtomicOrdering Ordering) {
   switch (Ordering) {
-    case LLVMAtomicOrderingNotAtomic: return AtomicOrdering::NotAtomic;
-    case LLVMAtomicOrderingUnordered: return AtomicOrdering::Unordered;
-    case LLVMAtomicOrderingMonotonic: return AtomicOrdering::Monotonic;
-    case LLVMAtomicOrderingAcquire: return AtomicOrdering::Acquire;
-    case LLVMAtomicOrderingRelease: return AtomicOrdering::Release;
-    case LLVMAtomicOrderingAcquireRelease:
-      return AtomicOrdering::AcquireRelease;
-    case LLVMAtomicOrderingSequentiallyConsistent:
-      return AtomicOrdering::SequentiallyConsistent;
+  case LLVMAtomicOrderingNotAtomic:
+    return AtomicOrdering::NotAtomic;
+  case LLVMAtomicOrderingUnordered:
+    return AtomicOrdering::Unordered;
+  case LLVMAtomicOrderingMonotonic:
+    return AtomicOrdering::Monotonic;
+  case LLVMAtomicOrderingAcquire:
+    return AtomicOrdering::Acquire;
+  case LLVMAtomicOrderingRelease:
+    return AtomicOrdering::Release;
+  case LLVMAtomicOrderingAcquireRelease:
+    return AtomicOrdering::AcquireRelease;
+  case LLVMAtomicOrderingSequentiallyConsistent:
+    return AtomicOrdering::SequentiallyConsistent;
   }
 
   llvm_unreachable("Invalid LLVMAtomicOrdering value!");
@@ -3619,15 +3583,20 @@ static AtomicOrdering mapFromLLVMOrdering(LLVMAtomicOrdering Ordering) {
 
 static LLVMAtomicOrdering mapToLLVMOrdering(AtomicOrdering Ordering) {
   switch (Ordering) {
-    case AtomicOrdering::NotAtomic: return LLVMAtomicOrderingNotAtomic;
-    case AtomicOrdering::Unordered: return LLVMAtomicOrderingUnordered;
-    case AtomicOrdering::Monotonic: return LLVMAtomicOrderingMonotonic;
-    case AtomicOrdering::Acquire: return LLVMAtomicOrderingAcquire;
-    case AtomicOrdering::Release: return LLVMAtomicOrderingRelease;
-    case AtomicOrdering::AcquireRelease:
-      return LLVMAtomicOrderingAcquireRelease;
-    case AtomicOrdering::SequentiallyConsistent:
-      return LLVMAtomicOrderingSequentiallyConsistent;
+  case AtomicOrdering::NotAtomic:
+    return LLVMAtomicOrderingNotAtomic;
+  case AtomicOrdering::Unordered:
+    return LLVMAtomicOrderingUnordered;
+  case AtomicOrdering::Monotonic:
+    return LLVMAtomicOrderingMonotonic;
+  case AtomicOrdering::Acquire:
+    return LLVMAtomicOrderingAcquire;
+  case AtomicOrdering::Release:
+    return LLVMAtomicOrderingRelease;
+  case AtomicOrdering::AcquireRelease:
+    return LLVMAtomicOrderingAcquireRelease;
+  case AtomicOrdering::SequentiallyConsistent:
+    return LLVMAtomicOrderingSequentiallyConsistent;
   }
 
   llvm_unreachable("Invalid AtomicOrdering value!");
@@ -3635,21 +3604,36 @@ static LLVMAtomicOrdering mapToLLVMOrdering(AtomicOrdering Ordering) {
 
 static AtomicRMWInst::BinOp mapFromLLVMRMWBinOp(LLVMAtomicRMWBinOp BinOp) {
   switch (BinOp) {
-    case LLVMAtomicRMWBinOpXchg: return AtomicRMWInst::Xchg;
-    case LLVMAtomicRMWBinOpAdd: return AtomicRMWInst::Add;
-    case LLVMAtomicRMWBinOpSub: return AtomicRMWInst::Sub;
-    case LLVMAtomicRMWBinOpAnd: return AtomicRMWInst::And;
-    case LLVMAtomicRMWBinOpNand: return AtomicRMWInst::Nand;
-    case LLVMAtomicRMWBinOpOr: return AtomicRMWInst::Or;
-    case LLVMAtomicRMWBinOpXor: return AtomicRMWInst::Xor;
-    case LLVMAtomicRMWBinOpMax: return AtomicRMWInst::Max;
-    case LLVMAtomicRMWBinOpMin: return AtomicRMWInst::Min;
-    case LLVMAtomicRMWBinOpUMax: return AtomicRMWInst::UMax;
-    case LLVMAtomicRMWBinOpUMin: return AtomicRMWInst::UMin;
-    case LLVMAtomicRMWBinOpFAdd: return AtomicRMWInst::FAdd;
-    case LLVMAtomicRMWBinOpFSub: return AtomicRMWInst::FSub;
-    case LLVMAtomicRMWBinOpFMax: return AtomicRMWInst::FMax;
-    case LLVMAtomicRMWBinOpFMin: return AtomicRMWInst::FMin;
+  case LLVMAtomicRMWBinOpXchg:
+    return AtomicRMWInst::Xchg;
+  case LLVMAtomicRMWBinOpAdd:
+    return AtomicRMWInst::Add;
+  case LLVMAtomicRMWBinOpSub:
+    return AtomicRMWInst::Sub;
+  case LLVMAtomicRMWBinOpAnd:
+    return AtomicRMWInst::And;
+  case LLVMAtomicRMWBinOpNand:
+    return AtomicRMWInst::Nand;
+  case LLVMAtomicRMWBinOpOr:
+    return AtomicRMWInst::Or;
+  case LLVMAtomicRMWBinOpXor:
+    return AtomicRMWInst::Xor;
+  case LLVMAtomicRMWBinOpMax:
+    return AtomicRMWInst::Max;
+  case LLVMAtomicRMWBinOpMin:
+    return AtomicRMWInst::Min;
+  case LLVMAtomicRMWBinOpUMax:
+    return AtomicRMWInst::UMax;
+  case LLVMAtomicRMWBinOpUMin:
+    return AtomicRMWInst::UMin;
+  case LLVMAtomicRMWBinOpFAdd:
+    return AtomicRMWInst::FAdd;
+  case LLVMAtomicRMWBinOpFSub:
+    return AtomicRMWInst::FSub;
+  case LLVMAtomicRMWBinOpFMax:
+    return AtomicRMWInst::FMax;
+  case LLVMAtomicRMWBinOpFMin:
+    return AtomicRMWInst::FMin;
   }
 
   llvm_unreachable("Invalid LLVMAtomicRMWBinOp value!");
@@ -3657,22 +3641,38 @@ static AtomicRMWInst::BinOp mapFromLLVMRMWBinOp(LLVMAtomicRMWBinOp BinOp) {
 
 static LLVMAtomicRMWBinOp mapToLLVMRMWBinOp(AtomicRMWInst::BinOp BinOp) {
   switch (BinOp) {
-    case AtomicRMWInst::Xchg: return LLVMAtomicRMWBinOpXchg;
-    case AtomicRMWInst::Add: return LLVMAtomicRMWBinOpAdd;
-    case AtomicRMWInst::Sub: return LLVMAtomicRMWBinOpSub;
-    case AtomicRMWInst::And: return LLVMAtomicRMWBinOpAnd;
-    case AtomicRMWInst::Nand: return LLVMAtomicRMWBinOpNand;
-    case AtomicRMWInst::Or: return LLVMAtomicRMWBinOpOr;
-    case AtomicRMWInst::Xor: return LLVMAtomicRMWBinOpXor;
-    case AtomicRMWInst::Max: return LLVMAtomicRMWBinOpMax;
-    case AtomicRMWInst::Min: return LLVMAtomicRMWBinOpMin;
-    case AtomicRMWInst::UMax: return LLVMAtomicRMWBinOpUMax;
-    case AtomicRMWInst::UMin: return LLVMAtomicRMWBinOpUMin;
-    case AtomicRMWInst::FAdd: return LLVMAtomicRMWBinOpFAdd;
-    case AtomicRMWInst::FSub: return LLVMAtomicRMWBinOpFSub;
-    case AtomicRMWInst::FMax: return LLVMAtomicRMWBinOpFMax;
-    case AtomicRMWInst::FMin: return LLVMAtomicRMWBinOpFMin;
-    default: break;
+  case AtomicRMWInst::Xchg:
+    return LLVMAtomicRMWBinOpXchg;
+  case AtomicRMWInst::Add:
+    return LLVMAtomicRMWBinOpAdd;
+  case AtomicRMWInst::Sub:
+    return LLVMAtomicRMWBinOpSub;
+  case AtomicRMWInst::And:
+    return LLVMAtomicRMWBinOpAnd;
+  case AtomicRMWInst::Nand:
+    return LLVMAtomicRMWBinOpNand;
+  case AtomicRMWInst::Or:
+    return LLVMAtomicRMWBinOpOr;
+  case AtomicRMWInst::Xor:
+    return LLVMAtomicRMWBinOpXor;
+  case AtomicRMWInst::Max:
+    return LLVMAtomicRMWBinOpMax;
+  case AtomicRMWInst::Min:
+    return LLVMAtomicRMWBinOpMin;
+  case AtomicRMWInst::UMax:
+    return LLVMAtomicRMWBinOpUMax;
+  case AtomicRMWInst::UMin:
+    return LLVMAtomicRMWBinOpUMin;
+  case AtomicRMWInst::FAdd:
+    return LLVMAtomicRMWBinOpFAdd;
+  case AtomicRMWInst::FSub:
+    return LLVMAtomicRMWBinOpFSub;
+  case AtomicRMWInst::FMax:
+    return LLVMAtomicRMWBinOpFMax;
+  case AtomicRMWInst::FMin:
+    return LLVMAtomicRMWBinOpFMin;
+  default:
+    break;
   }
 
   llvm_unreachable("Invalid AtomicRMWBinOp value!");
@@ -3682,11 +3682,9 @@ static LLVMAtomicRMWBinOp mapToLLVMRMWBinOp(AtomicRMWInst::BinOp BinOp) {
 // "syncscope"?
 LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
                             LLVMBool isSingleThread, const char *Name) {
-  return wrap(
-    unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
-                           isSingleThread ? SyncScope::SingleThread
-                                          : SyncScope::System,
-                           Name));
+  return wrap(unwrap(B)->CreateFence(
+      mapFromLLVMOrdering(Ordering),
+      isSingleThread ? SyncScope::SingleThread : SyncScope::System, Name));
 }
 
 LLVMValueRef LLVMBuildGEP2(LLVMBuilderRef B, LLVMTypeRef Ty,
@@ -3844,31 +3842,33 @@ LLVMValueRef LLVMBuildBitCast(LLVMBuilderRef B, LLVMValueRef Val,
 
 LLVMValueRef LLVMBuildAddrSpaceCast(LLVMBuilderRef B, LLVMValueRef Val,
                                     LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateAddrSpaceCast(unwrap(Val), unwrap(DestTy), Name));
+  return wrap(
+      unwrap(B)->CreateAddrSpaceCast(unwrap(Val), unwrap(DestTy), Name));
 }
 
 LLVMValueRef LLVMBuildZExtOrBitCast(LLVMBuilderRef B, LLVMValueRef Val,
                                     LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateZExtOrBitCast(unwrap(Val), unwrap(DestTy),
-                                             Name));
+  return wrap(
+      unwrap(B)->CreateZExtOrBitCast(unwrap(Val), unwrap(DestTy), Name));
 }
 
 LLVMValueRef LLVMBuildSExtOrBitCast(LLVMBuilderRef B, LLVMValueRef Val,
                                     LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateSExtOrBitCast(unwrap(Val), unwrap(DestTy),
-                                             Name));
+  return wrap(
+      unwrap(B)->CreateSExtOrBitCast(unwrap(Val), unwrap(DestTy), Name));
 }
 
 LLVMValueRef LLVMBuildTruncOrBitCast(LLVMBuilderRef B, LLVMValueRef Val,
                                      LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateTruncOrBitCast(unwrap(Val), unwrap(DestTy),
-                                              Name));
+  return wrap(
+      unwrap(B)->CreateTruncOrBitCast(unwrap(Val), unwrap(DestTy), Name));
 }
 
 LLVMValueRef LLVMBuildCast(LLVMBuilderRef B, LLVMOpcode Op, LLVMValueRef Val,
                            LLVMTypeRef DestTy, const char *Name) {
-  return wrap(unwrap(B)->CreateCast(Instruction::CastOps(map_from_llvmopcode(Op)), unwrap(Val),
-                                    unwrap(DestTy), Name));
+  return wrap(
+      unwrap(B)->CreateCast(Instruction::CastOps(map_from_llvmopcode(Op)),
+                            unwrap(Val), unwrap(DestTy), Name));
 }
 
 LLVMValueRef LLVMBuildPointerCast(LLVMBuilderRef B, LLVMValueRef Val,
@@ -3886,7 +3886,7 @@ LLVMValueRef LLVMBuildIntCast2(LLVMBuilderRef B, LLVMValueRef Val,
 LLVMValueRef LLVMBuildIntCast(LLVMBuilderRef B, LLVMValueRef Val,
                               LLVMTypeRef DestTy, const char *Name) {
   return wrap(unwrap(B)->CreateIntCast(unwrap(Val), unwrap(DestTy),
-                                       /*isSigned*/true, Name));
+                                       /*isSigned*/ true, Name));
 }
 
 LLVMValueRef LLVMBuildFPCast(LLVMBuilderRef B, LLVMValueRef Val,
@@ -3933,19 +3933,19 @@ LLVMValueRef LLVMBuildCall2(LLVMBuilderRef B, LLVMTypeRef Ty, LLVMValueRef Fn,
 LLVMValueRef LLVMBuildSelect(LLVMBuilderRef B, LLVMValueRef If,
                              LLVMValueRef Then, LLVMValueRef Else,
                              const char *Name) {
-  return wrap(unwrap(B)->CreateSelect(unwrap(If), unwrap(Then), unwrap(Else),
-                                      Name));
+  return wrap(
+      unwrap(B)->CreateSelect(unwrap(If), unwrap(Then), unwrap(Else), Name));
 }
 
-LLVMValueRef LLVMBuildVAArg(LLVMBuilderRef B, LLVMValueRef List,
-                            LLVMTypeRef Ty, const char *Name) {
+LLVMValueRef LLVMBuildVAArg(LLVMBuilderRef B, LLVMValueRef List, LLVMTypeRef Ty,
+                            const char *Name) {
   return wrap(unwrap(B)->CreateVAArg(unwrap(List), unwrap(Ty), Name));
 }
 
 LLVMValueRef LLVMBuildExtractElement(LLVMBuilderRef B, LLVMValueRef VecVal,
-                                      LLVMValueRef Index, const char *Name) {
-  return wrap(unwrap(B)->CreateExtractElement(unwrap(VecVal), unwrap(Index),
-                                              Name));
+                                     LLVMValueRef Index, const char *Name) {
+  return wrap(
+      unwrap(B)->CreateExtractElement(unwrap(VecVal), unwrap(Index), Name));
 }
 
 LLVMValueRef LLVMBuildInsertElement(LLVMBuilderRef B, LLVMValueRef VecVal,
@@ -3992,14 +3992,14 @@ LLVMValueRef LLVMBuildIsNotNull(LLVMBuilderRef B, LLVMValueRef Val,
 LLVMValueRef LLVMBuildPtrDiff2(LLVMBuilderRef B, LLVMTypeRef ElemTy,
                                LLVMValueRef LHS, LLVMValueRef RHS,
                                const char *Name) {
-  return wrap(unwrap(B)->CreatePtrDiff(unwrap(ElemTy), unwrap(LHS),
-                                       unwrap(RHS), Name));
+  return wrap(
+      unwrap(B)->CreatePtrDiff(unwrap(ElemTy), unwrap(LHS), unwrap(RHS), Name));
 }
 
-LLVMValueRef LLVMBuildAtomicRMW(LLVMBuilderRef B,LLVMAtomicRMWBinOp op,
-                               LLVMValueRef PTR, LLVMValueRef Val,
-                               LLVMAtomicOrdering ordering,
-                               LLVMBool singleThread) {
+LLVMValueRef LLVMBuildAtomicRMW(LLVMBuilderRef B, LLVMAtomicRMWBinOp op,
+                                LLVMValueRef PTR, LLVMValueRef Val,
+                                LLVMAtomicOrdering ordering,
+                                LLVMBool singleThread) {
   AtomicRMWInst::BinOp intop = mapFromLLVMRMWBinOp(op);
   return wrap(unwrap(B)->CreateAtomicRMW(
       intop, unwrap(PTR), unwrap(Val), MaybeAlign(),
@@ -4040,7 +4040,7 @@ LLVMBool LLVMIsAtomicSingleThread(LLVMValueRef AtomicInst) {
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
     return I->getSyncScopeID() == SyncScope::SingleThread;
   return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
-             SyncScope::SingleThread;
+         SyncScope::SingleThread;
 }
 
 void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
@@ -4052,7 +4052,7 @@ void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
   return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
 }
 
-LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst)  {
+LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst) {
   Value *P = unwrap(CmpXchgInst);
   return mapToLLVMOrdering(cast<AtomicCmpXchgInst>(P)->getSuccessOrdering());
 }
@@ -4065,7 +4065,7 @@ void LLVMSetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst,
   return cast<AtomicCmpXchgInst>(P)->setSuccessOrdering(O);
 }
 
-LLVMAtomicOrdering LLVMGetCmpXchgFailureOrdering(LLVMValueRef CmpXchgInst)  {
+LLVMAtomicOrdering LLVMGetCmpXchgFailureOrdering(LLVMValueRef CmpXchgInst) {
   Value *P = unwrap(CmpXchgInst);
   return mapToLLVMOrdering(cast<AtomicCmpXchgInst>(P)->getFailureOrdering());
 }
@@ -4085,17 +4085,12 @@ LLVMCreateModuleProviderForExistingModule(LLVMModuleRef M) {
   return reinterpret_cast<LLVMModuleProviderRef>(M);
 }
 
-void LLVMDisposeModuleProvider(LLVMModuleProviderRef MP) {
-  delete unwrap(MP);
-}
-
+void LLVMDisposeModuleProvider(LLVMModuleProviderRef MP) { delete unwrap(MP); }
 
 /*===-- Memory buffers ----------------------------------------------------===*/
 
 LLVMBool LLVMCreateMemoryBufferWithContentsOfFile(
-    const char *Path,
-    LLVMMemoryBufferRef *OutMemBuf,
-    char **OutMessage) {
+    const char *Path, LLVMMemoryBufferRef *OutMemBuf, char **OutMessage) {
 
   ErrorOr<std::unique_ptr<MemoryBuffer>> MBOrErr = MemoryBuffer::getFile(Path);
   if (std::error_code EC = MBOrErr.getError()) {
@@ -4118,24 +4113,21 @@ LLVMBool LLVMCreateMemoryBufferWithSTDIN(LLVMMemoryBufferRef *OutMemBuf,
 }
 
 LLVMMemoryBufferRef LLVMCreateMemoryBufferWithMemoryRange(
-    const char *InputData,
-    size_t InputDataLength,
-    const char *BufferName,
+    const char *InputData, size_t InputDataLength, const char *BufferName,
     LLVMBool RequiresNullTerminator) {
 
   return wrap(MemoryBuffer::getMemBuffer(StringRef(InputData, InputDataLength),
                                          StringRef(BufferName),
-                                         RequiresNullTerminator).release());
+                                         RequiresNullTerminator)
+                  .release());
 }
 
 LLVMMemoryBufferRef LLVMCreateMemoryBufferWithMemoryRangeCopy(
-    const char *InputData,
-    size_t InputDataLength,
-    const char *BufferName) {
+    const char *InputData, size_t InputDataLength, const char *BufferName) {
 
-  return wrap(
-      MemoryBuffer::getMemBufferCopy(StringRef(InputData, InputDataLength),
-                                     StringRef(BufferName)).release());
+  return wrap(MemoryBuffer::getMemBufferCopy(
+                  StringRef(InputData, InputDataLength), StringRef(BufferName))
+                  .release());
 }
 
 const char *LLVMGetBufferStart(LLVMMemoryBufferRef MemBuf) {
@@ -4162,7 +4154,7 @@ LLVMPassManagerRef LLVMCreateFunctionPassManagerForModule(LLVMModuleRef M) {
 
 LLVMPassManagerRef LLVMCreateFunctionPassManager(LLVMModuleProviderRef P) {
   return LLVMCreateFunctionPassManagerForModule(
-                                            reinterpret_cast<LLVMModuleRef>(P));
+      reinterpret_cast<LLVMModuleRef>(P));
 }
 
 LLVMBool LLVMRunPassManager(LLVMPassManagerRef PM, LLVMModuleRef M) {
@@ -4181,19 +4173,12 @@ LLVMBool LLVMFinalizeFunctionPassManager(LLVMPassManagerRef FPM) {
   return unwrap<legacy::FunctionPassManager>(FPM)->doFinalization();
 }
 
-void LLVMDisposePassManager(LLVMPassManagerRef PM) {
-  delete unwrap(PM);
-}
+void LLVMDisposePassManager(LLVMPassManagerRef PM) { delete unwrap(PM); }
 
 /*===-- Threading ------------------------------------------------------===*/
 
-LLVMBool LLVMStartMultithreaded() {
-  return LLVMIsMultithreaded();
-}
+LLVMBool LLVMStartMultithreaded() { return LLVMIsMultithreaded(); }
 
-void LLVMStopMultithreaded() {
-}
+void LLVMStopMultithreaded() {}
 
-LLVMBool LLVMIsMultithreaded() {
-  return llvm_is_multithreaded();
-}
+LLVMBool LLVMIsMultithreaded() { return llvm_is_multithreaded(); }
diff --git a/llvm/lib/IR/DIBuilder.cpp b/llvm/lib/IR/DIBuilder.cpp
index 1ce8c17f8a880f6..de400b77acd7b98 100644
--- a/llvm/lib/IR/DIBuilder.cpp
+++ b/llvm/lib/IR/DIBuilder.cpp
@@ -70,8 +70,8 @@ void DIBuilder::finalize() {
 
   if (!AllEnumTypes.empty())
     CUNode->replaceEnumTypes(MDTuple::get(
-        VMContext, SmallVector<Metadata *, 16>(AllEnumTypes.begin(),
-                                               AllEnumTypes.end())));
+        VMContext,
+        SmallVector<Metadata *, 16>(AllEnumTypes.begin(), AllEnumTypes.end())));
 
   SmallVector<Metadata *, 16> RetainValues;
   // Declarations and definitions of the same type may be retained. Some
@@ -755,8 +755,7 @@ DIGlobalVariable *DIBuilder::createTempGlobalVariableFwdDecl(
 }
 
 static DILocalVariable *createLocalVariable(
-    LLVMContext &VMContext,
-    SmallVectorImpl<TrackingMDNodeRef> &PreservedNodes,
+    LLVMContext &VMContext, SmallVectorImpl<TrackingMDNodeRef> &PreservedNodes,
     DIScope *Context, StringRef Name, unsigned ArgNo, DIFile *File,
     unsigned LineNo, DIType *Ty, bool AlwaysPreserve, DINode::DIFlags Flags,
     uint32_t AlignInBits, DINodeArray Annotations = nullptr) {
@@ -799,7 +798,7 @@ DILocalVariable *DIBuilder::createParameterVariable(
 }
 
 DILabel *DIBuilder::createLabel(DIScope *Context, StringRef Name, DIFile *File,
-                                 unsigned LineNo, bool AlwaysPreserve) {
+                                unsigned LineNo, bool AlwaysPreserve) {
   auto *Scope = cast<DILocalScope>(Context);
   auto *Node = DILabel::get(VMContext, Scope, Name, File, LineNo);
 
diff --git a/llvm/lib/IR/DataLayout.cpp b/llvm/lib/IR/DataLayout.cpp
index 53842b184ed6be0..8a5ef8634c34eee 100644
--- a/llvm/lib/IR/DataLayout.cpp
+++ b/llvm/lib/IR/DataLayout.cpp
@@ -155,8 +155,7 @@ PointerAlignElem PointerAlignElem::getInBits(uint32_t AddressSpace,
   return retval;
 }
 
-bool
-PointerAlignElem::operator==(const PointerAlignElem &rhs) const {
+bool PointerAlignElem::operator==(const PointerAlignElem &rhs) const {
   return (ABIAlign == rhs.ABIAlign && AddressSpace == rhs.AddressSpace &&
           PrefAlign == rhs.PrefAlign && TypeBitWidth == rhs.TypeBitWidth &&
           IndexBitWidth == rhs.IndexBitWidth);
@@ -245,7 +244,8 @@ static Error split(StringRef Str, char Separator,
 
 /// Get an unsigned integer, including error checks.
 template <typename IntTy> static Error getInt(StringRef R, IntTy &Result) {
-  bool error = R.getAsInteger(10, Result); (void)error;
+  bool error = R.getAsInteger(10, Result);
+  (void)error;
   if (error)
     return reportError("not a number, or does not fit in an unsigned int");
   return Error::success();
@@ -285,7 +285,7 @@ Error DataLayout::parseSpecifier(StringRef Desc) {
       return Err;
 
     // Aliases used below.
-    StringRef &Tok  = Split.first;  // Current token.
+    StringRef &Tok = Split.first;   // Current token.
     StringRef &Rest = Split.second; // The rest of the string.
 
     if (Tok == "ni") {
@@ -388,11 +388,20 @@ Error DataLayout::parseSpecifier(StringRef Desc) {
     case 'a': {
       AlignTypeEnum AlignType;
       switch (Specifier) {
-      default: llvm_unreachable("Unexpected specifier!");
-      case 'i': AlignType = INTEGER_ALIGN; break;
-      case 'v': AlignType = VECTOR_ALIGN; break;
-      case 'f': AlignType = FLOAT_ALIGN; break;
-      case 'a': AlignType = AGGREGATE_ALIGN; break;
+      default:
+        llvm_unreachable("Unexpected specifier!");
+      case 'i':
+        AlignType = INTEGER_ALIGN;
+        break;
+      case 'v':
+        AlignType = VECTOR_ALIGN;
+        break;
+      case 'f':
+        AlignType = FLOAT_ALIGN;
+        break;
+      case 'a':
+        AlignType = AGGREGATE_ALIGN;
+        break;
       }
 
       // Bit size.
@@ -447,7 +456,7 @@ Error DataLayout::parseSpecifier(StringRef Desc) {
 
       break;
     }
-    case 'n':  // Native integer types.
+    case 'n': // Native integer types.
       while (true) {
         unsigned Width;
         if (Error Err = getInt(Tok, Width))
@@ -515,7 +524,7 @@ Error DataLayout::parseSpecifier(StringRef Desc) {
         return reportError("Expected mangling specifier in datalayout string");
       if (Rest.size() > 1)
         return reportError("Unknown mangling specifier in datalayout string");
-      switch(Rest[0]) {
+      switch (Rest[0]) {
       default:
         return reportError("Unknown mangling in datalayout string");
       case 'e':
@@ -550,9 +559,7 @@ Error DataLayout::parseSpecifier(StringRef Desc) {
   return Error::success();
 }
 
-DataLayout::DataLayout(const Module *M) {
-  init(M);
-}
+DataLayout::DataLayout(const Module *M) { init(M); }
 
 void DataLayout::init(const Module *M) { *this = M->getDataLayout(); }
 
@@ -632,8 +639,8 @@ DataLayout::getPointerAlignElem(uint32_t AddressSpace) const {
   if (AddressSpace != 0) {
     auto I = lower_bound(Pointers, AddressSpace,
                          [](const PointerAlignElem &A, uint32_t AddressSpace) {
-      return A.AddressSpace < AddressSpace;
-    });
+                           return A.AddressSpace < AddressSpace;
+                         });
     if (I != Pointers.end() && I->AddressSpace == AddressSpace)
       return *I;
   }
@@ -652,8 +659,8 @@ Error DataLayout::setPointerAlignmentInBits(uint32_t AddrSpace, Align ABIAlign,
 
   auto I = lower_bound(Pointers, AddrSpace,
                        [](const PointerAlignElem &A, uint32_t AddressSpace) {
-    return A.AddressSpace < AddressSpace;
-  });
+                         return A.AddressSpace < AddressSpace;
+                       });
   if (I == Pointers.end() || I->AddressSpace != AddrSpace) {
     Pointers.insert(I,
                     PointerAlignElem::getInBits(AddrSpace, ABIAlign, PrefAlign,
@@ -681,7 +688,7 @@ Align DataLayout::getIntegerAlignment(uint32_t BitWidth,
 namespace {
 
 class StructLayoutMap {
-  using LayoutInfoTy = DenseMap<StructType*, StructLayout*>;
+  using LayoutInfoTy = DenseMap<StructType *, StructLayout *>;
   LayoutInfoTy LayoutInfo;
 
 public:
@@ -694,9 +701,7 @@ class StructLayoutMap {
     }
   }
 
-  StructLayout *&operator[](StructType *STy) {
-    return LayoutInfo[STy];
-  }
+  StructLayout *&operator[](StructType *STy) { return LayoutInfo[STy]; }
 };
 
 } // end anonymous namespace
@@ -711,17 +716,16 @@ void DataLayout::clear() {
   LayoutMap = nullptr;
 }
 
-DataLayout::~DataLayout() {
-  clear();
-}
+DataLayout::~DataLayout() { clear(); }
 
 const StructLayout *DataLayout::getStructLayout(StructType *Ty) const {
   if (!LayoutMap)
     LayoutMap = new StructLayoutMap();
 
-  StructLayoutMap *STM = static_cast<StructLayoutMap*>(LayoutMap);
+  StructLayoutMap *STM = static_cast<StructLayoutMap *>(LayoutMap);
   StructLayout *&SL = (*STM)[Ty];
-  if (SL) return SL;
+  if (SL)
+    return SL;
 
   // Otherwise, create the struct layout.  Because it is variable length, we
   // malloc it, then use placement new.
@@ -794,7 +798,7 @@ Align DataLayout::getAlignment(Type *Ty, bool abi_or_pref) const {
     unsigned AS = cast<PointerType>(Ty)->getAddressSpace();
     return abi_or_pref ? getPointerABIAlignment(AS)
                        : getPointerPrefAlignment(AS);
-    }
+  }
   case Type::ArrayTyID:
     return getAlignment(cast<ArrayType>(Ty)->getElementType(), abi_or_pref);
 
@@ -888,7 +892,8 @@ Type *DataLayout::getIntPtrType(Type *Ty) const {
   return IntTy;
 }
 
-Type *DataLayout::getSmallestLegalIntType(LLVMContext &C, unsigned Width) const {
+Type *DataLayout::getSmallestLegalIntType(LLVMContext &C,
+                                          unsigned Width) const {
   for (unsigned LegalIntWidth : LegalIntWidths)
     if (Width <= LegalIntWidth)
       return Type::getIntNTy(C, LegalIntWidth);
@@ -919,9 +924,9 @@ int64_t DataLayout::getIndexedOffsetInType(Type *ElemTy,
                                            ArrayRef<Value *> Indices) const {
   int64_t Result = 0;
 
-  generic_gep_type_iterator<Value* const*>
-    GTI = gep_type_begin(ElemTy, Indices),
-    GTE = gep_type_end(ElemTy, Indices);
+  generic_gep_type_iterator<Value *const *> GTI =
+                                                gep_type_begin(ElemTy, Indices),
+                                            GTE = gep_type_end(ElemTy, Indices);
   for (; GTI != GTE; ++GTI) {
     Value *Idx = GTI.getOperand();
     if (StructType *STy = GTI.getStructTypeOrNull()) {
diff --git a/llvm/lib/IR/DebugInfo.cpp b/llvm/lib/IR/DebugInfo.cpp
index 48b5501c55ba47d..e04c275457bc58d 100644
--- a/llvm/lib/IR/DebugInfo.cpp
+++ b/llvm/lib/IR/DebugInfo.cpp
@@ -469,10 +469,10 @@ static MDNode *stripDebugLocFromLoopID(MDNode *N) {
   // needed by updateLoopMetadataDebugLocationsImpl; the use of
   // count_if avoids an early exit.
   if (!llvm::count_if(llvm::drop_begin(N->operands()),
-                     [&Visited, &DILocationReachable](const MDOperand &Op) {
-                       return isDILocationReachable(
-                                  Visited, DILocationReachable, Op.get());
-                     }))
+                      [&Visited, &DILocationReachable](const MDOperand &Op) {
+                        return isDILocationReachable(
+                            Visited, DILocationReachable, Op.get());
+                      }))
     return N;
 
   Visited.clear();
@@ -536,8 +536,7 @@ bool llvm::StripDebugInfo(Module &M) {
   for (NamedMDNode &NMD : llvm::make_early_inc_range(M.named_metadata())) {
     // We're stripping debug info, and without them, coverage information
     // doesn't quite make sense.
-    if (NMD.getName().startswith("llvm.dbg.") ||
-        NMD.getName() == "llvm.gcov") {
+    if (NMD.getName().startswith("llvm.dbg.") || NMD.getName() == "llvm.gcov") {
       NMD.eraseFromParent();
       Changed = true;
     }
@@ -972,9 +971,7 @@ pack_into_DISPFlags(bool IsLocalToUnit, bool IsDefinition, bool IsOptimized) {
   return DISubprogram::toSPFlags(IsLocalToUnit, IsDefinition, IsOptimized);
 }
 
-unsigned LLVMDebugMetadataVersion() {
-  return DEBUG_METADATA_VERSION;
-}
+unsigned LLVMDebugMetadataVersion() { return DEBUG_METADATA_VERSION; }
 
 LLVMDIBuilderRef LLVMCreateDIBuilderDisallowUnresolved(LLVMModuleRef M) {
   return wrap(new DIBuilder(*unwrap(M), false));
@@ -992,9 +989,7 @@ LLVMBool LLVMStripModuleDebugInfo(LLVMModuleRef M) {
   return StripDebugInfo(*unwrap(M));
 }
 
-void LLVMDisposeDIBuilder(LLVMDIBuilderRef Builder) {
-  delete unwrap(Builder);
-}
+void LLVMDisposeDIBuilder(LLVMDIBuilderRef Builder) { delete unwrap(Builder); }
 
 void LLVMDIBuilderFinalize(LLVMDIBuilderRef Builder) {
   unwrap(Builder)->finalize();
@@ -1025,10 +1020,11 @@ LLVMMetadataRef LLVMDIBuilderCreateCompileUnit(
       StringRef(SysRoot, SysRootLen), StringRef(SDK, SDKLen)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateFile(LLVMDIBuilderRef Builder, const char *Filename,
-                        size_t FilenameLen, const char *Directory,
-                        size_t DirectoryLen) {
+LLVMMetadataRef LLVMDIBuilderCreateFile(LLVMDIBuilderRef Builder,
+                                        const char *Filename,
+                                        size_t FilenameLen,
+                                        const char *Directory,
+                                        size_t DirectoryLen) {
   return wrap(unwrap(Builder)->createFile(StringRef(Filename, FilenameLen),
                                           StringRef(Directory, DirectoryLen)));
 }
@@ -1058,8 +1054,8 @@ LLVMMetadataRef LLVMDIBuilderCreateFunction(
     LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
     size_t NameLen, const char *LinkageName, size_t LinkageNameLen,
     LLVMMetadataRef File, unsigned LineNo, LLVMMetadataRef Ty,
-    LLVMBool IsLocalToUnit, LLVMBool IsDefinition,
-    unsigned ScopeLine, LLVMDIFlags Flags, LLVMBool IsOptimized) {
+    LLVMBool IsLocalToUnit, LLVMBool IsDefinition, unsigned ScopeLine,
+    LLVMDIFlags Flags, LLVMBool IsOptimized) {
   return wrap(unwrap(Builder)->createFunction(
       unwrapDI<DIScope>(Scope), {Name, NameLen}, {LinkageName, LinkageNameLen},
       unwrapDI<DIFile>(File), LineNo, unwrapDI<DISubroutineType>(Ty), ScopeLine,
@@ -1068,35 +1064,28 @@ LLVMMetadataRef LLVMDIBuilderCreateFunction(
       nullptr, nullptr));
 }
 
-
-LLVMMetadataRef LLVMDIBuilderCreateLexicalBlock(
-    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope,
-    LLVMMetadataRef File, unsigned Line, unsigned Col) {
-  return wrap(unwrap(Builder)->createLexicalBlock(unwrapDI<DIScope>(Scope),
-                                                  unwrapDI<DIFile>(File),
-                                                  Line, Col));
+LLVMMetadataRef LLVMDIBuilderCreateLexicalBlock(LLVMDIBuilderRef Builder,
+                                                LLVMMetadataRef Scope,
+                                                LLVMMetadataRef File,
+                                                unsigned Line, unsigned Col) {
+  return wrap(unwrap(Builder)->createLexicalBlock(
+      unwrapDI<DIScope>(Scope), unwrapDI<DIFile>(File), Line, Col));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateLexicalBlockFile(LLVMDIBuilderRef Builder,
-                                    LLVMMetadataRef Scope,
-                                    LLVMMetadataRef File,
-                                    unsigned Discriminator) {
-  return wrap(unwrap(Builder)->createLexicalBlockFile(unwrapDI<DIScope>(Scope),
-                                                      unwrapDI<DIFile>(File),
-                                                      Discriminator));
+LLVMMetadataRef LLVMDIBuilderCreateLexicalBlockFile(LLVMDIBuilderRef Builder,
+                                                    LLVMMetadataRef Scope,
+                                                    LLVMMetadataRef File,
+                                                    unsigned Discriminator) {
+  return wrap(unwrap(Builder)->createLexicalBlockFile(
+      unwrapDI<DIScope>(Scope), unwrapDI<DIFile>(File), Discriminator));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateImportedModuleFromNamespace(LLVMDIBuilderRef Builder,
-                                               LLVMMetadataRef Scope,
-                                               LLVMMetadataRef NS,
-                                               LLVMMetadataRef File,
-                                               unsigned Line) {
-  return wrap(unwrap(Builder)->createImportedModule(unwrapDI<DIScope>(Scope),
-                                                    unwrapDI<DINamespace>(NS),
-                                                    unwrapDI<DIFile>(File),
-                                                    Line));
+LLVMMetadataRef LLVMDIBuilderCreateImportedModuleFromNamespace(
+    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, LLVMMetadataRef NS,
+    LLVMMetadataRef File, unsigned Line) {
+  return wrap(unwrap(Builder)->createImportedModule(
+      unwrapDI<DIScope>(Scope), unwrapDI<DINamespace>(NS),
+      unwrapDI<DIFile>(File), Line));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateImportedModuleFromAlias(
@@ -1138,10 +1127,10 @@ LLVMMetadataRef LLVMDIBuilderCreateImportedDeclaration(
       Line, {Name, NameLen}, Elts));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateDebugLocation(LLVMContextRef Ctx, unsigned Line,
-                                 unsigned Column, LLVMMetadataRef Scope,
-                                 LLVMMetadataRef InlinedAt) {
+LLVMMetadataRef LLVMDIBuilderCreateDebugLocation(LLVMContextRef Ctx,
+                                                 unsigned Line, unsigned Column,
+                                                 LLVMMetadataRef Scope,
+                                                 LLVMMetadataRef InlinedAt) {
   return wrap(DILocation::get(*unwrap(Ctx), Line, Column, unwrap(Scope),
                               unwrap(InlinedAt)));
 }
@@ -1216,71 +1205,66 @@ LLVMMetadataRef LLVMDIBuilderCreateEnumerator(LLVMDIBuilderRef Builder,
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateEnumerationType(
-  LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
-  size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
-  uint64_t SizeInBits, uint32_t AlignInBits, LLVMMetadataRef *Elements,
-  unsigned NumElements, LLVMMetadataRef ClassTy) {
-auto Elts = unwrap(Builder)->getOrCreateArray({unwrap(Elements),
-                                               NumElements});
-return wrap(unwrap(Builder)->createEnumerationType(
-    unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
-    LineNumber, SizeInBits, AlignInBits, Elts, unwrapDI<DIType>(ClassTy)));
+    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+    size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
+    uint64_t SizeInBits, uint32_t AlignInBits, LLVMMetadataRef *Elements,
+    unsigned NumElements, LLVMMetadataRef ClassTy) {
+  auto Elts =
+      unwrap(Builder)->getOrCreateArray({unwrap(Elements), NumElements});
+  return wrap(unwrap(Builder)->createEnumerationType(
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
+      LineNumber, SizeInBits, AlignInBits, Elts, unwrapDI<DIType>(ClassTy)));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateUnionType(
-  LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
-  size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
-  uint64_t SizeInBits, uint32_t AlignInBits, LLVMDIFlags Flags,
-  LLVMMetadataRef *Elements, unsigned NumElements, unsigned RunTimeLang,
-  const char *UniqueId, size_t UniqueIdLen) {
-  auto Elts = unwrap(Builder)->getOrCreateArray({unwrap(Elements),
-                                                 NumElements});
+    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+    size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
+    uint64_t SizeInBits, uint32_t AlignInBits, LLVMDIFlags Flags,
+    LLVMMetadataRef *Elements, unsigned NumElements, unsigned RunTimeLang,
+    const char *UniqueId, size_t UniqueIdLen) {
+  auto Elts =
+      unwrap(Builder)->getOrCreateArray({unwrap(Elements), NumElements});
   return wrap(unwrap(Builder)->createUnionType(
-     unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
-     LineNumber, SizeInBits, AlignInBits, map_from_llvmDIFlags(Flags),
-     Elts, RunTimeLang, {UniqueId, UniqueIdLen}));
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
+      LineNumber, SizeInBits, AlignInBits, map_from_llvmDIFlags(Flags), Elts,
+      RunTimeLang, {UniqueId, UniqueIdLen}));
 }
 
-
-LLVMMetadataRef
-LLVMDIBuilderCreateArrayType(LLVMDIBuilderRef Builder, uint64_t Size,
-                             uint32_t AlignInBits, LLVMMetadataRef Ty,
-                             LLVMMetadataRef *Subscripts,
-                             unsigned NumSubscripts) {
-  auto Subs = unwrap(Builder)->getOrCreateArray({unwrap(Subscripts),
-                                                 NumSubscripts});
+LLVMMetadataRef LLVMDIBuilderCreateArrayType(
+    LLVMDIBuilderRef Builder, uint64_t Size, uint32_t AlignInBits,
+    LLVMMetadataRef Ty, LLVMMetadataRef *Subscripts, unsigned NumSubscripts) {
+  auto Subs =
+      unwrap(Builder)->getOrCreateArray({unwrap(Subscripts), NumSubscripts});
   return wrap(unwrap(Builder)->createArrayType(Size, AlignInBits,
                                                unwrapDI<DIType>(Ty), Subs));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateVectorType(LLVMDIBuilderRef Builder, uint64_t Size,
-                              uint32_t AlignInBits, LLVMMetadataRef Ty,
-                              LLVMMetadataRef *Subscripts,
-                              unsigned NumSubscripts) {
-  auto Subs = unwrap(Builder)->getOrCreateArray({unwrap(Subscripts),
-                                                 NumSubscripts});
+LLVMMetadataRef LLVMDIBuilderCreateVectorType(
+    LLVMDIBuilderRef Builder, uint64_t Size, uint32_t AlignInBits,
+    LLVMMetadataRef Ty, LLVMMetadataRef *Subscripts, unsigned NumSubscripts) {
+  auto Subs =
+      unwrap(Builder)->getOrCreateArray({unwrap(Subscripts), NumSubscripts});
   return wrap(unwrap(Builder)->createVectorType(Size, AlignInBits,
                                                 unwrapDI<DIType>(Ty), Subs));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateBasicType(LLVMDIBuilderRef Builder, const char *Name,
-                             size_t NameLen, uint64_t SizeInBits,
-                             LLVMDWARFTypeEncoding Encoding,
-                             LLVMDIFlags Flags) {
-  return wrap(unwrap(Builder)->createBasicType({Name, NameLen},
-                                               SizeInBits, Encoding,
-                                               map_from_llvmDIFlags(Flags)));
+LLVMMetadataRef LLVMDIBuilderCreateBasicType(LLVMDIBuilderRef Builder,
+                                             const char *Name, size_t NameLen,
+                                             uint64_t SizeInBits,
+                                             LLVMDWARFTypeEncoding Encoding,
+                                             LLVMDIFlags Flags) {
+  return wrap(unwrap(Builder)->createBasicType(
+      {Name, NameLen}, SizeInBits, Encoding, map_from_llvmDIFlags(Flags)));
 }
 
-LLVMMetadataRef LLVMDIBuilderCreatePointerType(
-    LLVMDIBuilderRef Builder, LLVMMetadataRef PointeeTy,
-    uint64_t SizeInBits, uint32_t AlignInBits, unsigned AddressSpace,
-    const char *Name, size_t NameLen) {
-  return wrap(unwrap(Builder)->createPointerType(unwrapDI<DIType>(PointeeTy),
-                                         SizeInBits, AlignInBits,
-                                         AddressSpace, {Name, NameLen}));
+LLVMMetadataRef
+LLVMDIBuilderCreatePointerType(LLVMDIBuilderRef Builder,
+                               LLVMMetadataRef PointeeTy, uint64_t SizeInBits,
+                               uint32_t AlignInBits, unsigned AddressSpace,
+                               const char *Name, size_t NameLen) {
+  return wrap(unwrap(Builder)->createPointerType(
+      unwrapDI<DIType>(PointeeTy), SizeInBits, AlignInBits, AddressSpace,
+      {Name, NameLen}));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateStructType(
@@ -1290,8 +1274,8 @@ LLVMMetadataRef LLVMDIBuilderCreateStructType(
     LLVMMetadataRef DerivedFrom, LLVMMetadataRef *Elements,
     unsigned NumElements, unsigned RunTimeLang, LLVMMetadataRef VTableHolder,
     const char *UniqueId, size_t UniqueIdLen) {
-  auto Elts = unwrap(Builder)->getOrCreateArray({unwrap(Elements),
-                                                 NumElements});
+  auto Elts =
+      unwrap(Builder)->getOrCreateArray({unwrap(Elements), NumElements});
   return wrap(unwrap(Builder)->createStructType(
       unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
       LineNumber, SizeInBits, AlignInBits, map_from_llvmDIFlags(Flags),
@@ -1304,61 +1288,53 @@ LLVMMetadataRef LLVMDIBuilderCreateMemberType(
     size_t NameLen, LLVMMetadataRef File, unsigned LineNo, uint64_t SizeInBits,
     uint32_t AlignInBits, uint64_t OffsetInBits, LLVMDIFlags Flags,
     LLVMMetadataRef Ty) {
-  return wrap(unwrap(Builder)->createMemberType(unwrapDI<DIScope>(Scope),
-      {Name, NameLen}, unwrapDI<DIFile>(File), LineNo, SizeInBits, AlignInBits,
-      OffsetInBits, map_from_llvmDIFlags(Flags), unwrapDI<DIType>(Ty)));
+  return wrap(unwrap(Builder)->createMemberType(
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File), LineNo,
+      SizeInBits, AlignInBits, OffsetInBits, map_from_llvmDIFlags(Flags),
+      unwrapDI<DIType>(Ty)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateUnspecifiedType(LLVMDIBuilderRef Builder, const char *Name,
-                                   size_t NameLen) {
+LLVMMetadataRef LLVMDIBuilderCreateUnspecifiedType(LLVMDIBuilderRef Builder,
+                                                   const char *Name,
+                                                   size_t NameLen) {
   return wrap(unwrap(Builder)->createUnspecifiedType({Name, NameLen}));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateStaticMemberType(
+LLVMMetadataRef LLVMDIBuilderCreateStaticMemberType(
     LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
     size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
     LLVMMetadataRef Type, LLVMDIFlags Flags, LLVMValueRef ConstantVal,
     uint32_t AlignInBits) {
   return wrap(unwrap(Builder)->createStaticMemberType(
-                  unwrapDI<DIScope>(Scope), {Name, NameLen},
-                  unwrapDI<DIFile>(File), LineNumber, unwrapDI<DIType>(Type),
-                  map_from_llvmDIFlags(Flags), unwrap<Constant>(ConstantVal),
-                  AlignInBits));
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
+      LineNumber, unwrapDI<DIType>(Type), map_from_llvmDIFlags(Flags),
+      unwrap<Constant>(ConstantVal), AlignInBits));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateObjCIVar(LLVMDIBuilderRef Builder,
-                            const char *Name, size_t NameLen,
-                            LLVMMetadataRef File, unsigned LineNo,
-                            uint64_t SizeInBits, uint32_t AlignInBits,
-                            uint64_t OffsetInBits, LLVMDIFlags Flags,
-                            LLVMMetadataRef Ty, LLVMMetadataRef PropertyNode) {
+LLVMMetadataRef LLVMDIBuilderCreateObjCIVar(
+    LLVMDIBuilderRef Builder, const char *Name, size_t NameLen,
+    LLVMMetadataRef File, unsigned LineNo, uint64_t SizeInBits,
+    uint32_t AlignInBits, uint64_t OffsetInBits, LLVMDIFlags Flags,
+    LLVMMetadataRef Ty, LLVMMetadataRef PropertyNode) {
   return wrap(unwrap(Builder)->createObjCIVar(
-                  {Name, NameLen}, unwrapDI<DIFile>(File), LineNo,
-                  SizeInBits, AlignInBits, OffsetInBits,
-                  map_from_llvmDIFlags(Flags), unwrapDI<DIType>(Ty),
-                  unwrapDI<MDNode>(PropertyNode)));
+      {Name, NameLen}, unwrapDI<DIFile>(File), LineNo, SizeInBits, AlignInBits,
+      OffsetInBits, map_from_llvmDIFlags(Flags), unwrapDI<DIType>(Ty),
+      unwrapDI<MDNode>(PropertyNode)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateObjCProperty(LLVMDIBuilderRef Builder,
-                                const char *Name, size_t NameLen,
-                                LLVMMetadataRef File, unsigned LineNo,
-                                const char *GetterName, size_t GetterNameLen,
-                                const char *SetterName, size_t SetterNameLen,
-                                unsigned PropertyAttributes,
-                                LLVMMetadataRef Ty) {
+LLVMMetadataRef LLVMDIBuilderCreateObjCProperty(
+    LLVMDIBuilderRef Builder, const char *Name, size_t NameLen,
+    LLVMMetadataRef File, unsigned LineNo, const char *GetterName,
+    size_t GetterNameLen, const char *SetterName, size_t SetterNameLen,
+    unsigned PropertyAttributes, LLVMMetadataRef Ty) {
   return wrap(unwrap(Builder)->createObjCProperty(
-                  {Name, NameLen}, unwrapDI<DIFile>(File), LineNo,
-                  {GetterName, GetterNameLen}, {SetterName, SetterNameLen},
-                  PropertyAttributes, unwrapDI<DIType>(Ty)));
+      {Name, NameLen}, unwrapDI<DIFile>(File), LineNo,
+      {GetterName, GetterNameLen}, {SetterName, SetterNameLen},
+      PropertyAttributes, unwrapDI<DIType>(Ty)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateObjectPointerType(LLVMDIBuilderRef Builder,
-                                     LLVMMetadataRef Type) {
+LLVMMetadataRef LLVMDIBuilderCreateObjectPointerType(LLVMDIBuilderRef Builder,
+                                                     LLVMMetadataRef Type) {
   return wrap(unwrap(Builder)->createObjectPointerType(unwrapDI<DIType>(Type)));
 }
 
@@ -1373,112 +1349,95 @@ LLVMDIBuilderCreateTypedef(LLVMDIBuilderRef Builder, LLVMMetadataRef Type,
 }
 
 LLVMMetadataRef
-LLVMDIBuilderCreateInheritance(LLVMDIBuilderRef Builder,
-                               LLVMMetadataRef Ty, LLVMMetadataRef BaseTy,
-                               uint64_t BaseOffset, uint32_t VBPtrOffset,
-                               LLVMDIFlags Flags) {
+LLVMDIBuilderCreateInheritance(LLVMDIBuilderRef Builder, LLVMMetadataRef Ty,
+                               LLVMMetadataRef BaseTy, uint64_t BaseOffset,
+                               uint32_t VBPtrOffset, LLVMDIFlags Flags) {
   return wrap(unwrap(Builder)->createInheritance(
-                  unwrapDI<DIType>(Ty), unwrapDI<DIType>(BaseTy),
-                  BaseOffset, VBPtrOffset, map_from_llvmDIFlags(Flags)));
+      unwrapDI<DIType>(Ty), unwrapDI<DIType>(BaseTy), BaseOffset, VBPtrOffset,
+      map_from_llvmDIFlags(Flags)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateForwardDecl(
-    LLVMDIBuilderRef Builder, unsigned Tag, const char *Name,
-    size_t NameLen, LLVMMetadataRef Scope, LLVMMetadataRef File, unsigned Line,
+LLVMMetadataRef LLVMDIBuilderCreateForwardDecl(
+    LLVMDIBuilderRef Builder, unsigned Tag, const char *Name, size_t NameLen,
+    LLVMMetadataRef Scope, LLVMMetadataRef File, unsigned Line,
     unsigned RuntimeLang, uint64_t SizeInBits, uint32_t AlignInBits,
     const char *UniqueIdentifier, size_t UniqueIdentifierLen) {
   return wrap(unwrap(Builder)->createForwardDecl(
-                  Tag, {Name, NameLen}, unwrapDI<DIScope>(Scope),
-                  unwrapDI<DIFile>(File), Line, RuntimeLang, SizeInBits,
-                  AlignInBits, {UniqueIdentifier, UniqueIdentifierLen}));
+      Tag, {Name, NameLen}, unwrapDI<DIScope>(Scope), unwrapDI<DIFile>(File),
+      Line, RuntimeLang, SizeInBits, AlignInBits,
+      {UniqueIdentifier, UniqueIdentifierLen}));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateReplaceableCompositeType(
-    LLVMDIBuilderRef Builder, unsigned Tag, const char *Name,
-    size_t NameLen, LLVMMetadataRef Scope, LLVMMetadataRef File, unsigned Line,
+LLVMMetadataRef LLVMDIBuilderCreateReplaceableCompositeType(
+    LLVMDIBuilderRef Builder, unsigned Tag, const char *Name, size_t NameLen,
+    LLVMMetadataRef Scope, LLVMMetadataRef File, unsigned Line,
     unsigned RuntimeLang, uint64_t SizeInBits, uint32_t AlignInBits,
     LLVMDIFlags Flags, const char *UniqueIdentifier,
     size_t UniqueIdentifierLen) {
   return wrap(unwrap(Builder)->createReplaceableCompositeType(
-                  Tag, {Name, NameLen}, unwrapDI<DIScope>(Scope),
-                  unwrapDI<DIFile>(File), Line, RuntimeLang, SizeInBits,
-                  AlignInBits, map_from_llvmDIFlags(Flags),
-                  {UniqueIdentifier, UniqueIdentifierLen}));
+      Tag, {Name, NameLen}, unwrapDI<DIScope>(Scope), unwrapDI<DIFile>(File),
+      Line, RuntimeLang, SizeInBits, AlignInBits, map_from_llvmDIFlags(Flags),
+      {UniqueIdentifier, UniqueIdentifierLen}));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateQualifiedType(LLVMDIBuilderRef Builder, unsigned Tag,
-                                 LLVMMetadataRef Type) {
-  return wrap(unwrap(Builder)->createQualifiedType(Tag,
-                                                   unwrapDI<DIType>(Type)));
+LLVMMetadataRef LLVMDIBuilderCreateQualifiedType(LLVMDIBuilderRef Builder,
+                                                 unsigned Tag,
+                                                 LLVMMetadataRef Type) {
+  return wrap(
+      unwrap(Builder)->createQualifiedType(Tag, unwrapDI<DIType>(Type)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateReferenceType(LLVMDIBuilderRef Builder, unsigned Tag,
-                                 LLVMMetadataRef Type) {
-  return wrap(unwrap(Builder)->createReferenceType(Tag,
-                                                   unwrapDI<DIType>(Type)));
+LLVMMetadataRef LLVMDIBuilderCreateReferenceType(LLVMDIBuilderRef Builder,
+                                                 unsigned Tag,
+                                                 LLVMMetadataRef Type) {
+  return wrap(
+      unwrap(Builder)->createReferenceType(Tag, unwrapDI<DIType>(Type)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateNullPtrType(LLVMDIBuilderRef Builder) {
+LLVMMetadataRef LLVMDIBuilderCreateNullPtrType(LLVMDIBuilderRef Builder) {
   return wrap(unwrap(Builder)->createNullPtrType());
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateMemberPointerType(LLVMDIBuilderRef Builder,
-                                     LLVMMetadataRef PointeeType,
-                                     LLVMMetadataRef ClassType,
-                                     uint64_t SizeInBits,
-                                     uint32_t AlignInBits,
-                                     LLVMDIFlags Flags) {
+LLVMMetadataRef LLVMDIBuilderCreateMemberPointerType(
+    LLVMDIBuilderRef Builder, LLVMMetadataRef PointeeType,
+    LLVMMetadataRef ClassType, uint64_t SizeInBits, uint32_t AlignInBits,
+    LLVMDIFlags Flags) {
   return wrap(unwrap(Builder)->createMemberPointerType(
-                  unwrapDI<DIType>(PointeeType),
-                  unwrapDI<DIType>(ClassType), AlignInBits, SizeInBits,
-                  map_from_llvmDIFlags(Flags)));
+      unwrapDI<DIType>(PointeeType), unwrapDI<DIType>(ClassType), AlignInBits,
+      SizeInBits, map_from_llvmDIFlags(Flags)));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateBitFieldMemberType(LLVMDIBuilderRef Builder,
-                                      LLVMMetadataRef Scope,
-                                      const char *Name, size_t NameLen,
-                                      LLVMMetadataRef File, unsigned LineNumber,
-                                      uint64_t SizeInBits,
-                                      uint64_t OffsetInBits,
-                                      uint64_t StorageOffsetInBits,
-                                      LLVMDIFlags Flags, LLVMMetadataRef Type) {
+LLVMMetadataRef LLVMDIBuilderCreateBitFieldMemberType(
+    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+    size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
+    uint64_t SizeInBits, uint64_t OffsetInBits, uint64_t StorageOffsetInBits,
+    LLVMDIFlags Flags, LLVMMetadataRef Type) {
   return wrap(unwrap(Builder)->createBitFieldMemberType(
-                  unwrapDI<DIScope>(Scope), {Name, NameLen},
-                  unwrapDI<DIFile>(File), LineNumber,
-                  SizeInBits, OffsetInBits, StorageOffsetInBits,
-                  map_from_llvmDIFlags(Flags), unwrapDI<DIType>(Type)));
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
+      LineNumber, SizeInBits, OffsetInBits, StorageOffsetInBits,
+      map_from_llvmDIFlags(Flags), unwrapDI<DIType>(Type)));
 }
 
-LLVMMetadataRef LLVMDIBuilderCreateClassType(LLVMDIBuilderRef Builder,
-    LLVMMetadataRef Scope, const char *Name, size_t NameLen,
-    LLVMMetadataRef File, unsigned LineNumber, uint64_t SizeInBits,
-    uint32_t AlignInBits, uint64_t OffsetInBits, LLVMDIFlags Flags,
-    LLVMMetadataRef DerivedFrom,
-    LLVMMetadataRef *Elements, unsigned NumElements,
-    LLVMMetadataRef VTableHolder, LLVMMetadataRef TemplateParamsNode,
-    const char *UniqueIdentifier, size_t UniqueIdentifierLen) {
-  auto Elts = unwrap(Builder)->getOrCreateArray({unwrap(Elements),
-                                                 NumElements});
+LLVMMetadataRef LLVMDIBuilderCreateClassType(
+    LLVMDIBuilderRef Builder, LLVMMetadataRef Scope, const char *Name,
+    size_t NameLen, LLVMMetadataRef File, unsigned LineNumber,
+    uint64_t SizeInBits, uint32_t AlignInBits, uint64_t OffsetInBits,
+    LLVMDIFlags Flags, LLVMMetadataRef DerivedFrom, LLVMMetadataRef *Elements,
+    unsigned NumElements, LLVMMetadataRef VTableHolder,
+    LLVMMetadataRef TemplateParamsNode, const char *UniqueIdentifier,
+    size_t UniqueIdentifierLen) {
+  auto Elts =
+      unwrap(Builder)->getOrCreateArray({unwrap(Elements), NumElements});
   return wrap(unwrap(Builder)->createClassType(
-                  unwrapDI<DIScope>(Scope), {Name, NameLen},
-                  unwrapDI<DIFile>(File), LineNumber,
-                  SizeInBits, AlignInBits, OffsetInBits,
-                  map_from_llvmDIFlags(Flags), unwrapDI<DIType>(DerivedFrom),
-                  Elts, unwrapDI<DIType>(VTableHolder),
-                  unwrapDI<MDNode>(TemplateParamsNode),
-                  {UniqueIdentifier, UniqueIdentifierLen}));
+      unwrapDI<DIScope>(Scope), {Name, NameLen}, unwrapDI<DIFile>(File),
+      LineNumber, SizeInBits, AlignInBits, OffsetInBits,
+      map_from_llvmDIFlags(Flags), unwrapDI<DIType>(DerivedFrom), Elts,
+      unwrapDI<DIType>(VTableHolder), unwrapDI<MDNode>(TemplateParamsNode),
+      {UniqueIdentifier, UniqueIdentifierLen}));
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateArtificialType(LLVMDIBuilderRef Builder,
-                                  LLVMMetadataRef Type) {
+LLVMMetadataRef LLVMDIBuilderCreateArtificialType(LLVMDIBuilderRef Builder,
+                                                  LLVMMetadataRef Type) {
   return wrap(unwrap(Builder)->createArtificialType(unwrapDI<DIType>(Type)));
 }
 
@@ -1519,16 +1478,14 @@ LLVMMetadataRef LLVMDIBuilderGetOrCreateTypeArray(LLVMDIBuilderRef Builder,
       unwrap(Builder)->getOrCreateTypeArray({unwrap(Types), Length}).get());
 }
 
-LLVMMetadataRef
-LLVMDIBuilderCreateSubroutineType(LLVMDIBuilderRef Builder,
-                                  LLVMMetadataRef File,
-                                  LLVMMetadataRef *ParameterTypes,
-                                  unsigned NumParameterTypes,
-                                  LLVMDIFlags Flags) {
-  auto Elts = unwrap(Builder)->getOrCreateTypeArray({unwrap(ParameterTypes),
-                                                     NumParameterTypes});
-  return wrap(unwrap(Builder)->createSubroutineType(
-    Elts, map_from_llvmDIFlags(Flags)));
+LLVMMetadataRef LLVMDIBuilderCreateSubroutineType(
+    LLVMDIBuilderRef Builder, LLVMMetadataRef File,
+    LLVMMetadataRef *ParameterTypes, unsigned NumParameterTypes,
+    LLVMDIFlags Flags) {
+  auto Elts = unwrap(Builder)->getOrCreateTypeArray(
+      {unwrap(ParameterTypes), NumParameterTypes});
+  return wrap(
+      unwrap(Builder)->createSubroutineType(Elts, map_from_llvmDIFlags(Flags)));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateExpression(LLVMDIBuilderRef Builder,
@@ -1550,17 +1507,17 @@ LLVMMetadataRef LLVMDIBuilderCreateGlobalVariableExpression(
     LLVMMetadataRef Expr, LLVMMetadataRef Decl, uint32_t AlignInBits) {
   return wrap(unwrap(Builder)->createGlobalVariableExpression(
       unwrapDI<DIScope>(Scope), {Name, NameLen}, {Linkage, LinkLen},
-      unwrapDI<DIFile>(File), LineNo, unwrapDI<DIType>(Ty), LocalToUnit,
-      true, unwrap<DIExpression>(Expr), unwrapDI<MDNode>(Decl),
-      nullptr, AlignInBits));
+      unwrapDI<DIFile>(File), LineNo, unwrapDI<DIType>(Ty), LocalToUnit, true,
+      unwrap<DIExpression>(Expr), unwrapDI<MDNode>(Decl), nullptr,
+      AlignInBits));
 }
 
 LLVMMetadataRef LLVMDIGlobalVariableExpressionGetVariable(LLVMMetadataRef GVE) {
   return wrap(unwrapDI<DIGlobalVariableExpression>(GVE)->getVariable());
 }
 
-LLVMMetadataRef LLVMDIGlobalVariableExpressionGetExpression(
-    LLVMMetadataRef GVE) {
+LLVMMetadataRef
+LLVMDIGlobalVariableExpressionGetExpression(LLVMMetadataRef GVE) {
   return wrap(unwrapDI<DIGlobalVariableExpression>(GVE)->getExpression());
 }
 
@@ -1609,42 +1566,34 @@ LLVMDIBuilderInsertDeclareBefore(LLVMDIBuilderRef Builder, LLVMValueRef Storage,
                                  LLVMMetadataRef VarInfo, LLVMMetadataRef Expr,
                                  LLVMMetadataRef DL, LLVMValueRef Instr) {
   return wrap(unwrap(Builder)->insertDeclare(
-                  unwrap(Storage), unwrap<DILocalVariable>(VarInfo),
-                  unwrap<DIExpression>(Expr), unwrap<DILocation>(DL),
-                  unwrap<Instruction>(Instr)));
+      unwrap(Storage), unwrap<DILocalVariable>(VarInfo),
+      unwrap<DIExpression>(Expr), unwrap<DILocation>(DL),
+      unwrap<Instruction>(Instr)));
 }
 
-LLVMValueRef LLVMDIBuilderInsertDeclareAtEnd(
-    LLVMDIBuilderRef Builder, LLVMValueRef Storage, LLVMMetadataRef VarInfo,
-    LLVMMetadataRef Expr, LLVMMetadataRef DL, LLVMBasicBlockRef Block) {
+LLVMValueRef
+LLVMDIBuilderInsertDeclareAtEnd(LLVMDIBuilderRef Builder, LLVMValueRef Storage,
+                                LLVMMetadataRef VarInfo, LLVMMetadataRef Expr,
+                                LLVMMetadataRef DL, LLVMBasicBlockRef Block) {
   return wrap(unwrap(Builder)->insertDeclare(
-                  unwrap(Storage), unwrap<DILocalVariable>(VarInfo),
-                  unwrap<DIExpression>(Expr), unwrap<DILocation>(DL),
-                  unwrap(Block)));
+      unwrap(Storage), unwrap<DILocalVariable>(VarInfo),
+      unwrap<DIExpression>(Expr), unwrap<DILocation>(DL), unwrap(Block)));
 }
 
-LLVMValueRef LLVMDIBuilderInsertDbgValueBefore(LLVMDIBuilderRef Builder,
-                                               LLVMValueRef Val,
-                                               LLVMMetadataRef VarInfo,
-                                               LLVMMetadataRef Expr,
-                                               LLVMMetadataRef DebugLoc,
-                                               LLVMValueRef Instr) {
+LLVMValueRef LLVMDIBuilderInsertDbgValueBefore(
+    LLVMDIBuilderRef Builder, LLVMValueRef Val, LLVMMetadataRef VarInfo,
+    LLVMMetadataRef Expr, LLVMMetadataRef DebugLoc, LLVMValueRef Instr) {
   return wrap(unwrap(Builder)->insertDbgValueIntrinsic(
-                  unwrap(Val), unwrap<DILocalVariable>(VarInfo),
-                  unwrap<DIExpression>(Expr), unwrap<DILocation>(DebugLoc),
-                  unwrap<Instruction>(Instr)));
+      unwrap(Val), unwrap<DILocalVariable>(VarInfo), unwrap<DIExpression>(Expr),
+      unwrap<DILocation>(DebugLoc), unwrap<Instruction>(Instr)));
 }
 
-LLVMValueRef LLVMDIBuilderInsertDbgValueAtEnd(LLVMDIBuilderRef Builder,
-                                              LLVMValueRef Val,
-                                              LLVMMetadataRef VarInfo,
-                                              LLVMMetadataRef Expr,
-                                              LLVMMetadataRef DebugLoc,
-                                              LLVMBasicBlockRef Block) {
+LLVMValueRef LLVMDIBuilderInsertDbgValueAtEnd(
+    LLVMDIBuilderRef Builder, LLVMValueRef Val, LLVMMetadataRef VarInfo,
+    LLVMMetadataRef Expr, LLVMMetadataRef DebugLoc, LLVMBasicBlockRef Block) {
   return wrap(unwrap(Builder)->insertDbgValueIntrinsic(
-                  unwrap(Val), unwrap<DILocalVariable>(VarInfo),
-                  unwrap<DIExpression>(Expr), unwrap<DILocation>(DebugLoc),
-                  unwrap(Block)));
+      unwrap(Val), unwrap<DILocalVariable>(VarInfo), unwrap<DIExpression>(Expr),
+      unwrap<DILocation>(DebugLoc), unwrap(Block)));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateAutoVariable(
@@ -1652,9 +1601,9 @@ LLVMMetadataRef LLVMDIBuilderCreateAutoVariable(
     size_t NameLen, LLVMMetadataRef File, unsigned LineNo, LLVMMetadataRef Ty,
     LLVMBool AlwaysPreserve, LLVMDIFlags Flags, uint32_t AlignInBits) {
   return wrap(unwrap(Builder)->createAutoVariable(
-                  unwrap<DIScope>(Scope), {Name, NameLen}, unwrap<DIFile>(File),
-                  LineNo, unwrap<DIType>(Ty), AlwaysPreserve,
-                  map_from_llvmDIFlags(Flags), AlignInBits));
+      unwrap<DIScope>(Scope), {Name, NameLen}, unwrap<DIFile>(File), LineNo,
+      unwrap<DIType>(Ty), AlwaysPreserve, map_from_llvmDIFlags(Flags),
+      AlignInBits));
 }
 
 LLVMMetadataRef LLVMDIBuilderCreateParameterVariable(
@@ -1662,9 +1611,8 @@ LLVMMetadataRef LLVMDIBuilderCreateParameterVariable(
     size_t NameLen, unsigned ArgNo, LLVMMetadataRef File, unsigned LineNo,
     LLVMMetadataRef Ty, LLVMBool AlwaysPreserve, LLVMDIFlags Flags) {
   return wrap(unwrap(Builder)->createParameterVariable(
-                  unwrap<DIScope>(Scope), {Name, NameLen}, ArgNo, unwrap<DIFile>(File),
-                  LineNo, unwrap<DIType>(Ty), AlwaysPreserve,
-                  map_from_llvmDIFlags(Flags)));
+      unwrap<DIScope>(Scope), {Name, NameLen}, ArgNo, unwrap<DIFile>(File),
+      LineNo, unwrap<DIType>(Ty), AlwaysPreserve, map_from_llvmDIFlags(Flags)));
 }
 
 LLVMMetadataRef LLVMDIBuilderGetOrCreateSubrange(LLVMDIBuilderRef Builder,
@@ -1703,9 +1651,9 @@ void LLVMInstructionSetDebugLoc(LLVMValueRef Inst, LLVMMetadataRef Loc) {
 }
 
 LLVMMetadataKind LLVMGetMetadataKind(LLVMMetadataRef Metadata) {
-  switch(unwrap(Metadata)->getMetadataID()) {
-#define HANDLE_METADATA_LEAF(CLASS) \
-  case Metadata::CLASS##Kind: \
+  switch (unwrap(Metadata)->getMetadataID()) {
+#define HANDLE_METADATA_LEAF(CLASS)                                            \
+  case Metadata::CLASS##Kind:                                                  \
     return (LLVMMetadataKind)LLVM##CLASS##MetadataKind;
 #include "llvm/IR/Metadata.def"
   default:
diff --git a/llvm/lib/IR/DebugInfoMetadata.cpp b/llvm/lib/IR/DebugInfoMetadata.cpp
index 4933b603268801b..193f056268fdf6a 100644
--- a/llvm/lib/IR/DebugInfoMetadata.cpp
+++ b/llvm/lib/IR/DebugInfoMetadata.cpp
@@ -1114,17 +1114,15 @@ DISubprogram *DISubprogram::getImpl(
   assert(isCanonical(Name) && "Expected canonical MDString");
   assert(isCanonical(LinkageName) && "Expected canonical MDString");
   assert(isCanonical(TargetFuncName) && "Expected canonical MDString");
-  DEFINE_GETIMPL_LOOKUP(DISubprogram,
-                        (Scope, Name, LinkageName, File, Line, Type, ScopeLine,
-                         ContainingType, VirtualIndex, ThisAdjustment, Flags,
-                         SPFlags, Unit, TemplateParams, Declaration,
-                         RetainedNodes, ThrownTypes, Annotations,
-                         TargetFuncName));
+  DEFINE_GETIMPL_LOOKUP(
+      DISubprogram,
+      (Scope, Name, LinkageName, File, Line, Type, ScopeLine, ContainingType,
+       VirtualIndex, ThisAdjustment, Flags, SPFlags, Unit, TemplateParams,
+       Declaration, RetainedNodes, ThrownTypes, Annotations, TargetFuncName));
   SmallVector<Metadata *, 13> Ops = {
-      File,           Scope,          Name,        LinkageName,
-      Type,           Unit,           Declaration, RetainedNodes,
-      ContainingType, TemplateParams, ThrownTypes, Annotations,
-      TargetFuncName};
+      File,        Scope,       Name,          LinkageName,    Type,
+      Unit,        Declaration, RetainedNodes, ContainingType, TemplateParams,
+      ThrownTypes, Annotations, TargetFuncName};
   if (!TargetFuncName) {
     Ops.pop_back();
     if (!Annotations) {
@@ -1641,7 +1639,7 @@ void DIExpression::appendOffset(SmallVectorImpl<uint64_t> &Ops,
     Ops.push_back(dwarf::DW_OP_constu);
     // Avoid UB when encountering LLONG_MIN, because in 2's complement
     // abs(LLONG_MIN) is LLONG_MAX+1.
-    uint64_t AbsMinusOne = -(Offset+1);
+    uint64_t AbsMinusOne = -(Offset + 1);
     Ops.push_back(AbsMinusOne + 1);
     Ops.push_back(dwarf::DW_OP_minus);
   }
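The hunk above keeps a comment about avoiding undefined behavior for `LLONG_MIN`. As a standalone illustration (hypothetical helper name, not LLVM's API), the same trick can be sketched like this: for a negative signed offset, the magnitude is computed without ever evaluating `-Offset`, which overflows when `Offset` is `LLONG_MIN`.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the trick in DIExpression::appendOffset: for a
// negative Offset, -(Offset + 1) never overflows (Offset + 1 >= INT64_MIN + 1,
// and its negation fits in int64_t), and the final +1 happens in the unsigned
// domain, where wraparound is well defined.
uint64_t magnitudeOfNegative(int64_t Offset) {
  assert(Offset < 0);
  uint64_t AbsMinusOne = -(Offset + 1); // safe even for INT64_MIN
  return AbsMinusOne + 1;
}
```

For `Offset == INT64_MIN` this yields `2^63`, the value that a naive `-Offset` cannot represent in `int64_t`.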
diff --git a/llvm/lib/IR/DiagnosticHandler.cpp b/llvm/lib/IR/DiagnosticHandler.cpp
index 683eade502917d3..2c9e3bfbd8c4ac5 100644
--- a/llvm/lib/IR/DiagnosticHandler.cpp
+++ b/llvm/lib/IR/DiagnosticHandler.cpp
@@ -66,7 +66,7 @@ static cl::opt<PassRemarksOpt, true, cl::parser<std::string>>
             "Enable optimization analysis remarks from passes whose name match "
             "the given regular expression"),
         cl::Hidden, cl::location(PassRemarksAnalysisOptLoc), cl::ValueRequired);
-}
+} // namespace
 
 bool DiagnosticHandler::isAnalysisRemarkEnabled(StringRef PassName) const {
   return (PassRemarksAnalysisOptLoc.Pattern &&
diff --git a/llvm/lib/IR/DiagnosticInfo.cpp b/llvm/lib/IR/DiagnosticInfo.cpp
index 342c4cbbc39d65f..326c71b37318148 100644
--- a/llvm/lib/IR/DiagnosticInfo.cpp
+++ b/llvm/lib/IR/DiagnosticInfo.cpp
@@ -168,8 +168,7 @@ DiagnosticInfoOptimizationBase::Argument::Argument(StringRef Key,
   if (auto *F = dyn_cast<Function>(V)) {
     if (DISubprogram *SP = F->getSubprogram())
       Loc = SP;
-  }
-  else if (auto *I = dyn_cast<Instruction>(V))
+  } else if (auto *I = dyn_cast<Instruction>(V))
     Loc = I->getDebugLoc();
 
   // Only include names that correspond to user variables.  FIXME: We should use
@@ -233,7 +232,8 @@ DiagnosticInfoOptimizationBase::Argument::Argument(StringRef Key, DebugLoc Loc)
     : Key(std::string(Key)), Loc(Loc) {
   if (Loc) {
     Val = (Loc->getFilename() + ":" + Twine(Loc.getLine()) + ":" +
-           Twine(Loc.getCol())).str();
+           Twine(Loc.getCol()))
+              .str();
   } else {
     Val = "<UNKNOWN LOCATION>";
   }
diff --git a/llvm/lib/IR/DiagnosticPrinter.cpp b/llvm/lib/IR/DiagnosticPrinter.cpp
index 49b8bbae53be9e9..c6f6104d8e86086 100644
--- a/llvm/lib/IR/DiagnosticPrinter.cpp
+++ b/llvm/lib/IR/DiagnosticPrinter.cpp
@@ -44,8 +44,8 @@ DiagnosticPrinter &DiagnosticPrinterRawOStream::operator<<(const char *Str) {
   return *this;
 }
 
-DiagnosticPrinter &DiagnosticPrinterRawOStream::operator<<(
-    const std::string &Str) {
+DiagnosticPrinter &
+DiagnosticPrinterRawOStream::operator<<(const std::string &Str) {
   Stream << Str;
   return *this;
 }
@@ -59,8 +59,8 @@ DiagnosticPrinter &DiagnosticPrinterRawOStream::operator<<(long N) {
   return *this;
 }
 
-DiagnosticPrinter &DiagnosticPrinterRawOStream::operator<<(
-    unsigned long long N) {
+DiagnosticPrinter &
+DiagnosticPrinterRawOStream::operator<<(unsigned long long N) {
   Stream << N;
   return *this;
 }
@@ -107,8 +107,8 @@ DiagnosticPrinter &DiagnosticPrinterRawOStream::operator<<(const Module &M) {
 }
 
 // Other types.
-DiagnosticPrinter &DiagnosticPrinterRawOStream::
-operator<<(const SMDiagnostic &Diag) {
+DiagnosticPrinter &
+DiagnosticPrinterRawOStream::operator<<(const SMDiagnostic &Diag) {
   // We don't have to print the SMDiagnostic kind, as the diagnostic severity
   // is printed by the diagnostic handler.
   Diag.print("", Stream, /*ShowColors=*/true, /*ShowKindLabel=*/false);
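The `DiagnosticPrinterRawOStream` hunks above only reflow declarations, but the underlying pattern is worth noting: each `operator<<` writes to a wrapped stream and returns `*this` by reference so calls chain. A minimal self-contained sketch (hypothetical class, not LLVM's actual printer):

```cpp
#include <sstream>
#include <string>

// Each operator<< forwards to the wrapped stream and returns *this,
// so `P << a << b` works just like with std::ostream.
class ChainPrinter {
  std::ostringstream &Stream;

public:
  explicit ChainPrinter(std::ostringstream &S) : Stream(S) {}
  ChainPrinter &operator<<(const std::string &Str) {
    Stream << Str;
    return *this;
  }
  ChainPrinter &operator<<(unsigned long long N) {
    Stream << N;
    return *this;
  }
};
```

Usage: `std::ostringstream OS; ChainPrinter(OS) << std::string("count=") << 7ULL;` leaves `OS.str()` equal to `"count=7"`.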
diff --git a/llvm/lib/IR/Dominators.cpp b/llvm/lib/IR/Dominators.cpp
index 24cc9f46ff79fe7..be39d68d45dd47d 100644
--- a/llvm/lib/IR/Dominators.cpp
+++ b/llvm/lib/IR/Dominators.cpp
@@ -72,7 +72,7 @@ bool BasicBlockEdge::isSingleEdge() const {
 
 template class llvm::DomTreeNodeBase<BasicBlock>;
 template class llvm::DominatorTreeBase<BasicBlock, false>; // DomTreeBase
-template class llvm::DominatorTreeBase<BasicBlock, true>; // PostDomTreeBase
+template class llvm::DominatorTreeBase<BasicBlock, true>;  // PostDomTreeBase
 
 template class llvm::cfg::Update<BasicBlock *>;
 
@@ -323,7 +323,8 @@ bool DominatorTree::isReachableFromEntry(const Use &U) const {
 
   // ConstantExprs aren't really reachable from the entry block, but they
   // don't need to be treated like unreachable code either.
-  if (!I) return true;
+  if (!I)
+    return true;
 
   // PHI nodes use their operands on their incoming edges.
   if (PHINode *PN = dyn_cast<PHINode>(I))
diff --git a/llvm/lib/IR/Function.cpp b/llvm/lib/IR/Function.cpp
index b1b8404157c3b2d..719af79d0506cbf 100644
--- a/llvm/lib/IR/Function.cpp
+++ b/llvm/lib/IR/Function.cpp
@@ -90,12 +90,11 @@ Argument::Argument(Type *Ty, const Twine &Name, Function *Par, unsigned ArgNo)
   setName(Name);
 }
 
-void Argument::setParent(Function *parent) {
-  Parent = parent;
-}
+void Argument::setParent(Function *parent) { Parent = parent; }
 
 bool Argument::hasNonNullAttr(bool AllowUndefOrPoison) const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   if (getParent()->hasParamAttribute(getArgNo(), Attribute::NonNull) &&
       (AllowUndefOrPoison ||
        getParent()->hasParamAttribute(getArgNo(), Attribute::NoUndef)))
@@ -108,7 +107,8 @@ bool Argument::hasNonNullAttr(bool AllowUndefOrPoison) const {
 }
 
 bool Argument::hasByValAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::ByVal);
 }
 
@@ -127,7 +127,8 @@ bool Argument::hasSwiftErrorAttr() const {
 }
 
 bool Argument::hasInAllocaAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::InAlloca);
 }
 
@@ -138,7 +139,8 @@ bool Argument::hasPreallocatedAttr() const {
 }
 
 bool Argument::hasPassPointeeByValueCopyAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   AttributeList Attrs = getParent()->getAttributes();
   return Attrs.hasParamAttr(getArgNo(), Attribute::ByVal) ||
          Attrs.hasParamAttr(getArgNo(), Attribute::InAlloca) ||
@@ -235,45 +237,44 @@ FPClassTest Argument::getNoFPClass() const {
 }
 
 bool Argument::hasNestAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::Nest);
 }
 
 bool Argument::hasNoAliasAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::NoAlias);
 }
 
 bool Argument::hasNoCaptureAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::NoCapture);
 }
 
 bool Argument::hasNoFreeAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::NoFree);
 }
 
 bool Argument::hasStructRetAttr() const {
-  if (!getType()->isPointerTy()) return false;
+  if (!getType()->isPointerTy())
+    return false;
   return hasAttribute(Attribute::StructRet);
 }
 
-bool Argument::hasInRegAttr() const {
-  return hasAttribute(Attribute::InReg);
-}
+bool Argument::hasInRegAttr() const { return hasAttribute(Attribute::InReg); }
 
 bool Argument::hasReturnedAttr() const {
   return hasAttribute(Attribute::Returned);
 }
 
-bool Argument::hasZExtAttr() const {
-  return hasAttribute(Attribute::ZExt);
-}
+bool Argument::hasZExtAttr() const { return hasAttribute(Attribute::ZExt); }
 
-bool Argument::hasSExtAttr() const {
-  return hasAttribute(Attribute::SExt);
-}
+bool Argument::hasSExtAttr() const { return hasAttribute(Attribute::SExt); }
 
 bool Argument::onlyReadsMemory() const {
   AttributeList Attrs = getParent()->getAttributes();
@@ -317,9 +318,7 @@ Attribute Argument::getAttribute(Attribute::AttrKind Kind) const {
 // Helper Methods in Function
 //===----------------------------------------------------------------------===//
 
-LLVMContext &Function::getContext() const {
-  return getType()->getContext();
-}
+LLVMContext &Function::getContext() const { return getType()->getContext(); }
 
 unsigned Function::getInstructionCount() const {
   unsigned NumInstrs = 0;
@@ -413,7 +412,7 @@ Function::Function(FunctionType *Ty, LinkageTypes Linkage, unsigned AddrSpace,
 
   // If the function has arguments, mark them as lazily built.
   if (Ty->getNumParams())
-    setValueSubclassData(1);   // Set the "has lazy arguments" bit.
+    setValueSubclassData(1); // Set the "has lazy arguments" bit.
 
   if (ParentModule)
     ParentModule->getFunctionList().push_back(this);
@@ -427,7 +426,7 @@ Function::Function(FunctionType *Ty, LinkageTypes Linkage, unsigned AddrSpace,
 }
 
 Function::~Function() {
-  dropAllReferences();    // After this it is safe to delete instructions.
+  dropAllReferences(); // After this it is safe to delete instructions.
 
   // Delete all of the method arguments and unlink from symbol table...
   if (Arguments)
@@ -452,7 +451,7 @@ void Function::BuildLazyArguments() const {
   // Clear the lazy arguments bit.
   unsigned SDC = getSubclassDataFromValue();
   SDC &= ~(1 << 0);
-  const_cast<Function*>(this)->setValueSubclassData(SDC);
+  const_cast<Function *>(this)->setValueSubclassData(SDC);
   assert(!hasLazyArguments());
 }
 
@@ -830,8 +829,8 @@ void Function::setOnlyAccessesInaccessibleMemOrArgMem() {
 }
 
 /// Table of string intrinsic names indexed by enum value.
-static const char * const IntrinsicNameTable[] = {
-  "not_intrinsic",
+static const char *const IntrinsicNameTable[] = {
+    "not_intrinsic",
 #define GET_INTRINSIC_NAME_TABLE
 #include "llvm/IR/IntrinsicImpl.inc"
 #undef GET_INTRINSIC_NAME_TABLE
@@ -846,9 +845,7 @@ bool Function::isTargetIntrinsic(Intrinsic::ID IID) {
   return IID > TargetInfos[0].Count;
 }
 
-bool Function::isTargetIntrinsic() const {
-  return isTargetIntrinsic(IntID);
-}
+bool Function::isTargetIntrinsic() const { return isTargetIntrinsic(IntID); }
 
 /// Find the segment of \c IntrinsicNameTable for intrinsics with the same
 /// target as \c Name, or the generic table if \c Name is not target specific.
@@ -960,18 +957,41 @@ static std::string getMangledTypeStr(Type *Ty, bool &HasUnnamedType) {
     Result += "t";
   } else if (Ty) {
     switch (Ty->getTypeID()) {
-    default: llvm_unreachable("Unhandled type");
-    case Type::VoidTyID:      Result += "isVoid";   break;
-    case Type::MetadataTyID:  Result += "Metadata"; break;
-    case Type::HalfTyID:      Result += "f16";      break;
-    case Type::BFloatTyID:    Result += "bf16";     break;
-    case Type::FloatTyID:     Result += "f32";      break;
-    case Type::DoubleTyID:    Result += "f64";      break;
-    case Type::X86_FP80TyID:  Result += "f80";      break;
-    case Type::FP128TyID:     Result += "f128";     break;
-    case Type::PPC_FP128TyID: Result += "ppcf128";  break;
-    case Type::X86_MMXTyID:   Result += "x86mmx";   break;
-    case Type::X86_AMXTyID:   Result += "x86amx";   break;
+    default:
+      llvm_unreachable("Unhandled type");
+    case Type::VoidTyID:
+      Result += "isVoid";
+      break;
+    case Type::MetadataTyID:
+      Result += "Metadata";
+      break;
+    case Type::HalfTyID:
+      Result += "f16";
+      break;
+    case Type::BFloatTyID:
+      Result += "bf16";
+      break;
+    case Type::FloatTyID:
+      Result += "f32";
+      break;
+    case Type::DoubleTyID:
+      Result += "f64";
+      break;
+    case Type::X86_FP80TyID:
+      Result += "f80";
+      break;
+    case Type::FP128TyID:
+      Result += "f128";
+      break;
+    case Type::PPC_FP128TyID:
+      Result += "ppcf128";
+      break;
+    case Type::X86_MMXTyID:
+      Result += "x86mmx";
+      break;
+    case Type::X86_AMXTyID:
+      Result += "x86amx";
+      break;
     case Type::IntegerTyID:
       Result += "i" + utostr(cast<IntegerType>(Ty)->getBitWidth());
       break;
@@ -1039,9 +1059,10 @@ enum IIT_Info {
 #undef GET_INTRINSIC_IITINFO
 };
 
-static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
-                      IIT_Info LastInfo,
-                      SmallVectorImpl<Intrinsic::IITDescriptor> &OutputTable) {
+static void
+DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
+              IIT_Info LastInfo,
+              SmallVectorImpl<Intrinsic::IITDescriptor> &OutputTable) {
   using namespace Intrinsic;
 
   bool IsScalableVector = (LastInfo == IIT_SCALABLE_VEC);
@@ -1102,7 +1123,7 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
     OutputTable.push_back(IITDescriptor::get(IITDescriptor::Integer, 8));
     return;
   case IIT_I16:
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Integer,16));
+    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Integer, 16));
     return;
   case IIT_I32:
     OutputTable.push_back(IITDescriptor::get(IITDescriptor::Integer, 32));
@@ -1171,8 +1192,8 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
     OutputTable.push_back(IITDescriptor::get(IITDescriptor::Pointer, 0));
     return;
   case IIT_ANYPTR: // [ANYPTR addrspace]
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Pointer,
-                                             Infos[NextElt++]));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::Pointer, Infos[NextElt++]));
     return;
   case IIT_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
@@ -1181,26 +1202,26 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
   }
   case IIT_EXTEND_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::ExtendArgument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::ExtendArgument, ArgInfo));
     return;
   }
   case IIT_TRUNC_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::TruncArgument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::TruncArgument, ArgInfo));
     return;
   }
   case IIT_HALF_VEC_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::HalfVecArgument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::HalfVecArgument, ArgInfo));
     return;
   }
   case IIT_SAME_VEC_WIDTH_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::SameVecWidthArgument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::SameVecWidthArgument, ArgInfo));
     return;
   }
   case IIT_VEC_OF_ANYPTRS_TO_ELT: {
@@ -1213,15 +1234,30 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
   case IIT_EMPTYSTRUCT:
     OutputTable.push_back(IITDescriptor::get(IITDescriptor::Struct, 0));
     return;
-  case IIT_STRUCT9: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT8: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT7: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT6: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT5: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT4: ++StructElts; [[fallthrough]];
-  case IIT_STRUCT3: ++StructElts; [[fallthrough]];
+  case IIT_STRUCT9:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT8:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT7:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT6:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT5:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT4:
+    ++StructElts;
+    [[fallthrough]];
+  case IIT_STRUCT3:
+    ++StructElts;
+    [[fallthrough]];
   case IIT_STRUCT2: {
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Struct,StructElts));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::Struct, StructElts));
 
     for (unsigned i = 0; i != StructElts; ++i)
       DecodeIITType(NextElt, Infos, Info, OutputTable);
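The `IIT_STRUCT9`..`IIT_STRUCT2` cases reformatted in this hunk use cascading `[[fallthrough]]` to share one body: each case bumps a running element count and drops into the next. A standalone sketch of that pattern (made-up names, only four kinds modeled):

```cpp
// Entering at kind K falls through K-2 increments before reaching the
// shared body at case 2, so the count ends up equal to K.
unsigned structEltCount(int Kind) {
  unsigned StructElts = 2; // the terminal case encodes two elements
  switch (Kind) {
  case 5:
    ++StructElts;
    [[fallthrough]];
  case 4:
    ++StructElts;
    [[fallthrough]];
  case 3:
    ++StructElts;
    [[fallthrough]];
  case 2:
    return StructElts; // shared body for all struct kinds
  }
  return 0; // unknown kind
}
```

The `[[fallthrough]]` attribute (C++17) documents that the missing `break` is intentional, silencing `-Wimplicit-fallthrough`.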
@@ -1229,20 +1265,20 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
   }
   case IIT_SUBDIVIDE2_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Subdivide2Argument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::Subdivide2Argument, ArgInfo));
     return;
   }
   case IIT_SUBDIVIDE4_ARG: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::Subdivide4Argument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::Subdivide4Argument, ArgInfo));
     return;
   }
   case IIT_VEC_ELEMENT: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::VecElementArgument,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::VecElementArgument, ArgInfo));
     return;
   }
   case IIT_SCALABLE_VEC: {
@@ -1251,8 +1287,8 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
   }
   case IIT_VEC_OF_BITCASTS_TO_INT: {
     unsigned ArgInfo = (NextElt == Infos.size() ? 0 : Infos[NextElt++]);
-    OutputTable.push_back(IITDescriptor::get(IITDescriptor::VecOfBitcastsToInt,
-                                             ArgInfo));
+    OutputTable.push_back(
+        IITDescriptor::get(IITDescriptor::VecOfBitcastsToInt, ArgInfo));
     return;
   }
   }
@@ -1263,10 +1299,10 @@ static void DecodeIITType(unsigned &NextElt, ArrayRef<unsigned char> Infos,
 #include "llvm/IR/IntrinsicImpl.inc"
 #undef GET_INTRINSIC_GENERATOR_GLOBAL
 
-void Intrinsic::getIntrinsicInfoTableEntries(ID id,
-                                             SmallVectorImpl<IITDescriptor> &T){
+void Intrinsic::getIntrinsicInfoTableEntries(
+    ID id, SmallVectorImpl<IITDescriptor> &T) {
   // Check to see if the intrinsic's type was expressible by the table.
-  unsigned TableVal = IIT_Table[id-1];
+  unsigned TableVal = IIT_Table[id - 1];
 
   // Decode the TableVal into an array of IITValues.
   SmallVector<unsigned char, 8> IITValues;
@@ -1297,25 +1333,37 @@ void Intrinsic::getIntrinsicInfoTableEntries(ID id,
 }
 
 static Type *DecodeFixedType(ArrayRef<Intrinsic::IITDescriptor> &Infos,
-                             ArrayRef<Type*> Tys, LLVMContext &Context) {
+                             ArrayRef<Type *> Tys, LLVMContext &Context) {
   using namespace Intrinsic;
 
   IITDescriptor D = Infos.front();
   Infos = Infos.slice(1);
 
   switch (D.Kind) {
-  case IITDescriptor::Void: return Type::getVoidTy(Context);
-  case IITDescriptor::VarArg: return Type::getVoidTy(Context);
-  case IITDescriptor::MMX: return Type::getX86_MMXTy(Context);
-  case IITDescriptor::AMX: return Type::getX86_AMXTy(Context);
-  case IITDescriptor::Token: return Type::getTokenTy(Context);
-  case IITDescriptor::Metadata: return Type::getMetadataTy(Context);
-  case IITDescriptor::Half: return Type::getHalfTy(Context);
-  case IITDescriptor::BFloat: return Type::getBFloatTy(Context);
-  case IITDescriptor::Float: return Type::getFloatTy(Context);
-  case IITDescriptor::Double: return Type::getDoubleTy(Context);
-  case IITDescriptor::Quad: return Type::getFP128Ty(Context);
-  case IITDescriptor::PPCQuad: return Type::getPPC_FP128Ty(Context);
+  case IITDescriptor::Void:
+    return Type::getVoidTy(Context);
+  case IITDescriptor::VarArg:
+    return Type::getVoidTy(Context);
+  case IITDescriptor::MMX:
+    return Type::getX86_MMXTy(Context);
+  case IITDescriptor::AMX:
+    return Type::getX86_AMXTy(Context);
+  case IITDescriptor::Token:
+    return Type::getTokenTy(Context);
+  case IITDescriptor::Metadata:
+    return Type::getMetadataTy(Context);
+  case IITDescriptor::Half:
+    return Type::getHalfTy(Context);
+  case IITDescriptor::BFloat:
+    return Type::getBFloatTy(Context);
+  case IITDescriptor::Float:
+    return Type::getFloatTy(Context);
+  case IITDescriptor::Double:
+    return Type::getDoubleTy(Context);
+  case IITDescriptor::Quad:
+    return Type::getFP128Ty(Context);
+  case IITDescriptor::PPCQuad:
+    return Type::getPPC_FP128Ty(Context);
   case IITDescriptor::AArch64Svcount:
     return TargetExtType::get(Context, "aarch64.svcount");
 
@@ -1359,8 +1407,8 @@ static Type *DecodeFixedType(ArrayRef<Intrinsic::IITDescriptor> &Infos,
     return VectorType::getSubdividedVectorType(VTy, SubDivs);
   }
   case IITDescriptor::HalfVecArgument:
-    return VectorType::getHalfElementsVectorType(cast<VectorType>(
-                                                  Tys[D.getArgumentNumber()]));
+    return VectorType::getHalfElementsVectorType(
+        cast<VectorType>(Tys[D.getArgumentNumber()]));
   case IITDescriptor::SameVecWidthArgument: {
     Type *EltTy = DecodeFixedType(Infos, Tys, Context);
     Type *Ty = Tys[D.getArgumentNumber()];
@@ -1387,20 +1435,21 @@ static Type *DecodeFixedType(ArrayRef<Intrinsic::IITDescriptor> &Infos,
   llvm_unreachable("unhandled");
 }
 
-FunctionType *Intrinsic::getType(LLVMContext &Context,
-                                 ID id, ArrayRef<Type*> Tys) {
+FunctionType *Intrinsic::getType(LLVMContext &Context, ID id,
+                                 ArrayRef<Type *> Tys) {
   SmallVector<IITDescriptor, 8> Table;
   getIntrinsicInfoTableEntries(id, Table);
 
   ArrayRef<IITDescriptor> TableRef = Table;
   Type *ResultTy = DecodeFixedType(TableRef, Tys, Context);
 
-  SmallVector<Type*, 8> ArgTys;
+  SmallVector<Type *, 8> ArgTys;
   while (!TableRef.empty())
     ArgTys.push_back(DecodeFixedType(TableRef, Tys, Context));
 
-  // DecodeFixedType returns Void for IITDescriptor::Void and IITDescriptor::VarArg
-  // If we see void type as the type of the last argument, it is vararg intrinsic
+  // DecodeFixedType returns Void for IITDescriptor::Void and
+  // IITDescriptor::VarArg. If we see a void type as the type of the last
+  // argument, it is a vararg intrinsic.
   if (!ArgTys.empty() && ArgTys.back()->isVoidTy()) {
     ArgTys.pop_back();
     return FunctionType::get(ResultTy, ArgTys, true);
@@ -1408,7 +1457,7 @@ FunctionType *Intrinsic::getType(LLVMContext &Context,
   return FunctionType::get(ResultTy, ArgTys, false);
 }
 
-bool Intrinsic::isOverloaded(ID id){
+bool Intrinsic::isOverloaded(ID id) {
 #define GET_INTRINSIC_OVERLOAD_TABLE
 #include "llvm/IR/IntrinsicImpl.inc"
 #undef GET_INTRINSIC_OVERLOAD_TABLE
@@ -1419,7 +1468,7 @@ bool Intrinsic::isOverloaded(ID id) {
 #include "llvm/IR/IntrinsicImpl.inc"
 #undef GET_INTRINSIC_ATTRIBUTES
 
-Function *Intrinsic::getDeclaration(Module *M, ID id, ArrayRef<Type*> Tys) {
+Function *Intrinsic::getDeclaration(Module *M, ID id, ArrayRef<Type *> Tys) {
   // There can never be multiple globals with the same name of different types,
   // because intrinsics must be a specific type.
   auto *FT = getType(M->getContext(), id, Tys);
@@ -1442,15 +1491,16 @@ Function *Intrinsic::getDeclaration(Module *M, ID id, ArrayRef<Type*> Tys) {
 using DeferredIntrinsicMatchPair =
     std::pair<Type *, ArrayRef<Intrinsic::IITDescriptor>>;
 
-static bool matchIntrinsicType(
-    Type *Ty, ArrayRef<Intrinsic::IITDescriptor> &Infos,
-    SmallVectorImpl<Type *> &ArgTys,
-    SmallVectorImpl<DeferredIntrinsicMatchPair> &DeferredChecks,
-    bool IsDeferredCheck) {
+static bool
+matchIntrinsicType(Type *Ty, ArrayRef<Intrinsic::IITDescriptor> &Infos,
+                   SmallVectorImpl<Type *> &ArgTys,
+                   SmallVectorImpl<DeferredIntrinsicMatchPair> &DeferredChecks,
+                   bool IsDeferredCheck) {
   using namespace Intrinsic;
 
   // If we ran out of descriptors, there are too many arguments.
-  if (Infos.empty()) return true;
+  if (Infos.empty())
+    return true;
 
   // Do this before slicing off the 'front' part
   auto InfosRef = Infos;
@@ -1463,184 +1513,202 @@ static bool matchIntrinsicType(
   Infos = Infos.slice(1);
 
   switch (D.Kind) {
-    case IITDescriptor::Void: return !Ty->isVoidTy();
-    case IITDescriptor::VarArg: return true;
-    case IITDescriptor::MMX:  return !Ty->isX86_MMXTy();
-    case IITDescriptor::AMX:  return !Ty->isX86_AMXTy();
-    case IITDescriptor::Token: return !Ty->isTokenTy();
-    case IITDescriptor::Metadata: return !Ty->isMetadataTy();
-    case IITDescriptor::Half: return !Ty->isHalfTy();
-    case IITDescriptor::BFloat: return !Ty->isBFloatTy();
-    case IITDescriptor::Float: return !Ty->isFloatTy();
-    case IITDescriptor::Double: return !Ty->isDoubleTy();
-    case IITDescriptor::Quad: return !Ty->isFP128Ty();
-    case IITDescriptor::PPCQuad: return !Ty->isPPC_FP128Ty();
-    case IITDescriptor::Integer: return !Ty->isIntegerTy(D.Integer_Width);
-    case IITDescriptor::AArch64Svcount:
-      return !isa<TargetExtType>(Ty) ||
-             cast<TargetExtType>(Ty)->getName() != "aarch64.svcount";
-    case IITDescriptor::Vector: {
-      VectorType *VT = dyn_cast<VectorType>(Ty);
-      return !VT || VT->getElementCount() != D.Vector_Width ||
-             matchIntrinsicType(VT->getElementType(), Infos, ArgTys,
-                                DeferredChecks, IsDeferredCheck);
-    }
-    case IITDescriptor::Pointer: {
-      PointerType *PT = dyn_cast<PointerType>(Ty);
-      return !PT || PT->getAddressSpace() != D.Pointer_AddressSpace;
-    }
+  case IITDescriptor::Void:
+    return !Ty->isVoidTy();
+  case IITDescriptor::VarArg:
+    return true;
+  case IITDescriptor::MMX:
+    return !Ty->isX86_MMXTy();
+  case IITDescriptor::AMX:
+    return !Ty->isX86_AMXTy();
+  case IITDescriptor::Token:
+    return !Ty->isTokenTy();
+  case IITDescriptor::Metadata:
+    return !Ty->isMetadataTy();
+  case IITDescriptor::Half:
+    return !Ty->isHalfTy();
+  case IITDescriptor::BFloat:
+    return !Ty->isBFloatTy();
+  case IITDescriptor::Float:
+    return !Ty->isFloatTy();
+  case IITDescriptor::Double:
+    return !Ty->isDoubleTy();
+  case IITDescriptor::Quad:
+    return !Ty->isFP128Ty();
+  case IITDescriptor::PPCQuad:
+    return !Ty->isPPC_FP128Ty();
+  case IITDescriptor::Integer:
+    return !Ty->isIntegerTy(D.Integer_Width);
+  case IITDescriptor::AArch64Svcount:
+    return !isa<TargetExtType>(Ty) ||
+           cast<TargetExtType>(Ty)->getName() != "aarch64.svcount";
+  case IITDescriptor::Vector: {
+    VectorType *VT = dyn_cast<VectorType>(Ty);
+    return !VT || VT->getElementCount() != D.Vector_Width ||
+           matchIntrinsicType(VT->getElementType(), Infos, ArgTys,
+                              DeferredChecks, IsDeferredCheck);
+  }
+  case IITDescriptor::Pointer: {
+    PointerType *PT = dyn_cast<PointerType>(Ty);
+    return !PT || PT->getAddressSpace() != D.Pointer_AddressSpace;
+  }
 
-    case IITDescriptor::Struct: {
-      StructType *ST = dyn_cast<StructType>(Ty);
-      if (!ST || !ST->isLiteral() || ST->isPacked() ||
-          ST->getNumElements() != D.Struct_NumElements)
+  case IITDescriptor::Struct: {
+    StructType *ST = dyn_cast<StructType>(Ty);
+    if (!ST || !ST->isLiteral() || ST->isPacked() ||
+        ST->getNumElements() != D.Struct_NumElements)
+      return true;
+
+    for (unsigned i = 0, e = D.Struct_NumElements; i != e; ++i)
+      if (matchIntrinsicType(ST->getElementType(i), Infos, ArgTys,
+                             DeferredChecks, IsDeferredCheck))
         return true;
+    return false;
+  }
 
-      for (unsigned i = 0, e = D.Struct_NumElements; i != e; ++i)
-        if (matchIntrinsicType(ST->getElementType(i), Infos, ArgTys,
-                               DeferredChecks, IsDeferredCheck))
-          return true;
-      return false;
+  case IITDescriptor::Argument:
+    // If this is the second occurrence of an argument,
+    // verify that the later instance matches the previous instance.
+    if (D.getArgumentNumber() < ArgTys.size())
+      return Ty != ArgTys[D.getArgumentNumber()];
+
+    if (D.getArgumentNumber() > ArgTys.size() ||
+        D.getArgumentKind() == IITDescriptor::AK_MatchType)
+      return IsDeferredCheck || DeferCheck(Ty);
+
+    assert(D.getArgumentNumber() == ArgTys.size() && !IsDeferredCheck &&
+           "Table consistency error");
+    ArgTys.push_back(Ty);
+
+    switch (D.getArgumentKind()) {
+    case IITDescriptor::AK_Any:
+      return false; // Success
+    case IITDescriptor::AK_AnyInteger:
+      return !Ty->isIntOrIntVectorTy();
+    case IITDescriptor::AK_AnyFloat:
+      return !Ty->isFPOrFPVectorTy();
+    case IITDescriptor::AK_AnyVector:
+      return !isa<VectorType>(Ty);
+    case IITDescriptor::AK_AnyPointer:
+      return !isa<PointerType>(Ty);
+    default:
+      break;
     }
+    llvm_unreachable("all argument kinds not covered");
 
-    case IITDescriptor::Argument:
-      // If this is the second occurrence of an argument,
-      // verify that the later instance matches the previous instance.
-      if (D.getArgumentNumber() < ArgTys.size())
-        return Ty != ArgTys[D.getArgumentNumber()];
-
-      if (D.getArgumentNumber() > ArgTys.size() ||
-          D.getArgumentKind() == IITDescriptor::AK_MatchType)
-        return IsDeferredCheck || DeferCheck(Ty);
-
-      assert(D.getArgumentNumber() == ArgTys.size() && !IsDeferredCheck &&
-             "Table consistency error");
-      ArgTys.push_back(Ty);
+  case IITDescriptor::ExtendArgument: {
+    // If this is a forward reference, defer the check for later.
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck || DeferCheck(Ty);
+
+    Type *NewTy = ArgTys[D.getArgumentNumber()];
+    if (VectorType *VTy = dyn_cast<VectorType>(NewTy))
+      NewTy = VectorType::getExtendedElementVectorType(VTy);
+    else if (IntegerType *ITy = dyn_cast<IntegerType>(NewTy))
+      NewTy = IntegerType::get(ITy->getContext(), 2 * ITy->getBitWidth());
+    else
+      return true;
 
-      switch (D.getArgumentKind()) {
-        case IITDescriptor::AK_Any:        return false; // Success
-        case IITDescriptor::AK_AnyInteger: return !Ty->isIntOrIntVectorTy();
-        case IITDescriptor::AK_AnyFloat:   return !Ty->isFPOrFPVectorTy();
-        case IITDescriptor::AK_AnyVector:  return !isa<VectorType>(Ty);
-        case IITDescriptor::AK_AnyPointer: return !isa<PointerType>(Ty);
-        default:                           break;
-      }
-      llvm_unreachable("all argument kinds not covered");
-
-    case IITDescriptor::ExtendArgument: {
-      // If this is a forward reference, defer the check for later.
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck || DeferCheck(Ty);
-
-      Type *NewTy = ArgTys[D.getArgumentNumber()];
-      if (VectorType *VTy = dyn_cast<VectorType>(NewTy))
-        NewTy = VectorType::getExtendedElementVectorType(VTy);
-      else if (IntegerType *ITy = dyn_cast<IntegerType>(NewTy))
-        NewTy = IntegerType::get(ITy->getContext(), 2 * ITy->getBitWidth());
-      else
-        return true;
+    return Ty != NewTy;
+  }
+  case IITDescriptor::TruncArgument: {
+    // If this is a forward reference, defer the check for later.
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck || DeferCheck(Ty);
+
+    Type *NewTy = ArgTys[D.getArgumentNumber()];
+    if (VectorType *VTy = dyn_cast<VectorType>(NewTy))
+      NewTy = VectorType::getTruncatedElementVectorType(VTy);
+    else if (IntegerType *ITy = dyn_cast<IntegerType>(NewTy))
+      NewTy = IntegerType::get(ITy->getContext(), ITy->getBitWidth() / 2);
+    else
+      return true;
 
-      return Ty != NewTy;
+    return Ty != NewTy;
+  }
+  case IITDescriptor::HalfVecArgument:
+    // If this is a forward reference, defer the check for later.
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck || DeferCheck(Ty);
+    return !isa<VectorType>(ArgTys[D.getArgumentNumber()]) ||
+           VectorType::getHalfElementsVectorType(
+               cast<VectorType>(ArgTys[D.getArgumentNumber()])) != Ty;
+  case IITDescriptor::SameVecWidthArgument: {
+    if (D.getArgumentNumber() >= ArgTys.size()) {
+      // Defer check and subsequent check for the vector element type.
+      Infos = Infos.slice(1);
+      return IsDeferredCheck || DeferCheck(Ty);
     }
-    case IITDescriptor::TruncArgument: {
-      // If this is a forward reference, defer the check for later.
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck || DeferCheck(Ty);
-
-      Type *NewTy = ArgTys[D.getArgumentNumber()];
-      if (VectorType *VTy = dyn_cast<VectorType>(NewTy))
-        NewTy = VectorType::getTruncatedElementVectorType(VTy);
-      else if (IntegerType *ITy = dyn_cast<IntegerType>(NewTy))
-        NewTy = IntegerType::get(ITy->getContext(), ITy->getBitWidth() / 2);
-      else
+    auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
+    auto *ThisArgType = dyn_cast<VectorType>(Ty);
+    // Both must be vectors of the same number of elements or neither.
+    if ((ReferenceType != nullptr) != (ThisArgType != nullptr))
+      return true;
+    Type *EltTy = Ty;
+    if (ThisArgType) {
+      if (ReferenceType->getElementCount() != ThisArgType->getElementCount())
         return true;
-
-      return Ty != NewTy;
+      EltTy = ThisArgType->getElementType();
     }
-    case IITDescriptor::HalfVecArgument:
-      // If this is a forward reference, defer the check for later.
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck || DeferCheck(Ty);
-      return !isa<VectorType>(ArgTys[D.getArgumentNumber()]) ||
-             VectorType::getHalfElementsVectorType(
-                     cast<VectorType>(ArgTys[D.getArgumentNumber()])) != Ty;
-    case IITDescriptor::SameVecWidthArgument: {
-      if (D.getArgumentNumber() >= ArgTys.size()) {
-        // Defer check and subsequent check for the vector element type.
-        Infos = Infos.slice(1);
-        return IsDeferredCheck || DeferCheck(Ty);
-      }
-      auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
-      auto *ThisArgType = dyn_cast<VectorType>(Ty);
-      // Both must be vectors of the same number of elements or neither.
-      if ((ReferenceType != nullptr) != (ThisArgType != nullptr))
+    return matchIntrinsicType(EltTy, Infos, ArgTys, DeferredChecks,
+                              IsDeferredCheck);
+  }
+  case IITDescriptor::VecOfAnyPtrsToElt: {
+    unsigned RefArgNumber = D.getRefArgNumber();
+    if (RefArgNumber >= ArgTys.size()) {
+      if (IsDeferredCheck)
         return true;
-      Type *EltTy = Ty;
-      if (ThisArgType) {
-        if (ReferenceType->getElementCount() !=
-            ThisArgType->getElementCount())
-          return true;
-        EltTy = ThisArgType->getElementType();
-      }
-      return matchIntrinsicType(EltTy, Infos, ArgTys, DeferredChecks,
-                                IsDeferredCheck);
+      // If forward referencing, already add the pointer-vector type and
+      // defer the checks for later.
+      ArgTys.push_back(Ty);
+      return DeferCheck(Ty);
     }
-    case IITDescriptor::VecOfAnyPtrsToElt: {
-      unsigned RefArgNumber = D.getRefArgNumber();
-      if (RefArgNumber >= ArgTys.size()) {
-        if (IsDeferredCheck)
-          return true;
-        // If forward referencing, already add the pointer-vector type and
-        // defer the checks for later.
-        ArgTys.push_back(Ty);
-        return DeferCheck(Ty);
-      }
-
-      if (!IsDeferredCheck){
-        assert(D.getOverloadArgNumber() == ArgTys.size() &&
-               "Table consistency error");
-        ArgTys.push_back(Ty);
-      }
 
-      // Verify the overloaded type "matches" the Ref type.
-      // i.e. Ty is a vector with the same width as Ref.
-      // Composed of pointers to the same element type as Ref.
-      auto *ReferenceType = dyn_cast<VectorType>(ArgTys[RefArgNumber]);
-      auto *ThisArgVecTy = dyn_cast<VectorType>(Ty);
-      if (!ThisArgVecTy || !ReferenceType ||
-          (ReferenceType->getElementCount() != ThisArgVecTy->getElementCount()))
-        return true;
-      return !ThisArgVecTy->getElementType()->isPointerTy();
-    }
-    case IITDescriptor::VecElementArgument: {
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck ? true : DeferCheck(Ty);
-      auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
-      return !ReferenceType || Ty != ReferenceType->getElementType();
+    if (!IsDeferredCheck) {
+      assert(D.getOverloadArgNumber() == ArgTys.size() &&
+             "Table consistency error");
+      ArgTys.push_back(Ty);
     }
-    case IITDescriptor::Subdivide2Argument:
-    case IITDescriptor::Subdivide4Argument: {
-      // If this is a forward reference, defer the check for later.
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck || DeferCheck(Ty);
-
-      Type *NewTy = ArgTys[D.getArgumentNumber()];
-      if (auto *VTy = dyn_cast<VectorType>(NewTy)) {
-        int SubDivs = D.Kind == IITDescriptor::Subdivide2Argument ? 1 : 2;
-        NewTy = VectorType::getSubdividedVectorType(VTy, SubDivs);
-        return Ty != NewTy;
-      }
+
+    // Verify the overloaded type "matches" the Ref type.
+    // i.e. Ty is a vector with the same width as Ref.
+    // Composed of pointers to the same element type as Ref.
+    auto *ReferenceType = dyn_cast<VectorType>(ArgTys[RefArgNumber]);
+    auto *ThisArgVecTy = dyn_cast<VectorType>(Ty);
+    if (!ThisArgVecTy || !ReferenceType ||
+        (ReferenceType->getElementCount() != ThisArgVecTy->getElementCount()))
       return true;
+    return !ThisArgVecTy->getElementType()->isPointerTy();
+  }
+  case IITDescriptor::VecElementArgument: {
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck ? true : DeferCheck(Ty);
+    auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
+    return !ReferenceType || Ty != ReferenceType->getElementType();
+  }
+  case IITDescriptor::Subdivide2Argument:
+  case IITDescriptor::Subdivide4Argument: {
+    // If this is a forward reference, defer the check for later.
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck || DeferCheck(Ty);
+
+    Type *NewTy = ArgTys[D.getArgumentNumber()];
+    if (auto *VTy = dyn_cast<VectorType>(NewTy)) {
+      int SubDivs = D.Kind == IITDescriptor::Subdivide2Argument ? 1 : 2;
+      NewTy = VectorType::getSubdividedVectorType(VTy, SubDivs);
+      return Ty != NewTy;
     }
-    case IITDescriptor::VecOfBitcastsToInt: {
-      if (D.getArgumentNumber() >= ArgTys.size())
-        return IsDeferredCheck || DeferCheck(Ty);
-      auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
-      auto *ThisArgVecTy = dyn_cast<VectorType>(Ty);
-      if (!ThisArgVecTy || !ReferenceType)
-        return true;
-      return ThisArgVecTy != VectorType::getInteger(ReferenceType);
-    }
+    return true;
+  }
+  case IITDescriptor::VecOfBitcastsToInt: {
+    if (D.getArgumentNumber() >= ArgTys.size())
+      return IsDeferredCheck || DeferCheck(Ty);
+    auto *ReferenceType = dyn_cast<VectorType>(ArgTys[D.getArgumentNumber()]);
+    auto *ThisArgVecTy = dyn_cast<VectorType>(Ty);
+    if (!ThisArgVecTy || !ReferenceType)
+      return true;
+    return ThisArgVecTy != VectorType::getInteger(ReferenceType);
+  }
   }
   llvm_unreachable("unhandled");
 }
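As an aside for reviewers: several of the cases reflowed above (`ExtendArgument`, `TruncArgument`, `SameVecWidthArgument`, ...) share the same deferred-check idiom — when a descriptor refers to an argument slot that has not been recorded yet, the check is queued and replayed once all argument types are known. A minimal sketch of that idiom, using hypothetical stand-in types (not the real `IITDescriptor` machinery):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Sketch of the defer-and-replay pattern from matchIntrinsicType.
// "Types" are plain ints here; returning true signals a mismatch,
// matching the convention used throughout this file.
struct Matcher {
  std::vector<int> ArgTys;                      // argument "types" seen so far
  std::vector<std::pair<std::size_t, int>> Deferred; // (arg index, expected)

  bool check(std::size_t ArgNo, int Ty, bool IsDeferredCheck) {
    if (ArgNo >= ArgTys.size()) {
      if (IsDeferredCheck)
        return true;            // still unresolved on the replay pass: error
      Deferred.push_back({ArgNo, Ty});
      return false;             // forward reference: defer instead of failing
    }
    return ArgTys[ArgNo] != Ty; // slot is known: compare directly
  }

  // Replay every queued check after the argument table is complete.
  bool replayDeferred() {
    for (auto &D : Deferred)
      if (check(D.first, D.second, /*IsDeferredCheck=*/true))
        return true;
    return false;
  }
};
```

The real code threads `DeferredChecks`/`IsDeferredCheck` through `matchIntrinsicType` recursively; the sketch only shows why a forward reference returns "no error" on the first pass.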
@@ -1671,9 +1739,8 @@ Intrinsic::matchIntrinsicSignature(FunctionType *FTy,
   return MatchIntrinsicTypes_Match;
 }
 
-bool
-Intrinsic::matchIntrinsicVarArg(bool isVarArg,
-                                ArrayRef<Intrinsic::IITDescriptor> &Infos) {
+bool Intrinsic::matchIntrinsicVarArg(
+    bool isVarArg, ArrayRef<Intrinsic::IITDescriptor> &Infos) {
   // If there are no descriptors left, then it can't be a vararg.
   if (Infos.empty())
     return isVarArg;
@@ -1877,7 +1944,7 @@ void Function::allocHungoffUselist() {
   if (getNumOperands())
     return;
 
-  allocHungoffUses(3, /*IsPhi=*/ false);
+  allocHungoffUses(3, /*IsPhi=*/false);
   setNumHungOffUseOperands(3);
 
   // Initialize the uselist with placeholder operands to allow traversal.
@@ -1887,8 +1954,7 @@ void Function::allocHungoffUselist() {
   Op<2>().set(CPN);
 }
 
-template <int Idx>
-void Function::setHungoffOperand(Constant *C) {
+template <int Idx> void Function::setHungoffOperand(Constant *C) {
   if (C) {
     allocHungoffUselist();
     Op<Idx>().set(C);
diff --git a/llvm/lib/IR/IRBuilder.cpp b/llvm/lib/IR/IRBuilder.cpp
index 974e29841e1bc63..b5f52e781cd8268 100644
--- a/llvm/lib/IR/IRBuilder.cpp
+++ b/llvm/lib/IR/IRBuilder.cpp
@@ -139,7 +139,7 @@ CallInst *IRBuilderBase::CreateMemSet(Value *Ptr, Value *Val, Value *Size,
                                       MDNode *TBAATag, MDNode *ScopeTag,
                                       MDNode *NoAliasTag) {
   Value *Ops[] = {Ptr, Val, Size, getInt1(isVolatile)};
-  Type *Tys[] = { Ptr->getType(), Size->getType() };
+  Type *Tys[] = {Ptr->getType(), Size->getType()};
   Module *M = BB->getParent()->getParent();
   Function *TheFn = Intrinsic::getDeclaration(M, Intrinsic::memset, Tys);
 
@@ -224,13 +224,13 @@ CallInst *IRBuilderBase::CreateMemTransferInst(
           IntrID == Intrinsic::memmove) &&
          "Unexpected intrinsic ID");
   Value *Ops[] = {Dst, Src, Size, getInt1(isVolatile)};
-  Type *Tys[] = { Dst->getType(), Src->getType(), Size->getType() };
+  Type *Tys[] = {Dst->getType(), Src->getType(), Size->getType()};
   Module *M = BB->getParent()->getParent();
   Function *TheFn = Intrinsic::getDeclaration(M, IntrID, Tys);
 
   CallInst *CI = CreateCall(TheFn, Ops);
 
-  auto* MCI = cast<MemTransferInst>(CI);
+  auto *MCI = cast<MemTransferInst>(CI);
   if (DstAlign)
     MCI->setDestAlignment(*DstAlign);
   if (SrcAlign)
@@ -331,7 +331,7 @@ CallInst *IRBuilderBase::CreateElementUnorderedAtomicMemMove(
 CallInst *IRBuilderBase::getReductionIntrinsic(Intrinsic::ID ID, Value *Src) {
   Module *M = GetInsertBlock()->getParent()->getParent();
   Value *Ops[] = {Src};
-  Type *Tys[] = { Src->getType() };
+  Type *Tys[] = {Src->getType()};
   auto Decl = Intrinsic::getDeclaration(M, ID, Tys);
   return CreateCall(Decl, Ops);
 }
@@ -408,7 +408,7 @@ CallInst *IRBuilderBase::CreateLifetimeStart(Value *Ptr, ConstantInt *Size) {
   else
     assert(Size->getType() == getInt64Ty() &&
            "lifetime.start requires the size to be an i64");
-  Value *Ops[] = { Size, Ptr };
+  Value *Ops[] = {Size, Ptr};
   Module *M = BB->getParent()->getParent();
   Function *TheFn =
       Intrinsic::getDeclaration(M, Intrinsic::lifetime_start, {Ptr->getType()});
@@ -423,7 +423,7 @@ CallInst *IRBuilderBase::CreateLifetimeEnd(Value *Ptr, ConstantInt *Size) {
   else
     assert(Size->getType() == getInt64Ty() &&
            "lifetime.end requires the size to be an i64");
-  Value *Ops[] = { Size, Ptr };
+  Value *Ops[] = {Size, Ptr};
   Module *M = BB->getParent()->getParent();
   Function *TheFn =
       Intrinsic::getDeclaration(M, Intrinsic::lifetime_end, {Ptr->getType()});
@@ -486,7 +486,7 @@ IRBuilderBase::CreateAssumption(Value *Cond,
   assert(Cond->getType() == getInt1Ty() &&
          "an assumption condition must be of type i1");
 
-  Value *Ops[] = { Cond };
+  Value *Ops[] = {Cond};
   Module *M = BB->getParent()->getParent();
   Function *FnAssume = Intrinsic::getDeclaration(M, Intrinsic::assume);
   return CreateCall(FnAssume, Ops, OpBundles);
@@ -516,10 +516,10 @@ CallInst *IRBuilderBase::CreateMaskedLoad(Type *Ty, Value *Ptr, Align Alignment,
   assert(Mask && "Mask should not be all-ones (null)");
   if (!PassThru)
     PassThru = PoisonValue::get(Ty);
-  Type *OverloadedTypes[] = { Ty, PtrTy };
+  Type *OverloadedTypes[] = {Ty, PtrTy};
   Value *Ops[] = {Ptr, getInt32(Alignment.value()), Mask, PassThru};
-  return CreateMaskedIntrinsic(Intrinsic::masked_load, Ops,
-                               OverloadedTypes, Name);
+  return CreateMaskedIntrinsic(Intrinsic::masked_load, Ops, OverloadedTypes,
+                               Name);
 }
 
 /// Create a call to a Masked Store intrinsic.
@@ -534,7 +534,7 @@ CallInst *IRBuilderBase::CreateMaskedStore(Value *Val, Value *Ptr,
   Type *DataTy = Val->getType();
   assert(DataTy->isVectorTy() && "Val should be a vector");
   assert(Mask && "Mask should not be all-ones (null)");
-  Type *OverloadedTypes[] = { DataTy, PtrTy };
+  Type *OverloadedTypes[] = {DataTy, PtrTy};
   Value *Ops[] = {Val, Ptr, getInt32(Alignment.value()), Mask};
   return CreateMaskedIntrinsic(Intrinsic::masked_store, Ops, OverloadedTypes);
 }
@@ -664,24 +664,24 @@ getStatepointArgs(IRBuilderBase &B, uint64_t ID, uint32_t NumPatchBytes,
   return Args;
 }
 
-template<typename T1, typename T2, typename T3>
+template <typename T1, typename T2, typename T3>
 static std::vector<OperandBundleDef>
 getStatepointBundles(std::optional<ArrayRef<T1>> TransitionArgs,
                      std::optional<ArrayRef<T2>> DeoptArgs,
                      ArrayRef<T3> GCArgs) {
   std::vector<OperandBundleDef> Rval;
   if (DeoptArgs) {
-    SmallVector<Value*, 16> DeoptValues;
+    SmallVector<Value *, 16> DeoptValues;
     llvm::append_range(DeoptValues, *DeoptArgs);
     Rval.emplace_back("deopt", DeoptValues);
   }
   if (TransitionArgs) {
-    SmallVector<Value*, 16> TransitionValues;
+    SmallVector<Value *, 16> TransitionValues;
     llvm::append_range(TransitionValues, *TransitionArgs);
     Rval.emplace_back("gc-transition", TransitionValues);
   }
   if (GCArgs.size()) {
-    SmallVector<Value*, 16> LiveValues;
+    SmallVector<Value *, 16> LiveValues;
     llvm::append_range(LiveValues, GCArgs);
     Rval.emplace_back("gc-live", LiveValues);
   }
@@ -856,7 +856,7 @@ CallInst *IRBuilderBase::CreateBinaryIntrinsic(Intrinsic::ID ID, Value *LHS,
                                                Instruction *FMFSource,
                                                const Twine &Name) {
   Module *M = BB->getModule();
-  Function *Fn = Intrinsic::getDeclaration(M, ID, { LHS->getType() });
+  Function *Fn = Intrinsic::getDeclaration(M, ID, {LHS->getType()});
   return createCallHelper(Fn, {LHS, RHS}, Name, FMFSource);
 }
 
@@ -899,8 +899,7 @@ CallInst *IRBuilderBase::CreateIntrinsic(Type *RetTy, Intrinsic::ID ID,
 
 CallInst *IRBuilderBase::CreateConstrainedFPBinOp(
     Intrinsic::ID ID, Value *L, Value *R, Instruction *FMFSource,
-    const Twine &Name, MDNode *FPMathTag,
-    std::optional<RoundingMode> Rounding,
+    const Twine &Name, MDNode *FPMathTag, std::optional<RoundingMode> Rounding,
     std::optional<fp::ExceptionBehavior> Except) {
   Value *RoundingV = getConstrainedFPRounding(Rounding);
   Value *ExceptV = getConstrainedFPExcept(Except);
@@ -909,8 +908,8 @@ CallInst *IRBuilderBase::CreateConstrainedFPBinOp(
   if (FMFSource)
     UseFMF = FMFSource->getFastMathFlags();
 
-  CallInst *C = CreateIntrinsic(ID, {L->getType()},
-                                {L, R, RoundingV, ExceptV}, nullptr, Name);
+  CallInst *C = CreateIntrinsic(ID, {L->getType()}, {L, R, RoundingV, ExceptV},
+                                nullptr, Name);
   setConstrainedFPCallAttr(C);
   setFPAttrs(C, FPMathTag, UseFMF);
   return C;
@@ -937,21 +936,20 @@ Value *IRBuilderBase::CreateNAryOp(unsigned Opc, ArrayRef<Value *> Ops,
                                    const Twine &Name, MDNode *FPMathTag) {
   if (Instruction::isBinaryOp(Opc)) {
     assert(Ops.size() == 2 && "Invalid number of operands!");
-    return CreateBinOp(static_cast<Instruction::BinaryOps>(Opc),
-                       Ops[0], Ops[1], Name, FPMathTag);
+    return CreateBinOp(static_cast<Instruction::BinaryOps>(Opc), Ops[0], Ops[1],
+                       Name, FPMathTag);
   }
   if (Instruction::isUnaryOp(Opc)) {
     assert(Ops.size() == 1 && "Invalid number of operands!");
-    return CreateUnOp(static_cast<Instruction::UnaryOps>(Opc),
-                      Ops[0], Name, FPMathTag);
+    return CreateUnOp(static_cast<Instruction::UnaryOps>(Opc), Ops[0], Name,
+                      FPMathTag);
   }
   llvm_unreachable("Unexpected opcode!");
 }
 
 CallInst *IRBuilderBase::CreateConstrainedFPCast(
-    Intrinsic::ID ID, Value *V, Type *DestTy,
-    Instruction *FMFSource, const Twine &Name, MDNode *FPMathTag,
-    std::optional<RoundingMode> Rounding,
+    Intrinsic::ID ID, Value *V, Type *DestTy, Instruction *FMFSource,
+    const Twine &Name, MDNode *FPMathTag, std::optional<RoundingMode> Rounding,
     std::optional<fp::ExceptionBehavior> Except) {
   Value *ExceptV = getConstrainedFPExcept(Except);
 
@@ -964,9 +962,9 @@ CallInst *IRBuilderBase::CreateConstrainedFPCast(
   switch (ID) {
   default:
     break;
-#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)        \
-  case Intrinsic::INTRINSIC:                                \
-    HasRoundingMD = ROUND_MODE;                             \
+#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)                         \
+  case Intrinsic::INTRINSIC:                                                   \
+    HasRoundingMD = ROUND_MODE;                                                \
     break;
 #include "llvm/IR/ConstrainedOps.def"
   }
@@ -985,9 +983,9 @@ CallInst *IRBuilderBase::CreateConstrainedFPCast(
   return C;
 }
 
-Value *IRBuilderBase::CreateFCmpHelper(
-    CmpInst::Predicate P, Value *LHS, Value *RHS, const Twine &Name,
-    MDNode *FPMathTag, bool IsSignaling) {
+Value *IRBuilderBase::CreateFCmpHelper(CmpInst::Predicate P, Value *LHS,
+                                       Value *RHS, const Twine &Name,
+                                       MDNode *FPMathTag, bool IsSignaling) {
   if (IsFPConstrained) {
     auto ID = IsSignaling ? Intrinsic::experimental_constrained_fcmps
                           : Intrinsic::experimental_constrained_fcmp;
@@ -1006,8 +1004,8 @@ CallInst *IRBuilderBase::CreateConstrainedFPCmp(
   Value *PredicateV = getConstrainedFPPredicate(P);
   Value *ExceptV = getConstrainedFPExcept(Except);
 
-  CallInst *C = CreateIntrinsic(ID, {L->getType()},
-                                {L, R, PredicateV, ExceptV}, nullptr, Name);
+  CallInst *C = CreateIntrinsic(ID, {L->getType()}, {L, R, PredicateV, ExceptV},
+                                nullptr, Name);
   setConstrainedFPCallAttr(C);
   return C;
 }
@@ -1023,9 +1021,9 @@ CallInst *IRBuilderBase::CreateConstrainedFPCall(
   switch (Callee->getIntrinsicID()) {
   default:
     break;
-#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)        \
-  case Intrinsic::INTRINSIC:                                \
-    HasRoundingMD = ROUND_MODE;                             \
+#define INSTRUCTION(NAME, NARG, ROUND_MODE, INTRINSIC)                         \
+  case Intrinsic::INTRINSIC:                                                   \
+    HasRoundingMD = ROUND_MODE;                                                \
     break;
 #include "llvm/IR/ConstrainedOps.def"
   }
@@ -1061,8 +1059,7 @@ Value *IRBuilderBase::CreatePtrDiff(Type *ElemTy, Value *LHS, Value *RHS,
   Value *LHS_int = CreatePtrToInt(LHS, Type::getInt64Ty(Context));
   Value *RHS_int = CreatePtrToInt(RHS, Type::getInt64Ty(Context));
   Value *Difference = CreateSub(LHS_int, RHS_int);
-  return CreateExactSDiv(Difference, ConstantExpr::getSizeOf(ElemTy),
-                         Name);
+  return CreateExactSDiv(Difference, ConstantExpr::getSizeOf(ElemTy), Name);
 }
 
 Value *IRBuilderBase::CreateLaunderInvariantGroup(Value *Ptr) {
@@ -1087,12 +1084,11 @@ Value *IRBuilderBase::CreateStripInvariantGroup(Value *Ptr) {
 
   auto *PtrType = Ptr->getType();
   Module *M = BB->getParent()->getParent();
-  Function *FnStripInvariantGroup = Intrinsic::getDeclaration(
-      M, Intrinsic::strip_invariant_group, {PtrType});
+  Function *FnStripInvariantGroup =
+      Intrinsic::getDeclaration(M, Intrinsic::strip_invariant_group, {PtrType});
 
   assert(FnStripInvariantGroup->getReturnType() == PtrType &&
-         FnStripInvariantGroup->getFunctionType()->getParamType(0) ==
-             PtrType &&
+         FnStripInvariantGroup->getFunctionType()->getParamType(0) == PtrType &&
          "StripInvariantGroup should take and return the same type");
 
   return CreateCall(FnStripInvariantGroup, {Ptr});
@@ -1162,9 +1158,10 @@ Value *IRBuilderBase::CreateVectorSplat(ElementCount EC, Value *V,
   return CreateShuffleVector(V, Zeros, Name + ".splat");
 }
 
-Value *IRBuilderBase::CreatePreserveArrayAccessIndex(
-    Type *ElTy, Value *Base, unsigned Dimension, unsigned LastIndex,
-    MDNode *DbgInfo) {
+Value *IRBuilderBase::CreatePreserveArrayAccessIndex(Type *ElTy, Value *Base,
+                                                     unsigned Dimension,
+                                                     unsigned LastIndex,
+                                                     MDNode *DbgInfo) {
   auto *BaseType = Base->getType();
   assert(isa<PointerType>(BaseType) &&
          "Invalid Base ptr type for preserve.array.access.index.");
@@ -1191,8 +1188,9 @@ Value *IRBuilderBase::CreatePreserveArrayAccessIndex(
   return Fn;
 }
 
-Value *IRBuilderBase::CreatePreserveUnionAccessIndex(
-    Value *Base, unsigned FieldIndex, MDNode *DbgInfo) {
+Value *IRBuilderBase::CreatePreserveUnionAccessIndex(Value *Base,
+                                                     unsigned FieldIndex,
+                                                     MDNode *DbgInfo) {
   assert(isa<PointerType>(Base->getType()) &&
          "Invalid Base ptr type for preserve.union.access.index.");
   auto *BaseType = Base->getType();
@@ -1202,17 +1200,17 @@ Value *IRBuilderBase::CreatePreserveUnionAccessIndex(
       M, Intrinsic::preserve_union_access_index, {BaseType, BaseType});
 
   Value *DIIndex = getInt32(FieldIndex);
-  CallInst *Fn =
-      CreateCall(FnPreserveUnionAccessIndex, {Base, DIIndex});
+  CallInst *Fn = CreateCall(FnPreserveUnionAccessIndex, {Base, DIIndex});
   if (DbgInfo)
     Fn->setMetadata(LLVMContext::MD_preserve_access_index, DbgInfo);
 
   return Fn;
 }
 
-Value *IRBuilderBase::CreatePreserveStructAccessIndex(
-    Type *ElTy, Value *Base, unsigned Index, unsigned FieldIndex,
-    MDNode *DbgInfo) {
+Value *IRBuilderBase::CreatePreserveStructAccessIndex(Type *ElTy, Value *Base,
+                                                      unsigned Index,
+                                                      unsigned FieldIndex,
+                                                      MDNode *DbgInfo) {
   auto *BaseType = Base->getType();
   assert(isa<PointerType>(BaseType) &&
          "Invalid Base ptr type for preserve.struct.access.index.");
@@ -1227,8 +1225,8 @@ Value *IRBuilderBase::CreatePreserveStructAccessIndex(
       M, Intrinsic::preserve_struct_access_index, {ResultType, BaseType});
 
   Value *DIIndex = getInt32(FieldIndex);
-  CallInst *Fn = CreateCall(FnPreserveStructAccessIndex,
-                            {Base, GEPIndex, DIIndex});
+  CallInst *Fn =
+      CreateCall(FnPreserveStructAccessIndex, {Base, GEPIndex, DIIndex});
   Fn->addParamAttr(
       0, Attribute::get(Fn->getContext(), Attribute::ElementType, ElTy));
   if (DbgInfo)
diff --git a/llvm/lib/IR/InlineAsm.cpp b/llvm/lib/IR/InlineAsm.cpp
index aeaa6a3741b949a..211e53ef345cf70 100644
--- a/llvm/lib/IR/InlineAsm.cpp
+++ b/llvm/lib/IR/InlineAsm.cpp
@@ -55,9 +55,7 @@ void InlineAsm::destroyConstant() {
   delete this;
 }
 
-FunctionType *InlineAsm::getFunctionType() const {
-  return FTy;
-}
+FunctionType *InlineAsm::getFunctionType() const { return FTy; }
 
 void InlineAsm::collectAsmStrs(SmallVectorImpl<StringRef> &AsmStrs) const {
   StringRef AsmStr(AsmString);
@@ -75,8 +73,8 @@ void InlineAsm::collectAsmStrs(SmallVectorImpl<StringRef> &AsmStrs) const {
 /// Parse - Analyze the specified string (e.g. "==&{eax}") and fill in the
 /// fields in this structure.  If the constraint string is not understood,
 /// return true, otherwise return false.
-bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
-                     InlineAsm::ConstraintInfoVector &ConstraintsSoFar) {
+bool InlineAsm::ConstraintInfo::Parse(
+    StringRef Str, InlineAsm::ConstraintInfoVector &ConstraintsSoFar) {
   StringRef::iterator I = Str.begin(), E = Str.end();
   unsigned multipleAlternativeCount = Str.count('|') + 1;
   unsigned multipleAlternativeIndex = 0;
@@ -116,7 +114,8 @@ bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
     ++I;
   }
 
-  if (I == E) return true;  // Just a prefix, like "==" or "~".
+  if (I == E)
+    return true; // Just a prefix, like "==" or "~".
 
   // Parse the modifiers.
   bool DoneWithModifiers = false;
@@ -125,37 +124,39 @@ bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
     default:
       DoneWithModifiers = true;
       break;
-    case '&':     // Early clobber.
-      if (Type != isOutput ||      // Cannot early clobber anything but output.
-          isEarlyClobber)          // Reject &&&&&&
+    case '&':                 // Early clobber.
+      if (Type != isOutput || // Cannot early clobber anything but output.
+          isEarlyClobber)     // Reject &&&&&&
         return true;
       isEarlyClobber = true;
       break;
-    case '%':     // Commutative.
-      if (Type == isClobber ||     // Cannot commute clobbers.
-          isCommutative)           // Reject %%%%%
+    case '%':                  // Commutative.
+      if (Type == isClobber || // Cannot commute clobbers.
+          isCommutative)       // Reject %%%%%
         return true;
       isCommutative = true;
       break;
-    case '#':     // Comment.
-    case '*':     // Register preferencing.
-      return true;     // Not supported.
+    case '#':      // Comment.
+    case '*':      // Register preferencing.
+      return true; // Not supported.
     }
 
     if (!DoneWithModifiers) {
       ++I;
-      if (I == E) return true;   // Just prefixes and modifiers!
+      if (I == E)
+        return true; // Just prefixes and modifiers!
     }
   }
 
   // Parse the various constraints.
   while (I != E) {
-    if (*I == '{') {   // Physical register reference.
+    if (*I == '{') { // Physical register reference.
       // Find the end of the register name.
-      StringRef::iterator ConstraintEnd = std::find(I+1, E, '}');
-      if (ConstraintEnd == E) return true;  // "{foo"
+      StringRef::iterator ConstraintEnd = std::find(I + 1, E, '}');
+      if (ConstraintEnd == E)
+        return true; // "{foo"
       pCodes->push_back(std::string(StringRef(I, ConstraintEnd + 1 - I)));
-      I = ConstraintEnd+1;
+      I = ConstraintEnd + 1;
     } else if (isdigit(static_cast<unsigned char>(*I))) { // Matching Constraint
       // Maximal munch numbers.
       StringRef::iterator NumStart = I;
@@ -164,9 +165,9 @@ bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
       pCodes->push_back(std::string(StringRef(NumStart, I - NumStart)));
       unsigned N = atoi(pCodes->back().c_str());
       // Check that this is a valid matching constraint!
-      if (N >= ConstraintsSoFar.size() || ConstraintsSoFar[N].Type != isOutput||
-          Type != isInput)
-        return true;  // Invalid constraint number.
+      if (N >= ConstraintsSoFar.size() ||
+          ConstraintsSoFar[N].Type != isOutput || Type != isInput)
+        return true; // Invalid constraint number.
 
       // If Operand N already has a matching input, reject this.  An output
       // can't be constrained to the same value as multiple inputs.
@@ -175,7 +176,7 @@ bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
             ConstraintsSoFar[N].multipleAlternatives.size())
           return true;
         InlineAsm::SubConstraintInfo &scInfo =
-          ConstraintsSoFar[N].multipleAlternatives[multipleAlternativeIndex];
+            ConstraintsSoFar[N].multipleAlternatives[multipleAlternativeIndex];
         if (scInfo.MatchingInput != -1)
           return true;
         // Note that operand #n has a matching input.
@@ -189,7 +190,7 @@ bool InlineAsm::ConstraintInfo::Parse(StringRef Str,
         // Note that operand #n has a matching input.
         ConstraintsSoFar[N].MatchingInput = ConstraintsSoFar.size();
         assert(ConstraintsSoFar[N].MatchingInput >= 0);
-        }
+      }
     } else if (*I == '|') {
       multipleAlternativeIndex++;
       pCodes = &multipleAlternatives[multipleAlternativeIndex].Codes;
@@ -225,7 +226,7 @@ void InlineAsm::ConstraintInfo::selectAlternative(unsigned index) {
   if (index < multipleAlternatives.size()) {
     currentAlternativeIndex = index;
     InlineAsm::SubConstraintInfo &scInfo =
-      multipleAlternatives[currentAlternativeIndex];
+        multipleAlternatives[currentAlternativeIndex];
     MatchingInput = scInfo.MatchingInput;
     Codes = scInfo.Codes;
   }
@@ -236,16 +237,16 @@ InlineAsm::ParseConstraints(StringRef Constraints) {
   ConstraintInfoVector Result;
 
   // Scan the constraints string.
-  for (StringRef::iterator I = Constraints.begin(),
-         E = Constraints.end(); I != E; ) {
+  for (StringRef::iterator I = Constraints.begin(), E = Constraints.end();
+       I != E;) {
     ConstraintInfo Info;
 
     // Find the end of this constraint.
     StringRef::iterator ConstraintEnd = std::find(I, E, ',');
 
-    if (ConstraintEnd == I ||  // Empty constraint like ",,"
-        Info.Parse(StringRef(I, ConstraintEnd-I), Result)) {
-      Result.clear();          // Erroneous constraint?
+    if (ConstraintEnd == I || // Empty constraint like ",,"
+        Info.Parse(StringRef(I, ConstraintEnd - I), Result)) {
+      Result.clear(); // Erroneous constraint?
       break;
     }
 
@@ -286,7 +287,7 @@ Error InlineAsm::verify(FunctionType *Ty, StringRef ConstStr) {
   for (const ConstraintInfo &Constraint : Constraints) {
     switch (Constraint.Type) {
     case InlineAsm::isOutput:
-      if ((NumInputs-NumIndirect) != 0 || NumClobbers != 0 || NumLabels != 0)
+      if ((NumInputs - NumIndirect) != 0 || NumClobbers != 0 || NumLabels != 0)
         return makeStringError("output constraint occurs after input, "
                                "clobber or label constraint");
 
diff --git a/llvm/lib/IR/Instruction.cpp b/llvm/lib/IR/Instruction.cpp
index 0dcf0ac6a78ab80..690bb053b2307d1 100644
--- a/llvm/lib/IR/Instruction.cpp
+++ b/llvm/lib/IR/Instruction.cpp
@@ -24,7 +24,7 @@ using namespace llvm;
 
 Instruction::Instruction(Type *ty, unsigned it, Use *Ops, unsigned NumOps,
                          Instruction *InsertBefore)
-  : User(ty, Value::InstructionVal + it, Ops, NumOps), Parent(nullptr) {
+    : User(ty, Value::InstructionVal + it, Ops, NumOps), Parent(nullptr) {
 
   // If requested, insert this instruction into a basic block...
   if (InsertBefore) {
@@ -36,7 +36,7 @@ Instruction::Instruction(Type *ty, unsigned it, Use *Ops, unsigned NumOps,
 
 Instruction::Instruction(Type *ty, unsigned it, Use *Ops, unsigned NumOps,
                          BasicBlock *InsertAtEnd)
-  : User(ty, Value::InstructionVal + it, Ops, NumOps), Parent(nullptr) {
+    : User(ty, Value::InstructionVal + it, Ops, NumOps), Parent(nullptr) {
 
   // append this instruction into the basic block
   assert(InsertAtEnd && "Basic block to append to may not be NULL!");
@@ -63,10 +63,7 @@ Instruction::~Instruction() {
   setMetadata(LLVMContext::MD_DIAssignID, nullptr);
 }
 
-
-void Instruction::setParent(BasicBlock *P) {
-  Parent = P;
-}
+void Instruction::setParent(BasicBlock *P) { Parent = P; }
 
 const Module *Instruction::getModule() const {
   return getParent()->getModule();
@@ -409,85 +406,151 @@ void Instruction::andIRFlags(const Value *V) {
 const char *Instruction::getOpcodeName(unsigned OpCode) {
   switch (OpCode) {
   // Terminators
-  case Ret:    return "ret";
-  case Br:     return "br";
-  case Switch: return "switch";
-  case IndirectBr: return "indirectbr";
-  case Invoke: return "invoke";
-  case Resume: return "resume";
-  case Unreachable: return "unreachable";
-  case CleanupRet: return "cleanupret";
-  case CatchRet: return "catchret";
-  case CatchPad: return "catchpad";
-  case CatchSwitch: return "catchswitch";
-  case CallBr: return "callbr";
+  case Ret:
+    return "ret";
+  case Br:
+    return "br";
+  case Switch:
+    return "switch";
+  case IndirectBr:
+    return "indirectbr";
+  case Invoke:
+    return "invoke";
+  case Resume:
+    return "resume";
+  case Unreachable:
+    return "unreachable";
+  case CleanupRet:
+    return "cleanupret";
+  case CatchRet:
+    return "catchret";
+  case CatchPad:
+    return "catchpad";
+  case CatchSwitch:
+    return "catchswitch";
+  case CallBr:
+    return "callbr";
 
   // Standard unary operators...
-  case FNeg: return "fneg";
+  case FNeg:
+    return "fneg";
 
   // Standard binary operators...
-  case Add: return "add";
-  case FAdd: return "fadd";
-  case Sub: return "sub";
-  case FSub: return "fsub";
-  case Mul: return "mul";
-  case FMul: return "fmul";
-  case UDiv: return "udiv";
-  case SDiv: return "sdiv";
-  case FDiv: return "fdiv";
-  case URem: return "urem";
-  case SRem: return "srem";
-  case FRem: return "frem";
+  case Add:
+    return "add";
+  case FAdd:
+    return "fadd";
+  case Sub:
+    return "sub";
+  case FSub:
+    return "fsub";
+  case Mul:
+    return "mul";
+  case FMul:
+    return "fmul";
+  case UDiv:
+    return "udiv";
+  case SDiv:
+    return "sdiv";
+  case FDiv:
+    return "fdiv";
+  case URem:
+    return "urem";
+  case SRem:
+    return "srem";
+  case FRem:
+    return "frem";
 
   // Logical operators...
-  case And: return "and";
-  case Or : return "or";
-  case Xor: return "xor";
+  case And:
+    return "and";
+  case Or:
+    return "or";
+  case Xor:
+    return "xor";
 
   // Memory instructions...
-  case Alloca:        return "alloca";
-  case Load:          return "load";
-  case Store:         return "store";
-  case AtomicCmpXchg: return "cmpxchg";
-  case AtomicRMW:     return "atomicrmw";
-  case Fence:         return "fence";
-  case GetElementPtr: return "getelementptr";
+  case Alloca:
+    return "alloca";
+  case Load:
+    return "load";
+  case Store:
+    return "store";
+  case AtomicCmpXchg:
+    return "cmpxchg";
+  case AtomicRMW:
+    return "atomicrmw";
+  case Fence:
+    return "fence";
+  case GetElementPtr:
+    return "getelementptr";
 
   // Convert instructions...
-  case Trunc:         return "trunc";
-  case ZExt:          return "zext";
-  case SExt:          return "sext";
-  case FPTrunc:       return "fptrunc";
-  case FPExt:         return "fpext";
-  case FPToUI:        return "fptoui";
-  case FPToSI:        return "fptosi";
-  case UIToFP:        return "uitofp";
-  case SIToFP:        return "sitofp";
-  case IntToPtr:      return "inttoptr";
-  case PtrToInt:      return "ptrtoint";
-  case BitCast:       return "bitcast";
-  case AddrSpaceCast: return "addrspacecast";
+  case Trunc:
+    return "trunc";
+  case ZExt:
+    return "zext";
+  case SExt:
+    return "sext";
+  case FPTrunc:
+    return "fptrunc";
+  case FPExt:
+    return "fpext";
+  case FPToUI:
+    return "fptoui";
+  case FPToSI:
+    return "fptosi";
+  case UIToFP:
+    return "uitofp";
+  case SIToFP:
+    return "sitofp";
+  case IntToPtr:
+    return "inttoptr";
+  case PtrToInt:
+    return "ptrtoint";
+  case BitCast:
+    return "bitcast";
+  case AddrSpaceCast:
+    return "addrspacecast";
 
   // Other instructions...
-  case ICmp:           return "icmp";
-  case FCmp:           return "fcmp";
-  case PHI:            return "phi";
-  case Select:         return "select";
-  case Call:           return "call";
-  case Shl:            return "shl";
-  case LShr:           return "lshr";
-  case AShr:           return "ashr";
-  case VAArg:          return "va_arg";
-  case ExtractElement: return "extractelement";
-  case InsertElement:  return "insertelement";
-  case ShuffleVector:  return "shufflevector";
-  case ExtractValue:   return "extractvalue";
-  case InsertValue:    return "insertvalue";
-  case LandingPad:     return "landingpad";
-  case CleanupPad:     return "cleanuppad";
-  case Freeze:         return "freeze";
-
-  default: return "<Invalid operator> ";
+  case ICmp:
+    return "icmp";
+  case FCmp:
+    return "fcmp";
+  case PHI:
+    return "phi";
+  case Select:
+    return "select";
+  case Call:
+    return "call";
+  case Shl:
+    return "shl";
+  case LShr:
+    return "lshr";
+  case AShr:
+    return "ashr";
+  case VAArg:
+    return "va_arg";
+  case ExtractElement:
+    return "extractelement";
+  case InsertElement:
+    return "insertelement";
+  case ShuffleVector:
+    return "shufflevector";
+  case ExtractValue:
+    return "extractvalue";
+  case InsertValue:
+    return "insertvalue";
+  case LandingPad:
+    return "landingpad";
+  case CleanupPad:
+    return "cleanuppad";
+  case Freeze:
+    return "freeze";
+
+  default:
+    return "<Invalid operator> ";
   }
 }
 
@@ -568,8 +631,7 @@ bool Instruction::isIdenticalTo(const Instruction *I) const {
 
 bool Instruction::isIdenticalToWhenDefined(const Instruction *I) const {
   if (getOpcode() != I->getOpcode() ||
-      getNumOperands() != I->getNumOperands() ||
-      getType() != I->getType())
+      getNumOperands() != I->getNumOperands() || getType() != I->getType())
     return false;
 
   // If both instructions have no operands, they are identical.
@@ -596,22 +658,22 @@ bool Instruction::isIdenticalToWhenDefined(const Instruction *I) const {
 bool Instruction::isSameOperationAs(const Instruction *I,
                                     unsigned flags) const {
   bool IgnoreAlignment = flags & CompareIgnoringAlignment;
-  bool UseScalarTypes  = flags & CompareUsingScalarTypes;
+  bool UseScalarTypes = flags & CompareUsingScalarTypes;
 
   if (getOpcode() != I->getOpcode() ||
       getNumOperands() != I->getNumOperands() ||
-      (UseScalarTypes ?
-       getType()->getScalarType() != I->getType()->getScalarType() :
-       getType() != I->getType()))
+      (UseScalarTypes
+           ? getType()->getScalarType() != I->getType()->getScalarType()
+           : getType() != I->getType()))
     return false;
 
   // We have two instructions of identical opcode and #operands.  Check to see
   // if all operands are the same type
   for (unsigned i = 0, e = getNumOperands(); i != e; ++i)
-    if (UseScalarTypes ?
-        getOperand(i)->getType()->getScalarType() !=
-          I->getOperand(i)->getType()->getScalarType() :
-        getOperand(i)->getType() != I->getOperand(i)->getType())
+    if (UseScalarTypes
+            ? getOperand(i)->getType()->getScalarType() !=
+                  I->getOperand(i)->getType()->getScalarType()
+            : getOperand(i)->getType() != I->getOperand(i)->getType())
       return false;
 
   return this->hasSameSpecialState(I, IgnoreAlignment);
@@ -637,7 +699,8 @@ bool Instruction::isUsedOutsideOfBlock(const BasicBlock *BB) const {
 
 bool Instruction::mayReadFromMemory() const {
   switch (getOpcode()) {
-  default: return false;
+  default:
+    return false;
   case Instruction::VAArg:
   case Instruction::Load:
   case Instruction::Fence: // FIXME: refine definition of mayReadFromMemory
@@ -657,7 +720,8 @@ bool Instruction::mayReadFromMemory() const {
 
 bool Instruction::mayWriteToMemory() const {
   switch (getOpcode()) {
-  default: return false;
+  default:
+    return false;
   case Instruction::Fence: // FIXME: refine definition of mayWriteToMemory
   case Instruction::Store:
   case Instruction::VAArg:
@@ -733,7 +797,8 @@ bool Instruction::isVolatile() const {
       if (auto *MI = dyn_cast<MemIntrinsic>(II))
         return MI->isVolatile();
       switch (II->getIntrinsicID()) {
-      default: break;
+      default:
+        break;
       case Intrinsic::matrix_column_major_load:
         return cast<ConstantInt>(II->getArgOperand(2))->isOne();
       case Intrinsic::matrix_column_major_store:
diff --git a/llvm/lib/IR/Instructions.cpp b/llvm/lib/IR/Instructions.cpp
index e995293dcd47d9f..6492fbdae7c4972 100644
--- a/llvm/lib/IR/Instructions.cpp
+++ b/llvm/lib/IR/Instructions.cpp
@@ -99,7 +99,7 @@ const char *SelectInst::areInvalidOperands(Value *Op0, Value *Op1, Value *Op2) {
       return "selected values for vector select must be vectors";
     if (ET->getElementCount() != VT->getElementCount())
       return "vector select requires selected vectors to have "
-                   "the same vector length as select condition";
+             "the same vector length as select condition";
   } else if (Op0->getType() != Type::getInt1Ty(Op0->getContext())) {
     return "select condition must be i1 or <n x i1>";
   }
@@ -164,9 +164,10 @@ void PHINode::removeIncomingValueIf(function_ref<bool(unsigned)> Predicate,
 
   // Remove incoming blocks.
   (void)std::remove_if(const_cast<block_iterator>(block_begin()),
-                 const_cast<block_iterator>(block_end()), [&](BasicBlock *&BB) {
-                   return RemoveIndices.contains(&BB - block_begin());
-                 });
+                       const_cast<block_iterator>(block_end()),
+                       [&](BasicBlock *&BB) {
+                         return RemoveIndices.contains(&BB - block_begin());
+                       });
 
   setNumHungOffUseOperands(getNumOperands() - RemoveIndices.size());
 
@@ -185,7 +186,8 @@ void PHINode::removeIncomingValueIf(function_ref<bool(unsigned)> Predicate,
 void PHINode::growOperands() {
   unsigned e = getNumOperands();
   unsigned NumOps = e + e / 2;
-  if (NumOps < 2) NumOps = 2;      // 2 op PHI nodes are VERY common.
+  if (NumOps < 2)
+    NumOps = 2; // 2 op PHI nodes are VERY common.
 
   ReservedSpace = NumOps;
   growHungoffUses(ReservedSpace, /* IsPhi */ true);
@@ -200,7 +202,7 @@ Value *PHINode::hasConstantValue() const {
     if (getIncomingValue(i) != ConstantValue && getIncomingValue(i) != this) {
       if (ConstantValue != this)
         return nullptr; // Incoming values not all the same.
-       // The case where the first value is this PHI.
+                        // The case where the first value is this PHI.
       ConstantValue = getIncomingValue(i);
     }
   if (ConstantValue == this)
@@ -279,7 +281,8 @@ void LandingPadInst::init(unsigned NumReservedValues, const Twine &NameStr) {
 /// push_back style of operation. This grows the number of ops by 2 times.
 void LandingPadInst::growOperands(unsigned Size) {
   unsigned e = getNumOperands();
-  if (ReservedSpace >= e + Size) return;
+  if (ReservedSpace >= e + Size)
+    return;
   ReservedSpace = (std::max(e, 1U) + Size / 2) * 2;
   growHungoffUses(ReservedSpace);
 }
@@ -322,7 +325,6 @@ CallBase *CallBase::Create(CallBase *CI, OperandBundleDef OpB,
   return CallBase::Create(CI, OpDefs, InsertPt);
 }
 
-
 Function *CallBase::getCaller() { return getParent()->getParent(); }
 
 unsigned CallBase::getNumSubclassExtraOperandsDynamic() const {
@@ -802,9 +804,8 @@ void CallInst::updateProfWeight(uint64_t S, uint64_t T) {
       // Using APInt::div may be expensive, but most cases should fit 64 bits.
       APInt Val(128, Count);
       Val *= APS;
-      Vals.push_back(MDB.createConstant(
-          ConstantInt::get(Type::getInt64Ty(getContext()),
-                           Val.udiv(APT).getLimitedValue())));
+      Vals.push_back(MDB.createConstant(ConstantInt::get(
+          Type::getInt64Ty(getContext()), Val.udiv(APT).getLimitedValue())));
     }
   setMetadata(LLVMContext::MD_prof, MDNode::get(getContext(), Vals));
 }
@@ -833,19 +834,19 @@ static Instruction *createMalloc(Instruction *InsertBefore,
     ArraySize = ConstantInt::get(IntPtrTy, 1);
   else if (ArraySize->getType() != IntPtrTy) {
     if (InsertBefore)
-      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false,
-                                              "", InsertBefore);
+      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false, "",
+                                              InsertBefore);
     else
-      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false,
-                                              "", InsertAtEnd);
+      ArraySize = CastInst::CreateIntegerCast(ArraySize, IntPtrTy, false, "",
+                                              InsertAtEnd);
   }
 
   if (!IsConstantOne(ArraySize)) {
     if (IsConstantOne(AllocSize)) {
-      AllocSize = ArraySize;         // Operand * 1 = Operand
+      AllocSize = ArraySize; // Operand * 1 = Operand
     } else if (Constant *CO = dyn_cast<Constant>(ArraySize)) {
-      Constant *Scale = ConstantExpr::getIntegerCast(CO, IntPtrTy,
-                                                     false /*ZExt*/);
+      Constant *Scale =
+          ConstantExpr::getIntegerCast(CO, IntPtrTy, false /*ZExt*/);
       // Malloc arg is constant product of type size and array size
       AllocSize = ConstantExpr::getMul(Scale, cast<Constant>(AllocSize));
     } else {
@@ -904,20 +905,18 @@ static Instruction *createMalloc(Instruction *InsertBefore,
 ///    constant 1.
 /// 2. Call malloc with that argument.
 /// 3. Bitcast the result of the malloc call to the specified type.
-Instruction *CallInst::CreateMalloc(Instruction *InsertBefore,
-                                    Type *IntPtrTy, Type *AllocTy,
-                                    Value *AllocSize, Value *ArraySize,
-                                    Function *MallocF,
+Instruction *CallInst::CreateMalloc(Instruction *InsertBefore, Type *IntPtrTy,
+                                    Type *AllocTy, Value *AllocSize,
+                                    Value *ArraySize, Function *MallocF,
                                     const Twine &Name) {
   return createMalloc(InsertBefore, nullptr, IntPtrTy, AllocTy, AllocSize,
                       ArraySize, std::nullopt, MallocF, Name);
 }
-Instruction *CallInst::CreateMalloc(Instruction *InsertBefore,
-                                    Type *IntPtrTy, Type *AllocTy,
-                                    Value *AllocSize, Value *ArraySize,
+Instruction *CallInst::CreateMalloc(Instruction *InsertBefore, Type *IntPtrTy,
+                                    Type *AllocTy, Value *AllocSize,
+                                    Value *ArraySize,
                                     ArrayRef<OperandBundleDef> OpB,
-                                    Function *MallocF,
-                                    const Twine &Name) {
+                                    Function *MallocF, const Twine &Name) {
   return createMalloc(InsertBefore, nullptr, IntPtrTy, AllocTy, AllocSize,
                       ArraySize, OpB, MallocF, Name);
 }
@@ -930,16 +929,16 @@ Instruction *CallInst::CreateMalloc(Instruction *InsertBefore,
 /// 3. Bitcast the result of the malloc call to the specified type.
 /// Note: This function does not add the bitcast to the basic block, that is the
 /// responsibility of the caller.
-Instruction *CallInst::CreateMalloc(BasicBlock *InsertAtEnd,
-                                    Type *IntPtrTy, Type *AllocTy,
-                                    Value *AllocSize, Value *ArraySize,
-                                    Function *MallocF, const Twine &Name) {
+Instruction *CallInst::CreateMalloc(BasicBlock *InsertAtEnd, Type *IntPtrTy,
+                                    Type *AllocTy, Value *AllocSize,
+                                    Value *ArraySize, Function *MallocF,
+                                    const Twine &Name) {
   return createMalloc(nullptr, InsertAtEnd, IntPtrTy, AllocTy, AllocSize,
                       ArraySize, std::nullopt, MallocF, Name);
 }
-Instruction *CallInst::CreateMalloc(BasicBlock *InsertAtEnd,
-                                    Type *IntPtrTy, Type *AllocTy,
-                                    Value *AllocSize, Value *ArraySize,
+Instruction *CallInst::CreateMalloc(BasicBlock *InsertAtEnd, Type *IntPtrTy,
+                                    Type *AllocTy, Value *AllocSize,
+                                    Value *ArraySize,
                                     ArrayRef<OperandBundleDef> OpB,
                                     Function *MallocF, const Twine &Name) {
   return createMalloc(nullptr, InsertAtEnd, IntPtrTy, AllocTy, AllocSize,
@@ -1508,11 +1507,11 @@ static Align computeAllocaDefaultAlign(Type *Ty, Instruction *I) {
 
 AllocaInst::AllocaInst(Type *Ty, unsigned AddrSpace, const Twine &Name,
                        Instruction *InsertBefore)
-  : AllocaInst(Ty, AddrSpace, /*ArraySize=*/nullptr, Name, InsertBefore) {}
+    : AllocaInst(Ty, AddrSpace, /*ArraySize=*/nullptr, Name, InsertBefore) {}
 
 AllocaInst::AllocaInst(Type *Ty, unsigned AddrSpace, const Twine &Name,
                        BasicBlock *InsertAtEnd)
-  : AllocaInst(Ty, AddrSpace, /*ArraySize=*/nullptr, Name, InsertAtEnd) {}
+    : AllocaInst(Ty, AddrSpace, /*ArraySize=*/nullptr, Name, InsertAtEnd) {}
 
 AllocaInst::AllocaInst(Type *Ty, unsigned AddrSpace, Value *ArraySize,
                        const Twine &Name, Instruction *InsertBefore)
@@ -1547,7 +1546,6 @@ AllocaInst::AllocaInst(Type *Ty, unsigned AddrSpace, Value *ArraySize,
   setName(Name);
 }
 
-
 bool AllocaInst::isArrayAllocation() const {
   if (ConstantInt *CI = dyn_cast<ConstantInt>(getOperand(0)))
     return !CI->isOne();
@@ -1559,7 +1557,8 @@ bool AllocaInst::isArrayAllocation() const {
 /// into the prolog/epilog code, so it is basically free.
 bool AllocaInst::isStaticAlloca() const {
   // Must be constant size.
-  if (!isa<ConstantInt>(getArraySize())) return false;
+  if (!isa<ConstantInt>(getArraySize()))
+    return false;
 
   // Must be in the entry block.
   const BasicBlock *Parent = getParent();
@@ -1704,7 +1703,6 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, Align Align,
   AssertOK();
 }
 
-
 //===----------------------------------------------------------------------===//
 //                       AtomicCmpXchgInst Implementation
 //===----------------------------------------------------------------------===//
@@ -1773,8 +1771,7 @@ void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
   setSyncScopeID(SSID);
   setAlignment(Alignment);
 
-  assert(getOperand(0) && getOperand(1) &&
-         "All operands must be non-null!");
+  assert(getOperand(0) && getOperand(1) && "All operands must be non-null!");
   assert(getOperand(0)->getType()->isPointerTy() &&
          "Ptr must have pointer type!");
   assert(Ordering != AtomicOrdering::NotAtomic &&
@@ -1847,17 +1844,15 @@ StringRef AtomicRMWInst::getOperationName(BinOp Op) {
 //===----------------------------------------------------------------------===//
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                     SyncScope::ID SSID,
-                     Instruction *InsertBefore)
-  : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
+                     SyncScope::ID SSID, Instruction *InsertBefore)
+    : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
   setOrdering(Ordering);
   setSyncScopeID(SSID);
 }
 
 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                     SyncScope::ID SSID,
-                     BasicBlock *InsertAtEnd)
-  : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
+                     SyncScope::ID SSID, BasicBlock *InsertAtEnd)
+    : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
   setOrdering(Ordering);
   setSyncScopeID(SSID);
 }
@@ -1945,7 +1940,8 @@ Type *GetElementPtrInst::getIndexedType(Type *Ty, ArrayRef<uint64_t> IdxList) {
 bool GetElementPtrInst::hasAllZeroIndices() const {
   for (unsigned i = 1, e = getNumOperands(); i != e; ++i) {
     if (ConstantInt *CI = dyn_cast<ConstantInt>(getOperand(i))) {
-      if (!CI->isZero()) return false;
+      if (!CI->isZero())
+        return false;
     } else {
       return false;
     }
@@ -1980,8 +1976,7 @@ bool GetElementPtrInst::accumulateConstantOffset(const DataLayout &DL,
 
 bool GetElementPtrInst::collectOffset(
     const DataLayout &DL, unsigned BitWidth,
-    MapVector<Value *, APInt> &VariableOffsets,
-    APInt &ConstantOffset) const {
+    MapVector<Value *, APInt> &VariableOffsets, APInt &ConstantOffset) const {
   // Delegate to the generic GEPOperator implementation.
   return cast<GEPOperator>(this)->collectOffset(DL, BitWidth, VariableOffsets,
                                                 ConstantOffset);
@@ -1994,10 +1989,9 @@ bool GetElementPtrInst::collectOffset(
 ExtractElementInst::ExtractElementInst(Value *Val, Value *Index,
                                        const Twine &Name,
                                        Instruction *InsertBef)
-  : Instruction(cast<VectorType>(Val->getType())->getElementType(),
-                ExtractElement,
-                OperandTraits<ExtractElementInst>::op_begin(this),
-                2, InsertBef) {
+    : Instruction(
+          cast<VectorType>(Val->getType())->getElementType(), ExtractElement,
+          OperandTraits<ExtractElementInst>::op_begin(this), 2, InsertBef) {
   assert(isValidOperands(Val, Index) &&
          "Invalid extractelement instruction operands!");
   Op<0>() = Val;
@@ -2006,12 +2000,10 @@ ExtractElementInst::ExtractElementInst(Value *Val, Value *Index,
 }
 
 ExtractElementInst::ExtractElementInst(Value *Val, Value *Index,
-                                       const Twine &Name,
-                                       BasicBlock *InsertAE)
-  : Instruction(cast<VectorType>(Val->getType())->getElementType(),
-                ExtractElement,
-                OperandTraits<ExtractElementInst>::op_begin(this),
-                2, InsertAE) {
+                                       const Twine &Name, BasicBlock *InsertAE)
+    : Instruction(
+          cast<VectorType>(Val->getType())->getElementType(), ExtractElement,
+          OperandTraits<ExtractElementInst>::op_begin(this), 2, InsertAE) {
   assert(isValidOperands(Val, Index) &&
          "Invalid extractelement instruction operands!");
 
@@ -2031,11 +2023,10 @@ bool ExtractElementInst::isValidOperands(const Value *Val, const Value *Index) {
 //===----------------------------------------------------------------------===//
 
 InsertElementInst::InsertElementInst(Value *Vec, Value *Elt, Value *Index,
-                                     const Twine &Name,
-                                     Instruction *InsertBef)
-  : Instruction(Vec->getType(), InsertElement,
-                OperandTraits<InsertElementInst>::op_begin(this),
-                3, InsertBef) {
+                                     const Twine &Name, Instruction *InsertBef)
+    : Instruction(Vec->getType(), InsertElement,
+                  OperandTraits<InsertElementInst>::op_begin(this), 3,
+                  InsertBef) {
   assert(isValidOperands(Vec, Elt, Index) &&
          "Invalid insertelement instruction operands!");
   Op<0>() = Vec;
@@ -2045,11 +2036,10 @@ InsertElementInst::InsertElementInst(Value *Vec, Value *Elt, Value *Index,
 }
 
 InsertElementInst::InsertElementInst(Value *Vec, Value *Elt, Value *Index,
-                                     const Twine &Name,
-                                     BasicBlock *InsertAE)
-  : Instruction(Vec->getType(), InsertElement,
-                OperandTraits<InsertElementInst>::op_begin(this),
-                3, InsertAE) {
+                                     const Twine &Name, BasicBlock *InsertAE)
+    : Instruction(Vec->getType(), InsertElement,
+                  OperandTraits<InsertElementInst>::op_begin(this), 3,
+                  InsertAE) {
   assert(isValidOperands(Vec, Elt, Index) &&
          "Invalid insertelement instruction operands!");
 
@@ -2062,13 +2052,14 @@ InsertElementInst::InsertElementInst(Value *Vec, Value *Elt, Value *Index,
 bool InsertElementInst::isValidOperands(const Value *Vec, const Value *Elt,
                                         const Value *Index) {
   if (!Vec->getType()->isVectorTy())
-    return false;   // First operand of insertelement must be vector type.
+    return false; // First operand of insertelement must be vector type.
 
   if (Elt->getType() != cast<VectorType>(Vec->getType())->getElementType())
-    return false;// Second operand of insertelement must be vector element type.
+    return false; // Second operand of insertelement must be vector element
+                  // type.
 
   if (!Index->getType()->isIntegerTy())
-    return false;  // Third operand of insertelement must be i32.
+    return false; // Third operand of insertelement must be i32.
   return true;
 }
 
@@ -2230,7 +2221,7 @@ bool ShuffleVectorInst::isValidOperands(const Value *V1, const Value *V2,
     unsigned V1Size = cast<FixedVectorType>(V1->getType())->getNumElements();
     for (Value *Op : MV->operands()) {
       if (auto *CI = dyn_cast<ConstantInt>(Op)) {
-        if (CI->uge(V1Size*2))
+        if (CI->uge(V1Size * 2))
           return false;
       } else if (!isa<UndefValue>(Op)) {
         return false;
@@ -2243,7 +2234,7 @@ bool ShuffleVectorInst::isValidOperands(const Value *V1, const Value *V2,
     unsigned V1Size = cast<FixedVectorType>(V1->getType())->getNumElements();
     for (unsigned i = 0, e = cast<FixedVectorType>(MaskTy)->getNumElements();
          i != e; ++i)
-      if (CDS->getElementAsInteger(i) >= V1Size*2)
+      if (CDS->getElementAsInteger(i) >= V1Size * 2)
         return false;
     return true;
   }
@@ -2280,8 +2271,8 @@ void ShuffleVectorInst::getShuffleMask(const Constant *Mask,
   }
   for (unsigned i = 0; i != NumElts; ++i) {
     Constant *C = Mask->getAggregateElement(i);
-    Result.push_back(isa<UndefValue>(C) ? -1 :
-                     cast<ConstantInt>(C)->getZExtValue());
+    Result.push_back(isa<UndefValue>(C) ? -1
+                                        : cast<ConstantInt>(C)->getZExtValue());
   }
 }
 
@@ -2324,7 +2315,8 @@ static bool isSingleSourceMaskImpl(ArrayRef<int> Mask, int NumOpElts) {
     if (UsesLHS && UsesRHS)
       return false;
   }
-  // Allow for degenerate case: completely undef mask means neither source is used.
+  // Allow for degenerate case: completely undef mask means neither source is
+  // used.
   return UsesLHS || UsesRHS;
 }
 
@@ -2887,7 +2879,8 @@ void InsertValueInst::init(Value *Agg, Value *Val, ArrayRef<unsigned> Idxs,
   assert(!Idxs.empty() && "InsertValueInst must have at least one index");
 
   assert(ExtractValueInst::getIndexedType(Agg->getType(), Idxs) ==
-         Val->getType() && "Inserted value must match indexed type!");
+             Val->getType() &&
+         "Inserted value must match indexed type!");
   Op<0>() = Agg;
   Op<1>() = Val;
 
@@ -2896,9 +2889,9 @@ void InsertValueInst::init(Value *Agg, Value *Val, ArrayRef<unsigned> Idxs,
 }
 
 InsertValueInst::InsertValueInst(const InsertValueInst &IVI)
-  : Instruction(IVI.getType(), InsertValue,
-                OperandTraits<InsertValueInst>::op_begin(this), 2),
-    Indices(IVI.Indices) {
+    : Instruction(IVI.getType(), InsertValue,
+                  OperandTraits<InsertValueInst>::op_begin(this), 2),
+      Indices(IVI.Indices) {
   Op<0>() = IVI.getOperand(0);
   Op<1>() = IVI.getOperand(1);
   SubclassOptionalData = IVI.SubclassOptionalData;
@@ -2920,8 +2913,8 @@ void ExtractValueInst::init(ArrayRef<unsigned> Idxs, const Twine &Name) {
 }
 
 ExtractValueInst::ExtractValueInst(const ExtractValueInst &EVI)
-  : UnaryInstruction(EVI.getType(), ExtractValue, EVI.getOperand(0)),
-    Indices(EVI.Indices) {
+    : UnaryInstruction(EVI.getType(), ExtractValue, EVI.getOperand(0)),
+      Indices(EVI.Indices) {
   SubclassOptionalData = EVI.SubclassOptionalData;
 }
 
@@ -2931,8 +2924,7 @@ ExtractValueInst::ExtractValueInst(const ExtractValueInst &EVI)
 // A null type is returned if the indices are invalid for the specified
 // pointer type.
 //
-Type *ExtractValueInst::getIndexedType(Type *Agg,
-                                       ArrayRef<unsigned> Idxs) {
+Type *ExtractValueInst::getIndexedType(Type *Agg, ArrayRef<unsigned> Idxs) {
   for (unsigned Index : Idxs) {
     // We can't use CompositeType::indexValid(Index) here.
     // indexValid() always returns true for arrays because getelementptr allows
@@ -2953,39 +2945,35 @@ Type *ExtractValueInst::getIndexedType(Type *Agg,
       return nullptr;
     }
   }
-  return const_cast<Type*>(Agg);
+  return const_cast<Type *>(Agg);
 }
 
 //===----------------------------------------------------------------------===//
 //                             UnaryOperator Class
 //===----------------------------------------------------------------------===//
 
-UnaryOperator::UnaryOperator(UnaryOps iType, Value *S,
-                             Type *Ty, const Twine &Name,
-                             Instruction *InsertBefore)
-  : UnaryInstruction(Ty, iType, S, InsertBefore) {
+UnaryOperator::UnaryOperator(UnaryOps iType, Value *S, Type *Ty,
+                             const Twine &Name, Instruction *InsertBefore)
+    : UnaryInstruction(Ty, iType, S, InsertBefore) {
   Op<0>() = S;
   setName(Name);
   AssertOK();
 }
 
-UnaryOperator::UnaryOperator(UnaryOps iType, Value *S,
-                             Type *Ty, const Twine &Name,
-                             BasicBlock *InsertAtEnd)
-  : UnaryInstruction(Ty, iType, S, InsertAtEnd) {
+UnaryOperator::UnaryOperator(UnaryOps iType, Value *S, Type *Ty,
+                             const Twine &Name, BasicBlock *InsertAtEnd)
+    : UnaryInstruction(Ty, iType, S, InsertAtEnd) {
   Op<0>() = S;
   setName(Name);
   AssertOK();
 }
 
-UnaryOperator *UnaryOperator::Create(UnaryOps Op, Value *S,
-                                     const Twine &Name,
+UnaryOperator *UnaryOperator::Create(UnaryOps Op, Value *S, const Twine &Name,
                                      Instruction *InsertBefore) {
   return new UnaryOperator(Op, S, S->getType(), Name, InsertBefore);
 }
 
-UnaryOperator *UnaryOperator::Create(UnaryOps Op, Value *S,
-                                     const Twine &Name,
+UnaryOperator *UnaryOperator::Create(UnaryOps Op, Value *S, const Twine &Name,
                                      BasicBlock *InsertAtEnd) {
   UnaryOperator *Res = Create(Op, S, Name);
   Res->insertInto(InsertAtEnd, InsertAtEnd->end());
@@ -3004,7 +2992,8 @@ void UnaryOperator::AssertOK() {
            "Tried to create a floating-point operation on a "
            "non-floating-point type!");
     break;
-  default: llvm_unreachable("Invalid opcode provided");
+  default:
+    llvm_unreachable("Invalid opcode provided");
   }
 #endif
 }
@@ -3013,26 +3002,20 @@ void UnaryOperator::AssertOK() {
 //                             BinaryOperator Class
 //===----------------------------------------------------------------------===//
 
-BinaryOperator::BinaryOperator(BinaryOps iType, Value *S1, Value *S2,
-                               Type *Ty, const Twine &Name,
-                               Instruction *InsertBefore)
-  : Instruction(Ty, iType,
-                OperandTraits<BinaryOperator>::op_begin(this),
-                OperandTraits<BinaryOperator>::operands(this),
-                InsertBefore) {
+BinaryOperator::BinaryOperator(BinaryOps iType, Value *S1, Value *S2, Type *Ty,
+                               const Twine &Name, Instruction *InsertBefore)
+    : Instruction(Ty, iType, OperandTraits<BinaryOperator>::op_begin(this),
+                  OperandTraits<BinaryOperator>::operands(this), InsertBefore) {
   Op<0>() = S1;
   Op<1>() = S2;
   setName(Name);
   AssertOK();
 }
 
-BinaryOperator::BinaryOperator(BinaryOps iType, Value *S1, Value *S2,
-                               Type *Ty, const Twine &Name,
-                               BasicBlock *InsertAtEnd)
-  : Instruction(Ty, iType,
-                OperandTraits<BinaryOperator>::op_begin(this),
-                OperandTraits<BinaryOperator>::operands(this),
-                InsertAtEnd) {
+BinaryOperator::BinaryOperator(BinaryOps iType, Value *S1, Value *S2, Type *Ty,
+                               const Twine &Name, BasicBlock *InsertAtEnd)
+    : Instruction(Ty, iType, OperandTraits<BinaryOperator>::op_begin(this),
+                  OperandTraits<BinaryOperator>::operands(this), InsertAtEnd) {
   Op<0>() = S1;
   Op<1>() = S2;
   setName(Name);
@@ -3041,19 +3024,22 @@ BinaryOperator::BinaryOperator(BinaryOps iType, Value *S1, Value *S2,
 
 void BinaryOperator::AssertOK() {
   Value *LHS = getOperand(0), *RHS = getOperand(1);
-  (void)LHS; (void)RHS; // Silence warnings.
+  (void)LHS;
+  (void)RHS; // Silence warnings.
   assert(LHS->getType() == RHS->getType() &&
          "Binary operator operand types must match!");
 #ifndef NDEBUG
   switch (getOpcode()) {
-  case Add: case Sub:
+  case Add:
+  case Sub:
   case Mul:
     assert(getType() == LHS->getType() &&
            "Arithmetic operation should return same type as operands!");
     assert(getType()->isIntOrIntVectorTy() &&
            "Tried to create an integer operation on a non-integer type!");
     break;
-  case FAdd: case FSub:
+  case FAdd:
+  case FSub:
   case FMul:
     assert(getType() == LHS->getType() &&
            "Arithmetic operation should return same type as operands!");
@@ -3095,14 +3081,16 @@ void BinaryOperator::AssertOK() {
     assert(getType()->isIntOrIntVectorTy() &&
            "Tried to create a shift operation on a non-integral type!");
     break;
-  case And: case Or:
+  case And:
+  case Or:
   case Xor:
     assert(getType() == LHS->getType() &&
            "Logical operation should return same type as operands!");
     assert(getType()->isIntOrIntVectorTy() &&
            "Tried to create a logical operation on a non-integral type!");
     break;
-  default: llvm_unreachable("Invalid opcode provided");
+  default:
+    llvm_unreachable("Invalid opcode provided");
   }
 #endif
 }
@@ -3126,17 +3114,15 @@ BinaryOperator *BinaryOperator::Create(BinaryOps Op, Value *S1, Value *S2,
 BinaryOperator *BinaryOperator::CreateNeg(Value *Op, const Twine &Name,
                                           Instruction *InsertBefore) {
   Value *Zero = ConstantInt::get(Op->getType(), 0);
-  return new BinaryOperator(Instruction::Sub,
-                            Zero, Op,
-                            Op->getType(), Name, InsertBefore);
+  return new BinaryOperator(Instruction::Sub, Zero, Op, Op->getType(), Name,
+                            InsertBefore);
 }
 
 BinaryOperator *BinaryOperator::CreateNeg(Value *Op, const Twine &Name,
                                           BasicBlock *InsertAtEnd) {
   Value *Zero = ConstantInt::get(Op->getType(), 0);
-  return new BinaryOperator(Instruction::Sub,
-                            Zero, Op,
-                            Op->getType(), Name, InsertAtEnd);
+  return new BinaryOperator(Instruction::Sub, Zero, Op, Op->getType(), Name,
+                            InsertAtEnd);
 }
 
 BinaryOperator *BinaryOperator::CreateNSWNeg(Value *Op, const Twine &Name,
@@ -3166,15 +3152,15 @@ BinaryOperator *BinaryOperator::CreateNUWNeg(Value *Op, const Twine &Name,
 BinaryOperator *BinaryOperator::CreateNot(Value *Op, const Twine &Name,
                                           Instruction *InsertBefore) {
   Constant *C = Constant::getAllOnesValue(Op->getType());
-  return new BinaryOperator(Instruction::Xor, Op, C,
-                            Op->getType(), Name, InsertBefore);
+  return new BinaryOperator(Instruction::Xor, Op, C, Op->getType(), Name,
+                            InsertBefore);
 }
 
 BinaryOperator *BinaryOperator::CreateNot(Value *Op, const Twine &Name,
                                           BasicBlock *InsertAtEnd) {
   Constant *AllOnes = Constant::getAllOnesValue(Op->getType());
-  return new BinaryOperator(Instruction::Xor, Op, AllOnes,
-                            Op->getType(), Name, InsertAtEnd);
+  return new BinaryOperator(Instruction::Xor, Op, AllOnes, Op->getType(), Name,
+                            InsertAtEnd);
 }
 
 // Exchange the two operands to this instruction. This instruction is safe to
@@ -3208,14 +3194,14 @@ float FPMathOperator::getFPAccuracy() const {
 // Just determine if this cast only deals with integral->integral conversion.
 bool CastInst::isIntegerCast() const {
   switch (getOpcode()) {
-    default: return false;
-    case Instruction::ZExt:
-    case Instruction::SExt:
-    case Instruction::Trunc:
-      return true;
-    case Instruction::BitCast:
-      return getOperand(0)->getType()->isIntegerTy() &&
-        getType()->isIntegerTy();
+  default:
+    return false;
+  case Instruction::ZExt:
+  case Instruction::SExt:
+  case Instruction::Trunc:
+    return true;
+  case Instruction::BitCast:
+    return getOperand(0)->getType()->isIntegerTy() && getType()->isIntegerTy();
   }
 }
 
@@ -3227,33 +3213,32 @@ bool CastInst::isIntegerCast() const {
 /// # bitcast <2 x i32> %x to <4 x i16>
 /// # ptrtoint i32* %x to i32     ; on 32-bit platforms only
 /// Determine if the described cast is a no-op.
-bool CastInst::isNoopCast(Instruction::CastOps Opcode,
-                          Type *SrcTy,
-                          Type *DestTy,
-                          const DataLayout &DL) {
+bool CastInst::isNoopCast(Instruction::CastOps Opcode, Type *SrcTy,
+                          Type *DestTy, const DataLayout &DL) {
   assert(castIsValid(Opcode, SrcTy, DestTy) && "method precondition");
   switch (Opcode) {
-    default: llvm_unreachable("Invalid CastOp");
-    case Instruction::Trunc:
-    case Instruction::ZExt:
-    case Instruction::SExt:
-    case Instruction::FPTrunc:
-    case Instruction::FPExt:
-    case Instruction::UIToFP:
-    case Instruction::SIToFP:
-    case Instruction::FPToUI:
-    case Instruction::FPToSI:
-    case Instruction::AddrSpaceCast:
-      // TODO: Target informations may give a more accurate answer here.
-      return false;
-    case Instruction::BitCast:
-      return true;  // BitCast never modifies bits.
-    case Instruction::PtrToInt:
-      return DL.getIntPtrType(SrcTy)->getScalarSizeInBits() ==
-             DestTy->getScalarSizeInBits();
-    case Instruction::IntToPtr:
-      return DL.getIntPtrType(DestTy)->getScalarSizeInBits() ==
-             SrcTy->getScalarSizeInBits();
+  default:
+    llvm_unreachable("Invalid CastOp");
+  case Instruction::Trunc:
+  case Instruction::ZExt:
+  case Instruction::SExt:
+  case Instruction::FPTrunc:
+  case Instruction::FPExt:
+  case Instruction::UIToFP:
+  case Instruction::SIToFP:
+  case Instruction::FPToUI:
+  case Instruction::FPToSI:
+  case Instruction::AddrSpaceCast:
+    // TODO: Target information may give a more accurate answer here.
+    return false;
+  case Instruction::BitCast:
+    return true; // BitCast never modifies bits.
+  case Instruction::PtrToInt:
+    return DL.getIntPtrType(SrcTy)->getScalarSizeInBits() ==
+           DestTy->getScalarSizeInBits();
+  case Instruction::IntToPtr:
+    return DL.getIntPtrType(DestTy)->getScalarSizeInBits() ==
+           SrcTy->getScalarSizeInBits();
   }
 }
 
@@ -3269,10 +3254,11 @@ bool CastInst::isNoopCast(const DataLayout &DL) const {
 /// The function returns a resultOpcode so these two casts can be replaced with:
 /// *  %Replacement = resultOpcode %SrcTy %x to DstTy
 /// If no such cast is permitted, the function returns 0.
-unsigned CastInst::isEliminableCastPair(
-  Instruction::CastOps firstOp, Instruction::CastOps secondOp,
-  Type *SrcTy, Type *MidTy, Type *DstTy, Type *SrcIntPtrTy, Type *MidIntPtrTy,
-  Type *DstIntPtrTy) {
+unsigned CastInst::isEliminableCastPair(Instruction::CastOps firstOp,
+                                        Instruction::CastOps secondOp,
+                                        Type *SrcTy, Type *MidTy, Type *DstTy,
+                                        Type *SrcIntPtrTy, Type *MidIntPtrTy,
+                                        Type *DstIntPtrTy) {
   // Define the 144 possibilities for these two cast instructions. The values
   // in this matrix determine what to do in a given situation and select the
   // case in the switch below.  The rows correspond to firstOp, the columns
@@ -3304,284 +3290,300 @@ unsigned CastInst::isEliminableCastPair(
   // and causes issues when building libgcc.  We disallow fptosi+sext for the
   // same reason.
   const unsigned numCastOps =
-    Instruction::CastOpsEnd - Instruction::CastOpsBegin;
+      Instruction::CastOpsEnd - Instruction::CastOpsBegin;
   static const uint8_t CastResults[numCastOps][numCastOps] = {
-    // T        F  F  U  S  F  F  P  I  B  A  -+
-    // R  Z  S  P  P  I  I  T  P  2  N  T  S   |
-    // U  E  E  2  2  2  2  R  E  I  T  C  C   +- secondOp
-    // N  X  X  U  S  F  F  N  X  N  2  V  V   |
-    // C  T  T  I  I  P  P  C  T  T  P  T  T  -+
-    {  1, 0, 0,99,99, 0, 0,99,99,99, 0, 3, 0}, // Trunc         -+
-    {  8, 1, 9,99,99, 2,17,99,99,99, 2, 3, 0}, // ZExt           |
-    {  8, 0, 1,99,99, 0, 2,99,99,99, 0, 3, 0}, // SExt           |
-    {  0, 0, 0,99,99, 0, 0,99,99,99, 0, 3, 0}, // FPToUI         |
-    {  0, 0, 0,99,99, 0, 0,99,99,99, 0, 3, 0}, // FPToSI         |
-    { 99,99,99, 0, 0,99,99, 0, 0,99,99, 4, 0}, // UIToFP         +- firstOp
-    { 99,99,99, 0, 0,99,99, 0, 0,99,99, 4, 0}, // SIToFP         |
-    { 99,99,99, 0, 0,99,99, 0, 0,99,99, 4, 0}, // FPTrunc        |
-    { 99,99,99, 2, 2,99,99, 8, 2,99,99, 4, 0}, // FPExt          |
-    {  1, 0, 0,99,99, 0, 0,99,99,99, 7, 3, 0}, // PtrToInt       |
-    { 99,99,99,99,99,99,99,99,99,11,99,15, 0}, // IntToPtr       |
-    {  5, 5, 5, 6, 6, 5, 5, 6, 6,16, 5, 1,14}, // BitCast        |
-    {  0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,13,12}, // AddrSpaceCast -+
+      // T        F  F  U  S  F  F  P  I  B  A  -+
+      // R  Z  S  P  P  I  I  T  P  2  N  T  S   |
+      // U  E  E  2  2  2  2  R  E  I  T  C  C   +- secondOp
+      // N  X  X  U  S  F  F  N  X  N  2  V  V   |
+      // C  T  T  I  I  P  P  C  T  T  P  T  T  -+
+      {1, 0, 0, 99, 99, 0, 0, 99, 99, 99, 0, 3, 0},  // Trunc         -+
+      {8, 1, 9, 99, 99, 2, 17, 99, 99, 99, 2, 3, 0}, // ZExt           |
+      {8, 0, 1, 99, 99, 0, 2, 99, 99, 99, 0, 3, 0},  // SExt           |
+      {0, 0, 0, 99, 99, 0, 0, 99, 99, 99, 0, 3, 0},  // FPToUI         |
+      {0, 0, 0, 99, 99, 0, 0, 99, 99, 99, 0, 3, 0},  // FPToSI         |
+      {99, 99, 99, 0, 0, 99, 99, 0, 0, 99, 99, 4,
+       0}, // UIToFP         +- firstOp
+      {99, 99, 99, 0, 0, 99, 99, 0, 0, 99, 99, 4, 0},      // SIToFP         |
+      {99, 99, 99, 0, 0, 99, 99, 0, 0, 99, 99, 4, 0},      // FPTrunc        |
+      {99, 99, 99, 2, 2, 99, 99, 8, 2, 99, 99, 4, 0},      // FPExt          |
+      {1, 0, 0, 99, 99, 0, 0, 99, 99, 99, 7, 3, 0},        // PtrToInt       |
+      {99, 99, 99, 99, 99, 99, 99, 99, 99, 11, 99, 15, 0}, // IntToPtr       |
+      {5, 5, 5, 6, 6, 5, 5, 6, 6, 16, 5, 1, 14},           // BitCast        |
+      {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 12},           // AddrSpaceCast -+
   };
 
   // TODO: This logic could be encoded into the table above and handled in the
   // switch below.
   // If either of the casts are a bitcast from scalar to vector, disallow the
   // merging. However, any pair of bitcasts are allowed.
-  bool IsFirstBitcast  = (firstOp == Instruction::BitCast);
+  bool IsFirstBitcast = (firstOp == Instruction::BitCast);
   bool IsSecondBitcast = (secondOp == Instruction::BitCast);
   bool AreBothBitcasts = IsFirstBitcast && IsSecondBitcast;
 
   // Check if any of the casts convert scalars <-> vectors.
-  if ((IsFirstBitcast  && isa<VectorType>(SrcTy) != isa<VectorType>(MidTy)) ||
+  if ((IsFirstBitcast && isa<VectorType>(SrcTy) != isa<VectorType>(MidTy)) ||
       (IsSecondBitcast && isa<VectorType>(MidTy) != isa<VectorType>(DstTy)))
     if (!AreBothBitcasts)
       return 0;
 
-  int ElimCase = CastResults[firstOp-Instruction::CastOpsBegin]
-                            [secondOp-Instruction::CastOpsBegin];
+  int ElimCase = CastResults[firstOp - Instruction::CastOpsBegin]
+                            [secondOp - Instruction::CastOpsBegin];
   switch (ElimCase) {
-    case 0:
-      // Categorically disallowed.
-      return 0;
-    case 1:
-      // Allowed, use first cast's opcode.
+  case 0:
+    // Categorically disallowed.
+    return 0;
+  case 1:
+    // Allowed, use first cast's opcode.
+    return firstOp;
+  case 2:
+    // Allowed, use second cast's opcode.
+    return secondOp;
+  case 3:
+    // No-op cast in second op implies firstOp as long as the DestTy
+    // is integer and we are not converting between a vector and a
+    // non-vector type.
+    if (!SrcTy->isVectorTy() && DstTy->isIntegerTy())
       return firstOp;
-    case 2:
-      // Allowed, use second cast's opcode.
+    return 0;
+  case 4:
+    // No-op cast in second op implies firstOp as long as the DestTy
+    // is floating point.
+    if (DstTy->isFloatingPointTy())
+      return firstOp;
+    return 0;
+  case 5:
+    // No-op cast in first op implies secondOp as long as the SrcTy
+    // is an integer.
+    if (SrcTy->isIntegerTy())
       return secondOp;
-    case 3:
-      // No-op cast in second op implies firstOp as long as the DestTy
-      // is integer and we are not converting between a vector and a
-      // non-vector type.
-      if (!SrcTy->isVectorTy() && DstTy->isIntegerTy())
-        return firstOp;
-      return 0;
-    case 4:
-      // No-op cast in second op implies firstOp as long as the DestTy
-      // is floating point.
-      if (DstTy->isFloatingPointTy())
-        return firstOp;
-      return 0;
-    case 5:
-      // No-op cast in first op implies secondOp as long as the SrcTy
-      // is an integer.
-      if (SrcTy->isIntegerTy())
-        return secondOp;
-      return 0;
-    case 6:
-      // No-op cast in first op implies secondOp as long as the SrcTy
-      // is a floating point.
-      if (SrcTy->isFloatingPointTy())
-        return secondOp;
-      return 0;
-    case 7: {
-      // Disable inttoptr/ptrtoint optimization if enabled.
-      if (DisableI2pP2iOpt)
-        return 0;
-
-      // Cannot simplify if address spaces are different!
-      if (SrcTy->getPointerAddressSpace() != DstTy->getPointerAddressSpace())
-        return 0;
-
-      unsigned MidSize = MidTy->getScalarSizeInBits();
-      // We can still fold this without knowing the actual sizes as long we
-      // know that the intermediate pointer is the largest possible
-      // pointer size.
-      // FIXME: Is this always true?
-      if (MidSize == 64)
-        return Instruction::BitCast;
-
-      // ptrtoint, inttoptr -> bitcast (ptr -> ptr) if int size is >= ptr size.
-      if (!SrcIntPtrTy || DstIntPtrTy != SrcIntPtrTy)
-        return 0;
-      unsigned PtrSize = SrcIntPtrTy->getScalarSizeInBits();
-      if (MidSize >= PtrSize)
-        return Instruction::BitCast;
+    return 0;
+  case 6:
+    // No-op cast in first op implies secondOp as long as the SrcTy
+    // is a floating point.
+    if (SrcTy->isFloatingPointTy())
+      return secondOp;
+    return 0;
+  case 7: {
+    // Disable inttoptr/ptrtoint optimization if enabled.
+    if (DisableI2pP2iOpt)
       return 0;
-    }
-    case 8: {
-      // ext, trunc -> bitcast,    if the SrcTy and DstTy are the same
-      // ext, trunc -> ext,        if sizeof(SrcTy) < sizeof(DstTy)
-      // ext, trunc -> trunc,      if sizeof(SrcTy) > sizeof(DstTy)
-      unsigned SrcSize = SrcTy->getScalarSizeInBits();
-      unsigned DstSize = DstTy->getScalarSizeInBits();
-      if (SrcTy == DstTy)
-        return Instruction::BitCast;
-      if (SrcSize < DstSize)
-        return firstOp;
-      if (SrcSize > DstSize)
-        return secondOp;
+
+    // Cannot simplify if address spaces are different!
+    if (SrcTy->getPointerAddressSpace() != DstTy->getPointerAddressSpace())
       return 0;
-    }
-    case 9:
-      // zext, sext -> zext, because sext can't sign extend after zext
-      return Instruction::ZExt;
-    case 11: {
-      // inttoptr, ptrtoint -> bitcast if SrcSize<=PtrSize and SrcSize==DstSize
-      if (!MidIntPtrTy)
-        return 0;
-      unsigned PtrSize = MidIntPtrTy->getScalarSizeInBits();
-      unsigned SrcSize = SrcTy->getScalarSizeInBits();
-      unsigned DstSize = DstTy->getScalarSizeInBits();
-      if (SrcSize <= PtrSize && SrcSize == DstSize)
-        return Instruction::BitCast;
+
+    unsigned MidSize = MidTy->getScalarSizeInBits();
+    // We can still fold this without knowing the actual sizes as long as we
+    // know that the intermediate pointer is the largest possible
+    // pointer size.
+    // FIXME: Is this always true?
+    if (MidSize == 64)
+      return Instruction::BitCast;
+
+    // ptrtoint, inttoptr -> bitcast (ptr -> ptr) if int size is >= ptr size.
+    if (!SrcIntPtrTy || DstIntPtrTy != SrcIntPtrTy)
       return 0;
-    }
-    case 12:
-      // addrspacecast, addrspacecast -> bitcast,       if SrcAS == DstAS
-      // addrspacecast, addrspacecast -> addrspacecast, if SrcAS != DstAS
-      if (SrcTy->getPointerAddressSpace() != DstTy->getPointerAddressSpace())
-        return Instruction::AddrSpaceCast;
+    unsigned PtrSize = SrcIntPtrTy->getScalarSizeInBits();
+    if (MidSize >= PtrSize)
       return Instruction::BitCast;
-    case 13:
-      // FIXME: this state can be merged with (1), but the following assert
-      // is useful to check the correcteness of the sequence due to semantic
-      // change of bitcast.
-      assert(
-        SrcTy->isPtrOrPtrVectorTy() &&
-        MidTy->isPtrOrPtrVectorTy() &&
-        DstTy->isPtrOrPtrVectorTy() &&
-        SrcTy->getPointerAddressSpace() != MidTy->getPointerAddressSpace() &&
-        MidTy->getPointerAddressSpace() == DstTy->getPointerAddressSpace() &&
-        "Illegal addrspacecast, bitcast sequence!");
-      // Allowed, use first cast's opcode
-      return firstOp;
-    case 14:
-      // bitcast, addrspacecast -> addrspacecast
-      return Instruction::AddrSpaceCast;
-    case 15:
-      // FIXME: this state can be merged with (1), but the following assert
-      // is useful to check the correcteness of the sequence due to semantic
-      // change of bitcast.
-      assert(
-        SrcTy->isIntOrIntVectorTy() &&
-        MidTy->isPtrOrPtrVectorTy() &&
-        DstTy->isPtrOrPtrVectorTy() &&
-        MidTy->getPointerAddressSpace() == DstTy->getPointerAddressSpace() &&
-        "Illegal inttoptr, bitcast sequence!");
-      // Allowed, use first cast's opcode
+    return 0;
+  }
+  case 8: {
+    // ext, trunc -> bitcast,    if the SrcTy and DstTy are the same
+    // ext, trunc -> ext,        if sizeof(SrcTy) < sizeof(DstTy)
+    // ext, trunc -> trunc,      if sizeof(SrcTy) > sizeof(DstTy)
+    unsigned SrcSize = SrcTy->getScalarSizeInBits();
+    unsigned DstSize = DstTy->getScalarSizeInBits();
+    if (SrcTy == DstTy)
+      return Instruction::BitCast;
+    if (SrcSize < DstSize)
       return firstOp;
-    case 16:
-      // FIXME: this state can be merged with (2), but the following assert
-      // is useful to check the correcteness of the sequence due to semantic
-      // change of bitcast.
-      assert(
-        SrcTy->isPtrOrPtrVectorTy() &&
-        MidTy->isPtrOrPtrVectorTy() &&
-        DstTy->isIntOrIntVectorTy() &&
-        SrcTy->getPointerAddressSpace() == MidTy->getPointerAddressSpace() &&
-        "Illegal bitcast, ptrtoint sequence!");
-      // Allowed, use second cast's opcode
+    if (SrcSize > DstSize)
       return secondOp;
-    case 17:
-      // (sitofp (zext x)) -> (uitofp x)
-      return Instruction::UIToFP;
-    case 99:
-      // Cast combination can't happen (error in input). This is for all cases
-      // where the MidTy is not the same for the two cast instructions.
-      llvm_unreachable("Invalid Cast Combination");
-    default:
-      llvm_unreachable("Error in CastResults table!!!");
+    return 0;
+  }
+  case 9:
+    // zext, sext -> zext, because sext can't sign extend after zext
+    return Instruction::ZExt;
+  case 11: {
+    // inttoptr, ptrtoint -> bitcast if SrcSize<=PtrSize and SrcSize==DstSize
+    if (!MidIntPtrTy)
+      return 0;
+    unsigned PtrSize = MidIntPtrTy->getScalarSizeInBits();
+    unsigned SrcSize = SrcTy->getScalarSizeInBits();
+    unsigned DstSize = DstTy->getScalarSizeInBits();
+    if (SrcSize <= PtrSize && SrcSize == DstSize)
+      return Instruction::BitCast;
+    return 0;
+  }
+  case 12:
+    // addrspacecast, addrspacecast -> bitcast,       if SrcAS == DstAS
+    // addrspacecast, addrspacecast -> addrspacecast, if SrcAS != DstAS
+    if (SrcTy->getPointerAddressSpace() != DstTy->getPointerAddressSpace())
+      return Instruction::AddrSpaceCast;
+    return Instruction::BitCast;
+  case 13:
+    // FIXME: this state can be merged with (1), but the following assert
+    // is useful to check the correctness of the sequence due to semantic
+    // change of bitcast.
+    assert(SrcTy->isPtrOrPtrVectorTy() && MidTy->isPtrOrPtrVectorTy() &&
+           DstTy->isPtrOrPtrVectorTy() &&
+           SrcTy->getPointerAddressSpace() != MidTy->getPointerAddressSpace() &&
+           MidTy->getPointerAddressSpace() == DstTy->getPointerAddressSpace() &&
+           "Illegal addrspacecast, bitcast sequence!");
+    // Allowed, use first cast's opcode
+    return firstOp;
+  case 14:
+    // bitcast, addrspacecast -> addrspacecast
+    return Instruction::AddrSpaceCast;
+  case 15:
+    // FIXME: this state can be merged with (1), but the following assert
+    // is useful to check the correctness of the sequence due to semantic
+    // change of bitcast.
+    assert(SrcTy->isIntOrIntVectorTy() && MidTy->isPtrOrPtrVectorTy() &&
+           DstTy->isPtrOrPtrVectorTy() &&
+           MidTy->getPointerAddressSpace() == DstTy->getPointerAddressSpace() &&
+           "Illegal inttoptr, bitcast sequence!");
+    // Allowed, use first cast's opcode
+    return firstOp;
+  case 16:
+    // FIXME: this state can be merged with (2), but the following assert
+    // is useful to check the correctness of the sequence due to semantic
+    // change of bitcast.
+    assert(SrcTy->isPtrOrPtrVectorTy() && MidTy->isPtrOrPtrVectorTy() &&
+           DstTy->isIntOrIntVectorTy() &&
+           SrcTy->getPointerAddressSpace() == MidTy->getPointerAddressSpace() &&
+           "Illegal bitcast, ptrtoint sequence!");
+    // Allowed, use second cast's opcode
+    return secondOp;
+  case 17:
+    // (sitofp (zext x)) -> (uitofp x)
+    return Instruction::UIToFP;
+  case 99:
+    // Cast combination can't happen (error in input). This is for all cases
+    // where the MidTy is not the same for the two cast instructions.
+    llvm_unreachable("Invalid Cast Combination");
+  default:
+    llvm_unreachable("Error in CastResults table!!!");
   }
 }
 
 CastInst *CastInst::Create(Instruction::CastOps op, Value *S, Type *Ty,
-  const Twine &Name, Instruction *InsertBefore) {
+                           const Twine &Name, Instruction *InsertBefore) {
   assert(castIsValid(op, S, Ty) && "Invalid cast!");
   // Construct and return the appropriate CastInst subclass
   switch (op) {
-  case Trunc:         return new TruncInst         (S, Ty, Name, InsertBefore);
-  case ZExt:          return new ZExtInst          (S, Ty, Name, InsertBefore);
-  case SExt:          return new SExtInst          (S, Ty, Name, InsertBefore);
-  case FPTrunc:       return new FPTruncInst       (S, Ty, Name, InsertBefore);
-  case FPExt:         return new FPExtInst         (S, Ty, Name, InsertBefore);
-  case UIToFP:        return new UIToFPInst        (S, Ty, Name, InsertBefore);
-  case SIToFP:        return new SIToFPInst        (S, Ty, Name, InsertBefore);
-  case FPToUI:        return new FPToUIInst        (S, Ty, Name, InsertBefore);
-  case FPToSI:        return new FPToSIInst        (S, Ty, Name, InsertBefore);
-  case PtrToInt:      return new PtrToIntInst      (S, Ty, Name, InsertBefore);
-  case IntToPtr:      return new IntToPtrInst      (S, Ty, Name, InsertBefore);
-  case BitCast:       return new BitCastInst       (S, Ty, Name, InsertBefore);
-  case AddrSpaceCast: return new AddrSpaceCastInst (S, Ty, Name, InsertBefore);
-  default: llvm_unreachable("Invalid opcode provided");
+  case Trunc:
+    return new TruncInst(S, Ty, Name, InsertBefore);
+  case ZExt:
+    return new ZExtInst(S, Ty, Name, InsertBefore);
+  case SExt:
+    return new SExtInst(S, Ty, Name, InsertBefore);
+  case FPTrunc:
+    return new FPTruncInst(S, Ty, Name, InsertBefore);
+  case FPExt:
+    return new FPExtInst(S, Ty, Name, InsertBefore);
+  case UIToFP:
+    return new UIToFPInst(S, Ty, Name, InsertBefore);
+  case SIToFP:
+    return new SIToFPInst(S, Ty, Name, InsertBefore);
+  case FPToUI:
+    return new FPToUIInst(S, Ty, Name, InsertBefore);
+  case FPToSI:
+    return new FPToSIInst(S, Ty, Name, InsertBefore);
+  case PtrToInt:
+    return new PtrToIntInst(S, Ty, Name, InsertBefore);
+  case IntToPtr:
+    return new IntToPtrInst(S, Ty, Name, InsertBefore);
+  case BitCast:
+    return new BitCastInst(S, Ty, Name, InsertBefore);
+  case AddrSpaceCast:
+    return new AddrSpaceCastInst(S, Ty, Name, InsertBefore);
+  default:
+    llvm_unreachable("Invalid opcode provided");
   }
 }
 
 CastInst *CastInst::Create(Instruction::CastOps op, Value *S, Type *Ty,
-  const Twine &Name, BasicBlock *InsertAtEnd) {
+                           const Twine &Name, BasicBlock *InsertAtEnd) {
   assert(castIsValid(op, S, Ty) && "Invalid cast!");
   // Construct and return the appropriate CastInst subclass
   switch (op) {
-  case Trunc:         return new TruncInst         (S, Ty, Name, InsertAtEnd);
-  case ZExt:          return new ZExtInst          (S, Ty, Name, InsertAtEnd);
-  case SExt:          return new SExtInst          (S, Ty, Name, InsertAtEnd);
-  case FPTrunc:       return new FPTruncInst       (S, Ty, Name, InsertAtEnd);
-  case FPExt:         return new FPExtInst         (S, Ty, Name, InsertAtEnd);
-  case UIToFP:        return new UIToFPInst        (S, Ty, Name, InsertAtEnd);
-  case SIToFP:        return new SIToFPInst        (S, Ty, Name, InsertAtEnd);
-  case FPToUI:        return new FPToUIInst        (S, Ty, Name, InsertAtEnd);
-  case FPToSI:        return new FPToSIInst        (S, Ty, Name, InsertAtEnd);
-  case PtrToInt:      return new PtrToIntInst      (S, Ty, Name, InsertAtEnd);
-  case IntToPtr:      return new IntToPtrInst      (S, Ty, Name, InsertAtEnd);
-  case BitCast:       return new BitCastInst       (S, Ty, Name, InsertAtEnd);
-  case AddrSpaceCast: return new AddrSpaceCastInst (S, Ty, Name, InsertAtEnd);
-  default: llvm_unreachable("Invalid opcode provided");
+  case Trunc:
+    return new TruncInst(S, Ty, Name, InsertAtEnd);
+  case ZExt:
+    return new ZExtInst(S, Ty, Name, InsertAtEnd);
+  case SExt:
+    return new SExtInst(S, Ty, Name, InsertAtEnd);
+  case FPTrunc:
+    return new FPTruncInst(S, Ty, Name, InsertAtEnd);
+  case FPExt:
+    return new FPExtInst(S, Ty, Name, InsertAtEnd);
+  case UIToFP:
+    return new UIToFPInst(S, Ty, Name, InsertAtEnd);
+  case SIToFP:
+    return new SIToFPInst(S, Ty, Name, InsertAtEnd);
+  case FPToUI:
+    return new FPToUIInst(S, Ty, Name, InsertAtEnd);
+  case FPToSI:
+    return new FPToSIInst(S, Ty, Name, InsertAtEnd);
+  case PtrToInt:
+    return new PtrToIntInst(S, Ty, Name, InsertAtEnd);
+  case IntToPtr:
+    return new IntToPtrInst(S, Ty, Name, InsertAtEnd);
+  case BitCast:
+    return new BitCastInst(S, Ty, Name, InsertAtEnd);
+  case AddrSpaceCast:
+    return new AddrSpaceCastInst(S, Ty, Name, InsertAtEnd);
+  default:
+    llvm_unreachable("Invalid opcode provided");
   }
 }
 
-CastInst *CastInst::CreateZExtOrBitCast(Value *S, Type *Ty,
-                                        const Twine &Name,
+CastInst *CastInst::CreateZExtOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                         Instruction *InsertBefore) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertBefore);
   return Create(Instruction::ZExt, S, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateZExtOrBitCast(Value *S, Type *Ty,
-                                        const Twine &Name,
+CastInst *CastInst::CreateZExtOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                         BasicBlock *InsertAtEnd) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertAtEnd);
   return Create(Instruction::ZExt, S, Ty, Name, InsertAtEnd);
 }
 
-CastInst *CastInst::CreateSExtOrBitCast(Value *S, Type *Ty,
-                                        const Twine &Name,
+CastInst *CastInst::CreateSExtOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                         Instruction *InsertBefore) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertBefore);
   return Create(Instruction::SExt, S, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateSExtOrBitCast(Value *S, Type *Ty,
-                                        const Twine &Name,
+CastInst *CastInst::CreateSExtOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                         BasicBlock *InsertAtEnd) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertAtEnd);
   return Create(Instruction::SExt, S, Ty, Name, InsertAtEnd);
 }
 
-CastInst *CastInst::CreateTruncOrBitCast(Value *S, Type *Ty,
-                                         const Twine &Name,
+CastInst *CastInst::CreateTruncOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                          Instruction *InsertBefore) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertBefore);
   return Create(Instruction::Trunc, S, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateTruncOrBitCast(Value *S, Type *Ty,
-                                         const Twine &Name,
+CastInst *CastInst::CreateTruncOrBitCast(Value *S, Type *Ty, const Twine &Name,
                                          BasicBlock *InsertAtEnd) {
   if (S->getType()->getScalarSizeInBits() == Ty->getScalarSizeInBits())
     return Create(Instruction::BitCast, S, Ty, Name, InsertAtEnd);
   return Create(Instruction::Trunc, S, Ty, Name, InsertAtEnd);
 }
 
-CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty,
-                                      const Twine &Name,
+CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty, const Twine &Name,
                                       BasicBlock *InsertAtEnd) {
   assert(S->getType()->isPtrOrPtrVectorTy() && "Invalid cast");
   assert((Ty->isIntOrIntVectorTy() || Ty->isPtrOrPtrVectorTy()) &&
@@ -3599,8 +3601,7 @@ CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty,
 }
 
 /// Create a BitCast or a PtrToInt cast instruction
-CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty,
-                                      const Twine &Name,
+CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty, const Twine &Name,
                                       Instruction *InsertBefore) {
   assert(S->getType()->isPtrOrPtrVectorTy() && "Invalid cast");
   assert((Ty->isIntOrIntVectorTy() || Ty->isPtrOrPtrVectorTy()) &&
@@ -3618,9 +3619,7 @@ CastInst *CastInst::CreatePointerCast(Value *S, Type *Ty,
 }
 
 CastInst *CastInst::CreatePointerBitCastOrAddrSpaceCast(
-  Value *S, Type *Ty,
-  const Twine &Name,
-  BasicBlock *InsertAtEnd) {
+    Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd) {
   assert(S->getType()->isPtrOrPtrVectorTy() && "Invalid cast");
   assert(Ty->isPtrOrPtrVectorTy() && "Invalid cast");
 
@@ -3631,9 +3630,7 @@ CastInst *CastInst::CreatePointerBitCastOrAddrSpaceCast(
 }
 
 CastInst *CastInst::CreatePointerBitCastOrAddrSpaceCast(
-  Value *S, Type *Ty,
-  const Twine &Name,
-  Instruction *InsertBefore) {
+    Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore) {
   assert(S->getType()->isPtrOrPtrVectorTy() && "Invalid cast");
   assert(Ty->isPtrOrPtrVectorTy() && "Invalid cast");
 
@@ -3654,57 +3651,61 @@ CastInst *CastInst::CreateBitOrPointerCast(Value *S, Type *Ty,
   return Create(Instruction::BitCast, S, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateIntegerCast(Value *C, Type *Ty,
-                                      bool isSigned, const Twine &Name,
+CastInst *CastInst::CreateIntegerCast(Value *C, Type *Ty, bool isSigned,
+                                      const Twine &Name,
                                       Instruction *InsertBefore) {
   assert(C->getType()->isIntOrIntVectorTy() && Ty->isIntOrIntVectorTy() &&
          "Invalid integer cast");
   unsigned SrcBits = C->getType()->getScalarSizeInBits();
   unsigned DstBits = Ty->getScalarSizeInBits();
   Instruction::CastOps opcode =
-    (SrcBits == DstBits ? Instruction::BitCast :
-     (SrcBits > DstBits ? Instruction::Trunc :
-      (isSigned ? Instruction::SExt : Instruction::ZExt)));
+      (SrcBits == DstBits
+           ? Instruction::BitCast
+           : (SrcBits > DstBits
+                  ? Instruction::Trunc
+                  : (isSigned ? Instruction::SExt : Instruction::ZExt)));
   return Create(opcode, C, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateIntegerCast(Value *C, Type *Ty,
-                                      bool isSigned, const Twine &Name,
+CastInst *CastInst::CreateIntegerCast(Value *C, Type *Ty, bool isSigned,
+                                      const Twine &Name,
                                       BasicBlock *InsertAtEnd) {
   assert(C->getType()->isIntOrIntVectorTy() && Ty->isIntOrIntVectorTy() &&
          "Invalid cast");
   unsigned SrcBits = C->getType()->getScalarSizeInBits();
   unsigned DstBits = Ty->getScalarSizeInBits();
   Instruction::CastOps opcode =
-    (SrcBits == DstBits ? Instruction::BitCast :
-     (SrcBits > DstBits ? Instruction::Trunc :
-      (isSigned ? Instruction::SExt : Instruction::ZExt)));
+      (SrcBits == DstBits
+           ? Instruction::BitCast
+           : (SrcBits > DstBits
+                  ? Instruction::Trunc
+                  : (isSigned ? Instruction::SExt : Instruction::ZExt)));
   return Create(opcode, C, Ty, Name, InsertAtEnd);
 }
 
-CastInst *CastInst::CreateFPCast(Value *C, Type *Ty,
-                                 const Twine &Name,
+CastInst *CastInst::CreateFPCast(Value *C, Type *Ty, const Twine &Name,
                                  Instruction *InsertBefore) {
   assert(C->getType()->isFPOrFPVectorTy() && Ty->isFPOrFPVectorTy() &&
          "Invalid cast");
   unsigned SrcBits = C->getType()->getScalarSizeInBits();
   unsigned DstBits = Ty->getScalarSizeInBits();
   Instruction::CastOps opcode =
-    (SrcBits == DstBits ? Instruction::BitCast :
-     (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt));
+      (SrcBits == DstBits
+           ? Instruction::BitCast
+           : (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt));
   return Create(opcode, C, Ty, Name, InsertBefore);
 }
 
-CastInst *CastInst::CreateFPCast(Value *C, Type *Ty,
-                                 const Twine &Name,
+CastInst *CastInst::CreateFPCast(Value *C, Type *Ty, const Twine &Name,
                                  BasicBlock *InsertAtEnd) {
   assert(C->getType()->isFPOrFPVectorTy() && Ty->isFPOrFPVectorTy() &&
          "Invalid cast");
   unsigned SrcBits = C->getType()->getScalarSizeInBits();
   unsigned DstBits = Ty->getScalarSizeInBits();
   Instruction::CastOps opcode =
-    (SrcBits == DstBits ? Instruction::BitCast :
-     (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt));
+      (SrcBits == DstBits
+           ? Instruction::BitCast
+           : (SrcBits > DstBits ? Instruction::FPTrunc : Instruction::FPExt));
   return Create(opcode, C, Ty, Name, InsertAtEnd);
 }
 
@@ -3769,9 +3770,8 @@ bool CastInst::isBitOrNoopPointerCastable(Type *SrcTy, Type *DestTy,
 //   castIsValid( getCastOpcode(Val, Ty), Val, Ty)
 // should not assert in castIsValid. In other words, this produces a "correct"
 // casting opcode for the arguments passed to it.
-Instruction::CastOps
-CastInst::getCastOpcode(
-  const Value *Src, bool SrcIsSigned, Type *DestTy, bool DestIsSigned) {
+Instruction::CastOps CastInst::getCastOpcode(const Value *Src, bool SrcIsSigned,
+                                             Type *DestTy, bool DestIsSigned) {
   Type *SrcTy = Src->getType();
 
   assert(SrcTy->isFirstClassType() && DestTy->isFirstClassType() &&
@@ -3795,50 +3795,50 @@ CastInst::getCastOpcode(
   unsigned DestBits = DestTy->getPrimitiveSizeInBits(); // 0 for ptr
 
   // Run through the possibilities ...
-  if (DestTy->isIntegerTy()) {                      // Casting to integral
-    if (SrcTy->isIntegerTy()) {                     // Casting from integral
+  if (DestTy->isIntegerTy()) {  // Casting to integral
+    if (SrcTy->isIntegerTy()) { // Casting from integral
       if (DestBits < SrcBits)
-        return Trunc;                               // int -> smaller int
-      else if (DestBits > SrcBits) {                // its an extension
+        return Trunc;                // int -> smaller int
+      else if (DestBits > SrcBits) { // it's an extension
         if (SrcIsSigned)
-          return SExt;                              // signed -> SEXT
+          return SExt; // signed -> SEXT
         else
-          return ZExt;                              // unsigned -> ZEXT
+          return ZExt; // unsigned -> ZEXT
       } else {
-        return BitCast;                             // Same size, No-op cast
+        return BitCast; // Same size, No-op cast
       }
-    } else if (SrcTy->isFloatingPointTy()) {        // Casting from floating pt
+    } else if (SrcTy->isFloatingPointTy()) { // Casting from floating pt
       if (DestIsSigned)
-        return FPToSI;                              // FP -> sint
+        return FPToSI; // FP -> sint
       else
-        return FPToUI;                              // FP -> uint
+        return FPToUI; // FP -> uint
     } else if (SrcTy->isVectorTy()) {
       assert(DestBits == SrcBits &&
              "Casting vector to integer of different width");
-      return BitCast;                             // Same size, no-op cast
+      return BitCast; // Same size, no-op cast
     } else {
       assert(SrcTy->isPointerTy() &&
              "Casting from a value that is not first-class type");
-      return PtrToInt;                              // ptr -> int
+      return PtrToInt; // ptr -> int
     }
-  } else if (DestTy->isFloatingPointTy()) {         // Casting to floating pt
-    if (SrcTy->isIntegerTy()) {                     // Casting from integral
+  } else if (DestTy->isFloatingPointTy()) { // Casting to floating pt
+    if (SrcTy->isIntegerTy()) {             // Casting from integral
       if (SrcIsSigned)
-        return SIToFP;                              // sint -> FP
+        return SIToFP; // sint -> FP
       else
-        return UIToFP;                              // uint -> FP
-    } else if (SrcTy->isFloatingPointTy()) {        // Casting from floating pt
+        return UIToFP;                       // uint -> FP
+    } else if (SrcTy->isFloatingPointTy()) { // Casting from floating pt
       if (DestBits < SrcBits) {
-        return FPTrunc;                             // FP -> smaller FP
+        return FPTrunc; // FP -> smaller FP
       } else if (DestBits > SrcBits) {
-        return FPExt;                               // FP -> larger FP
-      } else  {
-        return BitCast;                             // same size, no-op cast
+        return FPExt; // FP -> larger FP
+      } else {
+        return BitCast; // same size, no-op cast
       }
     } else if (SrcTy->isVectorTy()) {
       assert(DestBits == SrcBits &&
              "Casting vector to floating point of different width");
-      return BitCast;                             // same size, no-op cast
+      return BitCast; // same size, no-op cast
     }
     llvm_unreachable("Casting pointer or non-first class to float");
   } else if (DestTy->isVectorTy()) {
@@ -3849,15 +3849,15 @@ CastInst::getCastOpcode(
     if (SrcTy->isPointerTy()) {
       if (DestTy->getPointerAddressSpace() != SrcTy->getPointerAddressSpace())
         return AddrSpaceCast;
-      return BitCast;                               // ptr -> ptr
+      return BitCast; // ptr -> ptr
     } else if (SrcTy->isIntegerTy()) {
-      return IntToPtr;                              // int -> ptr
+      return IntToPtr; // int -> ptr
     }
     llvm_unreachable("Casting pointer to other than pointer or int");
   } else if (DestTy->isX86_MMXTy()) {
     if (SrcTy->isVectorTy()) {
       assert(DestBits == SrcBits && "Casting vector of wrong width to X86_MMX");
-      return BitCast;                               // 64-bit vector to MMX
+      return BitCast; // 64-bit vector to MMX
     }
     llvm_unreachable("Illegal cast to X86_MMX");
   }
@@ -3872,8 +3872,7 @@ CastInst::getCastOpcode(
 /// could be broken out into the separate constructors but it is useful to have
 /// it in one place and to eliminate the redundant code for getting the sizes
 /// of the types involved.
-bool
-CastInst::castIsValid(Instruction::CastOps op, Type *SrcTy, Type *DstTy) {
+bool CastInst::castIsValid(Instruction::CastOps op, Type *SrcTy, Type *DstTy) {
   if (!SrcTy->isFirstClassType() || !DstTy->isFirstClassType() ||
       SrcTy->isAggregateType() || DstTy->isAggregateType())
     return false;
@@ -3895,7 +3894,8 @@ CastInst::castIsValid(Instruction::CastOps op, Type *SrcTy, Type *DstTy) {
 
   // Switch on the opcode provided
   switch (op) {
-  default: return false; // This is an input error
+  default:
+    return false; // This is an input error
   case Instruction::Trunc:
     return SrcTy->isIntOrIntVectorTy() && DstTy->isIntOrIntVectorTy() &&
            SrcEC == DstEC && SrcScalarBitSize > DstScalarBitSize;
@@ -3972,158 +3972,158 @@ CastInst::castIsValid(Instruction::CastOps op, Type *SrcTy, Type *DstTy) {
   }
 }
 
-TruncInst::TruncInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, Trunc, S, Name, InsertBefore) {
+TruncInst::TruncInst(Value *S, Type *Ty, const Twine &Name,
+                     Instruction *InsertBefore)
+    : CastInst(Ty, Trunc, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal Trunc");
 }
 
-TruncInst::TruncInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, Trunc, S, Name, InsertAtEnd) {
+TruncInst::TruncInst(Value *S, Type *Ty, const Twine &Name,
+                     BasicBlock *InsertAtEnd)
+    : CastInst(Ty, Trunc, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal Trunc");
 }
 
-ZExtInst::ZExtInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-)  : CastInst(Ty, ZExt, S, Name, InsertBefore) {
+ZExtInst::ZExtInst(Value *S, Type *Ty, const Twine &Name,
+                   Instruction *InsertBefore)
+    : CastInst(Ty, ZExt, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal ZExt");
 }
 
-ZExtInst::ZExtInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-)  : CastInst(Ty, ZExt, S, Name, InsertAtEnd) {
+ZExtInst::ZExtInst(Value *S, Type *Ty, const Twine &Name,
+                   BasicBlock *InsertAtEnd)
+    : CastInst(Ty, ZExt, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal ZExt");
 }
-SExtInst::SExtInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, SExt, S, Name, InsertBefore) {
+SExtInst::SExtInst(Value *S, Type *Ty, const Twine &Name,
+                   Instruction *InsertBefore)
+    : CastInst(Ty, SExt, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal SExt");
 }
 
-SExtInst::SExtInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-)  : CastInst(Ty, SExt, S, Name, InsertAtEnd) {
+SExtInst::SExtInst(Value *S, Type *Ty, const Twine &Name,
+                   BasicBlock *InsertAtEnd)
+    : CastInst(Ty, SExt, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal SExt");
 }
 
-FPTruncInst::FPTruncInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, FPTrunc, S, Name, InsertBefore) {
+FPTruncInst::FPTruncInst(Value *S, Type *Ty, const Twine &Name,
+                         Instruction *InsertBefore)
+    : CastInst(Ty, FPTrunc, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPTrunc");
 }
 
-FPTruncInst::FPTruncInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, FPTrunc, S, Name, InsertAtEnd) {
+FPTruncInst::FPTruncInst(Value *S, Type *Ty, const Twine &Name,
+                         BasicBlock *InsertAtEnd)
+    : CastInst(Ty, FPTrunc, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPTrunc");
 }
 
-FPExtInst::FPExtInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, FPExt, S, Name, InsertBefore) {
+FPExtInst::FPExtInst(Value *S, Type *Ty, const Twine &Name,
+                     Instruction *InsertBefore)
+    : CastInst(Ty, FPExt, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPExt");
 }
 
-FPExtInst::FPExtInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, FPExt, S, Name, InsertAtEnd) {
+FPExtInst::FPExtInst(Value *S, Type *Ty, const Twine &Name,
+                     BasicBlock *InsertAtEnd)
+    : CastInst(Ty, FPExt, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPExt");
 }
 
-UIToFPInst::UIToFPInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, UIToFP, S, Name, InsertBefore) {
+UIToFPInst::UIToFPInst(Value *S, Type *Ty, const Twine &Name,
+                       Instruction *InsertBefore)
+    : CastInst(Ty, UIToFP, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal UIToFP");
 }
 
-UIToFPInst::UIToFPInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, UIToFP, S, Name, InsertAtEnd) {
+UIToFPInst::UIToFPInst(Value *S, Type *Ty, const Twine &Name,
+                       BasicBlock *InsertAtEnd)
+    : CastInst(Ty, UIToFP, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal UIToFP");
 }
 
-SIToFPInst::SIToFPInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, SIToFP, S, Name, InsertBefore) {
+SIToFPInst::SIToFPInst(Value *S, Type *Ty, const Twine &Name,
+                       Instruction *InsertBefore)
+    : CastInst(Ty, SIToFP, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal SIToFP");
 }
 
-SIToFPInst::SIToFPInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, SIToFP, S, Name, InsertAtEnd) {
+SIToFPInst::SIToFPInst(Value *S, Type *Ty, const Twine &Name,
+                       BasicBlock *InsertAtEnd)
+    : CastInst(Ty, SIToFP, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal SIToFP");
 }
 
-FPToUIInst::FPToUIInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, FPToUI, S, Name, InsertBefore) {
+FPToUIInst::FPToUIInst(Value *S, Type *Ty, const Twine &Name,
+                       Instruction *InsertBefore)
+    : CastInst(Ty, FPToUI, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPToUI");
 }
 
-FPToUIInst::FPToUIInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, FPToUI, S, Name, InsertAtEnd) {
+FPToUIInst::FPToUIInst(Value *S, Type *Ty, const Twine &Name,
+                       BasicBlock *InsertAtEnd)
+    : CastInst(Ty, FPToUI, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPToUI");
 }
 
-FPToSIInst::FPToSIInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, FPToSI, S, Name, InsertBefore) {
+FPToSIInst::FPToSIInst(Value *S, Type *Ty, const Twine &Name,
+                       Instruction *InsertBefore)
+    : CastInst(Ty, FPToSI, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPToSI");
 }
 
-FPToSIInst::FPToSIInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, FPToSI, S, Name, InsertAtEnd) {
+FPToSIInst::FPToSIInst(Value *S, Type *Ty, const Twine &Name,
+                       BasicBlock *InsertAtEnd)
+    : CastInst(Ty, FPToSI, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal FPToSI");
 }
 
-PtrToIntInst::PtrToIntInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, PtrToInt, S, Name, InsertBefore) {
+PtrToIntInst::PtrToIntInst(Value *S, Type *Ty, const Twine &Name,
+                           Instruction *InsertBefore)
+    : CastInst(Ty, PtrToInt, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal PtrToInt");
 }
 
-PtrToIntInst::PtrToIntInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, PtrToInt, S, Name, InsertAtEnd) {
+PtrToIntInst::PtrToIntInst(Value *S, Type *Ty, const Twine &Name,
+                           BasicBlock *InsertAtEnd)
+    : CastInst(Ty, PtrToInt, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal PtrToInt");
 }
 
-IntToPtrInst::IntToPtrInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, IntToPtr, S, Name, InsertBefore) {
+IntToPtrInst::IntToPtrInst(Value *S, Type *Ty, const Twine &Name,
+                           Instruction *InsertBefore)
+    : CastInst(Ty, IntToPtr, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal IntToPtr");
 }
 
-IntToPtrInst::IntToPtrInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, IntToPtr, S, Name, InsertAtEnd) {
+IntToPtrInst::IntToPtrInst(Value *S, Type *Ty, const Twine &Name,
+                           BasicBlock *InsertAtEnd)
+    : CastInst(Ty, IntToPtr, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal IntToPtr");
 }
 
-BitCastInst::BitCastInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, BitCast, S, Name, InsertBefore) {
+BitCastInst::BitCastInst(Value *S, Type *Ty, const Twine &Name,
+                         Instruction *InsertBefore)
+    : CastInst(Ty, BitCast, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal BitCast");
 }
 
-BitCastInst::BitCastInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, BitCast, S, Name, InsertAtEnd) {
+BitCastInst::BitCastInst(Value *S, Type *Ty, const Twine &Name,
+                         BasicBlock *InsertAtEnd)
+    : CastInst(Ty, BitCast, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal BitCast");
 }
 
-AddrSpaceCastInst::AddrSpaceCastInst(
-  Value *S, Type *Ty, const Twine &Name, Instruction *InsertBefore
-) : CastInst(Ty, AddrSpaceCast, S, Name, InsertBefore) {
+AddrSpaceCastInst::AddrSpaceCastInst(Value *S, Type *Ty, const Twine &Name,
+                                     Instruction *InsertBefore)
+    : CastInst(Ty, AddrSpaceCast, S, Name, InsertBefore) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal AddrSpaceCast");
 }
 
-AddrSpaceCastInst::AddrSpaceCastInst(
-  Value *S, Type *Ty, const Twine &Name, BasicBlock *InsertAtEnd
-) : CastInst(Ty, AddrSpaceCast, S, Name, InsertAtEnd) {
+AddrSpaceCastInst::AddrSpaceCastInst(Value *S, Type *Ty, const Twine &Name,
+                                     BasicBlock *InsertAtEnd)
+    : CastInst(Ty, AddrSpaceCast, S, Name, InsertAtEnd) {
   assert(castIsValid(getOpcode(), S, Ty) && "Illegal AddrSpaceCast");
 }
 
@@ -4134,10 +4134,8 @@ AddrSpaceCastInst::AddrSpaceCastInst(
 CmpInst::CmpInst(Type *ty, OtherOps op, Predicate predicate, Value *LHS,
                  Value *RHS, const Twine &Name, Instruction *InsertBefore,
                  Instruction *FlagsSource)
-  : Instruction(ty, op,
-                OperandTraits<CmpInst>::op_begin(this),
-                OperandTraits<CmpInst>::operands(this),
-                InsertBefore) {
+    : Instruction(ty, op, OperandTraits<CmpInst>::op_begin(this),
+                  OperandTraits<CmpInst>::operands(this), InsertBefore) {
   Op<0>() = LHS;
   Op<1>() = RHS;
   setPredicate((Predicate)predicate);
@@ -4148,45 +4146,39 @@ CmpInst::CmpInst(Type *ty, OtherOps op, Predicate predicate, Value *LHS,
 
 CmpInst::CmpInst(Type *ty, OtherOps op, Predicate predicate, Value *LHS,
                  Value *RHS, const Twine &Name, BasicBlock *InsertAtEnd)
-  : Instruction(ty, op,
-                OperandTraits<CmpInst>::op_begin(this),
-                OperandTraits<CmpInst>::operands(this),
-                InsertAtEnd) {
+    : Instruction(ty, op, OperandTraits<CmpInst>::op_begin(this),
+                  OperandTraits<CmpInst>::operands(this), InsertAtEnd) {
   Op<0>() = LHS;
   Op<1>() = RHS;
   setPredicate((Predicate)predicate);
   setName(Name);
 }
 
-CmpInst *
-CmpInst::Create(OtherOps Op, Predicate predicate, Value *S1, Value *S2,
-                const Twine &Name, Instruction *InsertBefore) {
+CmpInst *CmpInst::Create(OtherOps Op, Predicate predicate, Value *S1, Value *S2,
+                         const Twine &Name, Instruction *InsertBefore) {
   if (Op == Instruction::ICmp) {
     if (InsertBefore)
-      return new ICmpInst(InsertBefore, CmpInst::Predicate(predicate),
-                          S1, S2, Name);
+      return new ICmpInst(InsertBefore, CmpInst::Predicate(predicate), S1, S2,
+                          Name);
     else
-      return new ICmpInst(CmpInst::Predicate(predicate),
-                          S1, S2, Name);
+      return new ICmpInst(CmpInst::Predicate(predicate), S1, S2, Name);
   }
 
   if (InsertBefore)
-    return new FCmpInst(InsertBefore, CmpInst::Predicate(predicate),
-                        S1, S2, Name);
+    return new FCmpInst(InsertBefore, CmpInst::Predicate(predicate), S1, S2,
+                        Name);
   else
-    return new FCmpInst(CmpInst::Predicate(predicate),
-                        S1, S2, Name);
+    return new FCmpInst(CmpInst::Predicate(predicate), S1, S2, Name);
 }
 
-CmpInst *
-CmpInst::Create(OtherOps Op, Predicate predicate, Value *S1, Value *S2,
-                const Twine &Name, BasicBlock *InsertAtEnd) {
+CmpInst *CmpInst::Create(OtherOps Op, Predicate predicate, Value *S1, Value *S2,
+                         const Twine &Name, BasicBlock *InsertAtEnd) {
   if (Op == Instruction::ICmp) {
-    return new ICmpInst(*InsertAtEnd, CmpInst::Predicate(predicate),
-                        S1, S2, Name);
+    return new ICmpInst(*InsertAtEnd, CmpInst::Predicate(predicate), S1, S2,
+                        Name);
   }
-  return new FCmpInst(*InsertAtEnd, CmpInst::Predicate(predicate),
-                      S1, S2, Name);
+  return new FCmpInst(*InsertAtEnd, CmpInst::Predicate(predicate), S1, S2,
+                      Name);
 }
 
 void CmpInst::swapOperands() {
@@ -4212,66 +4204,120 @@ bool CmpInst::isEquality(Predicate P) {
 
 CmpInst::Predicate CmpInst::getInversePredicate(Predicate pred) {
   switch (pred) {
-    default: llvm_unreachable("Unknown cmp predicate!");
-    case ICMP_EQ: return ICMP_NE;
-    case ICMP_NE: return ICMP_EQ;
-    case ICMP_UGT: return ICMP_ULE;
-    case ICMP_ULT: return ICMP_UGE;
-    case ICMP_UGE: return ICMP_ULT;
-    case ICMP_ULE: return ICMP_UGT;
-    case ICMP_SGT: return ICMP_SLE;
-    case ICMP_SLT: return ICMP_SGE;
-    case ICMP_SGE: return ICMP_SLT;
-    case ICMP_SLE: return ICMP_SGT;
-
-    case FCMP_OEQ: return FCMP_UNE;
-    case FCMP_ONE: return FCMP_UEQ;
-    case FCMP_OGT: return FCMP_ULE;
-    case FCMP_OLT: return FCMP_UGE;
-    case FCMP_OGE: return FCMP_ULT;
-    case FCMP_OLE: return FCMP_UGT;
-    case FCMP_UEQ: return FCMP_ONE;
-    case FCMP_UNE: return FCMP_OEQ;
-    case FCMP_UGT: return FCMP_OLE;
-    case FCMP_ULT: return FCMP_OGE;
-    case FCMP_UGE: return FCMP_OLT;
-    case FCMP_ULE: return FCMP_OGT;
-    case FCMP_ORD: return FCMP_UNO;
-    case FCMP_UNO: return FCMP_ORD;
-    case FCMP_TRUE: return FCMP_FALSE;
-    case FCMP_FALSE: return FCMP_TRUE;
+  default:
+    llvm_unreachable("Unknown cmp predicate!");
+  case ICMP_EQ:
+    return ICMP_NE;
+  case ICMP_NE:
+    return ICMP_EQ;
+  case ICMP_UGT:
+    return ICMP_ULE;
+  case ICMP_ULT:
+    return ICMP_UGE;
+  case ICMP_UGE:
+    return ICMP_ULT;
+  case ICMP_ULE:
+    return ICMP_UGT;
+  case ICMP_SGT:
+    return ICMP_SLE;
+  case ICMP_SLT:
+    return ICMP_SGE;
+  case ICMP_SGE:
+    return ICMP_SLT;
+  case ICMP_SLE:
+    return ICMP_SGT;
+
+  case FCMP_OEQ:
+    return FCMP_UNE;
+  case FCMP_ONE:
+    return FCMP_UEQ;
+  case FCMP_OGT:
+    return FCMP_ULE;
+  case FCMP_OLT:
+    return FCMP_UGE;
+  case FCMP_OGE:
+    return FCMP_ULT;
+  case FCMP_OLE:
+    return FCMP_UGT;
+  case FCMP_UEQ:
+    return FCMP_ONE;
+  case FCMP_UNE:
+    return FCMP_OEQ;
+  case FCMP_UGT:
+    return FCMP_OLE;
+  case FCMP_ULT:
+    return FCMP_OGE;
+  case FCMP_UGE:
+    return FCMP_OLT;
+  case FCMP_ULE:
+    return FCMP_OGT;
+  case FCMP_ORD:
+    return FCMP_UNO;
+  case FCMP_UNO:
+    return FCMP_ORD;
+  case FCMP_TRUE:
+    return FCMP_FALSE;
+  case FCMP_FALSE:
+    return FCMP_TRUE;
   }
 }
 
 StringRef CmpInst::getPredicateName(Predicate Pred) {
   switch (Pred) {
-  default:                   return "unknown";
-  case FCmpInst::FCMP_FALSE: return "false";
-  case FCmpInst::FCMP_OEQ:   return "oeq";
-  case FCmpInst::FCMP_OGT:   return "ogt";
-  case FCmpInst::FCMP_OGE:   return "oge";
-  case FCmpInst::FCMP_OLT:   return "olt";
-  case FCmpInst::FCMP_OLE:   return "ole";
-  case FCmpInst::FCMP_ONE:   return "one";
-  case FCmpInst::FCMP_ORD:   return "ord";
-  case FCmpInst::FCMP_UNO:   return "uno";
-  case FCmpInst::FCMP_UEQ:   return "ueq";
-  case FCmpInst::FCMP_UGT:   return "ugt";
-  case FCmpInst::FCMP_UGE:   return "uge";
-  case FCmpInst::FCMP_ULT:   return "ult";
-  case FCmpInst::FCMP_ULE:   return "ule";
-  case FCmpInst::FCMP_UNE:   return "une";
-  case FCmpInst::FCMP_TRUE:  return "true";
-  case ICmpInst::ICMP_EQ:    return "eq";
-  case ICmpInst::ICMP_NE:    return "ne";
-  case ICmpInst::ICMP_SGT:   return "sgt";
-  case ICmpInst::ICMP_SGE:   return "sge";
-  case ICmpInst::ICMP_SLT:   return "slt";
-  case ICmpInst::ICMP_SLE:   return "sle";
-  case ICmpInst::ICMP_UGT:   return "ugt";
-  case ICmpInst::ICMP_UGE:   return "uge";
-  case ICmpInst::ICMP_ULT:   return "ult";
-  case ICmpInst::ICMP_ULE:   return "ule";
+  default:
+    return "unknown";
+  case FCmpInst::FCMP_FALSE:
+    return "false";
+  case FCmpInst::FCMP_OEQ:
+    return "oeq";
+  case FCmpInst::FCMP_OGT:
+    return "ogt";
+  case FCmpInst::FCMP_OGE:
+    return "oge";
+  case FCmpInst::FCMP_OLT:
+    return "olt";
+  case FCmpInst::FCMP_OLE:
+    return "ole";
+  case FCmpInst::FCMP_ONE:
+    return "one";
+  case FCmpInst::FCMP_ORD:
+    return "ord";
+  case FCmpInst::FCMP_UNO:
+    return "uno";
+  case FCmpInst::FCMP_UEQ:
+    return "ueq";
+  case FCmpInst::FCMP_UGT:
+    return "ugt";
+  case FCmpInst::FCMP_UGE:
+    return "uge";
+  case FCmpInst::FCMP_ULT:
+    return "ult";
+  case FCmpInst::FCMP_ULE:
+    return "ule";
+  case FCmpInst::FCMP_UNE:
+    return "une";
+  case FCmpInst::FCMP_TRUE:
+    return "true";
+  case ICmpInst::ICMP_EQ:
+    return "eq";
+  case ICmpInst::ICMP_NE:
+    return "ne";
+  case ICmpInst::ICMP_SGT:
+    return "sgt";
+  case ICmpInst::ICMP_SGE:
+    return "sge";
+  case ICmpInst::ICMP_SLT:
+    return "slt";
+  case ICmpInst::ICMP_SLE:
+    return "sle";
+  case ICmpInst::ICMP_UGT:
+    return "ugt";
+  case ICmpInst::ICMP_UGE:
+    return "uge";
+  case ICmpInst::ICMP_ULT:
+    return "ult";
+  case ICmpInst::ICMP_ULE:
+    return "ule";
   }
 }
 
@@ -4282,57 +4328,97 @@ raw_ostream &llvm::operator<<(raw_ostream &OS, CmpInst::Predicate Pred) {
 
 ICmpInst::Predicate ICmpInst::getSignedPredicate(Predicate pred) {
   switch (pred) {
-    default: llvm_unreachable("Unknown icmp predicate!");
-    case ICMP_EQ: case ICMP_NE:
-    case ICMP_SGT: case ICMP_SLT: case ICMP_SGE: case ICMP_SLE:
-       return pred;
-    case ICMP_UGT: return ICMP_SGT;
-    case ICMP_ULT: return ICMP_SLT;
-    case ICMP_UGE: return ICMP_SGE;
-    case ICMP_ULE: return ICMP_SLE;
+  default:
+    llvm_unreachable("Unknown icmp predicate!");
+  case ICMP_EQ:
+  case ICMP_NE:
+  case ICMP_SGT:
+  case ICMP_SLT:
+  case ICMP_SGE:
+  case ICMP_SLE:
+    return pred;
+  case ICMP_UGT:
+    return ICMP_SGT;
+  case ICMP_ULT:
+    return ICMP_SLT;
+  case ICMP_UGE:
+    return ICMP_SGE;
+  case ICMP_ULE:
+    return ICMP_SLE;
   }
 }
 
 ICmpInst::Predicate ICmpInst::getUnsignedPredicate(Predicate pred) {
   switch (pred) {
-    default: llvm_unreachable("Unknown icmp predicate!");
-    case ICMP_EQ: case ICMP_NE:
-    case ICMP_UGT: case ICMP_ULT: case ICMP_UGE: case ICMP_ULE:
-       return pred;
-    case ICMP_SGT: return ICMP_UGT;
-    case ICMP_SLT: return ICMP_ULT;
-    case ICMP_SGE: return ICMP_UGE;
-    case ICMP_SLE: return ICMP_ULE;
+  default:
+    llvm_unreachable("Unknown icmp predicate!");
+  case ICMP_EQ:
+  case ICMP_NE:
+  case ICMP_UGT:
+  case ICMP_ULT:
+  case ICMP_UGE:
+  case ICMP_ULE:
+    return pred;
+  case ICMP_SGT:
+    return ICMP_UGT;
+  case ICMP_SLT:
+    return ICMP_ULT;
+  case ICMP_SGE:
+    return ICMP_UGE;
+  case ICMP_SLE:
+    return ICMP_ULE;
   }
 }
 
 CmpInst::Predicate CmpInst::getSwappedPredicate(Predicate pred) {
   switch (pred) {
-    default: llvm_unreachable("Unknown cmp predicate!");
-    case ICMP_EQ: case ICMP_NE:
-      return pred;
-    case ICMP_SGT: return ICMP_SLT;
-    case ICMP_SLT: return ICMP_SGT;
-    case ICMP_SGE: return ICMP_SLE;
-    case ICMP_SLE: return ICMP_SGE;
-    case ICMP_UGT: return ICMP_ULT;
-    case ICMP_ULT: return ICMP_UGT;
-    case ICMP_UGE: return ICMP_ULE;
-    case ICMP_ULE: return ICMP_UGE;
-
-    case FCMP_FALSE: case FCMP_TRUE:
-    case FCMP_OEQ: case FCMP_ONE:
-    case FCMP_UEQ: case FCMP_UNE:
-    case FCMP_ORD: case FCMP_UNO:
-      return pred;
-    case FCMP_OGT: return FCMP_OLT;
-    case FCMP_OLT: return FCMP_OGT;
-    case FCMP_OGE: return FCMP_OLE;
-    case FCMP_OLE: return FCMP_OGE;
-    case FCMP_UGT: return FCMP_ULT;
-    case FCMP_ULT: return FCMP_UGT;
-    case FCMP_UGE: return FCMP_ULE;
-    case FCMP_ULE: return FCMP_UGE;
+  default:
+    llvm_unreachable("Unknown cmp predicate!");
+  case ICMP_EQ:
+  case ICMP_NE:
+    return pred;
+  case ICMP_SGT:
+    return ICMP_SLT;
+  case ICMP_SLT:
+    return ICMP_SGT;
+  case ICMP_SGE:
+    return ICMP_SLE;
+  case ICMP_SLE:
+    return ICMP_SGE;
+  case ICMP_UGT:
+    return ICMP_ULT;
+  case ICMP_ULT:
+    return ICMP_UGT;
+  case ICMP_UGE:
+    return ICMP_ULE;
+  case ICMP_ULE:
+    return ICMP_UGE;
+
+  case FCMP_FALSE:
+  case FCMP_TRUE:
+  case FCMP_OEQ:
+  case FCMP_ONE:
+  case FCMP_UEQ:
+  case FCMP_UNE:
+  case FCMP_ORD:
+  case FCMP_UNO:
+    return pred;
+  case FCMP_OGT:
+    return FCMP_OLT;
+  case FCMP_OLT:
+    return FCMP_OGT;
+  case FCMP_OGE:
+    return FCMP_OLE;
+  case FCMP_OLE:
+    return FCMP_OGE;
+  case FCMP_UGT:
+    return FCMP_ULT;
+  case FCMP_ULT:
+    return FCMP_UGT;
+  case FCMP_UGE:
+    return FCMP_ULE;
+  case FCMP_ULE:
+    return FCMP_UGE;
   }
 }
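For readers skimming the reformatted switch above: the invariant behind `getSwappedPredicate` is that swapping a comparison's operands requires mirroring the predicate, i.e. `a <pred> b` holds iff `b <swapped(pred)> a`, and symmetric predicates (EQ, NE, ORD, UNO, ...) map to themselves. A minimal standalone sketch of the integer half of that mapping, using a local stand-in enum rather than LLVM's `CmpInst::Predicate`:

```cpp
#include <cassert>

// Local stand-in for the ICMP_* members of CmpInst::Predicate.
enum Pred {
  ICMP_EQ, ICMP_NE,
  ICMP_SGT, ICMP_SLT, ICMP_SGE, ICMP_SLE,
  ICMP_UGT, ICMP_ULT, ICMP_UGE, ICMP_ULE,
};

// Mirrors the ICMP rows of getSwappedPredicate in the hunk above.
Pred swappedPred(Pred P) {
  switch (P) {
  case ICMP_EQ:
  case ICMP_NE:
    return P; // symmetric predicates are their own swap
  case ICMP_SGT:
    return ICMP_SLT;
  case ICMP_SLT:
    return ICMP_SGT;
  case ICMP_SGE:
    return ICMP_SLE;
  case ICMP_SLE:
    return ICMP_SGE;
  case ICMP_UGT:
    return ICMP_ULT;
  case ICMP_ULT:
    return ICMP_UGT;
  case ICMP_UGE:
    return ICMP_ULE;
  case ICMP_ULE:
    return ICMP_UGE;
  }
  return P;
}
```

Note the mapping is an involution: applying it twice yields the original predicate.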
 
@@ -4461,17 +4547,25 @@ CmpInst::Predicate CmpInst::getUnsignedPredicate(Predicate pred) {
 
 bool CmpInst::isUnsigned(Predicate predicate) {
   switch (predicate) {
-    default: return false;
-    case ICmpInst::ICMP_ULT: case ICmpInst::ICMP_ULE: case ICmpInst::ICMP_UGT:
-    case ICmpInst::ICMP_UGE: return true;
+  default:
+    return false;
+  case ICmpInst::ICMP_ULT:
+  case ICmpInst::ICMP_ULE:
+  case ICmpInst::ICMP_UGT:
+  case ICmpInst::ICMP_UGE:
+    return true;
   }
 }
 
 bool CmpInst::isSigned(Predicate predicate) {
   switch (predicate) {
-    default: return false;
-    case ICmpInst::ICMP_SLT: case ICmpInst::ICMP_SLE: case ICmpInst::ICMP_SGT:
-    case ICmpInst::ICMP_SGE: return true;
+  default:
+    return false;
+  case ICmpInst::ICMP_SLT:
+  case ICmpInst::ICMP_SLE:
+  case ICmpInst::ICMP_SGT:
+  case ICmpInst::ICMP_SGE:
+    return true;
   }
 }
 
@@ -4559,35 +4653,65 @@ CmpInst::Predicate CmpInst::getFlippedSignednessPredicate(Predicate pred) {
 
 bool CmpInst::isOrdered(Predicate predicate) {
   switch (predicate) {
-    default: return false;
-    case FCmpInst::FCMP_OEQ: case FCmpInst::FCMP_ONE: case FCmpInst::FCMP_OGT:
-    case FCmpInst::FCMP_OLT: case FCmpInst::FCMP_OGE: case FCmpInst::FCMP_OLE:
-    case FCmpInst::FCMP_ORD: return true;
+  default:
+    return false;
+  case FCmpInst::FCMP_OEQ:
+  case FCmpInst::FCMP_ONE:
+  case FCmpInst::FCMP_OGT:
+  case FCmpInst::FCMP_OLT:
+  case FCmpInst::FCMP_OGE:
+  case FCmpInst::FCMP_OLE:
+  case FCmpInst::FCMP_ORD:
+    return true;
   }
 }
 
 bool CmpInst::isUnordered(Predicate predicate) {
   switch (predicate) {
-    default: return false;
-    case FCmpInst::FCMP_UEQ: case FCmpInst::FCMP_UNE: case FCmpInst::FCMP_UGT:
-    case FCmpInst::FCMP_ULT: case FCmpInst::FCMP_UGE: case FCmpInst::FCMP_ULE:
-    case FCmpInst::FCMP_UNO: return true;
+  default:
+    return false;
+  case FCmpInst::FCMP_UEQ:
+  case FCmpInst::FCMP_UNE:
+  case FCmpInst::FCMP_UGT:
+  case FCmpInst::FCMP_ULT:
+  case FCmpInst::FCMP_UGE:
+  case FCmpInst::FCMP_ULE:
+  case FCmpInst::FCMP_UNO:
+    return true;
   }
 }
 
 bool CmpInst::isTrueWhenEqual(Predicate predicate) {
-  switch(predicate) {
-    default: return false;
-    case ICMP_EQ:   case ICMP_UGE: case ICMP_ULE: case ICMP_SGE: case ICMP_SLE:
-    case FCMP_TRUE: case FCMP_UEQ: case FCMP_UGE: case FCMP_ULE: return true;
+  switch (predicate) {
+  default:
+    return false;
+  case ICMP_EQ:
+  case ICMP_UGE:
+  case ICMP_ULE:
+  case ICMP_SGE:
+  case ICMP_SLE:
+  case FCMP_TRUE:
+  case FCMP_UEQ:
+  case FCMP_UGE:
+  case FCMP_ULE:
+    return true;
   }
 }
 
 bool CmpInst::isFalseWhenEqual(Predicate predicate) {
-  switch(predicate) {
-  case ICMP_NE:    case ICMP_UGT: case ICMP_ULT: case ICMP_SGT: case ICMP_SLT:
-  case FCMP_FALSE: case FCMP_ONE: case FCMP_OGT: case FCMP_OLT: return true;
-  default: return false;
+  switch (predicate) {
+  case ICMP_NE:
+  case ICMP_UGT:
+  case ICMP_ULT:
+  case ICMP_SGT:
+  case ICMP_SLT:
+  case FCMP_FALSE:
+  case FCMP_ONE:
+  case FCMP_OGT:
+  case FCMP_OLT:
+    return true;
+  default:
+    return false;
   }
 }
 
@@ -4642,7 +4766,7 @@ SwitchInst::SwitchInst(Value *Value, BasicBlock *Default, unsigned NumCases,
                        Instruction *InsertBefore)
     : Instruction(Type::getVoidTy(Value->getContext()), Instruction::Switch,
                   nullptr, 0, InsertBefore) {
-  init(Value, Default, 2+NumCases*2);
+  init(Value, Default, 2 + NumCases * 2);
 }
 
 /// SwitchInst ctor - Create a new switch instruction, specifying a value to
@@ -4653,7 +4777,7 @@ SwitchInst::SwitchInst(Value *Value, BasicBlock *Default, unsigned NumCases,
                        BasicBlock *InsertAtEnd)
     : Instruction(Type::getVoidTy(Value->getContext()), Instruction::Switch,
                   nullptr, 0, InsertAtEnd) {
-  init(Value, Default, 2+NumCases*2);
+  init(Value, Default, 2 + NumCases * 2);
 }
 
 SwitchInst::SwitchInst(const SwitchInst &SI)
@@ -4664,7 +4788,7 @@ SwitchInst::SwitchInst(const SwitchInst &SI)
   const Use *InOL = SI.getOperandList();
   for (unsigned i = 2, E = SI.getNumOperands(); i != E; i += 2) {
     OL[i] = InOL[i];
-    OL[i+1] = InOL[i+1];
+    OL[i + 1] = InOL[i + 1];
   }
   SubclassOptionalData = SI.SubclassOptionalData;
 }
@@ -4674,11 +4798,11 @@ SwitchInst::SwitchInst(const SwitchInst &SI)
 void SwitchInst::addCase(ConstantInt *OnVal, BasicBlock *Dest) {
   unsigned NewCaseIdx = getNumCases();
   unsigned OpNo = getNumOperands();
-  if (OpNo+2 > ReservedSpace)
-    growOperands();  // Get more space!
+  if (OpNo + 2 > ReservedSpace)
+    growOperands(); // Get more space!
   // Initialize some new operands.
-  assert(OpNo+1 < ReservedSpace && "Growing didn't work!");
-  setNumHungOffUseOperands(OpNo+2);
+  assert(OpNo + 1 < ReservedSpace && "Growing didn't work!");
+  setNumHungOffUseOperands(OpNo + 2);
   CaseHandle Case(this, NewCaseIdx);
   Case.setValue(OnVal);
   Case.setSuccessor(Dest);
@@ -4689,7 +4813,7 @@ void SwitchInst::addCase(ConstantInt *OnVal, BasicBlock *Dest) {
 SwitchInst::CaseIt SwitchInst::removeCase(CaseIt I) {
   unsigned idx = I->getCaseIndex();
 
-  assert(2 + idx*2 < getNumOperands() && "Case index out of range!!!");
+  assert(2 + idx * 2 < getNumOperands() && "Case index out of range!!!");
 
   unsigned NumOps = getNumOperands();
   Use *OL = getOperandList();
@@ -4701,9 +4825,9 @@ SwitchInst::CaseIt SwitchInst::removeCase(CaseIt I) {
   }
 
   // Nuke the last value.
-  OL[NumOps-2].set(nullptr);
-  OL[NumOps-2+1].set(nullptr);
-  setNumHungOffUseOperands(NumOps-2);
+  OL[NumOps - 2].set(nullptr);
+  OL[NumOps - 2 + 1].set(nullptr);
+  setNumHungOffUseOperands(NumOps - 2);
 
   return CaseIt(this, idx);
 }
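The `removeCase` hunk above uses the classic swap-with-last ("swap and pop") deletion: the removed case's operand pair is overwritten with the last case's, the last slots are nulled, and the operand count shrinks by two. A hedged sketch of the same idiom on a plain `std::vector` of (case-value, successor) pairs; the names are illustrative, not LLVM API, and the `2 + idx * 2` operand layout is only mirrored conceptually:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Each case is a (case-value, successor-id) pair.
using Case = std::pair<int, int>;

// Remove Cases[Idx] in O(1) without preserving order, as removeCase does.
void removeCaseSwapPop(std::vector<Case> &Cases, unsigned Idx) {
  assert(Idx < Cases.size() && "Case index out of range!");
  Cases[Idx] = Cases.back(); // replace this value with the last one
  Cases.pop_back();          // nuke the last value
}
```

The trade-off, as in `SwitchInst`, is that case order is not preserved, which is fine because switch cases are unordered.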
@@ -4713,7 +4837,7 @@ SwitchInst::CaseIt SwitchInst::removeCase(CaseIt I) {
 ///
 void SwitchInst::growOperands() {
   unsigned e = getNumOperands();
-  unsigned NumOps = e*3;
+  unsigned NumOps = e * 3;
 
   ReservedSpace = NumOps;
   growHungoffUses(ReservedSpace);
@@ -4837,20 +4961,19 @@ SwitchInstProfUpdateWrapper::getSuccessorWeight(const SwitchInst &SI,
 void IndirectBrInst::init(Value *Address, unsigned NumDests) {
   assert(Address && Address->getType()->isPointerTy() &&
          "Address of indirectbr must be a pointer");
-  ReservedSpace = 1+NumDests;
+  ReservedSpace = 1 + NumDests;
   setNumHungOffUseOperands(1);
   allocHungoffUses(ReservedSpace);
 
   Op<0>() = Address;
 }
 
-
 /// growOperands - grow operands - This grows the operand list in response
 /// to a push_back style of operation.  This grows the number of ops by 2 times.
 ///
 void IndirectBrInst::growOperands() {
   unsigned e = getNumOperands();
-  unsigned NumOps = e*2;
+  unsigned NumOps = e * 2;
 
   ReservedSpace = NumOps;
   growHungoffUses(ReservedSpace);
@@ -4885,42 +5008,40 @@ IndirectBrInst::IndirectBrInst(const IndirectBrInst &IBI)
 ///
 void IndirectBrInst::addDestination(BasicBlock *DestBB) {
   unsigned OpNo = getNumOperands();
-  if (OpNo+1 > ReservedSpace)
-    growOperands();  // Get more space!
+  if (OpNo + 1 > ReservedSpace)
+    growOperands(); // Get more space!
   // Initialize some new operands.
   assert(OpNo < ReservedSpace && "Growing didn't work!");
-  setNumHungOffUseOperands(OpNo+1);
+  setNumHungOffUseOperands(OpNo + 1);
   getOperandList()[OpNo] = DestBB;
 }
 
 /// removeDestination - This method removes the specified successor from the
 /// indirectbr instruction.
 void IndirectBrInst::removeDestination(unsigned idx) {
-  assert(idx < getNumOperands()-1 && "Successor index out of range!");
+  assert(idx < getNumOperands() - 1 && "Successor index out of range!");
 
   unsigned NumOps = getNumOperands();
   Use *OL = getOperandList();
 
   // Replace this value with the last one.
-  OL[idx+1] = OL[NumOps-1];
+  OL[idx + 1] = OL[NumOps - 1];
 
   // Nuke the last value.
-  OL[NumOps-1].set(nullptr);
-  setNumHungOffUseOperands(NumOps-1);
+  OL[NumOps - 1].set(nullptr);
+  setNumHungOffUseOperands(NumOps - 1);
 }
 
 //===----------------------------------------------------------------------===//
 //                            FreezeInst Implementation
 //===----------------------------------------------------------------------===//
 
-FreezeInst::FreezeInst(Value *S,
-                       const Twine &Name, Instruction *InsertBefore)
+FreezeInst::FreezeInst(Value *S, const Twine &Name, Instruction *InsertBefore)
     : UnaryInstruction(S->getType(), Freeze, S, InsertBefore) {
   setName(Name);
 }
 
-FreezeInst::FreezeInst(Value *S,
-                       const Twine &Name, BasicBlock *InsertAtEnd)
+FreezeInst::FreezeInst(Value *S, const Twine &Name, BasicBlock *InsertAtEnd)
     : UnaryInstruction(S->getType(), Freeze, S, InsertAtEnd) {
   setName(Name);
 }
@@ -5054,9 +5175,9 @@ AddrSpaceCastInst *AddrSpaceCastInst::cloneImpl() const {
 CallInst *CallInst::cloneImpl() const {
   if (hasOperandBundles()) {
     unsigned DescriptorBytes = getNumOperandBundles() * sizeof(BundleOpInfo);
-    return new(getNumOperands(), DescriptorBytes) CallInst(*this);
+    return new (getNumOperands(), DescriptorBytes) CallInst(*this);
   }
-  return  new(getNumOperands()) CallInst(*this);
+  return new (getNumOperands()) CallInst(*this);
 }
 
 SelectInst *SelectInst::cloneImpl() const {
@@ -5086,11 +5207,11 @@ LandingPadInst *LandingPadInst::cloneImpl() const {
 }
 
 ReturnInst *ReturnInst::cloneImpl() const {
-  return new(getNumOperands()) ReturnInst(*this);
+  return new (getNumOperands()) ReturnInst(*this);
 }
 
 BranchInst *BranchInst::cloneImpl() const {
-  return new(getNumOperands()) BranchInst(*this);
+  return new (getNumOperands()) BranchInst(*this);
 }
 
 SwitchInst *SwitchInst::cloneImpl() const { return new SwitchInst(*this); }
@@ -5102,9 +5223,9 @@ IndirectBrInst *IndirectBrInst::cloneImpl() const {
 InvokeInst *InvokeInst::cloneImpl() const {
   if (hasOperandBundles()) {
     unsigned DescriptorBytes = getNumOperandBundles() * sizeof(BundleOpInfo);
-    return new(getNumOperands(), DescriptorBytes) InvokeInst(*this);
+    return new (getNumOperands(), DescriptorBytes) InvokeInst(*this);
   }
-  return new(getNumOperands()) InvokeInst(*this);
+  return new (getNumOperands()) InvokeInst(*this);
 }
 
 CallBrInst *CallBrInst::cloneImpl() const {
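The `cloneImpl` changes above only add a space after `new`, but the expression `new (getNumOperands()) CallInst(*this)` is worth a note: these instruction classes co-allocate their operand storage with the object through a class-specific `operator new` that takes the operand count. A rough, self-contained sketch of that allocation trick — a simplified stand-in, not LLVM's actual `User`/`Use` machinery (which places the `Use` array before the object, among other differences):

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// A node whose operand array lives just past the object, sized at
// allocation time -- roughly what `new (getNumOperands()) CallInst(...)`
// relies on.
struct Node {
  unsigned NumOps;
  explicit Node(unsigned N) : NumOps(N) {}

  // Class-specific operator new: reserve room for the trailing operands.
  void *operator new(std::size_t Size, unsigned NumOps) {
    return ::operator new(Size + NumOps * sizeof(int));
  }
  void operator delete(void *Ptr) { ::operator delete(Ptr); }

  int *operands() { return reinterpret_cast<int *>(this + 1); }
};
```

One allocation holds both the object and its variable-length operand list, which avoids a separate heap block per instruction.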
diff --git a/llvm/lib/IR/IntrinsicInst.cpp b/llvm/lib/IR/IntrinsicInst.cpp
index 61be167ebaa28db..1daac7d549138c7 100644
--- a/llvm/lib/IR/IntrinsicInst.cpp
+++ b/llvm/lib/IR/IntrinsicInst.cpp
@@ -629,16 +629,16 @@ Function *VPIntrinsic::getDeclarationForParams(Module *M, Intrinsic::ID VPID,
     VPFunc = Intrinsic::getDeclaration(M, VPID, {Params[1]->getType()});
     break;
   case Intrinsic::vp_load:
-    VPFunc = Intrinsic::getDeclaration(
-        M, VPID, {ReturnType, Params[0]->getType()});
+    VPFunc =
+        Intrinsic::getDeclaration(M, VPID, {ReturnType, Params[0]->getType()});
     break;
   case Intrinsic::experimental_vp_strided_load:
     VPFunc = Intrinsic::getDeclaration(
         M, VPID, {ReturnType, Params[0]->getType(), Params[1]->getType()});
     break;
   case Intrinsic::vp_gather:
-    VPFunc = Intrinsic::getDeclaration(
-        M, VPID, {ReturnType, Params[0]->getType()});
+    VPFunc =
+        Intrinsic::getDeclaration(M, VPID, {ReturnType, Params[0]->getType()});
     break;
   case Intrinsic::vp_store:
     VPFunc = Intrinsic::getDeclaration(
@@ -817,11 +817,10 @@ const Value *GCProjectionInst::getStatepoint() const {
 
   // This relocate is on exceptional path of an invoke statepoint
   const BasicBlock *InvokeBB =
-    cast<Instruction>(Token)->getParent()->getUniquePredecessor();
+      cast<Instruction>(Token)->getParent()->getUniquePredecessor();
 
   assert(InvokeBB && "safepoints should have unique landingpads");
-  assert(InvokeBB->getTerminator() &&
-         "safepoint block should be well formed");
+  assert(InvokeBB->getTerminator() && "safepoint block should be well formed");
 
   return cast<GCStatepointInst>(InvokeBB->getTerminator());
 }
diff --git a/llvm/lib/IR/LLVMContext.cpp b/llvm/lib/IR/LLVMContext.cpp
index 8ddf51537ec1af9..848687eff268a2a 100644
--- a/llvm/lib/IR/LLVMContext.cpp
+++ b/llvm/lib/IR/LLVMContext.cpp
@@ -103,8 +103,7 @@ LLVMContext::LLVMContext() : pImpl(new LLVMContextImpl(*this)) {
          "singlethread synchronization scope ID drifted!");
   (void)SingleThreadSSID;
 
-  SyncScope::ID SystemSSID =
-      pImpl->getOrInsertSyncScopeID("");
+  SyncScope::ID SystemSSID = pImpl->getOrInsertSyncScopeID("");
   assert(SystemSSID == SyncScope::System &&
          "system synchronization scope ID drifted!");
   (void)SystemSSID;
@@ -112,13 +111,9 @@ LLVMContext::LLVMContext() : pImpl(new LLVMContextImpl(*this)) {
 
 LLVMContext::~LLVMContext() { delete pImpl; }
 
-void LLVMContext::addModule(Module *M) {
-  pImpl->OwnedModules.insert(M);
-}
+void LLVMContext::addModule(Module *M) { pImpl->OwnedModules.insert(M); }
 
-void LLVMContext::removeModule(Module *M) {
-  pImpl->OwnedModules.erase(M);
-}
+void LLVMContext::removeModule(Module *M) { pImpl->OwnedModules.erase(M); }
 
 //===----------------------------------------------------------------------===//
 // Recoverable Backend Errors
@@ -133,7 +128,7 @@ void LLVMContext::setDiagnosticHandlerCallBack(
 }
 
 void LLVMContext::setDiagnosticHandler(std::unique_ptr<DiagnosticHandler> &&DH,
-                                      bool RespectFilters) {
+                                       bool RespectFilters) {
   pImpl->DiagHandler = std::move(DH);
   pImpl->RespectDiagnosticFilters = RespectFilters;
 }
@@ -145,7 +140,8 @@ bool LLVMContext::getDiagnosticsHotnessRequested() const {
   return pImpl->DiagnosticsHotnessRequested;
 }
 
-void LLVMContext::setDiagnosticsHotnessThreshold(std::optional<uint64_t> Threshold) {
+void LLVMContext::setDiagnosticsHotnessThreshold(
+    std::optional<uint64_t> Threshold) {
   pImpl->DiagnosticsHotnessThreshold = Threshold;
 }
 void LLVMContext::setMisExpectWarningRequested(bool Requested) {
@@ -200,8 +196,8 @@ void *LLVMContext::getDiagnosticContext() const {
   return pImpl->DiagHandler->DiagnosticContext;
 }
 
-void LLVMContext::setYieldCallback(YieldCallbackTy Callback, void *OpaqueHandle)
-{
+void LLVMContext::setYieldCallback(YieldCallbackTy Callback,
+                                   void *OpaqueHandle) {
   pImpl->YieldCallback = Callback;
   pImpl->YieldOpaqueHandle = OpaqueHandle;
 }
@@ -216,7 +212,7 @@ void LLVMContext::emitError(const Twine &ErrorStr) {
 }
 
 void LLVMContext::emitError(const Instruction *I, const Twine &ErrorStr) {
-  assert (I && "Invalid instruction");
+  assert(I && "Invalid instruction");
   diagnose(DiagnosticInfoInlineAsm(*I, ErrorStr));
 }
 
@@ -284,9 +280,8 @@ void LLVMContext::emitError(uint64_t LocCookie, const Twine &ErrorStr) {
 /// Return a unique non-zero ID for the specified metadata kind.
 unsigned LLVMContext::getMDKindID(StringRef Name) const {
   // If this is new, assign it its ID.
-  return pImpl->CustomMDKindNames.insert(
-                                     std::make_pair(
-                                         Name, pImpl->CustomMDKindNames.size()))
+  return pImpl->CustomMDKindNames
+      .insert(std::make_pair(Name, pImpl->CustomMDKindNames.size()))
       .first->second;
 }
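The reflowed `getMDKindID` above keeps the usual grow-only interning idiom: insert the name paired with the map's current size as a tentative ID; if the name already exists, the insert is a no-op and the existing ID comes back. A small sketch of the same idiom with `std::unordered_map` (LLVM uses `StringMap`, but the pattern is identical):

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Assign each distinct kind name a stable, dense ID on first use.
unsigned getKindID(std::unordered_map<std::string, unsigned> &Kinds,
                   const std::string &Name) {
  // If this is new, assign it its ID (the pre-insert map size);
  // otherwise insert() returns the existing entry unchanged.
  return Kinds.insert({Name, static_cast<unsigned>(Kinds.size())})
      .first->second;
}
```

The `Kinds.size()` argument is evaluated before the insertion takes effect, so new entries get consecutive IDs starting at 0.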
 
@@ -295,7 +290,8 @@ unsigned LLVMContext::getMDKindID(StringRef Name) const {
 void LLVMContext::getMDKindNames(SmallVectorImpl<StringRef> &Names) const {
   Names.resize(pImpl->CustomMDKindNames.size());
   for (StringMap<unsigned>::const_iterator I = pImpl->CustomMDKindNames.begin(),
-       E = pImpl->CustomMDKindNames.end(); I != E; ++I)
+                                           E = pImpl->CustomMDKindNames.end();
+       I != E; ++I)
     Names[I->second] = I->first();
 }
 
@@ -334,9 +330,7 @@ const std::string &LLVMContext::getGC(const Function &Fn) {
   return pImpl->GCNames[&Fn];
 }
 
-void LLVMContext::deleteGC(const Function &Fn) {
-  pImpl->GCNames.erase(&Fn);
-}
+void LLVMContext::deleteGC(const Function &Fn) { pImpl->GCNames.erase(&Fn); }
 
 bool LLVMContext::shouldDiscardValueNames() const {
   return pImpl->DiscardValueNames;
@@ -361,7 +355,7 @@ OptPassGate &LLVMContext::getOptPassGate() const {
   return pImpl->getOptPassGate();
 }
 
-void LLVMContext::setOptPassGate(OptPassGate& OPG) {
+void LLVMContext::setOptPassGate(OptPassGate &OPG) {
   pImpl->setOptPassGate(OPG);
 }
 
@@ -377,6 +371,4 @@ void LLVMContext::setOpaquePointers(bool Enable) const {
   assert(Enable && "Cannot disable opaque pointers");
 }
 
-bool LLVMContext::supportsTypedPointers() const {
-  return false;
-}
+bool LLVMContext::supportsTypedPointers() const { return false; }
diff --git a/llvm/lib/IR/LLVMContextImpl.cpp b/llvm/lib/IR/LLVMContextImpl.cpp
index 2076eeed9417691..c6bf85fbb154eac 100644
--- a/llvm/lib/IR/LLVMContextImpl.cpp
+++ b/llvm/lib/IR/LLVMContextImpl.cpp
@@ -116,7 +116,8 @@ LLVMContextImpl::~LLVMContextImpl() {
 
   // Destroy attribute node lists.
   for (FoldingSetIterator<AttributeSetNode> I = AttrsSetNodes.begin(),
-         E = AttrsSetNodes.end(); I != E; ) {
+                                            E = AttrsSetNodes.end();
+       I != E;) {
     FoldingSetIterator<AttributeSetNode> Elem = I++;
     delete &*Elem;
   }
@@ -205,7 +206,8 @@ StringMapEntry<uint32_t> *LLVMContextImpl::getOrInsertBundleTag(StringRef Tag) {
   return &*(BundleTagCache.insert(std::make_pair(Tag, NewIdx)).first);
 }
 
-void LLVMContextImpl::getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const {
+void LLVMContextImpl::getOperandBundleTags(
+    SmallVectorImpl<StringRef> &Tags) const {
   Tags.resize(BundleTagCache.size());
   for (const auto &T : BundleTagCache)
     Tags[T.second] = T.first();
@@ -239,6 +241,4 @@ OptPassGate &LLVMContextImpl::getOptPassGate() const {
   return *OPG;
 }
 
-void LLVMContextImpl::setOptPassGate(OptPassGate& OPG) {
-  this->OPG = &OPG;
-}
+void LLVMContextImpl::setOptPassGate(OptPassGate &OPG) { this->OPG = &OPG; }
diff --git a/llvm/lib/IR/LLVMRemarkStreamer.cpp b/llvm/lib/IR/LLVMRemarkStreamer.cpp
index 71f8d4a4b1c7ca3..df8c841a76cb694 100644
--- a/llvm/lib/IR/LLVMRemarkStreamer.cpp
+++ b/llvm/lib/IR/LLVMRemarkStreamer.cpp
@@ -80,7 +80,7 @@ LLVMRemarkStreamer::toRemark(const DiagnosticInfoOptimizationBase &Diag) const {
 
 void LLVMRemarkStreamer::emit(const DiagnosticInfoOptimizationBase &Diag) {
   if (!RS.matchesFilter(Diag.getPassName()))
-      return;
+    return;
 
   // First, convert the diagnostic to a remark.
   remarks::Remark R = toRemark(Diag);
@@ -97,7 +97,7 @@ Expected<std::unique_ptr<ToolOutputFile>> llvm::setupLLVMOptimizationRemarks(
     StringRef RemarksFormat, bool RemarksWithHotness,
     std::optional<uint64_t> RemarksHotnessThreshold) {
   if (RemarksWithHotness || RemarksHotnessThreshold.value_or(1))
-      Context.setDiagnosticsHotnessRequested(true);
+    Context.setDiagnosticsHotnessRequested(true);
 
   Context.setDiagnosticsHotnessThreshold(RemarksHotnessThreshold);
 
diff --git a/llvm/lib/IR/LegacyPassManager.cpp b/llvm/lib/IR/LegacyPassManager.cpp
index 6c223d4ec381762..a20e5885ab7b14e 100644
--- a/llvm/lib/IR/LegacyPassManager.cpp
+++ b/llvm/lib/IR/LegacyPassManager.cpp
@@ -42,9 +42,7 @@ using namespace llvm;
 
 namespace {
 // Different debug levels that can be enabled...
-enum PassDebugLevel {
-  Disabled, Arguments, Structure, Executions, Details
-};
+enum PassDebugLevel { Disabled, Arguments, Structure, Executions, Details };
 } // namespace
 
 static cl::opt<enum PassDebugLevel> PassDebugging(
@@ -247,8 +245,10 @@ class FunctionPassManagerImpl : public Pass,
                                 public PMDataManager,
                                 public PMTopLevelManager {
   virtual void anchor();
+
 private:
   bool wasRun;
+
 public:
   static char ID;
   explicit FunctionPassManagerImpl()
@@ -256,9 +256,7 @@ class FunctionPassManagerImpl : public Pass,
         wasRun(false) {}
 
   /// \copydoc FunctionPassManager::add()
-  void add(Pass *P) {
-    schedulePass(P);
-  }
+  void add(Pass *P) { schedulePass(P); }
 
   /// createPrinterPass - Get a function printer pass.
   Pass *createPrinterPass(raw_ostream &O,
@@ -282,7 +280,6 @@ class FunctionPassManagerImpl : public Pass,
   ///
   bool doFinalization(Module &M) override;
 
-
   PMDataManager *getAsPMDataManager() override { return this; }
   Pass *getAsPass() override { return this; }
   PassManagerType getTopLevelPassManagerType() override {
@@ -402,8 +399,8 @@ class MPPassManager : public Pass, public PMDataManager {
   /// whether any of the passes modifies the module, and if so, return true.
   bool runOnModule(Module &M);
 
-  using llvm::Pass::doInitialization;
   using llvm::Pass::doFinalization;
+  using llvm::Pass::doInitialization;
 
   /// Pass Manager itself does not invalidate any analysis info.
   void getAnalysisUsage(AnalysisUsage &Info) const override {
@@ -428,7 +425,7 @@ class MPPassManager : public Pass, public PMDataManager {
 
   // Print passes managed by this manager
   void dumpPassStructure(unsigned Offset) override {
-    dbgs().indent(Offset*2) << "ModulePass Manager\n";
+    dbgs().indent(Offset * 2) << "ModulePass Manager\n";
     for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
       ModulePass *MP = getContainedPass(Index);
       MP->dumpPassStructure(Offset + 1);
@@ -436,7 +433,7 @@ class MPPassManager : public Pass, public PMDataManager {
           OnTheFlyManagers.find(MP);
       if (I != OnTheFlyManagers.end())
         I->second->dumpPassStructure(Offset + 2);
-      dumpLastUses(MP, Offset+1);
+      dumpLastUses(MP, Offset + 1);
     }
   }
 
@@ -449,10 +446,10 @@ class MPPassManager : public Pass, public PMDataManager {
     return PMT_ModulePassManager;
   }
 
- private:
+private:
   /// Collection of on the fly FPPassManagers. These managers manage
   /// function passes that are required by module passes.
-   MapVector<Pass *, legacy::FunctionPassManagerImpl *> OnTheFlyManagers;
+  MapVector<Pass *, legacy::FunctionPassManagerImpl *> OnTheFlyManagers;
 };
 
 char MPPassManager::ID = 0;
@@ -476,9 +473,7 @@ class PassManagerImpl : public Pass,
       : Pass(PT_PassManager, ID), PMTopLevelManager(new MPPassManager()) {}
 
   /// \copydoc PassManager::add()
-  void add(Pass *P) {
-    schedulePass(P);
-  }
+  void add(Pass *P) { schedulePass(P); }
 
   /// createPrinterPass - Get a module printer pass.
   Pass *createPrinterPass(raw_ostream &O,
@@ -490,8 +485,8 @@ class PassManagerImpl : public Pass,
   /// whether any of the passes modifies the module, and if so, return true.
   bool run(Module &M);
 
-  using llvm::Pass::doInitialization;
   using llvm::Pass::doFinalization;
+  using llvm::Pass::doInitialization;
 
   /// Pass Manager itself does not invalidate any analysis info.
   void getAnalysisUsage(AnalysisUsage &Info) const override {
@@ -555,8 +550,7 @@ PMTopLevelManager::PMTopLevelManager(PMDataManager *PMDM) {
 }
 
 /// Set pass P as the last user of the given analysis passes.
-void
-PMTopLevelManager::setLastUser(ArrayRef<Pass*> AnalysisPasses, Pass *P) {
+void PMTopLevelManager::setLastUser(ArrayRef<Pass *> AnalysisPasses, Pass *P) {
   unsigned PDepth = 0;
   if (P->getResolver())
     PDepth = P->getResolver()->getPMDataManager().getDepth();
@@ -633,7 +627,7 @@ AnalysisUsage *PMTopLevelManager::findAnalysisUsage(Pass *P) {
     AnalysisUsage AU;
     P->getAnalysisUsage(AU);
 
-    AUFoldingSetNode* Node = nullptr;
+    AUFoldingSetNode *Node = nullptr;
     FoldingSetNodeID ID;
     AUFoldingSetNode::Profile(ID, AU);
     void *IP = nullptr;
@@ -688,9 +682,12 @@ void PMTopLevelManager::schedulePass(Pass *P) {
 
         if (!PI) {
           // Pass P is not in the global PassRegistry
-          dbgs() << "Pass '"  << P->getPassName() << "' is not initialized." << "\n";
-          dbgs() << "Verify if there is a pass dependency cycle." << "\n";
-          dbgs() << "Required Passes:" << "\n";
+          dbgs() << "Pass '" << P->getPassName() << "' is not initialized."
+                 << "\n";
+          dbgs() << "Verify if there is a pass dependency cycle."
+                 << "\n";
+          dbgs() << "Required Passes:"
+                 << "\n";
           for (const AnalysisID ID2 : RequiredSet) {
             if (ID == ID2)
               break;
@@ -698,20 +695,26 @@ void PMTopLevelManager::schedulePass(Pass *P) {
             if (AnalysisPass2) {
               dbgs() << "\t" << AnalysisPass2->getPassName() << "\n";
             } else {
-              dbgs() << "\t"   << "Error: Required pass not found! Possible causes:"  << "\n";
-              dbgs() << "\t\t" << "- Pass misconfiguration (e.g.: missing macros)"    << "\n";
-              dbgs() << "\t\t" << "- Corruption of the global PassRegistry"           << "\n";
+              dbgs() << "\t"
+                     << "Error: Required pass not found! Possible causes:"
+                     << "\n";
+              dbgs() << "\t\t"
+                     << "- Pass misconfiguration (e.g.: missing macros)"
+                     << "\n";
+              dbgs() << "\t\t"
+                     << "- Corruption of the global PassRegistry"
+                     << "\n";
             }
           }
         }
 
         assert(PI && "Expected required passes to be initialized");
         AnalysisPass = PI->createPass();
-        if (P->getPotentialPassManagerType () ==
+        if (P->getPotentialPassManagerType() ==
             AnalysisPass->getPotentialPassManagerType())
           // Schedule analysis pass that is managed by the same pass manager.
           schedulePass(AnalysisPass);
-        else if (P->getPotentialPassManagerType () >
+        else if (P->getPotentialPassManagerType() >
                  AnalysisPass->getPotentialPassManagerType()) {
           // Schedule analysis pass that is managed by a new manager.
           schedulePass(AnalysisPass);
@@ -878,7 +881,8 @@ void PMDataManager::recordAvailableAnalysis(Pass *P) {
   // This pass is the current implementation of all of the interfaces it
   // implements as well.
   const PassInfo *PInf = TPM->findAnalysisPassInfo(PI);
-  if (!PInf) return;
+  if (!PInf)
+    return;
   for (const PassInfo *PI : PInf->getInterfacesImplemented())
     AvailableAnalysis[PI->getTypeInfo()] = P;
 }
@@ -925,15 +929,16 @@ void PMDataManager::removeNotPreservedAnalysis(Pass *P) {
     return;
 
   const AnalysisUsage::VectorType &PreservedSet = AnUsage->getPreservedSet();
-  for (DenseMap<AnalysisID, Pass*>::iterator I = AvailableAnalysis.begin(),
-         E = AvailableAnalysis.end(); I != E; ) {
-    DenseMap<AnalysisID, Pass*>::iterator Info = I++;
+  for (DenseMap<AnalysisID, Pass *>::iterator I = AvailableAnalysis.begin(),
+                                              E = AvailableAnalysis.end();
+       I != E;) {
+    DenseMap<AnalysisID, Pass *>::iterator Info = I++;
     if (Info->second->getAsImmutablePass() == nullptr &&
         !is_contained(PreservedSet, Info->first)) {
       // Remove this analysis
       if (PassDebugging >= Details) {
         Pass *S = Info->second;
-        dbgs() << " -- '" <<  P->getPassName() << "' is not preserving '";
+        dbgs() << " -- '" << P->getPassName() << "' is not preserving '";
         dbgs() << S->getPassName() << "'\n";
       }
       AvailableAnalysis.erase(Info);
@@ -946,8 +951,7 @@ void PMDataManager::removeNotPreservedAnalysis(Pass *P) {
     if (!IA)
       continue;
 
-    for (DenseMap<AnalysisID, Pass *>::iterator I = IA->begin(),
-                                                E = IA->end();
+    for (DenseMap<AnalysisID, Pass *>::iterator I = IA->begin(), E = IA->end();
          I != E;) {
       DenseMap<AnalysisID, Pass *>::iterator Info = I++;
       if (Info->second->getAsImmutablePass() == nullptr &&
@@ -955,7 +959,7 @@ void PMDataManager::removeNotPreservedAnalysis(Pass *P) {
         // Remove this analysis
         if (PassDebugging >= Details) {
           Pass *S = Info->second;
-          dbgs() << " -- '" <<  P->getPassName() << "' is not preserving '";
+          dbgs() << " -- '" << P->getPassName() << "' is not preserving '";
           dbgs() << S->getPassName() << "'\n";
         }
         IA->erase(Info);
@@ -977,7 +981,7 @@ void PMDataManager::removeDeadPasses(Pass *P, StringRef Msg,
   TPM->collectLastUses(DeadPasses, P);
 
   if (PassDebugging >= Details && !DeadPasses.empty()) {
-    dbgs() << " -*- '" <<  P->getPassName();
+    dbgs() << " -*- '" << P->getPassName();
     dbgs() << "' is the last user of following pass instances.";
     dbgs() << " Free these instances\n";
   }
@@ -1087,7 +1091,6 @@ void PMDataManager::add(Pass *P, bool ProcessAnalysis) {
   PassVector.push_back(P);
 }
 
-
 /// Populate UP with analysis pass that are used or required by
 /// pass P and are available. Populate RP_NotAvail with analysis
 /// pass that are required by pass P but are not available.
@@ -1132,7 +1135,7 @@ void PMDataManager::initializeAnalysisImpl(Pass *P) {
 Pass *PMDataManager::findAnalysisPass(AnalysisID AID, bool SearchParent) {
 
   // Check if AvailableAnalysis map has one entry.
-  DenseMap<AnalysisID, Pass*>::const_iterator I =  AvailableAnalysis.find(AID);
+  DenseMap<AnalysisID, Pass *>::const_iterator I = AvailableAnalysis.find(AID);
 
   if (I != AvailableAnalysis.end())
     return I->second;
@@ -1145,7 +1148,7 @@ Pass *PMDataManager::findAnalysisPass(AnalysisID AID, bool SearchParent) {
 }
 
 // Print list of passes that are last used by P.
-void PMDataManager::dumpLastUses(Pass *P, unsigned Offset) const{
+void PMDataManager::dumpLastUses(Pass *P, unsigned Offset) const {
   if (PassDebugging < Details)
     return;
 
@@ -1158,7 +1161,7 @@ void PMDataManager::dumpLastUses(Pass *P, unsigned Offset) const{
   TPM->collectLastUses(LUses, P);
 
   for (Pass *P : LUses) {
-    dbgs() << "--" << std::string(Offset*2, ' ');
+    dbgs() << "--" << std::string(Offset * 2, ' ');
     P->dumpPassStructure(0);
   }
 }
@@ -1167,17 +1170,14 @@ void PMDataManager::dumpPassArguments() const {
   for (Pass *P : PassVector) {
     if (PMDataManager *PMD = P->getAsPMDataManager())
       PMD->dumpPassArguments();
-    else
-      if (const PassInfo *PI =
-            TPM->findAnalysisPassInfo(P->getPassID()))
-        if (!PI->isAnalysisGroup())
-          dbgs() << " -" << PI->getPassArgument();
+    else if (const PassInfo *PI = TPM->findAnalysisPassInfo(P->getPassID()))
+      if (!PI->isAnalysisGroup())
+        dbgs() << " -" << PI->getPassArgument();
   }
 }
 
 void PMDataManager::dumpPassInfo(Pass *P, enum PassDebuggingString S1,
-                                 enum PassDebuggingString S2,
-                                 StringRef Msg) {
+                                 enum PassDebuggingString S2, StringRef Msg) {
   if (PassDebugging < Executions)
     return;
   dbgs() << "[" << std::chrono::system_clock::now() << "] " << (void *)this
@@ -1200,10 +1200,10 @@ void PMDataManager::dumpPassInfo(Pass *P, enum PassDebuggingString S1,
     dbgs() << "' on Function '" << Msg << "'...\n";
     break;
   case ON_MODULE_MSG:
-    dbgs() << "' on Module '"  << Msg << "'...\n";
+    dbgs() << "' on Module '" << Msg << "'...\n";
     break;
   case ON_REGION_MSG:
-    dbgs() << "' on Region '"  << Msg << "'...\n";
+    dbgs() << "' on Region '" << Msg << "'...\n";
     break;
   case ON_LOOP_MSG:
     dbgs() << "' on Loop '" << Msg << "'...\n";
@@ -1243,14 +1243,16 @@ void PMDataManager::dumpUsedSet(const Pass *P) const {
   dumpAnalysisUsage("Used", P, analysisUsage.getUsedSet());
 }
 
-void PMDataManager::dumpAnalysisUsage(StringRef Msg, const Pass *P,
-                                   const AnalysisUsage::VectorType &Set) const {
+void PMDataManager::dumpAnalysisUsage(
+    StringRef Msg, const Pass *P, const AnalysisUsage::VectorType &Set) const {
   assert(PassDebugging >= Details);
   if (Set.empty())
     return;
-  dbgs() << (const void*)P << std::string(getDepth()*2+3, ' ') << Msg << " Analyses:";
+  dbgs() << (const void *)P << std::string(getDepth() * 2 + 3, ' ') << Msg
+         << " Analyses:";
   for (unsigned i = 0; i != Set.size(); ++i) {
-    if (i) dbgs() << ',';
+    if (i)
+      dbgs() << ',';
     const PassInfo *PInf = TPM->findAnalysisPassInfo(Set[i]);
     if (!PInf) {
       // Some preserved passes, such as AliasAnalysis, may not be initialized by
@@ -1328,13 +1330,9 @@ FunctionPassManager::FunctionPassManager(Module *m) : M(m) {
   FPM->setResolver(AR);
 }
 
-FunctionPassManager::~FunctionPassManager() {
-  delete FPM;
-}
+FunctionPassManager::~FunctionPassManager() { delete FPM; }
 
-void FunctionPassManager::add(Pass *P) {
-  FPM->add(P);
-}
+void FunctionPassManager::add(Pass *P) { FPM->add(P); }
 
 /// run - Execute all of the passes scheduled for execution.  Keep
 /// track of whether any of the passes modifies the function, and if
@@ -1347,7 +1345,6 @@ bool FunctionPassManager::run(Function &F) {
   return FPM->run(F);
 }
 
-
 /// doInitialization - Run all of the initializers for the function passes.
 ///
 bool FunctionPassManager::doInitialization() {
@@ -1356,34 +1353,31 @@ bool FunctionPassManager::doInitialization() {
 
 /// doFinalization - Run all of the finalizers for the function passes.
 ///
-bool FunctionPassManager::doFinalization() {
-  return FPM->doFinalization(*M);
-}
+bool FunctionPassManager::doFinalization() { return FPM->doFinalization(*M); }
 } // namespace legacy
 } // namespace llvm
 
 /// cleanup - After running all passes, clean up pass manager cache.
 void FPPassManager::cleanup() {
- for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
+  for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
     FunctionPass *FP = getContainedPass(Index);
     AnalysisResolver *AR = FP->getResolver();
     assert(AR && "Analysis Resolver is not set");
     AR->clearAnalysisImpls();
- }
+  }
 }
 
-
 //===----------------------------------------------------------------------===//
 // FPPassManager implementation
 
 char FPPassManager::ID = 0;
 /// Print passes managed by this manager
 void FPPassManager::dumpPassStructure(unsigned Offset) {
-  dbgs().indent(Offset*2) << "FunctionPass Manager\n";
+  dbgs().indent(Offset * 2) << "FunctionPass Manager\n";
   for (unsigned Index = 0; Index < getNumContainedPasses(); ++Index) {
     FunctionPass *FP = getContainedPass(Index);
     FP->dumpPassStructure(Offset + 1);
-    dumpLastUses(FP, Offset+1);
+    dumpLastUses(FP, Offset + 1);
   }
 }
 
@@ -1507,8 +1501,7 @@ bool FPPassManager::doFinalization(Module &M) {
 /// Execute all of the passes scheduled for execution by invoking
 /// runOnModule method.  Keep track of whether any of the passes modifies
 /// the module, and if so, return true.
-bool
-MPPassManager::runOnModule(Module &M) {
+bool MPPassManager::runOnModule(Module &M) {
   llvm::TimeTraceScope TimeScope("OptModule", M.getName());
 
   bool Changed = false;
@@ -1622,7 +1615,7 @@ void MPPassManager::addLowerLevelRequiredPass(Pass *P, Pass *RequiredPass) {
   Pass *FoundPass = nullptr;
   if (RequiredPassPI && RequiredPassPI->isAnalysis()) {
     FoundPass =
-      ((PMTopLevelManager*)FPP)->findAnalysisPass(RequiredPass->getPassID());
+        ((PMTopLevelManager *)FPP)->findAnalysisPass(RequiredPass->getPassID());
   }
   if (!FoundPass) {
     FoundPass = RequiredPass;
@@ -1633,7 +1626,7 @@ void MPPassManager::addLowerLevelRequiredPass(Pass *P, Pass *RequiredPass) {
   // Register P as the last user of FoundPass or RequiredPass.
   SmallVector<Pass *, 1> LU;
   LU.push_back(FoundPass);
-  FPP->setLastUser(LU,  P);
+  FPP->setLastUser(LU, P);
 }
 
 /// Return function pass corresponding to PassInfo PI, that is
@@ -1663,19 +1656,13 @@ PassManager::PassManager() {
   PM->setTopLevelManager(PM);
 }
 
-PassManager::~PassManager() {
-  delete PM;
-}
+PassManager::~PassManager() { delete PM; }
 
-void PassManager::add(Pass *P) {
-  PM->add(P);
-}
+void PassManager::add(Pass *P) { PM->add(P); }
 
 /// run - Execute all of the passes scheduled for execution.  Keep track of
 /// whether any of the passes modifies the module, and if so, return true.
-bool PassManager::run(Module &M) {
-  return PM->run(M);
-}
+bool PassManager::run(Module &M) { return PM->run(M); }
 } // namespace legacy
 } // namespace llvm
 
@@ -1695,21 +1682,21 @@ void PMStack::pop() {
 // Push PM on the stack and set its top level manager.
 void PMStack::push(PMDataManager *PM) {
   assert(PM && "Unable to push. Pass Manager expected");
-  assert(PM->getDepth()==0 && "Pass Manager depth set too early");
+  assert(PM->getDepth() == 0 && "Pass Manager depth set too early");
 
   if (!this->empty()) {
-    assert(PM->getPassManagerType() > this->top()->getPassManagerType()
-           && "pushing bad pass manager to PMStack");
+    assert(PM->getPassManagerType() > this->top()->getPassManagerType() &&
+           "pushing bad pass manager to PMStack");
     PMTopLevelManager *TPM = this->top()->getTopLevelManager();
 
     assert(TPM && "Unable to find top level manager");
     TPM->addIndirectPassManager(PM);
     PM->setTopLevelManager(TPM);
-    PM->setDepth(this->top()->getDepth()+1);
+    PM->setDepth(this->top()->getDepth() + 1);
   } else {
-    assert((PM->getPassManagerType() == PMT_ModulePassManager
-           || PM->getPassManagerType() == PMT_FunctionPassManager)
-           && "pushing bad pass manager to PMStack");
+    assert((PM->getPassManagerType() == PMT_ModulePassManager ||
+            PM->getPassManagerType() == PMT_FunctionPassManager) &&
+           "pushing bad pass manager to PMStack");
     PM->setDepth(1);
   }
 
diff --git a/llvm/lib/IR/MDBuilder.cpp b/llvm/lib/IR/MDBuilder.cpp
index 2490b3012bdc2b4..e4ac4ddfeb8272a 100644
--- a/llvm/lib/IR/MDBuilder.cpp
+++ b/llvm/lib/IR/MDBuilder.cpp
@@ -76,9 +76,8 @@ MDNode *MDBuilder::createFunctionEntryCount(
 }
 
 MDNode *MDBuilder::createFunctionSectionPrefix(StringRef Prefix) {
-  return MDNode::get(Context,
-                     {createString("function_section_prefix"),
-                      createString(Prefix)});
+  return MDNode::get(
+      Context, {createString("function_section_prefix"), createString(Prefix)});
 }
 
 MDNode *MDBuilder::createRange(const APInt &Lo, const APInt &Hi) {
@@ -140,7 +139,7 @@ MDNode *MDBuilder::mergeCallbackEncodings(MDNode *ExistingCallbacks,
 
     auto *OldCBCalleeIdxAsCM = cast<ConstantAsMetadata>(Ops[u]);
     uint64_t OldCBCalleeIdx =
-      cast<ConstantInt>(OldCBCalleeIdxAsCM->getValue())->getZExtValue();
+        cast<ConstantInt>(OldCBCalleeIdxAsCM->getValue())->getZExtValue();
     (void)OldCBCalleeIdx;
     assert(NewCBCalleeIdx != OldCBCalleeIdx &&
            "Cannot map a callback callee index twice!");
@@ -329,8 +328,8 @@ MDNode *MDBuilder::createMutableTBAAAccessTag(MDNode *Tag) {
 
 MDNode *MDBuilder::createIrrLoopHeaderWeight(uint64_t Weight) {
   Metadata *Vals[] = {
-    createString("loop_header_weight"),
-    createConstant(ConstantInt::get(Type::getInt64Ty(Context), Weight)),
+      createString("loop_header_weight"),
+      createConstant(ConstantInt::get(Type::getInt64Ty(Context), Weight)),
   };
   return MDNode::get(Context, Vals);
 }
diff --git a/llvm/lib/IR/Mangler.cpp b/llvm/lib/IR/Mangler.cpp
index 8d9880ecba58efb..fde114008dcd9cf 100644
--- a/llvm/lib/IR/Mangler.cpp
+++ b/llvm/lib/IR/Mangler.cpp
@@ -105,9 +105,9 @@ static void addByteCountSuffix(raw_ostream &OS, const Function *F,
       continue;
 
     // 'Dereference' type in case of byval or inalloca parameter attribute.
-    uint64_t AllocSize = A.hasPassPointeeByValueCopyAttr() ?
-      A.getPassPointeeByValueCopySize(DL) :
-      DL.getTypeAllocSize(A.getType());
+    uint64_t AllocSize = A.hasPassPointeeByValueCopyAttr()
+                             ? A.getPassPointeeByValueCopySize(DL)
+                             : DL.getTypeAllocSize(A.getType());
 
     // Size should be aligned to pointer size.
     ArgWords += alignTo(AllocSize, PtrSize);
@@ -278,4 +278,3 @@ void llvm::emitLinkerFlagsForUsedCOFF(raw_ostream &OS, const GlobalValue *GV,
   if (NeedQuotes)
     OS << "\"";
 }
-
diff --git a/llvm/lib/IR/Metadata.cpp b/llvm/lib/IR/Metadata.cpp
index c153ffb71a73bba..14321504d68a1f6 100644
--- a/llvm/lib/IR/Metadata.cpp
+++ b/llvm/lib/IR/Metadata.cpp
@@ -256,7 +256,8 @@ void ReplaceableMetadataImpl::SalvageDebugInfo(const Constant &C) {
   ValueAsMetadata *MD = I->second;
   using UseTy =
       std::pair<void *, std::pair<MetadataTracking::OwnerTy, uint64_t>>;
-  // Copy out uses and update value of Constant used by debug info metadata with undef below
+  // Copy out uses and update value of Constant used by debug info metadata with
+  // undef below
   SmallVector<UseTy, 8> Uses(MD->UseMap.begin(), MD->UseMap.end());
 
   for (const auto &Pair : Uses) {
@@ -1693,10 +1694,9 @@ void GlobalObject::copyMetadata(const GlobalObject *Other, unsigned Offset) {
 void GlobalObject::addTypeMetadata(unsigned Offset, Metadata *TypeID) {
   addMetadata(
       LLVMContext::MD_type,
-      *MDTuple::get(getContext(),
-                    {ConstantAsMetadata::get(ConstantInt::get(
-                         Type::getInt64Ty(getContext()), Offset)),
-                     TypeID}));
+      *MDTuple::get(getContext(), {ConstantAsMetadata::get(ConstantInt::get(
+                                       Type::getInt64Ty(getContext()), Offset)),
+                                   TypeID}));
 }
 
 void GlobalObject::setVCallVisibilityMetadata(VCallVisibility Visibility) {
diff --git a/llvm/lib/IR/Module.cpp b/llvm/lib/IR/Module.cpp
index 5861bbd1f293e9b..107d7a588da4cd0 100644
--- a/llvm/lib/IR/Module.cpp
+++ b/llvm/lib/IR/Module.cpp
@@ -149,7 +149,7 @@ FunctionCallee Module::getOrInsertFunction(StringRef Name, FunctionType *Ty,
     // Nope, add it
     Function *New = Function::Create(Ty, GlobalVariable::ExternalLinkage,
                                      DL.getProgramAddressSpace(), Name);
-    if (!New->isIntrinsic())       // Intrinsics get attrs set on construction
+    if (!New->isIntrinsic()) // Intrinsics get attrs set on construction
       New->setAttributes(AttributeList);
     FunctionList.push_back(New);
     return {Ty, New}; // Return the new prototype.
@@ -190,7 +190,7 @@ Function *Module::getFunction(StringRef Name) const {
 GlobalVariable *Module::getGlobalVariable(StringRef Name,
                                           bool AllowLocal) const {
   if (GlobalVariable *Result =
-      dyn_cast_or_null<GlobalVariable>(getNamedValue(Name)))
+          dyn_cast_or_null<GlobalVariable>(getNamedValue(Name)))
     if (AllowLocal || !Result->hasLocalLinkage())
       return Result;
   return nullptr;
@@ -300,10 +300,11 @@ bool Module::isValidModuleFlag(const MDNode &ModFlag, ModFlagBehavior &MFB,
 }
 
 /// getModuleFlagsMetadata - Returns the module flags in the provided vector.
-void Module::
-getModuleFlagsMetadata(SmallVectorImpl<ModuleFlagEntry> &Flags) const {
+void Module::getModuleFlagsMetadata(
+    SmallVectorImpl<ModuleFlagEntry> &Flags) const {
   const NamedMDNode *ModFlags = getModuleFlagsMetadata();
-  if (!ModFlags) return;
+  if (!ModFlags)
+    return;
 
   for (const MDNode *Flag : ModFlags->operands()) {
     ModFlagBehavior MFB;
@@ -389,9 +390,7 @@ void Module::setModuleFlag(ModFlagBehavior Behavior, StringRef Key,
   addModuleFlag(Behavior, Key, Val);
 }
 
-void Module::setDataLayout(StringRef Desc) {
-  DL.reset(Desc);
-}
+void Module::setDataLayout(StringRef Desc) { DL.reset(Desc); }
 
 void Module::setDataLayout(const DataLayout &Other) { DL = Other; }
 
diff --git a/llvm/lib/IR/ModuleSummaryIndex.cpp b/llvm/lib/IR/ModuleSummaryIndex.cpp
index 198c730418c724d..73f28cac1a549bb 100644
--- a/llvm/lib/IR/ModuleSummaryIndex.cpp
+++ b/llvm/lib/IR/ModuleSummaryIndex.cpp
@@ -398,7 +398,7 @@ struct Edge {
   GlobalValue::GUID Src;
   GlobalValue::GUID Dst;
 };
-}
+} // namespace
 
 void Attributes::add(const Twine &Name, const Twine &Value,
                      const Twine &Comment) {
@@ -480,7 +480,7 @@ static std::string fflagsToString(FunctionSummary::FFlags F) {
 }
 
 // Get string representation of function instruction count and flags.
-static std::string getSummaryAttributes(GlobalValueSummary* GVS) {
+static std::string getSummaryAttributes(GlobalValueSummary *GVS) {
   auto *FS = dyn_cast_or_null<FunctionSummary>(GVS);
   if (!FS)
     return "";
@@ -612,7 +612,8 @@ void ModuleSummaryIndex::exportToDot(
     OS << "    node [style=filled,fillcolor=lightblue];\n";
 
     auto &GVSMap = ModIt.second;
-    auto Draw = [&](GlobalValue::GUID IdFrom, GlobalValue::GUID IdTo, int Hotness) {
+    auto Draw = [&](GlobalValue::GUID IdFrom, GlobalValue::GUID IdTo,
+                    int Hotness) {
       if (!GVSMap.count(IdTo)) {
         CrossModuleEdges.push_back({ModId, Hotness, IdFrom, IdTo});
         return;
diff --git a/llvm/lib/IR/Operator.cpp b/llvm/lib/IR/Operator.cpp
index b57f3e3b2967eb9..47e28c05b2034e7 100644
--- a/llvm/lib/IR/Operator.cpp
+++ b/llvm/lib/IR/Operator.cpp
@@ -173,10 +173,9 @@ bool GEPOperator::accumulateConstantOffset(
   return true;
 }
 
-bool GEPOperator::collectOffset(
-    const DataLayout &DL, unsigned BitWidth,
-    MapVector<Value *, APInt> &VariableOffsets,
-    APInt &ConstantOffset) const {
+bool GEPOperator::collectOffset(const DataLayout &DL, unsigned BitWidth,
+                                MapVector<Value *, APInt> &VariableOffsets,
+                                APInt &ConstantOffset) const {
   assert(BitWidth == DL.getIndexSizeInBits(getPointerAddressSpace()) &&
          "The offset bit width does not match DL specification.");
 
diff --git a/llvm/lib/IR/Pass.cpp b/llvm/lib/IR/Pass.cpp
index d6096ebb3af7007..107569f9a9dc1f1 100644
--- a/llvm/lib/IR/Pass.cpp
+++ b/llvm/lib/IR/Pass.cpp
@@ -40,9 +40,7 @@ using namespace llvm;
 //
 
 // Force out-of-line virtual method.
-Pass::~Pass() {
-  delete Resolver;
-}
+Pass::~Pass() { delete Resolver; }
 
 // Force out-of-line virtual method.
 ModulePass::~ModulePass() = default;
@@ -72,14 +70,14 @@ bool Pass::mustPreserveAnalysisID(char &AID) const {
 
 // dumpPassStructure - Implement the -debug-pass=Structure option
 void Pass::dumpPassStructure(unsigned Offset) {
-  dbgs().indent(Offset*2) << getPassName() << "\n";
+  dbgs().indent(Offset * 2) << getPassName() << "\n";
 }
 
 /// getPassName - Return a nice clean name for a pass.  This usually
 /// implemented in terms of the name that is registered by one of the
 /// Registration templates, but can be overloaded directly.
 StringRef Pass::getPassName() const {
-  AnalysisID AID =  getPassID();
+  AnalysisID AID = getPassID();
   const PassInfo *PI = PassRegistry::getPassRegistry()->getPassInfo(AID);
   if (PI)
     return PI->getPassName();
@@ -107,17 +105,11 @@ void Pass::verifyAnalysis() const {
   // By default, don't do anything.
 }
 
-void *Pass::getAdjustedAnalysisPointer(AnalysisID AID) {
-  return this;
-}
+void *Pass::getAdjustedAnalysisPointer(AnalysisID AID) { return this; }
 
-ImmutablePass *Pass::getAsImmutablePass() {
-  return nullptr;
-}
+ImmutablePass *Pass::getAsImmutablePass() { return nullptr; }
 
-PMDataManager *Pass::getAsPMDataManager() {
-  return nullptr;
-}
+PMDataManager *Pass::getAsPMDataManager() { return nullptr; }
 
 void Pass::setResolver(AnalysisResolver *AR) {
   assert(!Resolver && "Resolver is already set");
@@ -133,9 +125,7 @@ void Pass::print(raw_ostream &OS, const Module *) const {
 
 #if !defined(NDEBUG) || defined(LLVM_ENABLE_DUMP)
 // dump - call print(cerr);
-LLVM_DUMP_METHOD void Pass::dump() const {
-  print(dbgs(), nullptr);
-}
+LLVM_DUMP_METHOD void Pass::dump() const { print(dbgs(), nullptr); }
 #endif
 
 #ifdef EXPENSIVE_CHECKS
diff --git a/llvm/lib/IR/PassTimingInfo.cpp b/llvm/lib/IR/PassTimingInfo.cpp
index cfd27bf78793746..1cbda8c3ded4f50 100644
--- a/llvm/lib/IR/PassTimingInfo.cpp
+++ b/llvm/lib/IR/PassTimingInfo.cpp
@@ -62,7 +62,8 @@ class PassTimingInfo {
 
 private:
   StringMap<unsigned> PassIDCountMap; ///< Map that counts instances of passes
-  DenseMap<PassInstanceID, std::unique_ptr<Timer>> TimingData; ///< timers for pass instances
+  DenseMap<PassInstanceID, std::unique_ptr<Timer>>
+      TimingData; ///< timers for pass instances
   TimerGroup TG;
 
 public:
@@ -140,7 +141,8 @@ Timer *PassTimingInfo::getPassTimer(Pass *P, PassInstanceID Pass) {
     StringRef PassArgument;
     if (const PassInfo *PI = Pass::lookupPassInfo(P->getPassID()))
       PassArgument = PI->getPassArgument();
-    T.reset(newPassTimer(PassArgument.empty() ? PassName : PassArgument, PassName));
+    T.reset(
+        newPassTimer(PassArgument.empty() ? PassName : PassArgument, PassName));
   }
   return T.get();
 }
@@ -200,9 +202,7 @@ TimePassesHandler::TimePassesHandler(bool Enabled, bool PerRun)
 TimePassesHandler::TimePassesHandler()
     : TimePassesHandler(TimePassesIsEnabled, TimePassesPerRun) {}
 
-void TimePassesHandler::setOutStream(raw_ostream &Out) {
-  OutStream = &Out;
-}
+void TimePassesHandler::setOutStream(raw_ostream &Out) { OutStream = &Out; }
 
 void TimePassesHandler::print() {
   if (!Enabled)
@@ -224,21 +224,23 @@ LLVM_DUMP_METHOD void TimePassesHandler::dump() const {
          << ":\n\tRunning:\n";
   for (auto &I : TimingData) {
     StringRef PassID = I.getKey();
-    const TimerVector& MyTimers = I.getValue();
+    const TimerVector &MyTimers = I.getValue();
     for (unsigned idx = 0; idx < MyTimers.size(); idx++) {
-      const Timer* MyTimer = MyTimers[idx].get();
+      const Timer *MyTimer = MyTimers[idx].get();
       if (MyTimer && MyTimer->isRunning())
-        dbgs() << "\tTimer " << MyTimer << " for pass " << PassID << "(" << idx << ")\n";
+        dbgs() << "\tTimer " << MyTimer << " for pass " << PassID << "(" << idx
+               << ")\n";
     }
   }
   dbgs() << "\tTriggered:\n";
   for (auto &I : TimingData) {
     StringRef PassID = I.getKey();
-    const TimerVector& MyTimers = I.getValue();
+    const TimerVector &MyTimers = I.getValue();
     for (unsigned idx = 0; idx < MyTimers.size(); idx++) {
-      const Timer* MyTimer = MyTimers[idx].get();
+      const Timer *MyTimer = MyTimers[idx].get();
       if (MyTimer && MyTimer->hasTriggered() && !MyTimer->isRunning())
-        dbgs() << "\tTimer " << MyTimer << " for pass " << PassID << "(" << idx << ")\n";
+        dbgs() << "\tTimer " << MyTimer << " for pass " << PassID << "(" << idx
+               << ")\n";
     }
   }
 }
diff --git a/llvm/lib/IR/ProfileSummary.cpp b/llvm/lib/IR/ProfileSummary.cpp
index 9f7335ecbe44d72..b15a98b3537d59c 100644
--- a/llvm/lib/IR/ProfileSummary.cpp
+++ b/llvm/lib/IR/ProfileSummary.cpp
@@ -260,8 +260,7 @@ void ProfileSummary::printDetailedSummary(raw_ostream &OS) const {
   OS << "Detailed summary:\n";
   for (const auto &Entry : DetailedSummary) {
     OS << Entry.NumCounts << " blocks with count >= " << Entry.MinCount
-       << " account for "
-       << format("%0.6g", (float)Entry.Cutoff / Scale * 100)
+       << " account for " << format("%0.6g", (float)Entry.Cutoff / Scale * 100)
        << " percentage of the total counts.\n";
   }
 }
diff --git a/llvm/lib/IR/SafepointIRVerifier.cpp b/llvm/lib/IR/SafepointIRVerifier.cpp
index ed99d05975c232b..7a96fe7cc98898a 100644
--- a/llvm/lib/IR/SafepointIRVerifier.cpp
+++ b/llvm/lib/IR/SafepointIRVerifier.cpp
@@ -73,7 +73,7 @@ class CFGDeadness {
 
 public:
   /// Return the edge that coresponds to the predecessor.
-  static const Use& getEdge(const_pred_iterator &PredIt) {
+  static const Use &getEdge(const_pred_iterator &PredIt) {
     auto &PU = PredIt.getUse();
     return PU.getUser()->getOperandUse(PU.getOperandNo());
   }
@@ -82,9 +82,10 @@ class CFGDeadness {
   /// basic block InBB listed in the phi node.
   bool hasLiveIncomingEdge(const PHINode *PN, const BasicBlock *InBB) const {
     assert(!isDeadBlock(InBB) && "block must be live");
-    const BasicBlock* BB = PN->getParent();
+    const BasicBlock *BB = PN->getParent();
     bool Listed = false;
-    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End; ++PredIt) {
+    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End;
+         ++PredIt) {
       if (InBB == *PredIt) {
         if (!isDeadEdge(&getEdge(PredIt)))
           return true;
@@ -96,10 +97,7 @@ class CFGDeadness {
     return false;
   }
 
-
-  bool isDeadBlock(const BasicBlock *BB) const {
-    return DeadBlocks.count(BB);
-  }
+  bool isDeadBlock(const BasicBlock *BB) const { return DeadBlocks.count(BB); }
 
   bool isDeadEdge(const Use *U) const {
     assert(cast<Instruction>(U->getUser())->isTerminator() &&
@@ -113,7 +111,8 @@ class CFGDeadness {
 
   bool hasLiveIncomingEdges(const BasicBlock *BB) const {
     // Check if all incoming edges are dead.
-    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End; ++PredIt) {
+    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End;
+         ++PredIt) {
       auto &PU = PredIt.getUse();
       const Use &U = PU.getUser()->getOperandUse(PU.getOperandNo());
       if (!isDeadBlock(*PredIt) && !isDeadEdge(&U))
@@ -136,13 +135,14 @@ class CFGDeadness {
       const Instruction *TI = BB->getTerminator();
       assert(TI && "blocks must be well formed");
 
-      // For conditional branches, we can perform simple conditional propagation on
-      // the condition value itself.
+      // For conditional branches, we can perform simple conditional propagation
+      // on the condition value itself.
       const BranchInst *BI = dyn_cast<BranchInst>(TI);
       if (!BI || !BI->isConditional() || !isa<Constant>(BI->getCondition()))
         continue;
 
-      // If a branch has two identical successors, we cannot declare either dead.
+      // If a branch has two identical successors, we cannot declare either
+      // dead.
       if (BI->getSuccessor(0) == BI->getSuccessor(1))
         continue;
 
@@ -167,7 +167,7 @@ class CFGDeadness {
 
       // All blocks dominated by D are dead.
       SmallVector<BasicBlock *, 8> Dom;
-      DT->getDescendants(const_cast<BasicBlock*>(D), Dom);
+      DT->getDescendants(const_cast<BasicBlock *>(D), Dom);
       // Do not need to mark all in and out edges dead
       // because BB is marked dead and this is enough
       // to run further.
@@ -272,7 +272,7 @@ static bool containsGCPtrType(Type *Ty) {
 }
 
 // Debugging aid -- prints a [Begin, End) range of values.
-template<typename IteratorTy>
+template <typename IteratorTy>
 static void PrintValueSet(raw_ostream &OS, IteratorTy Begin, IteratorTy End) {
   OS << "[ ";
   while (Begin != End) {
@@ -332,7 +332,7 @@ static enum BaseType getBaseType(const Value *Val) {
   // Strip through all the bitcasts and geps to get base pointer. Also check for
   // the exclusive value when there can be multiple base pointers (through phis
   // or selects).
-  while(!Worklist.empty()) {
+  while (!Worklist.empty()) {
     const Value *V = Worklist.pop_back_val();
     if (!Visited.insert(V).second)
       continue;
@@ -542,7 +542,8 @@ class InstructionVerifier {
 } // end anonymous namespace
 
 GCPtrTracker::GCPtrTracker(const Function &F, const DominatorTree &DT,
-                           const CFGDeadness &CD) : F(F), CD(CD) {
+                           const CFGDeadness &CD)
+    : F(F), CD(CD) {
   // Calculate Contribution of each live BB.
   // Allocate BB states for live blocks.
   for (const BasicBlock &BB : F)
@@ -570,8 +571,8 @@ BasicBlockState *GCPtrTracker::getBasicBlockState(const BasicBlock *BB) {
   return BlockMap.lookup(BB);
 }
 
-const BasicBlockState *GCPtrTracker::getBasicBlockState(
-    const BasicBlock *BB) const {
+const BasicBlockState *
+GCPtrTracker::getBasicBlockState(const BasicBlock *BB) const {
   return const_cast<GCPtrTracker *>(this)->getBasicBlockState(BB);
 }
 
@@ -625,7 +626,8 @@ void GCPtrTracker::recalculateBBsStates() {
       continue; // Ignore dead successors.
 
     size_t OldInCount = BBS->AvailableIn.size();
-    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End; ++PredIt) {
+    for (const_pred_iterator PredIt(BB), End(BB, true); PredIt != End;
+         ++PredIt) {
       const BasicBlock *PBB = *PredIt;
       BasicBlockState *PBBS = getBasicBlockState(PBB);
       if (PBBS && !CD.isDeadEdge(&CFGDeadness::getEdge(PredIt)))
@@ -669,8 +671,7 @@ bool GCPtrTracker::removeValidUnrelocatedDefs(const BasicBlock *BB,
         bool HasUnrelocatedInputs = false;
         for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
           const BasicBlock *InBB = PN->getIncomingBlock(i);
-          if (!isMapped(InBB) ||
-              !CD.hasLiveIncomingEdge(PN, InBB))
+          if (!isMapped(InBB) || !CD.hasLiveIncomingEdge(PN, InBB))
             continue; // Skip dead block or dead edge.
 
           const Value *InValue = PN->getIncomingValue(i);
@@ -803,8 +804,7 @@ void InstructionVerifier::verifyInstruction(
       for (unsigned i = 0, e = PN->getNumIncomingValues(); i != e; ++i) {
         const BasicBlock *InBB = PN->getIncomingBlock(i);
         const BasicBlockState *InBBS = Tracker->getBasicBlockState(InBB);
-        if (!InBBS ||
-            !Tracker->hasLiveIncomingEdge(PN, InBB))
+        if (!InBBS || !Tracker->hasLiveIncomingEdge(PN, InBB))
           continue; // Skip dead block or dead edge.
 
         const Value *InValue = PN->getIncomingValue(i);
@@ -813,53 +813,51 @@ void InstructionVerifier::verifyInstruction(
             !InBBS->AvailableOut.count(InValue))
           reportInvalidUse(*InValue, *PN);
       }
-  } else if (isa<CmpInst>(I) &&
-             containsGCPtrType(I.getOperand(0)->getType())) {
+  } else if (isa<CmpInst>(I) && containsGCPtrType(I.getOperand(0)->getType())) {
     Value *LHS = I.getOperand(0), *RHS = I.getOperand(1);
-    enum BaseType baseTyLHS = getBaseType(LHS),
-                  baseTyRHS = getBaseType(RHS);
+    enum BaseType baseTyLHS = getBaseType(LHS), baseTyRHS = getBaseType(RHS);
 
     // Returns true if LHS and RHS are unrelocated pointers and they are
     // valid unrelocated uses.
     auto hasValidUnrelocatedUse = [&AvailableSet, Tracker, baseTyLHS, baseTyRHS,
-                                   &LHS, &RHS] () {
-        // A cmp instruction has valid unrelocated pointer operands only if
-        // both operands are unrelocated pointers.
-        // In the comparison between two pointers, if one is an unrelocated
-        // use, the other *should be* an unrelocated use, for this
-        // instruction to contain valid unrelocated uses. This unrelocated
-        // use can be a null constant as well, or another unrelocated
-        // pointer.
-        if (AvailableSet.count(LHS) || AvailableSet.count(RHS))
-          return false;
-        // Constant pointers (that are not exclusively null) may have
-        // meaning in different VMs, so we cannot reorder the compare
-        // against constant pointers before the safepoint. In other words,
-        // comparison of an unrelocated use against a non-null constant
-        // maybe invalid.
-        if ((baseTyLHS == BaseType::ExclusivelySomeConstant &&
-             baseTyRHS == BaseType::NonConstant) ||
-            (baseTyLHS == BaseType::NonConstant &&
-             baseTyRHS == BaseType::ExclusivelySomeConstant))
-          return false;
-
-        // If one of pointers is poisoned and other is not exclusively derived
-        // from null it is an invalid expression: it produces poisoned result
-        // and unless we want to track all defs (not only gc pointers) the only
-        // option is to prohibit such instructions.
-        if ((Tracker->isValuePoisoned(LHS) && baseTyRHS != ExclusivelyNull) ||
-            (Tracker->isValuePoisoned(RHS) && baseTyLHS != ExclusivelyNull))
-            return false;
-
-        // All other cases are valid cases enumerated below:
-        // 1. Comparison between an exclusively derived null pointer and a
-        // constant base pointer.
-        // 2. Comparison between an exclusively derived null pointer and a
-        // non-constant unrelocated base pointer.
-        // 3. Comparison between 2 unrelocated pointers.
-        // 4. Comparison between a pointer exclusively derived from null and a
-        // non-constant poisoned pointer.
-        return true;
+                                   &LHS, &RHS]() {
+      // A cmp instruction has valid unrelocated pointer operands only if
+      // both operands are unrelocated pointers.
+      // In the comparison between two pointers, if one is an unrelocated
+      // use, the other *should be* an unrelocated use, for this
+      // instruction to contain valid unrelocated uses. This unrelocated
+      // use can be a null constant as well, or another unrelocated
+      // pointer.
+      if (AvailableSet.count(LHS) || AvailableSet.count(RHS))
+        return false;
+      // Constant pointers (that are not exclusively null) may have
+      // meaning in different VMs, so we cannot reorder the compare
+      // against constant pointers before the safepoint. In other words,
+      // comparison of an unrelocated use against a non-null constant
+      // maybe invalid.
+      if ((baseTyLHS == BaseType::ExclusivelySomeConstant &&
+           baseTyRHS == BaseType::NonConstant) ||
+          (baseTyLHS == BaseType::NonConstant &&
+           baseTyRHS == BaseType::ExclusivelySomeConstant))
+        return false;
+
+      // If one of pointers is poisoned and other is not exclusively derived
+      // from null it is an invalid expression: it produces poisoned result
+      // and unless we want to track all defs (not only gc pointers) the only
+      // option is to prohibit such instructions.
+      if ((Tracker->isValuePoisoned(LHS) && baseTyRHS != ExclusivelyNull) ||
+          (Tracker->isValuePoisoned(RHS) && baseTyLHS != ExclusivelyNull))
+        return false;
+
+      // All other cases are valid cases enumerated below:
+      // 1. Comparison between an exclusively derived null pointer and a
+      // constant base pointer.
+      // 2. Comparison between an exclusively derived null pointer and a
+      // non-constant unrelocated base pointer.
+      // 3. Comparison between 2 unrelocated pointers.
+      // 4. Comparison between a pointer exclusively derived from null and a
+      // non-constant poisoned pointer.
+      return true;
     };
     if (!hasValidUnrelocatedUse()) {
       // Print out all non-constant derived pointers that are unrelocated
diff --git a/llvm/lib/IR/SymbolTableListTraitsImpl.h b/llvm/lib/IR/SymbolTableListTraitsImpl.h
index 4283744bd058d44..cb7c4e3f46f85a8 100644
--- a/llvm/lib/IR/SymbolTableListTraitsImpl.h
+++ b/llvm/lib/IR/SymbolTableListTraitsImpl.h
@@ -42,11 +42,13 @@ void SymbolTableListTraits<ValueSubClass>::setSymTabObject(TPtr *Dest,
   ValueSymbolTable *NewST = getSymTab(getListOwner());
 
   // If there is nothing to do, quick exit.
-  if (OldST == NewST) return;
+  if (OldST == NewST)
+    return;
 
   // Move all the elements from the old symtab to the new one.
   ListTy &ItemList = getList(getListOwner());
-  if (ItemList.empty()) return;
+  if (ItemList.empty())
+    return;
 
   if (OldST) {
     // Remove all entries from the previous symtab.
@@ -61,7 +63,6 @@ void SymbolTableListTraits<ValueSubClass>::setSymTabObject(TPtr *Dest,
       if (I->hasName())
         NewST->reinsertValue(&*I);
   }
-
 }
 
 template <typename ValueSubClass>
@@ -119,6 +120,6 @@ void SymbolTableListTraits<ValueSubClass>::transferNodesFromList(
   }
 }
 
-} // End llvm namespace
+} // namespace llvm
 
 #endif
diff --git a/llvm/lib/IR/Type.cpp b/llvm/lib/IR/Type.cpp
index f88d3ace64c26f6..d5d3b21732b63e4 100644
--- a/llvm/lib/IR/Type.cpp
+++ b/llvm/lib/IR/Type.cpp
@@ -35,19 +35,32 @@ using namespace llvm;
 
 Type *Type::getPrimitiveType(LLVMContext &C, TypeID IDNumber) {
   switch (IDNumber) {
-  case VoidTyID      : return getVoidTy(C);
-  case HalfTyID      : return getHalfTy(C);
-  case BFloatTyID    : return getBFloatTy(C);
-  case FloatTyID     : return getFloatTy(C);
-  case DoubleTyID    : return getDoubleTy(C);
-  case X86_FP80TyID  : return getX86_FP80Ty(C);
-  case FP128TyID     : return getFP128Ty(C);
-  case PPC_FP128TyID : return getPPC_FP128Ty(C);
-  case LabelTyID     : return getLabelTy(C);
-  case MetadataTyID  : return getMetadataTy(C);
-  case X86_MMXTyID   : return getX86_MMXTy(C);
-  case X86_AMXTyID   : return getX86_AMXTy(C);
-  case TokenTyID     : return getTokenTy(C);
+  case VoidTyID:
+    return getVoidTy(C);
+  case HalfTyID:
+    return getHalfTy(C);
+  case BFloatTyID:
+    return getBFloatTy(C);
+  case FloatTyID:
+    return getFloatTy(C);
+  case DoubleTyID:
+    return getDoubleTy(C);
+  case X86_FP80TyID:
+    return getX86_FP80Ty(C);
+  case FP128TyID:
+    return getFP128Ty(C);
+  case PPC_FP128TyID:
+    return getPPC_FP128Ty(C);
+  case LabelTyID:
+    return getLabelTy(C);
+  case MetadataTyID:
+    return getMetadataTy(C);
+  case X86_MMXTyID:
+    return getX86_MMXTy(C);
+  case X86_AMXTyID:
+    return getX86_AMXTy(C);
+  case TokenTyID:
+    return getTokenTy(C);
   default:
     return nullptr;
   }
@@ -67,14 +80,22 @@ bool Type::isScalableTy() const {
 
 const fltSemantics &Type::getFltSemantics() const {
   switch (getTypeID()) {
-  case HalfTyID: return APFloat::IEEEhalf();
-  case BFloatTyID: return APFloat::BFloat();
-  case FloatTyID: return APFloat::IEEEsingle();
-  case DoubleTyID: return APFloat::IEEEdouble();
-  case X86_FP80TyID: return APFloat::x87DoubleExtended();
-  case FP128TyID: return APFloat::IEEEquad();
-  case PPC_FP128TyID: return APFloat::PPCDoubleDouble();
-  default: llvm_unreachable("Invalid floating type");
+  case HalfTyID:
+    return APFloat::IEEEhalf();
+  case BFloatTyID:
+    return APFloat::BFloat();
+  case FloatTyID:
+    return APFloat::IEEEsingle();
+  case DoubleTyID:
+    return APFloat::IEEEdouble();
+  case X86_FP80TyID:
+    return APFloat::x87DoubleExtended();
+  case FP128TyID:
+    return APFloat::IEEEquad();
+  case PPC_FP128TyID:
+    return APFloat::PPCDoubleDouble();
+  default:
+    llvm_unreachable("Invalid floating type");
   }
 }
 
@@ -148,7 +169,7 @@ bool Type::canLosslesslyBitCastTo(Type *Ty) const {
       return PTy->getAddressSpace() == OtherPTy->getAddressSpace();
     return false;
   }
-  return false;  // Other types have no identity values
+  return false; // Other types have no identity values
 }
 
 bool Type::isEmptyTy() const {
@@ -170,15 +191,24 @@ bool Type::isEmptyTy() const {
 
 TypeSize Type::getPrimitiveSizeInBits() const {
   switch (getTypeID()) {
-  case Type::HalfTyID: return TypeSize::Fixed(16);
-  case Type::BFloatTyID: return TypeSize::Fixed(16);
-  case Type::FloatTyID: return TypeSize::Fixed(32);
-  case Type::DoubleTyID: return TypeSize::Fixed(64);
-  case Type::X86_FP80TyID: return TypeSize::Fixed(80);
-  case Type::FP128TyID: return TypeSize::Fixed(128);
-  case Type::PPC_FP128TyID: return TypeSize::Fixed(128);
-  case Type::X86_MMXTyID: return TypeSize::Fixed(64);
-  case Type::X86_AMXTyID: return TypeSize::Fixed(8192);
+  case Type::HalfTyID:
+    return TypeSize::Fixed(16);
+  case Type::BFloatTyID:
+    return TypeSize::Fixed(16);
+  case Type::FloatTyID:
+    return TypeSize::Fixed(32);
+  case Type::DoubleTyID:
+    return TypeSize::Fixed(64);
+  case Type::X86_FP80TyID:
+    return TypeSize::Fixed(80);
+  case Type::FP128TyID:
+    return TypeSize::Fixed(128);
+  case Type::PPC_FP128TyID:
+    return TypeSize::Fixed(128);
+  case Type::X86_MMXTyID:
+    return TypeSize::Fixed(64);
+  case Type::X86_AMXTyID:
+    return TypeSize::Fixed(8192);
   case Type::IntegerTyID:
     return TypeSize::Fixed(cast<IntegerType>(this)->getBitWidth());
   case Type::FixedVectorTyID:
@@ -189,7 +219,8 @@ TypeSize Type::getPrimitiveSizeInBits() const {
     assert(!ETS.isScalable() && "Vector type should have fixed-width elements");
     return {ETS.getFixedValue() * EC.getKnownMinValue(), EC.isScalable()};
   }
-  default: return TypeSize::Fixed(0);
+  default:
+    return TypeSize::Fixed(0);
   }
 }
 
@@ -202,17 +233,23 @@ int Type::getFPMantissaWidth() const {
   if (auto *VTy = dyn_cast<VectorType>(this))
     return VTy->getElementType()->getFPMantissaWidth();
   assert(isFloatingPointTy() && "Not a floating point type!");
-  if (getTypeID() == HalfTyID) return 11;
-  if (getTypeID() == BFloatTyID) return 8;
-  if (getTypeID() == FloatTyID) return 24;
-  if (getTypeID() == DoubleTyID) return 53;
-  if (getTypeID() == X86_FP80TyID) return 64;
-  if (getTypeID() == FP128TyID) return 113;
+  if (getTypeID() == HalfTyID)
+    return 11;
+  if (getTypeID() == BFloatTyID)
+    return 8;
+  if (getTypeID() == FloatTyID)
+    return 24;
+  if (getTypeID() == DoubleTyID)
+    return 53;
+  if (getTypeID() == X86_FP80TyID)
+    return 64;
+  if (getTypeID() == FP128TyID)
+    return 113;
   assert(getTypeID() == PPC_FP128TyID && "unknown fp type");
   return -1;
 }
 
-bool Type::isSizedDerivedType(SmallPtrSetImpl<Type*> *Visited) const {
+bool Type::isSizedDerivedType(SmallPtrSetImpl<Type *> *Visited) const {
   if (auto *ATy = dyn_cast<ArrayType>(this))
     return ATy->getElementType()->isSized(Visited);
 
@@ -280,12 +317,18 @@ IntegerType *IntegerType::get(LLVMContext &C, unsigned NumBits) {
 
   // Check for the built-in integer types
   switch (NumBits) {
-  case   1: return cast<IntegerType>(Type::getInt1Ty(C));
-  case   8: return cast<IntegerType>(Type::getInt8Ty(C));
-  case  16: return cast<IntegerType>(Type::getInt16Ty(C));
-  case  32: return cast<IntegerType>(Type::getInt32Ty(C));
-  case  64: return cast<IntegerType>(Type::getInt64Ty(C));
-  case 128: return cast<IntegerType>(Type::getInt128Ty(C));
+  case 1:
+    return cast<IntegerType>(Type::getInt1Ty(C));
+  case 8:
+    return cast<IntegerType>(Type::getInt8Ty(C));
+  case 16:
+    return cast<IntegerType>(Type::getInt16Ty(C));
+  case 32:
+    return cast<IntegerType>(Type::getInt32Ty(C));
+  case 64:
+    return cast<IntegerType>(Type::getInt64Ty(C));
+  case 128:
+    return cast<IntegerType>(Type::getInt128Ty(C));
   default:
     break;
   }
@@ -304,10 +347,10 @@ APInt IntegerType::getMask() const { return APInt::getAllOnes(getBitWidth()); }
 //                       FunctionType Implementation
 //===----------------------------------------------------------------------===//
 
-FunctionType::FunctionType(Type *Result, ArrayRef<Type*> Params,
+FunctionType::FunctionType(Type *Result, ArrayRef<Type *> Params,
                            bool IsVarArgs)
-  : Type(Result->getContext(), FunctionTyID) {
-  Type **SubTys = reinterpret_cast<Type**>(this+1);
+    : Type(Result->getContext(), FunctionTyID) {
+  Type **SubTys = reinterpret_cast<Type **>(this + 1);
   assert(isValidReturnType(Result) && "invalid return type for function");
   setSubclassData(IsVarArgs);
 
@@ -316,7 +359,7 @@ FunctionType::FunctionType(Type *Result, ArrayRef<Type*> Params,
   for (unsigned i = 0, e = Params.size(); i != e; ++i) {
     assert(isValidArgumentType(Params[i]) &&
            "Not a valid type for function argument!");
-    SubTys[i+1] = Params[i];
+    SubTys[i + 1] = Params[i];
   }
 
   ContainedTys = SubTys;
@@ -324,8 +367,8 @@ FunctionType::FunctionType(Type *Result, ArrayRef<Type*> Params,
 }
 
 // This is the factory function for the FunctionType class.
-FunctionType *FunctionType::get(Type *ReturnType,
-                                ArrayRef<Type*> Params, bool isVarArg) {
+FunctionType *FunctionType::get(Type *ReturnType, ArrayRef<Type *> Params,
+                                bool isVarArg) {
   LLVMContextImpl *pImpl = ReturnType->getContext().pImpl;
   const FunctionTypeKeyInfo::KeyTy Key(ReturnType, Params, isVarArg);
   FunctionType *FT;
@@ -356,7 +399,7 @@ FunctionType *FunctionType::get(Type *Result, bool isVarArg) {
 
 bool FunctionType::isValidReturnType(Type *RetTy) {
   return !RetTy->isFunctionTy() && !RetTy->isLabelTy() &&
-  !RetTy->isMetadataTy();
+         !RetTy->isMetadataTy();
 }
 
 bool FunctionType::isValidArgumentType(Type *ArgTy) {
@@ -369,7 +412,7 @@ bool FunctionType::isValidArgumentType(Type *ArgTy) {
 
 // Primitive Constructors.
 
-StructType *StructType::get(LLVMContext &Context, ArrayRef<Type*> ETypes,
+StructType *StructType::get(LLVMContext &Context, ArrayRef<Type *> ETypes,
                             bool isPacked) {
   LLVMContextImpl *pImpl = Context.pImpl;
   const AnonStructTypeKeyInfo::KeyTy Key(ETypes, isPacked);
@@ -385,7 +428,7 @@ StructType *StructType::get(LLVMContext &Context, ArrayRef<Type*> ETypes,
     // The struct type was not found. Allocate one and update AnonStructTypes
     // in-place.
     ST = new (Context.pImpl->Alloc) StructType(Context);
-    ST->setSubclassData(SCDB_IsLiteral);  // Literal struct.
+    ST->setSubclassData(SCDB_IsLiteral); // Literal struct.
     ST->setBody(ETypes, isPacked);
     *Insertion.first = ST;
   } else {
@@ -441,7 +484,7 @@ bool StructType::containsHomogeneousScalableVectorTypes() const {
   return true;
 }
 
-void StructType::setBody(ArrayRef<Type*> Elements, bool isPacked) {
+void StructType::setBody(ArrayRef<Type *> Elements, bool isPacked) {
   assert(isOpaque() && "Struct body already set!");
 
   setSubclassData(getSubclassData() | SCDB_HasBody);
@@ -459,7 +502,8 @@ void StructType::setBody(ArrayRef<Type*> Elements, bool isPacked) {
 }
 
 void StructType::setName(StringRef Name) {
-  if (Name == getName()) return;
+  if (Name == getName())
+    return;
 
   StringMap<StructType *> &SymbolTable = getContext().pImpl->NamedStructTypes;
 
@@ -520,14 +564,15 @@ StructType *StructType::get(LLVMContext &Context, bool isPacked) {
   return get(Context, std::nullopt, isPacked);
 }
 
-StructType *StructType::create(LLVMContext &Context, ArrayRef<Type*> Elements,
+StructType *StructType::create(LLVMContext &Context, ArrayRef<Type *> Elements,
                                StringRef Name, bool isPacked) {
   StructType *ST = create(Context, Name);
   ST->setBody(Elements, isPacked);
   return ST;
 }
 
-StructType *StructType::create(LLVMContext &Context, ArrayRef<Type*> Elements) {
+StructType *StructType::create(LLVMContext &Context,
+                               ArrayRef<Type *> Elements) {
   return create(Context, Elements, StringRef());
 }
 
@@ -535,26 +580,26 @@ StructType *StructType::create(LLVMContext &Context) {
   return create(Context, StringRef());
 }
 
-StructType *StructType::create(ArrayRef<Type*> Elements, StringRef Name,
+StructType *StructType::create(ArrayRef<Type *> Elements, StringRef Name,
                                bool isPacked) {
   assert(!Elements.empty() &&
          "This method may not be invoked with an empty list");
   return create(Elements[0]->getContext(), Elements, Name, isPacked);
 }
 
-StructType *StructType::create(ArrayRef<Type*> Elements) {
+StructType *StructType::create(ArrayRef<Type *> Elements) {
   assert(!Elements.empty() &&
          "This method may not be invoked with an empty list");
   return create(Elements[0]->getContext(), Elements, StringRef());
 }
 
-bool StructType::isSized(SmallPtrSetImpl<Type*> *Visited) const {
+bool StructType::isSized(SmallPtrSetImpl<Type *> *Visited) const {
   if ((getSubclassData() & SCDB_IsSized) != 0)
     return true;
   if (isOpaque())
     return false;
 
-  if (Visited && !Visited->insert(const_cast<StructType*>(this)).second)
+  if (Visited && !Visited->insert(const_cast<StructType *>(this)).second)
     return false;
 
   // Okay, our struct is sized if all of the elements are, but if one of the
@@ -581,16 +626,17 @@ bool StructType::isSized(SmallPtrSetImpl<Type*> *Visited) const {
   // Here we cheat a bit and cast away const-ness. The goal is to memoize when
   // we find a sized type, as types can only move from opaque to sized, not the
   // other way.
-  const_cast<StructType*>(this)->setSubclassData(
-    getSubclassData() | SCDB_IsSized);
+  const_cast<StructType *>(this)->setSubclassData(getSubclassData() |
+                                                  SCDB_IsSized);
   return true;
 }
 
 StringRef StructType::getName() const {
   assert(!isLiteral() && "Literal structs never have names");
-  if (!SymbolTableEntry) return StringRef();
+  if (!SymbolTableEntry)
+    return StringRef();
 
-  return ((StringMapEntry<StructType*> *)SymbolTableEntry)->getKey();
+  return ((StringMapEntry<StructType *> *)SymbolTableEntry)->getKey();
 }
 
 bool StructType::isValidElementType(Type *ElemTy) {
@@ -600,7 +646,8 @@ bool StructType::isValidElementType(Type *ElemTy) {
 }
 
 bool StructType::isLayoutIdentical(StructType *Other) const {
-  if (this == Other) return true;
+  if (this == Other)
+    return true;
 
   if (isPacked() != Other->isPacked())
     return false;
@@ -648,7 +695,7 @@ ArrayType *ArrayType::get(Type *ElementType, uint64_t NumElements) {
 
   LLVMContextImpl *pImpl = ElementType->getContext().pImpl;
   ArrayType *&Entry =
-    pImpl->ArrayTypes[std::make_pair(ElementType, NumElements)];
+      pImpl->ArrayTypes[std::make_pair(ElementType, NumElements)];
 
   if (!Entry)
     Entry = new (pImpl->Alloc) ArrayType(ElementType, NumElements);
@@ -758,7 +805,7 @@ PointerType::PointerType(LLVMContext &C, unsigned AddrSpace)
 }
 
 PointerType *Type::getPointerTo(unsigned AddrSpace) const {
-  return PointerType::get(const_cast<Type*>(this), AddrSpace);
+  return PointerType::get(const_cast<Type *>(this), AddrSpace);
 }
 
 bool PointerType::isValidElementType(Type *ElemTy) {
diff --git a/llvm/lib/IRPrinter/CMakeLists.txt b/llvm/lib/IRPrinter/CMakeLists.txt
index 0c8c4a0122c151d..497ef240de864ac 100644
--- a/llvm/lib/IRPrinter/CMakeLists.txt
+++ b/llvm/lib/IRPrinter/CMakeLists.txt
@@ -1,14 +1,7 @@
-add_llvm_component_library(LLVMIRPrinter
-  IRPrintingPasses.cpp
+add_llvm_component_library(LLVMIRPrinter IRPrintingPasses.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/IRPrinter
+  ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR}/llvm/IRPrinter
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  Analysis
-  Core
-  Support
-  )
+  LINK_COMPONENTS Analysis Core Support)
diff --git a/llvm/lib/IRReader/CMakeLists.txt b/llvm/lib/IRReader/CMakeLists.txt
index 256d65c9f939d0a..8c3bff0bc278c56 100644
--- a/llvm/lib/IRReader/CMakeLists.txt
+++ b/llvm/lib/IRReader/CMakeLists.txt
@@ -1,15 +1,7 @@
-add_llvm_component_library(LLVMIRReader
-  IRReader.cpp
+add_llvm_component_library(LLVMIRReader IRReader.cpp
 
-  ADDITIONAL_HEADER_DIRS
-  ${LLVM_MAIN_INCLUDE_DIR}/llvm/IRReader
+  ADDITIONAL_HEADER_DIRS ${LLVM_MAIN_INCLUDE_DIR}/llvm/IRReader
 
-  DEPENDS
-  intrinsics_gen
+  DEPENDS intrinsics_gen
 
-  LINK_COMPONENTS
-  AsmParser
-  BitReader
-  Core
-  Support
-  )
+  LINK_COMPONENTS AsmParser BitReader Core Support)
diff --git a/llvm/lib/IRReader/IRReader.cpp b/llvm/lib/IRReader/IRReader.cpp
index 7885c36a79876a7..b25e1e931e4e03b 100644
--- a/llvm/lib/IRReader/IRReader.cpp
+++ b/llvm/lib/IRReader/IRReader.cpp
@@ -22,7 +22,7 @@
 using namespace llvm;
 
 namespace llvm {
-  extern bool TimePassesIsEnabled;
+extern bool TimePassesIsEnabled;
 }
 
 const char TimeIRParsingGroupName[] = "irparse";
@@ -118,7 +118,7 @@ LLVMBool LLVMParseIRInContext(LLVMContextRef ContextRef,
   *OutM =
       wrap(parseIR(MB->getMemBufferRef(), Diag, *unwrap(ContextRef)).release());
 
-  if(!*OutM) {
+  if (!*OutM) {
     if (OutMessage) {
       std::string buf;
       raw_string_ostream os(buf);
diff --git a/llvm/lib/InterfaceStub/CMakeLists.txt b/llvm/lib/InterfaceStub/CMakeLists.txt
index 68fe067701f23f4..c5997d1922f5a80 100644
--- a/llvm/lib/InterfaceStub/CMakeLists.txt
+++ b/llvm/lib/InterfaceStub/CMakeLists.txt
@@ -1,12 +1,4 @@
-add_llvm_component_library(LLVMInterfaceStub
-  ELFObjHandler.cpp
-  IFSHandler.cpp
-  IFSStub.cpp
+add_llvm_component_library(LLVMInterfaceStub ELFObjHandler.cpp IFSHandler.cpp
+                           IFSStub.cpp
 
-  LINK_COMPONENTS
-  BinaryFormat
-  MC
-  Object
-  Support
-  TargetParser
-)
+  LINK_COMPONENTS BinaryFormat MC Object Support TargetParser)
diff --git a/llvm/lib/MC/MCDisassembler/CMakeLists.txt b/llvm/lib/MC/MCDisassembler/CMakeLists.txt
index bf6392c4c106f7d..ee392b5ce3f9b9d 100644
--- a/llvm/lib/MC/MCDisassembler/CMakeLists.txt
+++ b/llvm/lib/MC/MCDisassembler/CMakeLists.txt
@@ -1,12 +1,5 @@
-add_llvm_component_library(LLVMMCDisassembler
-  Disassembler.cpp
-  MCDisassembler.cpp
-  MCExternalSymbolizer.cpp
-  MCRelocationInfo.cpp
-  MCSymbolizer.cpp
+add_llvm_component_library(LLVMMCDisassembler Disassembler.cpp
+                           MCDisassembler.cpp MCExternalSymbolizer.cpp
+                           MCRelocationInfo.cpp MCSymbolizer.cpp
 
-  LINK_COMPONENTS
-  MC
-  Support
-  TargetParser
-  )
+  LINK_COMPONENTS MC Support TargetParser)
diff --git a/llvm/lib/MC/MCDisassembler/Disassembler.cpp b/llvm/lib/MC/MCDisassembler/Disassembler.cpp
index 067b951fbfccb38..d2d2384a1aef5f4 100644
--- a/llvm/lib/MC/MCDisassembler/Disassembler.cpp
+++ b/llvm/lib/MC/MCDisassembler/Disassembler.cpp
@@ -129,7 +129,7 @@ LLVMDisasmContextRef LLVMCreateDisasm(const char *TT, void *DisInfo,
 //
 // LLVMDisasmDispose() disposes of the disassembler specified by the context.
 //
-void LLVMDisasmDispose(LLVMDisasmContextRef DCR){
+void LLVMDisasmDispose(LLVMDisasmContextRef DCR) {
   LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
   delete DC;
 }
@@ -153,7 +153,7 @@ static void emitComments(LLVMDisasmContext *DC,
     size_t Position = Comments.find('\n');
     FormattedOS << CommentBegin << ' ' << Comments.substr(0, Position);
     // Move after the newline character.
-    Comments = Comments.substr(Position+1);
+    Comments = Comments.substr(Position + 1);
     IsFirst = false;
   }
   FormattedOS.flush();
@@ -177,7 +177,7 @@ static int getItineraryLatency(LLVMDisasmContext *DC, const MCInst &Inst) {
   const MCSubtargetInfo *STI = DC->getSubtargetInfo();
   InstrItineraryData IID = STI->getInstrItineraryForCPU(DC->getCPU());
   // Get the scheduling class of the requested instruction.
-  const MCInstrDesc& Desc = DC->getInstrInfo()->get(Inst.getOpcode());
+  const MCInstrDesc &Desc = DC->getInstrInfo()->get(Inst.getOpcode());
   unsigned SCClass = Desc.getSchedClass();
 
   int Latency = 0;
@@ -204,7 +204,7 @@ static int getLatency(LLVMDisasmContext *DC, const MCInst &Inst) {
     return getItineraryLatency(DC, Inst);
 
   // Get the scheduling class of the requested instruction.
-  const MCInstrDesc& Desc = DC->getInstrInfo()->get(Inst.getOpcode());
+  const MCInstrDesc &Desc = DC->getInstrInfo()->get(Inst.getOpcode());
   unsigned SCClass = Desc.getSchedClass();
   const MCSchedClassDesc *SCDesc = SCModel.getSchedClassDesc(SCClass);
   // Resolving the variant SchedClass requires an MI to pass to
@@ -217,8 +217,8 @@ static int getLatency(LLVMDisasmContext *DC, const MCInst &Inst) {
   for (unsigned DefIdx = 0, DefEnd = SCDesc->NumWriteLatencyEntries;
        DefIdx != DefEnd; ++DefIdx) {
     // Lookup the definition's write latency in SubtargetInfo.
-    const MCWriteLatencyEntry *WLEntry = STI->getWriteLatencyEntry(SCDesc,
-                                                                   DefIdx);
+    const MCWriteLatencyEntry *WLEntry =
+        STI->getWriteLatencyEntry(SCDesc, DefIdx);
     Latency = std::max(Latency, WLEntry->Cycles);
   }
 
@@ -251,7 +251,7 @@ static void emitLatency(LLVMDisasmContext *DC, const MCInst &Inst) {
 //
 size_t LLVMDisasmInstruction(LLVMDisasmContextRef DCR, uint8_t *Bytes,
                              uint64_t BytesSize, uint64_t PC, char *OutString,
-                             size_t OutStringSize){
+                             size_t OutStringSize) {
   LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
   // Wrap the pointer to the Bytes, BytesSize and PC in a MemoryObject.
   ArrayRef<uint8_t> Data(Bytes, BytesSize);
@@ -285,7 +285,7 @@ size_t LLVMDisasmInstruction(LLVMDisasmContextRef DCR, uint8_t *Bytes,
     emitComments(DC, FormattedOS);
 
     assert(OutStringSize != 0 && "Output buffer cannot be zero size");
-    size_t OutputSize = std::min(OutStringSize-1, InsnStr.size());
+    size_t OutputSize = std::min(OutStringSize - 1, InsnStr.size());
     std::memcpy(OutString, InsnStr.data(), OutputSize);
     OutString[OutputSize] = '\0'; // Terminate string.
 
@@ -299,36 +299,36 @@ size_t LLVMDisasmInstruction(LLVMDisasmContextRef DCR, uint8_t *Bytes,
 // LLVMSetDisasmOptions() sets the disassembler's options.  It returns 1 if it
 // can set all the Options and 0 otherwise.
 //
-int LLVMSetDisasmOptions(LLVMDisasmContextRef DCR, uint64_t Options){
-  if (Options & LLVMDisassembler_Option_UseMarkup){
-      LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
-      MCInstPrinter *IP = DC->getIP();
-      IP->setUseMarkup(true);
-      DC->addOptions(LLVMDisassembler_Option_UseMarkup);
-      Options &= ~LLVMDisassembler_Option_UseMarkup;
+int LLVMSetDisasmOptions(LLVMDisasmContextRef DCR, uint64_t Options) {
+  if (Options & LLVMDisassembler_Option_UseMarkup) {
+    LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
+    MCInstPrinter *IP = DC->getIP();
+    IP->setUseMarkup(true);
+    DC->addOptions(LLVMDisassembler_Option_UseMarkup);
+    Options &= ~LLVMDisassembler_Option_UseMarkup;
   }
-  if (Options & LLVMDisassembler_Option_PrintImmHex){
-      LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
-      MCInstPrinter *IP = DC->getIP();
-      IP->setPrintImmHex(true);
-      DC->addOptions(LLVMDisassembler_Option_PrintImmHex);
-      Options &= ~LLVMDisassembler_Option_PrintImmHex;
+  if (Options & LLVMDisassembler_Option_PrintImmHex) {
+    LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
+    MCInstPrinter *IP = DC->getIP();
+    IP->setPrintImmHex(true);
+    DC->addOptions(LLVMDisassembler_Option_PrintImmHex);
+    Options &= ~LLVMDisassembler_Option_PrintImmHex;
   }
-  if (Options & LLVMDisassembler_Option_AsmPrinterVariant){
-      LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
-      // Try to set up the new instruction printer.
-      const MCAsmInfo *MAI = DC->getAsmInfo();
-      const MCInstrInfo *MII = DC->getInstrInfo();
-      const MCRegisterInfo *MRI = DC->getRegisterInfo();
-      int AsmPrinterVariant = MAI->getAssemblerDialect();
-      AsmPrinterVariant = AsmPrinterVariant == 0 ? 1 : 0;
-      MCInstPrinter *IP = DC->getTarget()->createMCInstPrinter(
-          Triple(DC->getTripleName()), AsmPrinterVariant, *MAI, *MII, *MRI);
-      if (IP) {
-        DC->setIP(IP);
-        DC->addOptions(LLVMDisassembler_Option_AsmPrinterVariant);
-        Options &= ~LLVMDisassembler_Option_AsmPrinterVariant;
-      }
+  if (Options & LLVMDisassembler_Option_AsmPrinterVariant) {
+    LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
+    // Try to set up the new instruction printer.
+    const MCAsmInfo *MAI = DC->getAsmInfo();
+    const MCInstrInfo *MII = DC->getInstrInfo();
+    const MCRegisterInfo *MRI = DC->getRegisterInfo();
+    int AsmPrinterVariant = MAI->getAssemblerDialect();
+    AsmPrinterVariant = AsmPrinterVariant == 0 ? 1 : 0;
+    MCInstPrinter *IP = DC->getTarget()->createMCInstPrinter(
+        Triple(DC->getTripleName()), AsmPrinterVariant, *MAI, *MII, *MRI);
+    if (IP) {
+      DC->setIP(IP);
+      DC->addOptions(LLVMDisassembler_Option_AsmPrinterVariant);
+      Options &= ~LLVMDisassembler_Option_AsmPrinterVariant;
+    }
   }
   if (Options & LLVMDisassembler_Option_SetInstrComments) {
     LLVMDisasmContext *DC = static_cast<LLVMDisasmContext *>(DCR);
diff --git a/llvm/lib/MC/MCDisassembler/MCDisassembler.cpp b/llvm/lib/MC/MCDisassembler/MCDisassembler.cpp
index 80c32ac56082201..c44d11e20fff1ed 100644
--- a/llvm/lib/MC/MCDisassembler/MCDisassembler.cpp
+++ b/llvm/lib/MC/MCDisassembler/MCDisassembler.cpp
@@ -46,7 +46,7 @@ void MCDisassembler::setSymbolizer(std::unique_ptr<MCSymbolizer> Symzer) {
   Symbolizer = std::move(Symzer);
 }
 
-#define SMC_PCASE(A, P)                                                         \
+#define SMC_PCASE(A, P)                                                        \
   case XCOFF::XMC_##A:                                                         \
     return P;
 
diff --git a/llvm/lib/MC/MCDisassembler/MCExternalSymbolizer.cpp b/llvm/lib/MC/MCDisassembler/MCExternalSymbolizer.cpp
index e3f4cdd21557e28..a2058150441df78 100644
--- a/llvm/lib/MC/MCDisassembler/MCExternalSymbolizer.cpp
+++ b/llvm/lib/MC/MCDisassembler/MCExternalSymbolizer.cpp
@@ -57,26 +57,26 @@ bool MCExternalSymbolizer::tryAddingSymbolicOperand(
 
     uint64_t ReferenceType;
     if (IsBranch)
-       ReferenceType = LLVMDisassembler_ReferenceType_In_Branch;
+      ReferenceType = LLVMDisassembler_ReferenceType_In_Branch;
     else
-       ReferenceType = LLVMDisassembler_ReferenceType_InOut_None;
+      ReferenceType = LLVMDisassembler_ReferenceType_InOut_None;
     const char *ReferenceName;
-    const char *Name = SymbolLookUp(DisInfo, Value, &ReferenceType, Address,
-                                    &ReferenceName);
+    const char *Name =
+        SymbolLookUp(DisInfo, Value, &ReferenceType, Address, &ReferenceName);
     if (Name) {
       SymbolicOp.AddSymbol.Name = Name;
       SymbolicOp.AddSymbol.Present = true;
       // If Name is a C++ symbol name put the human readable name in a comment.
-      if(ReferenceType == LLVMDisassembler_ReferenceType_DeMangled_Name)
+      if (ReferenceType == LLVMDisassembler_ReferenceType_DeMangled_Name)
         cStream << ReferenceName;
     }
     // For branches always create an MCExpr so it gets printed as hex address.
     else if (IsBranch) {
       SymbolicOp.Value = Value;
     }
-    if(ReferenceType == LLVMDisassembler_ReferenceType_Out_SymbolStub)
+    if (ReferenceType == LLVMDisassembler_ReferenceType_Out_SymbolStub)
       cStream << "symbol stub for: " << ReferenceName;
-    else if(ReferenceType == LLVMDisassembler_ReferenceType_Out_Objc_Message)
+    else if (ReferenceType == LLVMDisassembler_ReferenceType_Out_Objc_Message)
       cStream << "Objc message: " << ReferenceName;
     if (!Name && !IsBranch)
       return false;
@@ -95,7 +95,7 @@ bool MCExternalSymbolizer::tryAddingSymbolicOperand(
 
   const MCExpr *Sub = nullptr;
   if (SymbolicOp.SubtractSymbol.Present) {
-      if (SymbolicOp.SubtractSymbol.Name) {
+    if (SymbolicOp.SubtractSymbol.Name) {
       StringRef Name(SymbolicOp.SubtractSymbol.Name);
       MCSymbol *Sym = Ctx.getOrCreateSymbol(Name);
       Sub = MCSymbolRefExpr::create(Sym, Ctx);
@@ -142,10 +142,10 @@ bool MCExternalSymbolizer::tryAddingSymbolicOperand(
 // This function tries to add a comment as to what is being referenced by a load
 // instruction with the base register that is the Pc.  These can often be values
 // in a literal pool near the Address of the instruction. The Address of the
-// instruction and its immediate Value are used as a possible literal pool entry.
-// The SymbolLookUp call back will return the name of a symbol referenced by the
-// literal pool's entry if the referenced address is that of a symbol. Or it
-// will return a pointer to a literal 'C' string if the referenced address of
+// instruction and its immediate Value are used as a possible literal pool
+// entry. The SymbolLookUp callback will return the name of a symbol referenced
+// by the literal pool's entry if the referenced address is that of a symbol. Or
+// it will return a pointer to a literal 'C' string if the referenced address of
 // the literal pool's entry is an address into a section with C string literals.
 // Or if the reference is to an Objective-C data structure it will return a
 // specific reference type for it and a string.
@@ -156,28 +156,25 @@ void MCExternalSymbolizer::tryAddingPcLoadReferenceComment(raw_ostream &cStream,
     uint64_t ReferenceType = LLVMDisassembler_ReferenceType_In_PCrel_Load;
     const char *ReferenceName;
     (void)SymbolLookUp(DisInfo, Value, &ReferenceType, Address, &ReferenceName);
-    if(ReferenceType == LLVMDisassembler_ReferenceType_Out_LitPool_SymAddr)
+    if (ReferenceType == LLVMDisassembler_ReferenceType_Out_LitPool_SymAddr)
       cStream << "literal pool symbol address: " << ReferenceName;
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_LitPool_CstrAddr) {
+    else if (ReferenceType ==
+             LLVMDisassembler_ReferenceType_Out_LitPool_CstrAddr) {
       cStream << "literal pool for: \"";
       cStream.write_escaped(ReferenceName);
       cStream << "\"";
-    }
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_Objc_CFString_Ref)
+    } else if (ReferenceType ==
+               LLVMDisassembler_ReferenceType_Out_Objc_CFString_Ref)
       cStream << "Objc cfstring ref: @\"" << ReferenceName << "\"";
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_Objc_Message)
+    else if (ReferenceType == LLVMDisassembler_ReferenceType_Out_Objc_Message)
       cStream << "Objc message: " << ReferenceName;
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_Objc_Message_Ref)
+    else if (ReferenceType ==
+             LLVMDisassembler_ReferenceType_Out_Objc_Message_Ref)
       cStream << "Objc message ref: " << ReferenceName;
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_Objc_Selector_Ref)
+    else if (ReferenceType ==
+             LLVMDisassembler_ReferenceType_Out_Objc_Selector_Ref)
       cStream << "Objc selector ref: " << ReferenceName;
-    else if(ReferenceType ==
-            LLVMDisassembler_ReferenceType_Out_Objc_Class_Ref)
+    else if (ReferenceType == LLVMDisassembler_ReferenceType_Out_Objc_Class_Ref)
       cStream << "Objc class ref: " << ReferenceName;
   }
 }
@@ -192,4 +189,4 @@ MCSymbolizer *createMCSymbolizer(const Triple &TT, LLVMOpInfoCallback GetOpInfo,
   return new MCExternalSymbolizer(*Ctx, std::move(RelInfo), GetOpInfo,
                                   SymbolLookUp, DisInfo);
 }
-}
+} // namespace llvm
diff --git a/llvm/lib/MC/MCParser/AsmLexer.cpp b/llvm/lib/MC/MCParser/AsmLexer.cpp
index f13549b24e2dd2e..bf06565cdf644d1 100644
--- a/llvm/lib/MC/MCParser/AsmLexer.cpp
+++ b/llvm/lib/MC/MCParser/AsmLexer.cpp
@@ -93,8 +93,7 @@ AsmToken AsmLexer::LexFloatLiteral() {
       ++CurPtr;
   }
 
-  return AsmToken(AsmToken::Real,
-                  StringRef(TokStart, CurPtr - TokStart));
+  return AsmToken(AsmToken::Real, StringRef(TokStart, CurPtr - TokStart));
 }
 
 /// LexHexFloatLiteral matches essentially (.[0-9a-fA-F]*)?[pP][+-]?[0-9a-fA-F]+
@@ -167,7 +166,7 @@ AsmToken AsmLexer::LexIdentifier() {
     ++CurPtr;
 
   // Handle . as a special case.
-  if (CurPtr == TokStart+1 && TokStart[0] == '.')
+  if (CurPtr == TokStart + 1 && TokStart[0] == '.')
     return AsmToken(AsmToken::Dot, StringRef(TokStart, 1));
 
   return AsmToken(AsmToken::Identifier, StringRef(TokStart, CurPtr - TokStart));
@@ -195,7 +194,7 @@ AsmToken AsmLexer::LexSlash() {
   }
 
   // C Style comment.
-  ++CurPtr;  // skip the star.
+  ++CurPtr; // skip the star.
   const char *CommentTextStart = CurPtr;
   while (CurPtr != CurBuf.end()) {
     switch (*CurPtr++) {
@@ -209,7 +208,7 @@ AsmToken AsmLexer::LexSlash() {
             SMLoc::getFromPointer(CommentTextStart),
             StringRef(CommentTextStart, CurPtr - 1 - CommentTextStart));
       }
-      ++CurPtr;   // End the */.
+      ++CurPtr; // End the */.
       return AsmToken(AsmToken::Comment,
                       StringRef(TokStart, CurPtr - TokStart));
     }
@@ -529,7 +528,7 @@ AsmToken AsmLexer::LexDigit() {
 
     // Otherwise requires at least one hex digit.
     if (CurPtr == NumStart)
-      return ReturnError(CurPtr-2, "invalid hexadecimal number");
+      return ReturnError(CurPtr - 2, "invalid hexadecimal number");
 
     APInt Result(128, 0);
     if (StringRef(TokStart, CurPtr - TokStart).getAsInteger(0, Result))
@@ -602,19 +601,33 @@ AsmToken AsmLexer::LexSingleQuote() {
 
   // The idea here being that 'c' is basically just an integral
   // constant.
-  StringRef Res = StringRef(TokStart,CurPtr - TokStart);
+  StringRef Res = StringRef(TokStart, CurPtr - TokStart);
   long long Value;
 
   if (Res.startswith("\'\\")) {
     char theChar = Res[2];
     switch (theChar) {
-      default: Value = theChar; break;
-      case '\'': Value = '\''; break;
-      case 't': Value = '\t'; break;
-      case 'n': Value = '\n'; break;
-      case 'b': Value = '\b'; break;
-      case 'f': Value = '\f'; break;
-      case 'r': Value = '\r'; break;
+    default:
+      Value = theChar;
+      break;
+    case '\'':
+      Value = '\'';
+      break;
+    case 't':
+      Value = '\t';
+      break;
+    case 'n':
+      Value = '\n';
+      break;
+    case 'b':
+      Value = '\b';
+      break;
+    case 'f':
+      Value = '\f';
+      break;
+    case 'r':
+      Value = '\r';
+      break;
     }
   } else
     Value = TokStart[1];
@@ -670,7 +683,7 @@ StringRef AsmLexer::LexUntilEndOfStatement() {
          *CurPtr != '\n' && *CurPtr != '\r' && CurPtr != CurBuf.end()) {
     ++CurPtr;
   }
-  return StringRef(TokStart, CurPtr-TokStart);
+  return StringRef(TokStart, CurPtr - TokStart);
 }
 
 StringRef AsmLexer::LexUntilEndOfLine() {
@@ -679,7 +692,7 @@ StringRef AsmLexer::LexUntilEndOfLine() {
   while (*CurPtr != '\n' && *CurPtr != '\r' && CurPtr != CurBuf.end()) {
     ++CurPtr;
   }
-  return StringRef(TokStart, CurPtr-TokStart);
+  return StringRef(TokStart, CurPtr - TokStart);
 }
 
 size_t AsmLexer::peekTokens(MutableArrayRef<AsmToken> Buf,
@@ -814,17 +827,28 @@ AsmToken AsmLexer::LexToken() {
     IsAtStartOfLine = true;
     IsAtStartOfStatement = true;
     return AsmToken(AsmToken::EndOfStatement, StringRef(TokStart, 1));
-  case ':': return AsmToken(AsmToken::Colon, StringRef(TokStart, 1));
-  case '+': return AsmToken(AsmToken::Plus, StringRef(TokStart, 1));
-  case '~': return AsmToken(AsmToken::Tilde, StringRef(TokStart, 1));
-  case '(': return AsmToken(AsmToken::LParen, StringRef(TokStart, 1));
-  case ')': return AsmToken(AsmToken::RParen, StringRef(TokStart, 1));
-  case '[': return AsmToken(AsmToken::LBrac, StringRef(TokStart, 1));
-  case ']': return AsmToken(AsmToken::RBrac, StringRef(TokStart, 1));
-  case '{': return AsmToken(AsmToken::LCurly, StringRef(TokStart, 1));
-  case '}': return AsmToken(AsmToken::RCurly, StringRef(TokStart, 1));
-  case '*': return AsmToken(AsmToken::Star, StringRef(TokStart, 1));
-  case ',': return AsmToken(AsmToken::Comma, StringRef(TokStart, 1));
+  case ':':
+    return AsmToken(AsmToken::Colon, StringRef(TokStart, 1));
+  case '+':
+    return AsmToken(AsmToken::Plus, StringRef(TokStart, 1));
+  case '~':
+    return AsmToken(AsmToken::Tilde, StringRef(TokStart, 1));
+  case '(':
+    return AsmToken(AsmToken::LParen, StringRef(TokStart, 1));
+  case ')':
+    return AsmToken(AsmToken::RParen, StringRef(TokStart, 1));
+  case '[':
+    return AsmToken(AsmToken::LBrac, StringRef(TokStart, 1));
+  case ']':
+    return AsmToken(AsmToken::RBrac, StringRef(TokStart, 1));
+  case '{':
+    return AsmToken(AsmToken::LCurly, StringRef(TokStart, 1));
+  case '}':
+    return AsmToken(AsmToken::RCurly, StringRef(TokStart, 1));
+  case '*':
+    return AsmToken(AsmToken::Star, StringRef(TokStart, 1));
+  case ',':
+    return AsmToken(AsmToken::Comma, StringRef(TokStart, 1));
   case '$': {
     if (LexMotorolaIntegers && isHexDigit(*CurPtr))
       return LexDigit();
@@ -844,7 +868,8 @@ AsmToken AsmLexer::LexToken() {
     if (MAI.doesAllowQuestionAtStartOfIdentifier())
       return LexIdentifier();
     return AsmToken(AsmToken::Question, StringRef(TokStart, 1));
-  case '\\': return AsmToken(AsmToken::BackSlash, StringRef(TokStart, 1));
+  case '\\':
+    return AsmToken(AsmToken::BackSlash, StringRef(TokStart, 1));
   case '=':
     if (*CurPtr == '=') {
       ++CurPtr;
@@ -863,7 +888,8 @@ AsmToken AsmLexer::LexToken() {
       return AsmToken(AsmToken::PipePipe, StringRef(TokStart, 2));
     }
     return AsmToken(AsmToken::Pipe, StringRef(TokStart, 1));
-  case '^': return AsmToken(AsmToken::Caret, StringRef(TokStart, 1));
+  case '^':
+    return AsmToken(AsmToken::Caret, StringRef(TokStart, 1));
   case '&':
     if (*CurPtr == '&') {
       ++CurPtr;
@@ -923,10 +949,20 @@ AsmToken AsmLexer::LexToken() {
   case '/':
     IsAtStartOfStatement = OldIsAtStartOfStatement;
     return LexSlash();
-  case '\'': return LexSingleQuote();
-  case '"': return LexQuote();
-  case '0': case '1': case '2': case '3': case '4':
-  case '5': case '6': case '7': case '8': case '9':
+  case '\'':
+    return LexSingleQuote();
+  case '"':
+    return LexQuote();
+  case '0':
+  case '1':
+  case '2':
+  case '3':
+  case '4':
+  case '5':
+  case '6':
+  case '7':
+  case '8':
+  case '9':
     return LexDigit();
   case '<':
     switch (*CurPtr) {
@@ -954,9 +990,9 @@ AsmToken AsmLexer::LexToken() {
       return AsmToken(AsmToken::Greater, StringRef(TokStart, 1));
     }
 
-  // TODO: Quoted identifiers (objc methods etc)
-  // local labels: [0-9][:]
-  // Forward/backward labels: [0-9][fb]
-  // Integers, fp constants, character constants.
+    // TODO: Quoted identifiers (objc methods etc)
+    // local labels: [0-9][:]
+    // Forward/backward labels: [0-9][fb]
+    // Integers, fp constants, character constants.
   }
 }
diff --git a/llvm/lib/MCA/HardwareUnits/LSUnit.cpp b/llvm/lib/MCA/HardwareUnits/LSUnit.cpp
index bdc8b3d0e390e21..3b54cc2c8fddf73 100644
--- a/llvm/lib/MCA/HardwareUnits/LSUnit.cpp
+++ b/llvm/lib/MCA/HardwareUnits/LSUnit.cpp
@@ -96,8 +96,8 @@ unsigned LSUnit::dispatch(const InstRef &IR) {
     if (CurrentStoreBarrierGroupID) {
       MemoryGroup &StoreGroup = getGroup(CurrentStoreBarrierGroupID);
       LLVM_DEBUG(dbgs() << "[LSUnit]: GROUP DEP: ("
-                        << CurrentStoreBarrierGroupID
-                        << ") --> (" << NewGID << ")\n");
+                        << CurrentStoreBarrierGroupID << ") --> (" << NewGID
+                        << ")\n");
       StoreGroup.addSuccessor(&NewGroup, true);
     }
 
@@ -110,7 +110,6 @@ unsigned LSUnit::dispatch(const InstRef &IR) {
       StoreGroup.addSuccessor(&NewGroup, !assumeNoAlias());
     }
 
-
     CurrentStoreGroupID = NewGID;
     if (IsStoreBarrier)
       CurrentStoreBarrierGroupID = NewGID;
@@ -165,8 +164,7 @@ unsigned LSUnit::dispatch(const InstRef &IR) {
     if (IsLoadBarrier) {
       if (ImmediateLoadDominator) {
         MemoryGroup &LoadGroup = getGroup(ImmediateLoadDominator);
-        LLVM_DEBUG(dbgs() << "[LSUnit]: GROUP DEP: ("
-                          << ImmediateLoadDominator
+        LLVM_DEBUG(dbgs() << "[LSUnit]: GROUP DEP: (" << ImmediateLoadDominator
                           << ") --> (" << NewGID << ")\n");
         LoadGroup.addSuccessor(&NewGroup, true);
       }
@@ -175,8 +173,8 @@ unsigned LSUnit::dispatch(const InstRef &IR) {
       if (CurrentLoadBarrierGroupID) {
         MemoryGroup &LoadGroup = getGroup(CurrentLoadBarrierGroupID);
         LLVM_DEBUG(dbgs() << "[LSUnit]: GROUP DEP: ("
-                          << CurrentLoadBarrierGroupID
-                          << ") --> (" << NewGID << ")\n");
+                          << CurrentLoadBarrierGroupID << ") --> (" << NewGID
+                          << ")\n");
         LoadGroup.addSuccessor(&NewGroup, true);
       }
     }
diff --git a/llvm/lib/MCA/HardwareUnits/RetireControlUnit.cpp b/llvm/lib/MCA/HardwareUnits/RetireControlUnit.cpp
index 9297f0c4fd7bf1b..78ec5d06e8fedf9 100644
--- a/llvm/lib/MCA/HardwareUnits/RetireControlUnit.cpp
+++ b/llvm/lib/MCA/HardwareUnits/RetireControlUnit.cpp
@@ -66,7 +66,8 @@ const RetireControlUnit::RUToken &RetireControlUnit::getCurrentToken() const {
 
 unsigned RetireControlUnit::computeNextSlotIdx() const {
   const RetireControlUnit::RUToken &Current = getCurrentToken();
-  unsigned NextSlotIdx = CurrentInstructionSlotIdx + std::max(1U, Current.NumSlots);
+  unsigned NextSlotIdx =
+      CurrentInstructionSlotIdx + std::max(1U, Current.NumSlots);
   return NextSlotIdx % Queue.size();
 }
 
@@ -82,12 +83,13 @@ void RetireControlUnit::consumeCurrentToken() {
   CurrentInstructionSlotIdx += std::max(1U, Current.NumSlots);
   CurrentInstructionSlotIdx %= Queue.size();
   AvailableEntries += Current.NumSlots;
-  Current = { InstRef(), 0U, false };
+  Current = {InstRef(), 0U, false};
 }
 
 void RetireControlUnit::onInstructionExecuted(unsigned TokenID) {
   assert(Queue.size() > TokenID);
-  assert(Queue[TokenID].IR.getInstruction() && "Instruction was not dispatched!");
+  assert(Queue[TokenID].IR.getInstruction() &&
+         "Instruction was not dispatched!");
   assert(Queue[TokenID].Executed == false && "Instruction already executed!");
   Queue[TokenID].Executed = true;
 }
diff --git a/llvm/lib/MCA/Stages/InstructionTables.cpp b/llvm/lib/MCA/Stages/InstructionTables.cpp
index 937cc7da8de7249..270f08ecb33c746 100644
--- a/llvm/lib/MCA/Stages/InstructionTables.cpp
+++ b/llvm/lib/MCA/Stages/InstructionTables.cpp
@@ -24,8 +24,7 @@ Error InstructionTables::execute(InstRef &IR) {
   UsedResources.clear();
 
   // Identify the resources consumed by this instruction.
-  for (const std::pair<uint64_t, ResourceUsage> &Resource :
-       Desc.Resources) {
+  for (const std::pair<uint64_t, ResourceUsage> &Resource : Desc.Resources) {
     // Skip zero-cycle resources (i.e., unused resources).
     if (!Resource.second.size())
       continue;
