[Mlir-commits] [mlir] 9db53a1 - [mlir][NFC] Remove usernames and google bug numbers from TODO comments.

Eric Christopher echristo at gmail.com
Sat Aug 8 01:28:22 PDT 2020


Thanks for doing this!

On Tue, Jul 7, 2020 at 1:42 AM River Riddle <llvmlistbot at llvm.org> wrote:

>
> Author: River Riddle
> Date: 2020-07-07T01:40:52-07:00
> New Revision: 9db53a182705ac1f652c6ee375735bea5539272c
>
> URL: https://github.com/llvm/llvm-project/commit/9db53a182705ac1f652c6ee375735bea5539272c
> DIFF: https://github.com/llvm/llvm-project/commit/9db53a182705ac1f652c6ee375735bea5539272c.diff
>
> LOG: [mlir][NFC] Remove usernames and google bug numbers from TODO comments.
>
> These were largely leftover from when MLIR was a google project, and don't
> really follow LLVM guidelines.
>
> Added:
>
>
> Modified:
>     mlir/docs/LangRef.md
>     mlir/docs/OpDefinitions.md
>     mlir/docs/Quantization.md
>     mlir/docs/Rationale/Rationale.md
>     mlir/include/mlir-c/Core.h
>     mlir/include/mlir/Analysis/AffineAnalysis.h
>     mlir/include/mlir/Analysis/AffineStructures.h
>     mlir/include/mlir/Analysis/LoopAnalysis.h
>     mlir/include/mlir/Analysis/NestedMatcher.h
>     mlir/include/mlir/Analysis/Utils.h
>     mlir/include/mlir/Dialect/AVX512/AVX512.td
>     mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
>     mlir/include/mlir/Dialect/GPU/GPUBase.td
>     mlir/include/mlir/Dialect/GPU/GPUOps.td
>     mlir/include/mlir/Dialect/GPU/ParallelLoopMapper.h
>     mlir/include/mlir/Dialect/Linalg/EDSC/Builders.h
>     mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
>     mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h
>     mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td
>     mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h
>     mlir/include/mlir/Dialect/SPIRV/SPIRVMatrixOps.td
>     mlir/include/mlir/Dialect/SPIRV/SPIRVStructureOps.td
>     mlir/include/mlir/Dialect/SPIRV/SPIRVTypes.h
>     mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
>     mlir/include/mlir/Dialect/StandardOps/EDSC/Builders.h
>     mlir/include/mlir/Dialect/StandardOps/IR/Ops.h
>     mlir/include/mlir/Dialect/Vector/VectorOps.td
>     mlir/include/mlir/Dialect/Vector/VectorTransforms.h
>     mlir/include/mlir/Dialect/Vector/VectorUtils.h
>     mlir/include/mlir/IR/Attributes.h
>     mlir/include/mlir/IR/Diagnostics.h
>     mlir/include/mlir/IR/OpBase.td
>     mlir/include/mlir/IR/StandardTypes.h
>     mlir/include/mlir/Parser.h
>     mlir/include/mlir/Pass/PassOptions.h
>     mlir/include/mlir/TableGen/Operator.h
>     mlir/include/mlir/TableGen/Pattern.h
>     mlir/include/mlir/Transforms/DialectConversion.h
>     mlir/include/mlir/Transforms/LoopFusionUtils.h
>     mlir/include/mlir/Transforms/Utils.h
>     mlir/lib/Analysis/AffineAnalysis.cpp
>     mlir/lib/Analysis/AffineStructures.cpp
>     mlir/lib/Analysis/LoopAnalysis.cpp
>     mlir/lib/Analysis/NestedMatcher.cpp
>     mlir/lib/Analysis/Utils.cpp
>     mlir/lib/Conversion/AVX512ToLLVM/ConvertAVX512ToLLVM.cpp
>     mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
>     mlir/lib/Conversion/GPUToROCDL/LowerGpuOpsToROCDLOps.cpp
>     mlir/lib/Conversion/LinalgToLLVM/LinalgToLLVM.cpp
>     mlir/lib/Conversion/LinalgToSPIRV/LinalgToSPIRV.cpp
>     mlir/lib/Conversion/LinalgToStandard/LinalgToStandard.cpp
>     mlir/lib/Conversion/SCFToGPU/SCFToGPU.cpp
>     mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
>     mlir/lib/Conversion/StandardToSPIRV/ConvertStandardToSPIRV.cpp
>     mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
>     mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
>     mlir/lib/Dialect/Affine/IR/AffineOps.cpp
>     mlir/lib/Dialect/Affine/IR/AffineValueMap.cpp
>     mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
>     mlir/lib/Dialect/Affine/Transforms/AffineLoopInvariantCodeMotion.cpp
>     mlir/lib/Dialect/Affine/Transforms/LoopTiling.cpp
>     mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
>     mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
>     mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp
>     mlir/lib/Dialect/GPU/Transforms/ParallelLoopMapper.cpp
>     mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
>     mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
>     mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
>     mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
>     mlir/lib/Dialect/Linalg/EDSC/Builders.cpp
>     mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
>     mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
>     mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
>     mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
>     mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
>     mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
>     mlir/lib/Dialect/Linalg/Utils/Utils.cpp
>     mlir/lib/Dialect/Quant/Utils/FakeQuantSupport.cpp
>     mlir/lib/Dialect/Quant/Utils/QuantizeUtils.cpp
>     mlir/lib/Dialect/Quant/Utils/UniformSupport.cpp
>     mlir/lib/Dialect/SDBM/SDBM.cpp
>     mlir/lib/Dialect/SDBM/SDBMExpr.cpp
>     mlir/lib/Dialect/SPIRV/SPIRVDialect.cpp
>     mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp
>     mlir/lib/Dialect/SPIRV/SPIRVOps.cpp
>     mlir/lib/Dialect/SPIRV/SPIRVTypes.cpp
>     mlir/lib/Dialect/SPIRV/Serialization/Deserializer.cpp
>     mlir/lib/Dialect/SPIRV/Serialization/Serializer.cpp
>     mlir/lib/Dialect/SPIRV/Transforms/LowerABIAttributesPass.cpp
>     mlir/lib/Dialect/SPIRV/Transforms/UpdateVCEPass.cpp
>     mlir/lib/Dialect/StandardOps/IR/Ops.cpp
>     mlir/lib/Dialect/Vector/VectorOps.cpp
>     mlir/lib/Dialect/Vector/VectorTransforms.cpp
>     mlir/lib/Dialect/Vector/VectorUtils.cpp
>     mlir/lib/ExecutionEngine/ExecutionEngine.cpp
>     mlir/lib/IR/AffineExpr.cpp
>     mlir/lib/IR/AffineMap.cpp
>     mlir/lib/IR/AsmPrinter.cpp
>     mlir/lib/IR/AttributeDetail.h
>     mlir/lib/IR/Operation.cpp
>     mlir/lib/IR/StandardTypes.cpp
>     mlir/lib/IR/Visitors.cpp
>     mlir/lib/Parser/AttributeParser.cpp
>     mlir/lib/Parser/DialectSymbolParser.cpp
>     mlir/lib/Parser/Parser.cpp
>     mlir/lib/Pass/PassRegistry.cpp
>     mlir/lib/TableGen/OpClass.cpp
>     mlir/lib/TableGen/Predicate.cpp
>     mlir/lib/Target/LLVMIR/DebugTranslation.cpp
>     mlir/lib/Target/LLVMIR/DebugTranslation.h
>     mlir/lib/Target/LLVMIR/ModuleTranslation.cpp
>     mlir/lib/Transforms/CSE.cpp
>     mlir/lib/Transforms/DialectConversion.cpp
>     mlir/lib/Transforms/Inliner.cpp
>     mlir/lib/Transforms/LoopFusion.cpp
>     mlir/lib/Transforms/MemRefDataFlowOpt.cpp
>     mlir/lib/Transforms/PipelineDataTransfer.cpp
>     mlir/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp
>     mlir/lib/Transforms/Utils/LoopFusionUtils.cpp
>     mlir/lib/Transforms/Utils/LoopUtils.cpp
>     mlir/lib/Transforms/Utils/Utils.cpp
>     mlir/test/Conversion/GPUToNVVM/gpu-to-nvvm.mlir
>     mlir/test/Conversion/GPUToSPIRV/if.mlir
>     mlir/test/Dialect/Linalg/roundtrip.mlir
>     mlir/test/Dialect/SPIRV/Serialization/constant.mlir
>     mlir/test/Dialect/SPIRV/canonicalize.mlir
>     mlir/test/Dialect/SPIRV/structure-ops.mlir
>     mlir/test/Dialect/SPIRV/types.mlir
>     mlir/test/Dialect/Vector/vector-flat-transforms.mlir
>     mlir/test/Dialect/Vector/vector-transforms.mlir
>     mlir/test/IR/invalid-affinemap.mlir
>     mlir/test/IR/invalid.mlir
>     mlir/test/Transforms/loop-fusion.mlir
>     mlir/test/Transforms/memref-dependence-check.mlir
>     mlir/test/lib/DeclarativeTransforms/TestVectorTransformPatterns.td
>     mlir/test/lib/Dialect/Test/TestPatterns.cpp
>     mlir/test/lib/Transforms/TestInlining.cpp
>     mlir/test/lib/Transforms/TestMemRefBoundCheck.cpp
>     mlir/test/lib/Transforms/TestMemRefDependenceCheck.cpp
>     mlir/tools/mlir-rocm-runner/mlir-rocm-runner.cpp
>     mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp
>     mlir/tools/mlir-tblgen/OpDocGen.cpp
>     mlir/tools/mlir-tblgen/OpFormatGen.cpp
>     mlir/tools/mlir-tblgen/RewriterGen.cpp
>     mlir/tools/mlir-tblgen/SPIRVUtilsGen.cpp
>     mlir/tools/mlir-vulkan-runner/VulkanRuntime.cpp
>     mlir/unittests/Dialect/SPIRV/DeserializationTest.cpp
>     mlir/utils/spirv/gen_spirv_dialect.py
>
> Removed:
>
>
>
>
> ################################################################################
> diff --git a/mlir/docs/LangRef.md b/mlir/docs/LangRef.md
> index ff3af3e4e496..41b3984347bb 100644
> --- a/mlir/docs/LangRef.md
> +++ b/mlir/docs/LangRef.md
> @@ -148,7 +148,7 @@ integer-literal ::= decimal-literal |
> hexadecimal-literal
>  decimal-literal ::= digit+
>  hexadecimal-literal ::= `0x` hex_digit+
>  float-literal ::= [-+]?[0-9]+[.][0-9]*([eE][-+]?[0-9]+)?
> -string-literal  ::= `"` [^"\n\f\v\r]* `"`   TODO define escaping rules
> +string-literal  ::= `"` [^"\n\f\v\r]* `"`   TODO: define escaping rules
>  ```
>
>  Not listed here, but MLIR does support comments. They use standard BCPL
> syntax,
>
> diff --git a/mlir/docs/OpDefinitions.md b/mlir/docs/OpDefinitions.md
> index 025eaf616d73..a10610f87a0a 100644
> --- a/mlir/docs/OpDefinitions.md
> +++ b/mlir/docs/OpDefinitions.md
> @@ -964,9 +964,9 @@ is used. They serve as "hooks" to the enclosing
> environment.  This includes
>    replaced by the operand/result's type. E.g., for `F32` in
> `F32:$operand`, its
>    `$_self` will be expanded as `getOperand(...).getType()`.
>
> -TODO(b/130663252): Reconsider the leading symbol for special placeholders.
> -Eventually we want to allow referencing operand/result $-names; such
> $-names
> -can start with underscore.
> +TODO: Reconsider the leading symbol for special placeholders. Eventually
> we want
> +to allow referencing operand/result $-names; such $-names can start with
> +underscore.
>
>  For example, to write an attribute `attr` is an `IntegerAttr`, in C++ you
> can
>  just call `attr.isa<IntegerAttr>()`. The code can be wrapped in a `CPred`
> as
>
> diff --git a/mlir/docs/Quantization.md b/mlir/docs/Quantization.md
> index 54eae406c87e..4abc52493cab 100644
> --- a/mlir/docs/Quantization.md
> +++ b/mlir/docs/Quantization.md
> @@ -196,7 +196,7 @@ operations for type conversion and expression of the
> supporting math.
>
>  ### Quantized type
>
> -TODO : Flesh this section out.
> +TODO: Flesh this section out.
>
>  *   QuantizedType base class
>  *   UniformQuantizedType
> @@ -247,7 +247,7 @@ in floating point with appropriate conversions at the
> boundaries.
>
>  ## TFLite native quantization
>
> -TODO : Flesh this out
> +TODO: Flesh this out
>
>  ### General algorithm
>
>
> diff --git a/mlir/docs/Rationale/Rationale.md b/mlir/docs/Rationale/Rationale.md
> index a3c3e5ecc4bf..22e21383e903 100644
> --- a/mlir/docs/Rationale/Rationale.md
> +++ b/mlir/docs/Rationale/Rationale.md
> @@ -765,7 +765,7 @@ func @conv2d(%input: memref<16x1024x1024x3xf32, #lm0,
> /*scratchpad=*/1>,
>  }
>  ```
>
> -TODO (Add more examples showing the IR for a variety of interesting cases)
> +TODO: (Add more examples showing the IR for a variety of interesting cases)
>
>  ## Design alternatives and extensions
>
>
> diff --git a/mlir/include/mlir-c/Core.h b/mlir/include/mlir-c/Core.h
> index 4698a7feeb45..4c0666f56c28 100644
> --- a/mlir/include/mlir-c/Core.h
> +++ b/mlir/include/mlir-c/Core.h
> @@ -96,7 +96,7 @@ unsigned getFunctionArity(mlir_func_t function);
>  /// Returns the rank of the `function` argument at position `pos`.
>  /// If the argument is of MemRefType, this returns the rank of the MemRef.
>  /// Otherwise returns `0`.
> -/// TODO(ntv): support more than MemRefType and scalar Type.
> +/// TODO: support more than MemRefType and scalar Type.
>  unsigned getRankOfFunctionArgument(mlir_func_t function, unsigned pos);
>
>  /// Returns an opaque mlir::Type of the `function` argument at position
> `pos`.
>
> diff --git a/mlir/include/mlir/Analysis/AffineAnalysis.h b/mlir/include/mlir/Analysis/AffineAnalysis.h
> index b421eee95148..3322f3da6a09 100644
> --- a/mlir/include/mlir/Analysis/AffineAnalysis.h
> +++ b/mlir/include/mlir/Analysis/AffineAnalysis.h
> @@ -38,7 +38,7 @@ void getReachableAffineApplyOps(ArrayRef<Value> operands,
>  /// used to add appropriate inequalities. Any symbols founds in the bound
>  /// operands are added as symbols in the system. Returns failure for the
> yet
>  /// unimplemented cases.
> -//  TODO(bondhugula): handle non-unit strides.
> +//  TODO: handle non-unit strides.
>  LogicalResult getIndexSet(MutableArrayRef<AffineForOp> forOps,
>                            FlatAffineConstraints *domain);
>
> @@ -49,8 +49,8 @@ struct MemRefAccess {
>    SmallVector<Value, 4> indices;
>
>    /// Constructs a MemRefAccess from a load or store operation.
> -  // TODO(b/119949820): add accessors to standard op's load, store, DMA
> op's to
> -  // return MemRefAccess, i.e., loadOp->getAccess(),
> dmaOp->getRead/WriteAccess.
> +  // TODO: add accessors to standard op's load, store, DMA op's to return
> +  // MemRefAccess, i.e., loadOp->getAccess(), dmaOp->getRead/WriteAccess.
>    explicit MemRefAccess(Operation *opInst);
>
>    // Returns the rank of the memref associated with this access.
> @@ -95,9 +95,9 @@ struct DependenceComponent {
>  /// access the same memref element. If 'allowRAR' is true, will consider
>  /// read-after-read dependences (typically used by applications trying to
>  /// optimize input reuse).
> -// TODO(andydavis) Wrap 'dependenceConstraints' and
> 'dependenceComponents' into
> -// a single struct.
> -// TODO(andydavis) Make 'dependenceConstraints' optional arg.
> +// TODO: Wrap 'dependenceConstraints' and 'dependenceComponents' into a
> single
> +// struct.
> +// TODO: Make 'dependenceConstraints' optional arg.
>  struct DependenceResult {
>    enum ResultEnum {
>      HasDependence, // A dependence exists between 'srcAccess' and
> 'dstAccess'.
>
> diff --git a/mlir/include/mlir/Analysis/AffineStructures.h b/mlir/include/mlir/Analysis/AffineStructures.h
> index 5858ab2ac62b..0424e0bb7d33 100644
> --- a/mlir/include/mlir/Analysis/AffineStructures.h
> +++ b/mlir/include/mlir/Analysis/AffineStructures.h
> @@ -98,7 +98,6 @@ class FlatAffineConstraints {
>
>    /// Create a flat affine constraint system from an AffineValueMap or a
> list of
>    /// these. The constructed system will only include equalities.
> -  // TODO(bondhugula)
>    explicit FlatAffineConstraints(const AffineValueMap &avm);
>    explicit FlatAffineConstraints(ArrayRef<const AffineValueMap *> avmRef);
>
> @@ -213,7 +212,7 @@ class FlatAffineConstraints {
>    /// 'affine.for' operation are added as trailing identifiers (either
>    /// dimensional or symbolic depending on whether the operand is a valid
>    /// symbol).
> -  //  TODO(bondhugula): add support for non-unit strides.
> +  //  TODO: add support for non-unit strides.
>    LogicalResult addAffineForOpDomain(AffineForOp forOp);
>
>    /// Adds a lower or an upper bound for the identifier at the specified
> @@ -335,8 +334,8 @@ class FlatAffineConstraints {
>    /// Projects out (aka eliminates) 'num' identifiers starting at position
>    /// 'pos'. The resulting constraint system is the shadow along the
> dimensions
>    /// that still exist. This method may not always be integer exact.
> -  // TODO(bondhugula): deal with integer exactness when necessary - can
> return a
> -  // value to mark exactness for example.
> +  // TODO: deal with integer exactness when necessary - can return a
> value to
> +  // mark exactness for example.
>    void projectOut(unsigned pos, unsigned num);
>    inline void projectOut(unsigned pos) { return projectOut(pos, 1); }
>
>
> diff --git a/mlir/include/mlir/Analysis/LoopAnalysis.h b/mlir/include/mlir/Analysis/LoopAnalysis.h
> index 7ed19ef99f87..b9c7d5e1b47a 100644
> --- a/mlir/include/mlir/Analysis/LoopAnalysis.h
> +++ b/mlir/include/mlir/Analysis/LoopAnalysis.h
> @@ -33,8 +33,8 @@ class Value;
>  /// multi-result map. The trip count expression is simplified before
> returning.
>  /// This method only utilizes map composition to construct lower and upper
>  /// bounds before computing the trip count expressions
> -// TODO(mlir-team): this should be moved into 'Transforms/' and be
> replaced by a
> -// pure analysis method relying on FlatAffineConstraints
> +// TODO: this should be moved into 'Transforms/' and be replaced by a pure
> +// analysis method relying on FlatAffineConstraints
>  void buildTripCountMapAndOperands(AffineForOp forOp, AffineMap *map,
>                                    SmallVectorImpl<Value> *operands);
>
> @@ -65,7 +65,7 @@ using VectorizableLoopFun =
> std::function<bool(AffineForOp)>;
>  /// Checks whether the loop is structurally vectorizable; i.e.:
>  ///   1. no conditionals are nested under the loop;
>  ///   2. all nested load/stores are to scalar MemRefs.
> -/// TODO(ntv): relax the no-conditionals restriction
> +/// TODO: relax the no-conditionals restriction
>  bool isVectorizableLoopBody(AffineForOp loop,
>                              NestedPattern &vectorTransferMatcher);
>
> @@ -80,8 +80,8 @@ bool isVectorizableLoopBody(AffineForOp loop, int
> *memRefDim,
>  /// Checks where SSA dominance would be violated if a for op's body
>  /// operations are shifted by the specified shifts. This method checks if
> a
>  /// 'def' and all its uses have the same shift factor.
> -// TODO(mlir-team): extend this to check for memory-based dependence
> -// violation when we have the support.
> +// TODO: extend this to check for memory-based dependence violation when
> we have
> +// the support.
>  bool isOpwiseShiftValid(AffineForOp forOp, ArrayRef<uint64_t> shifts);
>  } // end namespace mlir
>
>
> diff --git a/mlir/include/mlir/Analysis/NestedMatcher.h b/mlir/include/mlir/Analysis/NestedMatcher.h
> index 374fcab1bab5..594ea47e99e9 100644
> --- a/mlir/include/mlir/Analysis/NestedMatcher.h
> +++ b/mlir/include/mlir/Analysis/NestedMatcher.h
> @@ -73,7 +73,7 @@ class NestedMatch {
>  ///   1. recursively matches a substructure in the tree;
>  ///   2. uses a filter function to refine matches with extra semantic
>  ///      constraints (passed via a lambda of type FilterFunctionType);
> -///   3. TODO(ntv) optionally applies actions (lambda).
> +///   3. TODO: optionally applies actions (lambda).
>  ///
>  /// Nested patterns are meant to capture imperfectly nested loops while
> matching
>  /// properties over the whole loop nest. For instance, in vectorization
> we are
>
> diff --git a/mlir/include/mlir/Analysis/Utils.h b/mlir/include/mlir/Analysis/Utils.h
> index ce71e47c8e32..943a2f125b7d 100644
> --- a/mlir/include/mlir/Analysis/Utils.h
> +++ b/mlir/include/mlir/Analysis/Utils.h
> @@ -36,7 +36,7 @@ class Value;
>
>  /// Populates 'loops' with IVs of the loops surrounding 'op' ordered from
>  /// the outermost 'affine.for' operation to the innermost one.
> -//  TODO(bondhugula): handle 'affine.if' ops.
> +//  TODO: handle 'affine.if' ops.
>  void getLoopIVs(Operation &op, SmallVectorImpl<AffineForOp> *loops);
>
>  /// Returns the nesting depth of this operation, i.e., the number of loops
> @@ -135,7 +135,7 @@ void getComputationSliceState(Operation *depSourceOp,
> Operation *depSinkOp,
>  /// surrounding ops in 'opsB', as a function of IVs and symbols of loop
> nest
>  /// surrounding ops in 'opsA' at 'loopDepth'.
>  /// Returns 'success' if union was computed, 'failure' otherwise.
> -// TODO(andydavis) Change this API to take 'forOpA'/'forOpB'.
> +// TODO: Change this API to take 'forOpA'/'forOpB'.
>  LogicalResult computeSliceUnion(ArrayRef<Operation *> opsA,
>                                  ArrayRef<Operation *> opsB, unsigned
> loopDepth,
>                                  unsigned numCommonLoops, bool
> isBackwardSlice,
> @@ -150,7 +150,7 @@ LogicalResult computeSliceUnion(ArrayRef<Operation *>
> opsA,
>  // Loop depth is a crucial optimization choice that determines where to
>  // materialize the results of the backward slice - presenting a trade-off
> b/w
>  // storage and redundant computation in several cases.
> -// TODO(andydavis) Support computation slices with common surrounding
> loops.
> +// TODO: Support computation slices with common surrounding loops.
>  AffineForOp insertBackwardComputationSlice(Operation *srcOpInst,
>                                             Operation *dstOpInst,
>                                             unsigned dstLoopDepth,
> @@ -271,7 +271,7 @@ struct MemRefRegion {
>    /// identifiers since getMemRefRegion() is called with a specific loop
> depth,
>    /// and thus the region is symbolic in the outer surrounding loops at
> that
>    /// depth.
> -  // TODO(bondhugula): Replace this to exploit HyperRectangularSet.
> +  // TODO: Replace this to exploit HyperRectangularSet.
>    FlatAffineConstraints cst;
>  };
>
>
> diff --git a/mlir/include/mlir/Dialect/AVX512/AVX512.td b/mlir/include/mlir/Dialect/AVX512/AVX512.td
> index 82d24ef13d68..e1ed35c50e87 100644
> --- a/mlir/include/mlir/Dialect/AVX512/AVX512.td
> +++ b/mlir/include/mlir/Dialect/AVX512/AVX512.td
> @@ -55,7 +55,7 @@ def MaskRndScaleOp : AVX512_Op<"mask.rndscale",
> [NoSideEffect,
>                     I32:$k,
>                     VectorOfLengthAndType<[16, 8], [F32, F64]>:$a,
>                     AnyTypeOf<[I16, I8]>:$imm,
> -                   // TODO(ntv): figure rounding out (optional operand?).
> +                   // TODO: figure rounding out (optional operand?).
>                     I32:$rounding
>              );
>    let results = (outs VectorOfLengthAndType<[16, 8], [F32, F64]>:$dst);
> @@ -87,7 +87,7 @@ def MaskScaleFOp : AVX512_Op<"mask.scalef",
> [NoSideEffect,
>                     VectorOfLengthAndType<[16, 8], [F32, F64]>:$a,
>                     VectorOfLengthAndType<[16, 8], [F32, F64]>:$b,
>                     AnyTypeOf<[I16, I8]>:$k,
> -                   // TODO(ntv): figure rounding out (optional operand?).
> +                   // TODO: figure rounding out (optional operand?).
>                     I32:$rounding
>              );
>    let results = (outs VectorOfLengthAndType<[16, 8], [F32, F64]>:$dst);
>
> diff --git a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
> index 93bac5a4a29f..8498a45c1147 100644
> --- a/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
> +++ b/mlir/include/mlir/Dialect/Affine/IR/AffineOps.h
> @@ -75,10 +75,10 @@ bool isTopLevelValue(Value value);
>  //   affine.dma_start %src[%i, %j], %dst[%k, %l], %tag[%idx],
> %num_elements,
>  //     %stride, %num_elt_per_stride : ...
>  //
> -// TODO(mlir-team): add additional operands to allow source and
> destination
> -// striding, and multiple stride levels (possibly using AffineMaps to
> specify
> -// multiple levels of striding).
> -// TODO(andydavis) Consider replacing src/dst memref indices with view
> memrefs.
> +// TODO: add additional operands to allow source and destination
> striding, and
> +// multiple stride levels (possibly using AffineMaps to specify multiple
> levels
> +// of striding).
> +// TODO: Consider replacing src/dst memref indices with view memrefs.
>  class AffineDmaStartOp : public Op<AffineDmaStartOp,
> OpTrait::VariadicOperands,
>                                     OpTrait::ZeroResult> {
>  public:
>
> diff --git a/mlir/include/mlir/Dialect/GPU/GPUBase.td b/mlir/include/mlir/Dialect/GPU/GPUBase.td
> index 16ce93fb1ed9..32e0952a15b4 100644
> --- a/mlir/include/mlir/Dialect/GPU/GPUBase.td
> +++ b/mlir/include/mlir/Dialect/GPU/GPUBase.td
> @@ -39,7 +39,7 @@ def GPU_Dialect : Dialect {
>
>      /// Returns the number of workgroup (thread, block) dimensions
> supported in
>      /// the GPU dialect.
> -    // TODO(zinenko,herhut): consider generalizing this.
> +    // TODO: consider generalizing this.
>      static unsigned getNumWorkgroupDimensions() { return 3; }
>
>      /// Returns the numeric value used to identify the workgroup memory
> address
>
> diff --git a/mlir/include/mlir/Dialect/GPU/GPUOps.td b/mlir/include/mlir/Dialect/GPU/GPUOps.td
> index 281696d0eb98..e7e67e24381d 100644
> --- a/mlir/include/mlir/Dialect/GPU/GPUOps.td
> +++ b/mlir/include/mlir/Dialect/GPU/GPUOps.td
> @@ -224,8 +224,7 @@ def GPU_GPUFuncOp : GPU_Op<"func",
> [HasParent<"GPUModuleOp">,
>      ///  - the argument/result attributes may need an update: if the new
> type
>      ///  has less parameters we drop the extra attributes, if there are
> more
>      ///  parameters they won't have any attributes.
> -    // TODO(b/146349912): consider removing this function thanks to
> rewrite
> -    // patterns.
> +    // TODO: consider removing this function thanks to rewrite patterns.
>      void setType(FunctionType newType);
>
>      /// Returns the number of buffers located in the workgroup memory.
>
> diff --git a/mlir/include/mlir/Dialect/GPU/ParallelLoopMapper.h b/mlir/include/mlir/Dialect/GPU/ParallelLoopMapper.h
> index 842734d256d3..298ec0c803f0 100644
> --- a/mlir/include/mlir/Dialect/GPU/ParallelLoopMapper.h
> +++ b/mlir/include/mlir/Dialect/GPU/ParallelLoopMapper.h
> @@ -44,7 +44,7 @@ inline Processor getProcessor(ParallelLoopDimMapping
> attr) {
>  }
>
>  /// Helper function to create a ParallelDimMapperAttr.
> -/// TODO(ravishankarm/antiagainst): Replace its uses with an auto-gened
> method.
> +/// TODO: Replace its uses with an auto-gened method.
>  ParallelLoopDimMapping getParallelLoopDimMappingAttr(Processor processor,
>                                                       AffineMap map,
>                                                       AffineMap bound);
>
> diff --git a/mlir/include/mlir/Dialect/Linalg/EDSC/Builders.h b/mlir/include/mlir/Dialect/Linalg/EDSC/Builders.h
> index 15f89a8ec5e0..5b6cb0ac7fa4 100644
> --- a/mlir/include/mlir/Dialect/Linalg/EDSC/Builders.h
> +++ b/mlir/include/mlir/Dialect/Linalg/EDSC/Builders.h
> @@ -67,7 +67,7 @@ void mulRegionBuilder(ValueRange args);
>  /// the current ScopedContext, at the current insert point.
>  void macRegionBuilder(ValueRange args);
>
> -/// TODO(ntv): In the future we should tie these implementations to
> something in
> +/// TODO: In the future we should tie these implementations to something
> in
>  /// Tablegen that generates the proper interfaces and the proper sugared
> named
>  /// ops.
>
> @@ -119,7 +119,7 @@ Operation
> *linalg_generic_pointwise_max(StructuredIndexed I1,
>                                          StructuredIndexed I2,
>                                          StructuredIndexed O);
>
> -// TODO(ntv): Implement more useful pointwise operations on a per-need
> basis.
> +// TODO: Implement more useful pointwise operations on a per-need basis.
>
>  using MatmulRegionBuilder = function_ref<void(ValueRange args)>;
>
> @@ -187,7 +187,7 @@ linalg_generic_matmul(Container values,
>  ///
>  /// For now `...` must be empty (i.e. only 2-D convolutions are
> supported).
>  ///
> -// TODO(ntv) Extend convolution rank with some template magic.
> +// TODO: Extend convolution rank with some template magic.
>  Operation *linalg_generic_conv_nhwc(Value vI, Value vW, Value vO,
>                                      ArrayRef<int> strides = {},
>                                      ArrayRef<int> dilations = {});
> @@ -222,7 +222,7 @@ Operation *linalg_generic_conv_nhwc(Container values,
>  ///
>  /// For now `...` must be empty (i.e. only 2-D convolutions are
> supported).
>  ///
> -// TODO(ntv) Extend convolution rank with some template magic.
> +// TODO: Extend convolution rank with some template magic.
>  Operation *linalg_generic_dilated_conv_nhwc(Value vI, Value vW, Value vO,
>                                              int depth_multiplier = 1,
>                                              ArrayRef<int> strides = {},
>
> diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
> index 4dd652168eb6..9cda61ca80b7 100644
> --- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
> +++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
> @@ -119,8 +119,8 @@ def CopyOp : LinalgStructured_Op<"copy", [
>      OptionalAttr<AffineMapAttr>:$inputPermutation,
>      OptionalAttr<AffineMapAttr>:$outputPermutation);
>
> -  // TODO(ntv) this should go away once the usage of OptionalAttr triggers
> -  // emission of builders with default arguments left unspecified.
> +  // TODO: this should go away once the usage of OptionalAttr triggers
> emission
> +  // of builders with default arguments left unspecified.
>    let builders = [OpBuilder<
>      "OpBuilder &builder, OperationState &result, Value input, Value
> output", [{
>      return build(
> @@ -305,8 +305,8 @@ def ConvOp : PoolingBase_Op<"conv", [NInputs<2>,
> NOutputs<1>]> {
>                     OptionalAttr<I64ElementsAttr>:$padding);
>
>    let extraClassDeclaration = commonUtils # [{
> -    // TODO(ntv) extend to support more than 1 dimensions and potentially
> -    // grouping too.
> +    // TODO: extend to support more than 1 dimensions and potentially
> grouping
> +    // too.
>      unsigned getNumBatchDimensions() { return 1; }
>
>      unsigned getNumInputFeatureDimensions() { return 1; }
>
> diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h b/mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h
> index 4ab547be2019..8dda7d0a1445 100644
> --- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h
> +++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgTraits.h
> @@ -246,8 +246,8 @@ class StructuredOpTraits
>          cast<ConcreteType>(this->getOperation()).referenceIterators();
>
>      // If there is no reference, this must be a generic op.
> -    // TODO(ntv): Traits are used to define ops. Split into cpp to avoid
> -    // cyclic dependency.
> +    // TODO: Traits are used to define ops. Split into cpp to avoid cyclic
> +    // dependency.
>      auto name = this->getOperation()->getName().getStringRef();
>      if (!maybeReferenceIteratorTypes && name != "generic" &&
>          name != "indexed_generic") {
> @@ -263,8 +263,8 @@ class StructuredOpTraits
>                                         return StringAttr::get(str, ctx);
>                                       });
>      auto attr = ArrayAttr::get(llvm::to_vector<4>(attrRange), ctx);
> -    // TODO(ntv): Need to memoize this. Can't just store as an attribute
> atm as
> -    // it will impact parser, printer and tests.
> +    // TODO: Need to memoize this. Can't just store as an attribute atm
> as it
> +    // will impact parser, printer and tests.
>      // this->getOperation()->setAttr("iterator_types", attr);
>      return attr;
>    }
> @@ -301,8 +301,8 @@ class StructuredOpTraits
>          });
>      SmallVector<Attribute, 4> attrs{attrRange.begin(), attrRange.end()};
>      auto attr = ArrayAttr::get(attrs, ctx);
> -    // TODO(ntv): Need to memoize this. Can't just store as an attribute
> atm as
> -    // it will impact parser, printer and tests.
> +    // TODO: Need to memoize this. Can't just store as an attribute atm
> as it
> +    // will impact parser, printer and tests.
>      // this->getOperation()->setAttr("indexing_maps", attr);
>      return attr;
>    }
>
> diff --git a/mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td b/mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td
> index 832171e92336..fab1f63f7f30 100644
> --- a/mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td
> +++ b/mlir/include/mlir/Dialect/SPIRV/SPIRVBase.td
> @@ -162,7 +162,7 @@ class Extension<list<StrEnumAttrCase> extensions> :
> Availability {
>      AND (`Extension::C`) AND (`Extension::D` OR `Extension::E`) is
> enabled.
>    }];
>
> -  // TODO(antiagainst): Returning SmallVector<ArrayRef<...>> is not
> recommended.
> +  // TODO: Returning SmallVector<ArrayRef<...>> is not recommended.
>    // Find a better way for this.
>    let queryFnRetType = "::llvm::SmallVector<::llvm::ArrayRef<"
>                            "::mlir::spirv::Extension>, 1>";
> @@ -226,9 +226,8 @@ class Capability<list<I32EnumAttrCase> capabilities> :
> Availability {
>    let instance = "ref";
>  }
>
> -// TODO(antiagainst): the following interfaces definitions are
> duplicating with
> -// the above. Remove them once we are able to support dialect-specific
> contents
> -// in ODS.
> +// TODO: the following interfaces definitions are duplicating with the
> above.
> +// Remove them once we are able to support dialect-specific contents in
> ODS.
>  def QueryMinVersionInterface : OpInterface<"QueryMinVersionInterface"> {
>    let methods = [InterfaceMethod<"", "::mlir::spirv::Version",
> "getMinVersion">];
>  }
> @@ -3061,7 +3060,7 @@ def SPV_IntVec4 : SPV_Vec4<SPV_Integer>;
>  def SPV_IOrUIVec4 : SPV_Vec4<SPV_SignlessOrUnsignedInt>;
>  def SPV_Int32Vec4 : SPV_Vec4<AnyI32>;
>
> -// TODO(ravishankarm): From 1.4, this should also include Composite type.
> +// TODO: From 1.4, this should also include Composite type.
>  def SPV_SelectType : AnyTypeOf<[SPV_Scalar, SPV_Vector, SPV_AnyPtr]>;
>
>
>  //===----------------------------------------------------------------------===//
> @@ -3317,9 +3316,9 @@ def SPV_OpcodeAttr :
>  // Base class for all SPIR-V ops.
>  class SPV_Op<string mnemonic, list<OpTrait> traits = []> :
>      Op<SPIRV_Dialect, mnemonic, !listconcat(traits, [
> -         // TODO(antiagainst): We don't need all of the following traits
> for
> -         // every op; only the suitable ones should be added automatically
> -         // after ODS supports dialect-specific contents.
> +         // TODO: We don't need all of the following traits for every op;
> only
> +         // the suitable ones should be added automatically after ODS
> supports
> +         // dialect-specific contents.
>           DeclareOpInterfaceMethods<QueryMinVersionInterface>,
>           DeclareOpInterfaceMethods<QueryMaxVersionInterface>,
>           DeclareOpInterfaceMethods<QueryExtensionInterface>,
>
> diff --git a/mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h b/mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h
> index f0a429941fb3..02a1f0861da8 100644
> --- a/mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h
> +++ b/mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h
> @@ -35,8 +35,8 @@ namespace mlir {
>  /// satisfy shader interface requirements: shader interface types must be
>  /// pointers to structs.
>  ///
> -/// TODO(antiagainst): We might want to introduce a way to control how
> -/// unsupported bitwidth are handled and explicitly fail if wanted.
> +/// TODO: We might want to introduce a way to control how unsupported
> bitwidth
> +/// are handled and explicitly fail if wanted.
>  class SPIRVTypeConverter : public TypeConverter {
>  public:
>    explicit SPIRVTypeConverter(spirv::TargetEnvAttr targetAttr);
> @@ -120,8 +120,8 @@ Value getBuiltinVariableValue(Operation *op, BuiltIn
> builtin,
>  /// Performs the index computation to get to the element at `indices` of
> the
>  /// memory pointed to by `basePtr`, using the layout map of `baseType`.
>
> -// TODO(ravishankarm) : This method assumes that the `baseType` is a
> MemRefType
> -// with AffineMap that has static strides. Extend to handle dynamic
> strides.
> +// TODO: This method assumes that the `baseType` is a MemRefType with
> AffineMap
> +// that has static strides. Extend to handle dynamic strides.
>  spirv::AccessChainOp getElementPtr(SPIRVTypeConverter &typeConverter,
>                                     MemRefType baseType, Value basePtr,
>                                     ValueRange indices, Location loc,
>
> diff --git a/mlir/include/mlir/Dialect/SPIRV/SPIRVMatrixOps.td b/mlir/include/mlir/Dialect/SPIRV/SPIRVMatrixOps.td
> index 07d7fd1093c2..8545c7ad29e2 100644
> --- a/mlir/include/mlir/Dialect/SPIRV/SPIRVMatrixOps.td
> +++ b/mlir/include/mlir/Dialect/SPIRV/SPIRVMatrixOps.td
> @@ -61,8 +61,8 @@ def SPV_MatrixTimesScalarOp :
> SPV_Op<"MatrixTimesScalar", []> {
>      SPV_AnyMatrix:$result
>    );
>
> -  // TODO (Hazem): we need just one matrix type given that the input and
> result
> -  // are the same and the scalar's type can be deduced from it.
> +  // TODO: we need just one matrix type given that the input and result
> are the
> +  // same and the scalar's type can be deduced from it.
>    let assemblyFormat = [{
>      operands attr-dict `:` type($matrix) `,` type($scalar) `->`
> type($result)
>    }];
> @@ -133,4 +133,4 @@ def SPV_TransposeOp : SPV_Op<"Transpose", []> {
>
>  // -----
>
> -#endif // SPIRV_MATRIX_OPS
> \ No newline at end of file
> +#endif // SPIRV_MATRIX_OPS
>
> diff --git a/mlir/include/mlir/Dialect/SPIRV/SPIRVStructureOps.td b/mlir/include/mlir/Dialect/SPIRV/SPIRVStructureOps.td
> index 0c4cad17f8de..034b7d1b09c7 100644
> --- a/mlir/include/mlir/Dialect/SPIRV/SPIRVStructureOps.td
> +++ b/mlir/include/mlir/Dialect/SPIRV/SPIRVStructureOps.td
> @@ -102,7 +102,7 @@ def SPV_ConstantOp : SPV_Op<"constant", [ConstantLike,
> NoSideEffect]> {
>      %2 = spv.constant [dense<3.0> : vector<2xf32>] :
> !spv.array<1xvector<2xf32>>
>      ```
>
> -    TODO(antiagainst): support constant structs
> +    TODO: support constant structs
>    }];
>
>    let arguments = (ins
> @@ -541,7 +541,7 @@ def SPV_SpecConstantOp : SPV_Op<"specConstant",
> [InModuleScope, Symbol]> {
>      spv.specConstant @spec_const2 spec_id(5) = 42 : i32
>      ```
>
> -    TODO(antiagainst): support composite spec constants with another op
> +    TODO: support composite spec constants with another op
>    }];
>
>    let arguments = (ins
>
> diff --git a/mlir/include/mlir/Dialect/SPIRV/SPIRVTypes.h b/mlir/include/mlir/Dialect/SPIRV/SPIRVTypes.h
> index 95855785e31b..d2dac563bfcf 100644
> --- a/mlir/include/mlir/Dialect/SPIRV/SPIRVTypes.h
> +++ b/mlir/include/mlir/Dialect/SPIRV/SPIRVTypes.h
> @@ -216,7 +216,7 @@ class ImageType
>    ImageSamplingInfo getSamplingInfo() const;
>    ImageSamplerUseInfo getSamplerUseInfo() const;
>    ImageFormat getImageFormat() const;
> -  // TODO(ravishankarm): Add support for Access qualifier
> +  // TODO: Add support for Access qualifier
>
>    void getExtensions(SPIRVType::ExtensionArrayRefVector &extensions,
>                       Optional<spirv::StorageClass> storage = llvm::None);
>
> diff --git a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
> index 1fb88a791b70..6f263cd6db63 100644
> --- a/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
> +++ b/mlir/include/mlir/Dialect/Shape/IR/ShapeOps.td
> @@ -479,7 +479,7 @@ def Shape_ConcatOp : Shape_Op<"concat", []> {
>  // Shape constraint related ops.
>
>  //===----------------------------------------------------------------------===//
>
> -//TODO(tpopp): Move the code below and witnesses to a different file.
> +// TODO: Move the code below and witnesses to a different file.
>  def Shape_AnyOp : Shape_Op<"any", [Commutative, NoSideEffect]> {
>    let summary = "Return any combination of the input shapes";
>    let description = [{
>
> diff --git a/mlir/include/mlir/Dialect/StandardOps/EDSC/Builders.h b/mlir/include/mlir/Dialect/StandardOps/EDSC/Builders.h
> index 5f0d6d83df99..36df24f60c70 100644
> --- a/mlir/include/mlir/Dialect/StandardOps/EDSC/Builders.h
> +++ b/mlir/include/mlir/Dialect/StandardOps/EDSC/Builders.h
> @@ -48,7 +48,7 @@ class BoundsCapture {
>  /// MemRef. It has placeholders for non-contiguous tensors that fit
> within the
>  /// Fortran subarray model.
>  /// At the moment it can only capture a MemRef with an identity layout
> map.
> -// TODO(ntv): Support MemRefs with layoutMaps.
> +// TODO: Support MemRefs with layoutMaps.
>  class MemRefBoundsCapture : public BoundsCapture {
>  public:
>    explicit MemRefBoundsCapture(Value v);
>
> diff --git a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.h b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.h
> index 7599988bdefc..0f24d74dcac2 100644
> --- a/mlir/include/mlir/Dialect/StandardOps/IR/Ops.h
> +++ b/mlir/include/mlir/Dialect/StandardOps/IR/Ops.h
> @@ -129,9 +129,9 @@ class ConstantIndexOp : public ConstantOp {
>  //   dma_start %src[%i, %j], %dst[%k, %l], %num_elements, %tag[%idx],
> %stride,
>  //             %num_elt_per_stride :
>  //
> -// TODO(mlir-team): add additional operands to allow source and
> destination
> -// striding, and multiple stride levels.
> -// TODO(andydavis) Consider replacing src/dst memref indices with view
> memrefs.
> +// TODO: add additional operands to allow source and destination
> striding, and
> +// multiple stride levels.
> +// TODO: Consider replacing src/dst memref indices with view memrefs.
>  class DmaStartOp
>      : public Op<DmaStartOp, OpTrait::VariadicOperands,
> OpTrait::ZeroResult> {
>  public:
>
> diff --git a/mlir/include/mlir/Dialect/Vector/VectorOps.td b/mlir/include/mlir/Dialect/Vector/VectorOps.td
> index 8ca9baf2e0d0..a02f39f943f8 100644
> --- a/mlir/include/mlir/Dialect/Vector/VectorOps.td
> +++ b/mlir/include/mlir/Dialect/Vector/VectorOps.td
> @@ -37,8 +37,8 @@ class Vector_Op<string mnemonic, list<OpTrait> traits =
> []> :
>    let parser = [{ return ::parse$cppClass(parser, result); }];
>  }
>
> -// TODO(andydavis, ntv) Add an attribute to specify a different algebra
> -// with operators other than the current set: {*, +}.
> +// TODO: Add an attribute to specify a different algebra with operators other
> +// than the current set: {*, +}.
>  def Vector_ContractionOp :
>    Vector_Op<"contract", [
>        NoSideEffect,
> @@ -714,8 +714,8 @@ def Vector_OuterProductOp :
>    }];
>  }
>
> -// TODO(andydavis) Add transformation which decomposes ReshapeOp into an
> -// optimized sequence of vector rotate/shuffle/select operations.
> +// TODO: Add transformation which decomposes ReshapeOp into an optimized
> +// sequence of vector rotate/shuffle/select operations.
>  def Vector_ReshapeOp :
>    Vector_Op<"reshape", [AttrSizedOperandSegments, NoSideEffect]>,
>      Arguments<(ins AnyVector:$vector, Variadic<Index>:$input_shape,
> @@ -849,7 +849,7 @@ def Vector_ExtractStridedSliceOp :
>      attribute and extracts the n-D subvector at the proper offset.
>
>      At the moment strides must contain only 1s.
> -    // TODO(ntv) support non-1 strides.
> +    // TODO: support non-1 strides.
>
>      Returns an n-D vector where the first k-D dimensions match the `sizes`
>      attribute. The returned subvector contains the elements starting at
> offset
> @@ -862,7 +862,7 @@ def Vector_ExtractStridedSliceOp :
>          {offsets = [0, 2], sizes = [2, 4], strides = [1, 1]}:
>        vector<4x8x16xf32> to vector<2x4x16xf32>
>
> -    // TODO(ntv) Evolve to a range form syntax similar to:
> +    // TODO: Evolve to a range form syntax similar to:
>      %1 = vector.extract_strided_slice %0[0:2:1][2:4:1]
>        vector<4x8x16xf32> to vector<2x4x16xf32>
>      ```
> @@ -1513,7 +1513,7 @@ def Vector_MatmulOp : Vector_Op<"matrix_multiply",
> [NoSideEffect,
>          PredOpTrait<"rhs operand and result have same element type",
>                      TCresVTEtIsSameAsOpBase<0, 1>>]>,
>        Arguments<(
> -        // TODO(ntv, fhahn): tighten vector element types that make sense.
> +        // TODO: tighten vector element types that make sense.
>          ins VectorOfRankAndType<[1],
>                [AnySignlessInteger, AnySignedInteger, AnyFloat]>:$lhs,
>              VectorOfRankAndType<[1],
> @@ -1576,7 +1576,7 @@ def Vector_FlatTransposeOp :
> Vector_Op<"flat_transpose", [NoSideEffect,
>    PredOpTrait<"source operand and result have same element type",
>                   TCresVTEtIsSameAsOpBase<0, 0>>]>,
>      Arguments<(
> -      // TODO(ntv, fhahn, ajcbik): tighten vector element types that make
> sense.
> +      // TODO: tighten vector element types that make sense.
>        ins VectorOfRankAndType<[1],
>              [AnySignlessInteger, AnySignedInteger, AnyFloat]>:$matrix,
>            I32Attr:$rows, I32Attr:$columns)>,
>
> diff --git a/mlir/include/mlir/Dialect/Vector/VectorTransforms.h b/mlir/include/mlir/Dialect/Vector/VectorTransforms.h
> index ab69a8246587..ca67be9512f7 100644
> --- a/mlir/include/mlir/Dialect/Vector/VectorTransforms.h
> +++ b/mlir/include/mlir/Dialect/Vector/VectorTransforms.h
> @@ -183,8 +183,8 @@ class ContractionOpToOuterProductOpLowering
>  ///
>  /// This only kicks in when VectorTransformsOptions is set to AXPY.
>  //
> -// TODO (ajcbik): this is very similar, but not quite the same as
> -//                the outerproduct lowering above; merge the two?
> +// TODO: this is very similar, but not quite the same as the outerproduct
> +// lowering above; merge the two?
>  class ContractionOpToAXPYLowering
>      : public OpRewritePattern<vector::ContractionOp> {
>  public:
>
> diff --git a/mlir/include/mlir/Dialect/Vector/VectorUtils.h b/mlir/include/mlir/Dialect/Vector/VectorUtils.h
> index 58f936ca305c..19f7f9538307 100644
> --- a/mlir/include/mlir/Dialect/Vector/VectorUtils.h
> +++ b/mlir/include/mlir/Dialect/Vector/VectorUtils.h
> @@ -160,9 +160,9 @@ namespace matcher {
>  /// over other smaller vector types in the function and avoids
> interfering with
>  /// operations on those.
>  /// This is a first approximation, it can easily be extended in the
> future.
> -/// TODO(ntv): this could all be much simpler if we added a bit that a
> vector
> -/// type to mark that a vector is a strict super-vector but it still does
> not
> -/// warrant adding even 1 extra bit in the IR for now.
> +/// TODO: this could all be much simpler if we added a bit that a vector
> type to
> +/// mark that a vector is a strict super-vector but it still does not
> warrant
> +/// adding even 1 extra bit in the IR for now.
>  bool operatesOnSuperVectorsOf(Operation &op, VectorType subVectorType);
>
>  } // end namespace matcher
>
> diff --git a/mlir/include/mlir/IR/Attributes.h b/mlir/include/mlir/IR/Attributes.h
> index f01937fd9794..6436bb9550e8 100644
> --- a/mlir/include/mlir/IR/Attributes.h
> +++ b/mlir/include/mlir/IR/Attributes.h
> @@ -417,7 +417,7 @@ class IntegerAttr
>    APInt getValue() const;
>    /// Return the integer value as a 64-bit int. The attribute must be a
> signless
>    /// integer.
> -  // TODO(jpienaar): Change callers to use getValue instead.
> +  // TODO: Change callers to use getValue instead.
>    int64_t getInt() const;
>    /// Return the integer value as a signed 64-bit int. The attribute must
> be
>    /// a signed integer.
> @@ -1438,8 +1438,7 @@ class SparseElementsAttr
>      auto zeroValue = getZeroValue<T>();
>      auto valueIt = getValues().getValues<T>().begin();
>     const std::vector<ptrdiff_t> flatSparseIndices(getFlattenedSparseIndices());
> -    // TODO(riverriddle): Move-capture flatSparseIndices when c++14 is
> -    // available.
> +    // TODO: Move-capture flatSparseIndices when c++14 is available.
>     std::function<T(ptrdiff_t)> mapFn = [=](ptrdiff_t index) {
>        // Try to map the current index to one of the sparse indices.
>        for (unsigned i = 0, e = flatSparseIndices.size(); i != e; ++i)
>
> diff --git a/mlir/include/mlir/IR/Diagnostics.h b/mlir/include/mlir/IR/Diagnostics.h
> index 584ff4b1e242..84411b720336 100644
> --- a/mlir/include/mlir/IR/Diagnostics.h
> +++ b/mlir/include/mlir/IR/Diagnostics.h
> @@ -558,7 +558,7 @@ class SourceMgrDiagnosticHandler : public
> ScopedDiagnosticHandler {
>    llvm::SMLoc convertLocToSMLoc(FileLineColLoc loc);
>
>    /// The maximum depth that a call stack will be printed.
> -  /// TODO(riverriddle) This should be a tunable flag.
> +  /// TODO: This should be a tunable flag.
>    unsigned callStackLimit = 10;
>
>    std::unique_ptr<detail::SourceMgrDiagnosticHandlerImpl> impl;
>
> diff --git a/mlir/include/mlir/IR/OpBase.td b/mlir/include/mlir/IR/OpBase.td
> index e6fba75ea971..4344d075bc34 100644
> --- a/mlir/include/mlir/IR/OpBase.td
> +++ b/mlir/include/mlir/IR/OpBase.td
> @@ -144,7 +144,7 @@ class Concat<string pre, Pred child, string suf> :
>  // Constraint definitions
>
>  //===----------------------------------------------------------------------===//
>
> -// TODO(b/130064155): Merge Constraints into Pred.
> +// TODO: Merge Constraints into Pred.
>
>  // Base class for named constraints.
>  //
> @@ -314,8 +314,8 @@ class Optional<Type type> :
> TypeConstraint<type.predicate, type.description> {
>  // Note that this does not "inherit" from Type because it would require
>  // duplicating Type subclasses for buildable and non-buildable cases to
> avoid
>  // diamond "inheritance".
> -// TODO(zinenko): we may extend this to a more general 'Buildable' trait,
> -// making some Types and some Attrs buildable.
> +// TODO: we may extend this to a more general 'Buildable' trait, making
> some
> +// Types and some Attrs buildable.
>  class BuildableType<code builder> {
>    // The builder call to invoke (if specified) to construct the
> BuildableType.
>    code builderCall = builder;
> @@ -578,7 +578,7 @@ def AnyRankedTensor :
>    ShapedContainerType<[AnyType], And<[IsTensorTypePred, HasRankPred]>,
>    "ranked tensor">;
>
> -// TODO(b/130064155) Have an easy way to add another constraint to a type.
> +// TODO: Have an easy way to add another constraint to a type.
>  class StaticShapeTensorOf<list<Type> allowedTypes>
>      : Type<And<[TensorOf<allowedTypes>.predicate, HasStaticShapePred]>,
>             "statically shaped " # TensorOf<allowedTypes>.description>;
> @@ -636,7 +636,7 @@ def F16MemRef  : MemRefOf<[F16]>;
>  def F32MemRef  : MemRefOf<[F32]>;
>  def F64MemRef  : MemRefOf<[F64]>;
>
> -// TODO(b/130064155) Have an easy way to add another constraint to a type.
> +// TODO: Have an easy way to add another constraint to a type.
>  class MemRefRankOf<list<Type> allowedTypes, list<int> ranks> :
>      Type<And<[MemRefOf<allowedTypes>.predicate, HasAnyRankOfPred<ranks>]>,
>           StrJoin<!foreach(rank, ranks, rank # "D"), "/">.result # " " #
> @@ -1083,7 +1083,7 @@ class EnumAttrInfo<string name,
> list<EnumAttrCaseInfo> cases> {
>    // use "::" as the delimiter, e.g., given "A::B", generated code will be
>    // placed in `namespace A { namespace B { ... } }`. To avoid placing in
> any
>    // namespace, use "".
> -  // TODO(b/134741431): use dialect to provide the namespace.
> +  // TODO: use dialect to provide the namespace.
>    string cppNamespace = "";
>
>    // The name of the utility function that converts a value of the
> underlying
> @@ -1585,7 +1585,7 @@ def IsNullAttr : AttrConstraint<
>  // An attribute constraint on FlatSymbolRefAttr that requires that the
>  // reference point to an op of `opClass` within the closest parent with a
> symbol
>  // table.
> -// TODO(riverriddle) Add support for nested symbol references.
> +// TODO: Add support for nested symbol references.
>  class ReferToOp<string opClass> : AttrConstraint<
>      CPred<"isa_and_nonnull<" # opClass # ">("
>              "::mlir::SymbolTable::lookupNearestSymbolFrom("
> @@ -1711,8 +1711,8 @@ class ParentOneOf<list<string> ops>
>  def FirstAttrDerivedResultType :
>    GenInternalOpTrait<"FirstAttrDerivedResultType">;
>
> -// TODO(antiagainst): Turn the following into normal traits and generate
> -// verification for them.
> +// TODO: Turn the following into normal traits and generate verification
> for
> +// them.
>
>  // All variadic operands of the op have the same number of values.
>  // A variadic operand contains an array of values whose array size is only
> @@ -1980,9 +1980,9 @@ class Op<Dialect dialect, string mnemonic,
> list<OpTrait> props = []> {
>    code verifier = ?;
>
>    // Whether this op has associated canonicalization patterns.
> -  // TODO(b/120163349): figure out a better way to write canonicalization
> -  // patterns in TableGen rules directly instead of using this marker
> -  // and C++ implementations.
> +  // TODO: figure out a better way to write canonicalization patterns in
> +  // TableGen rules directly instead of using this marker and C++
> +  // implementations.
>    bit hasCanonicalizer = 0;
>
>    // Whether this op has a folder.
> @@ -2024,7 +2024,7 @@ def HasNoUseOf: Constraint<
>
>  // These traits often require including "mlir/IR/TypeUtilities.h".
>
> -// TODO(b/135033717): Improve the autogenerated error messages.
> +// TODO: Improve the autogenerated error messages.
>
>  class Rank<string name> :
>      StrFunc<"$" # name #
> ".getType().cast<::mlir::ShapedType>().getRank()">;
> @@ -2045,7 +2045,7 @@ class AllMatchPred<list<string> values> :
>  class AllMatch<list<string> values, string description> :
>      PredOpTrait<description, AllMatchPred<values>>;
>
> -// TODO(b/135032064): Only works for non-variadic.
> +// TODO: Only works for non-variadic.
>  class AllMatchSameOperatorPred<list<string> names, string operator> :
>      AllMatchPred<!foreach(n, names, !subst("$_self", "$" # n, operator))>;
>
>
> diff --git a/mlir/include/mlir/IR/StandardTypes.h b/mlir/include/mlir/IR/StandardTypes.h
> index 85ac33bcf1ff..5380668b5901 100644
> --- a/mlir/include/mlir/IR/StandardTypes.h
> +++ b/mlir/include/mlir/IR/StandardTypes.h
> @@ -250,7 +250,7 @@ class ShapedType : public Type {
>    using ImplType = detail::ShapedTypeStorage;
>    using Type::Type;
>
> -  // TODO(ntv): merge these two special values in a single one used
> everywhere.
> +  // TODO: merge these two special values in a single one used everywhere.
>    // Unfortunately, uses of `-1` have crept deep into the codebase now
> and are
>    // hard to track.
>    static constexpr int64_t kDynamicSize = -1;
> @@ -561,7 +561,7 @@ class MemRefType : public Type::TypeBase<MemRefType,
> BaseMemRefType,
>    /// Returns the memory space in which data referred to by this memref
> resides.
>    unsigned getMemorySpace() const;
>
> -  // TODO(ntv): merge these two special values in a single one used
> everywhere.
> +  // TODO: merge these two special values in a single one used everywhere.
>    // Unfortunately, uses of `-1` have crept deep into the codebase now
> and are
>    // hard to track.
>    static constexpr int64_t kDynamicSize = -1;
>
> diff --git a/mlir/include/mlir/Parser.h b/mlir/include/mlir/Parser.h
> index 40bd4f83d345..8bba2ef4d53d 100644
> --- a/mlir/include/mlir/Parser.h
> +++ b/mlir/include/mlir/Parser.h
> @@ -58,7 +58,7 @@ OwningModuleRef parseSourceString(llvm::StringRef
> moduleStr,
>  /// constructed from a new SourceMgr with a single a MemoryBuffer wrapping
>  /// `attrStr`. If the passed `attrStr` has additional tokens that were
> not part
>  /// of the type, an error is emitted.
> -// TODO(ntv) Improve diagnostic reporting.
> +// TODO: Improve diagnostic reporting.
>  Attribute parseAttribute(llvm::StringRef attrStr, MLIRContext *context);
>  Attribute parseAttribute(llvm::StringRef attrStr, Type type);
>
> @@ -76,7 +76,7 @@ Attribute parseAttribute(llvm::StringRef attrStr, Type
> type, size_t &numRead);
>  /// constructed from a new SourceMgr with a single a MemoryBuffer wrapping
>  /// `typeStr`. If the passed `typeStr` has additional tokens that were
> not part
>  /// of the type, an error is emitted.
> -// TODO(ntv) Improve diagnostic reporting.
> +// TODO: Improve diagnostic reporting.
>  Type parseType(llvm::StringRef typeStr, MLIRContext *context);
>
>  /// This parses a single MLIR type to an MLIR context if it was valid.
> If not,
>
> diff --git a/mlir/include/mlir/Pass/PassOptions.h b/mlir/include/mlir/Pass/PassOptions.h
> index de4795a385f0..04754196e67a 100644
> --- a/mlir/include/mlir/Pass/PassOptions.h
> +++ b/mlir/include/mlir/Pass/PassOptions.h
> @@ -96,8 +96,8 @@ class PassOptions : protected llvm::cl::SubCommand {
>    /// The specific parser to use depending on llvm::cl parser used. This
> is only
>    /// necessary because we need to provide additional methods for certain
> data
>    /// type parsers.
> -  /// TODO(riverriddle) We should upstream the methods in
> GenericOptionParser to
> -  /// avoid the need to do this.
> +  /// TODO: We should upstream the methods in GenericOptionParser to
> avoid the
> +  /// need to do this.
>    template <typename DataType>
>    using OptionParser =
>        std::conditional_t<std::is_base_of<llvm::cl::generic_parser_base,
>
> diff --git a/mlir/include/mlir/TableGen/Operator.h b/mlir/include/mlir/TableGen/Operator.h
> index 8f567a7615af..29d4caa32467 100644
> --- a/mlir/include/mlir/TableGen/Operator.h
> +++ b/mlir/include/mlir/TableGen/Operator.h
> @@ -219,9 +219,9 @@ class Operator {
>    StringRef getExtraClassDeclaration() const;
>
>    // Returns the Tablegen definition this operator was constructed from.
> -  // TODO(antiagainst,zinenko): do not expose the TableGen record, this
> is a
> -  // temporary solution to OpEmitter requiring a Record because Operator
> does
> -  // not provide enough methods.
> +  // TODO: do not expose the TableGen record, this is a temporary
> solution to
> +  // OpEmitter requiring a Record because Operator does not provide enough
> +  // methods.
>    const llvm::Record &getDef() const;
>
>    // Returns the dialect of the op.
>
> diff --git a/mlir/include/mlir/TableGen/Pattern.h b/mlir/include/mlir/TableGen/Pattern.h
> index 94b9cde9332a..a5759e358f69 100644
> --- a/mlir/include/mlir/TableGen/Pattern.h
> +++ b/mlir/include/mlir/TableGen/Pattern.h
> @@ -407,8 +407,8 @@ class Pattern {
>    const llvm::Record &def;
>
>    // All operators.
> -  // TODO(antiagainst): we need a proper context manager, like
> MLIRContext,
> -  // for managing the lifetime of shared entities.
> +  // TODO: we need a proper context manager, like MLIRContext, for
> managing the
> +  // lifetime of shared entities.
>    RecordOperatorMap *recordOpMap;
>  };
>
>
> diff --git a/mlir/include/mlir/Transforms/DialectConversion.h b/mlir/include/mlir/Transforms/DialectConversion.h
> index d862823930c5..26b7ce6ea6c3 100644
> --- a/mlir/include/mlir/Transforms/DialectConversion.h
> +++ b/mlir/include/mlir/Transforms/DialectConversion.h
> @@ -355,8 +355,7 @@ struct OpConversionPattern : public ConversionPattern {
>      return matchAndRewrite(cast<SourceOp>(op), operands, rewriter);
>    }
>
> -  // TODO(b/142763075): Use OperandAdaptor when it supports access to unnamed
> -  // operands.
> +  // TODO: Use OperandAdaptor when it supports access to unnamed operands.
>
>    /// Rewrite and Match methods that operate on the SourceOp type. These must be
>    /// overridden by the derived pattern class.
>
> diff --git a/mlir/include/mlir/Transforms/LoopFusionUtils.h b/mlir/include/mlir/Transforms/LoopFusionUtils.h
> index 2832f050fc55..36d2520b7c85 100644
> --- a/mlir/include/mlir/Transforms/LoopFusionUtils.h
> +++ b/mlir/include/mlir/Transforms/LoopFusionUtils.h
> @@ -24,9 +24,8 @@ class AffineForOp;
>  struct ComputationSliceState;
>  class Operation;
>
> -// TODO(andydavis) Extend this module to include utility functions for querying
> -// fusion cost/storage reduction, and for performing the loop fusion
> -// transformation.
> +// TODO: Extend this module to include utility functions for querying fusion
> +// cost/storage reduction, and for performing the loop fusion transformation.
>
>  struct FusionResult {
>    enum ResultEnum {
> @@ -46,7 +45,7 @@ struct FusionResult {
>  /// returns a FusionResult explaining why fusion is not feasible.
>  /// NOTE: This function is not feature complete and should only be used in
>  /// testing.
> -/// TODO(andydavis) Update comments when this function is fully implemented.
> +/// TODO: Update comments when this function is fully implemented.
>  FusionResult canFuseLoops(AffineForOp srcForOp, AffineForOp dstForOp,
>                            unsigned dstLoopDepth,
>                            ComputationSliceState *srcSlice);
> @@ -71,14 +70,14 @@ struct LoopNestStats {
>  /// Collect loop nest statistics (eg. loop trip count and operation count)
>  /// in 'stats' for loop nest rooted at 'forOp'. Returns true on success,
>  /// returns false otherwise.
> -// TODO(andydavis) Consider moving this to LoopUtils.
> +// TODO: Consider moving this to LoopUtils.
>  bool getLoopNestStats(AffineForOp forOp, LoopNestStats *stats);
>
>  /// Computes the total cost of the loop nest rooted at 'forOp' using 'stats'.
>  /// Currently, the total cost is computed by counting the total operation
>  /// instance count (i.e. total number of operations in the loop body * loop
>  /// trip count) for the entire loop nest.
> -// TODO(andydavis) Improve this cost model.
> +// TODO: Improve this cost model.
>  int64_t getComputeCost(AffineForOp forOp, LoopNestStats &stats);
>
>  /// Computes and returns in 'computeCost', the total compute cost of fusing the
> @@ -87,7 +86,7 @@ int64_t getComputeCost(AffineForOp forOp, LoopNestStats &stats);
>  /// (i.e. total number of operations in the loop body * loop trip count) for
>  /// the entire loop nest.
>  /// Returns true on success, failure otherwise (e.g. non-constant trip counts).
> -// TODO(andydavis) Improve this cost model.
> +// TODO: Improve this cost model.
>  bool getFusionComputeCost(AffineForOp srcForOp, LoopNestStats &srcStats,
>                            AffineForOp dstForOp, LoopNestStats &dstStats,
>                            ComputationSliceState *slice, int64_t *computeCost);
>
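The cost model in getComputeCost and getFusionComputeCost deserves a small
worked example: each loop multiplies the cost of its body by its trip count,
so 10 body ops under trip counts 100 (inner) and 50 (outer) cost
10 * 100 * 50 = 50,000 operation instances. A toy recursion over a
hypothetical stats type (not the LoopNestStats layout itself):

  #include <cstdint>
  #include <vector>

  struct LoopStats {
    uint64_t tripCount;              // constant trip count of this loop
    uint64_t bodyOpCount;            // non-loop ops directly in the body
    std::vector<LoopStats> children; // nested loops
  };

  // Total operation instances: (own ops + nested cost) * trip count.
  uint64_t computeCost(const LoopStats &loop) {
    uint64_t body = loop.bodyOpCount;
    for (const LoopStats &child : loop.children)
      body += computeCost(child);
    return body * loop.tripCount;
  }

The TODOs above presumably aim at replacing this pure instance count with
something that also models memory traffic.
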
> diff --git a/mlir/include/mlir/Transforms/Utils.h b/mlir/include/mlir/Transforms/Utils.h
> index acd5fd9efe41..6f29c1b41ae6 100644
> --- a/mlir/include/mlir/Transforms/Utils.h
> +++ b/mlir/include/mlir/Transforms/Utils.h
> @@ -56,7 +56,7 @@ class OpBuilder;
>  //  d1, d2) -> (d0 - d1, d2), and %ii will be the extra operand. Without any
>  //  extra operands, note that 'indexRemap' would just be applied to existing
>  //  indices (%i, %j).
> -//  TODO(bondhugula): allow extraIndices to be added at any position.
> +//  TODO: allow extraIndices to be added at any position.
>  LogicalResult replaceAllMemRefUsesWith(Value oldMemRef, Value newMemRef,
>                                         ArrayRef<Value> extraIndices = {},
>                                         AffineMap indexRemap = AffineMap(),
>
> diff --git a/mlir/lib/Analysis/AffineAnalysis.cpp b/mlir/lib/Analysis/AffineAnalysis.cpp
> index 044c85d4f685..3bacedae80ff 100644
> --- a/mlir/lib/Analysis/AffineAnalysis.cpp
> +++ b/mlir/lib/Analysis/AffineAnalysis.cpp
> @@ -33,8 +33,8 @@ using llvm::dbgs;
>  /// Returns the sequence of AffineApplyOp Operations in
>  /// 'affineApplyOps', which are reachable via a search starting from 'operands',
>  /// and ending at operands which are not defined by AffineApplyOps.
> -// TODO(andydavis) Add a method to AffineApplyOp which forward substitutes
> -// the AffineApplyOp into any user AffineApplyOps.
> +// TODO: Add a method to AffineApplyOp which forward substitutes the
> +// AffineApplyOp into any user AffineApplyOps.
>  void mlir::getReachableAffineApplyOps(
>      ArrayRef<Value> operands, SmallVectorImpl<Operation *> &affineApplyOps) {
>    struct State {
> @@ -81,10 +81,10 @@ void mlir::getReachableAffineApplyOps(
>  // the loop IVs of the forOps appearing in that order. Any symbols found in
>  // the bound operands are added as symbols in the system. Returns failure for
>  // the yet unimplemented cases.
> -// TODO(andydavis,bondhugula) Handle non-unit steps through local variables or
> -// stride information in FlatAffineConstraints. (For eg., by using iv - lb %
> -// step = 0 and/or by introducing a method in FlatAffineConstraints
> -// setExprStride(ArrayRef<int64_t> expr, int64_t stride)
> +// TODO: Handle non-unit steps through local variables or stride information in
> +// FlatAffineConstraints. (For eg., by using iv - lb % step = 0 and/or by
> +// introducing a method in FlatAffineConstraints setExprStride(ArrayRef<int64_t>
> +// expr, int64_t stride)
>  LogicalResult mlir::getIndexSet(MutableArrayRef<AffineForOp> forOps,
>                                  FlatAffineConstraints *domain) {
>    SmallVector<Value, 4> indices;
> @@ -104,10 +104,10 @@ LogicalResult mlir::getIndexSet(MutableArrayRef<AffineForOp> forOps,
>  // potentially involving any Function symbols. The dimensional identifiers in
>  // 'indexSet' correspond to the loops surrounding 'op' from outermost to
>  // innermost.
> -// TODO(andydavis) Add support to handle IfInsts surrounding 'op'.
> +// TODO: Add support to handle IfInsts surrounding 'op'.
>  static LogicalResult getInstIndexSet(Operation *op,
>                                       FlatAffineConstraints *indexSet) {
> -  // TODO(andydavis) Extend this to gather enclosing IfInsts and consider
> +  // TODO: Extend this to gather enclosing IfInsts and consider
>    // factoring it out into a utility function.
>    SmallVector<AffineForOp, 4> loops;
>    getLoopIVs(*op, &loops);
> @@ -130,9 +130,9 @@ namespace {
>  // a Value in multiple maps are provided (i.e. getSrcDimOrSymPos) to handle
>  // the common case of resolving positions for all access function operands.
>  //
> -// TODO(andydavis) Generalize this: could take a template parameter for
> -// the number of maps (3 in the current case), and lookups could take indices
> -// of maps to check. So getSrcDimOrSymPos would be "getPos(value, {0, 2})".
> +// TODO: Generalize this: could take a template parameter for the number of maps
> +// (3 in the current case), and lookups could take indices of maps to check. So
> +// getSrcDimOrSymPos would be "getPos(value, {0, 2})".
>  class ValuePositionMap {
>  public:
>    void addSrcValue(Value value) {
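
The "getPos(value, {0, 2})" generalization proposed in that TODO is easy to
picture: keep the maps in an indexed array and let the caller say which maps
to consult. A rough, self-contained sketch of the idea (hypothetical helper,
not the proposed patch):

  #include <cassert>
  #include <initializer_list>
  #include <unordered_map>

  template <typename Value, unsigned NumMaps>
  class IndexedPositionMaps {
    std::unordered_map<Value, unsigned> maps[NumMaps];

  public:
    void add(unsigned mapIdx, Value v, unsigned pos) { maps[mapIdx][v] = pos; }

    // getPos(v, {0, 2}): first hit wins across the listed maps.
    unsigned getPos(Value v, std::initializer_list<unsigned> mapIndices) const {
      for (unsigned idx : mapIndices) {
        auto it = maps[idx].find(v);
        if (it != maps[idx].end())
          return it->second;
      }
      assert(false && "value not present in any of the requested maps");
      return ~0u;
    }
  };
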
> @@ -758,7 +758,7 @@ void MemRefAccess::getAccessMap(AffineValueMap *accessMap) const {
>  //       0         0         0        -1        0     0     0     50   >= 0
>  //
>  //
> -// TODO(andydavis) Support AffineExprs mod/floordiv/ceildiv.
> +// TODO: Support AffineExprs mod/floordiv/ceildiv.
>  DependenceResult mlir::checkMemrefAccessDependence(
>      const MemRefAccess &srcAccess, const MemRefAccess &dstAccess,
>      unsigned loopDepth, FlatAffineConstraints *dependenceConstraints,
> @@ -874,8 +874,8 @@ void mlir::getDependenceComponents(
>
>          FlatAffineConstraints dependenceConstraints;
>          SmallVector<DependenceComponent, 2> depComps;
> -        // TODO(andydavis,bondhugula) Explore whether it would be profitable
> -        // to pre-compute and store deps instead of repeatedly checking.
> +        // TODO: Explore whether it would be profitable to pre-compute and store
> +        // deps instead of repeatedly checking.
>          DependenceResult result = checkMemrefAccessDependence(
>              srcAccess, dstAccess, d, &dependenceConstraints, &depComps);
>          if (hasDependence(result))
>
> diff --git a/mlir/lib/Analysis/AffineStructures.cpp b/mlir/lib/Analysis/AffineStructures.cpp
> index f297f6b11d63..c09a9af45712 100644
> --- a/mlir/lib/Analysis/AffineStructures.cpp
> +++ b/mlir/lib/Analysis/AffineStructures.cpp
> @@ -894,7 +894,7 @@ void FlatAffineConstraints::removeIdRange(unsigned idStart, unsigned idLimit) {
>    // We are going to be removing one or more identifiers from the range.
>    assert(idStart < numIds && "invalid idStart position");
>
> -  // TODO(andydavis) Make 'removeIdRange' a lambda called from here.
> +  // TODO: Make 'removeIdRange' a lambda called from here.
>    // Remove eliminated identifiers from equalities.
>    shiftColumnsToLeft(this, idStart, idLimit, /*isEq=*/true);
>
> @@ -1173,8 +1173,8 @@ static bool detectAsMod(const FlatAffineConstraints &cst, unsigned pos,
>        if (c == pos)
>          continue;
>        // The coefficient of the quotient should be +/-divisor.
> -      // TODO(bondhugula): could be extended to detect an affine function for
> -      // the quotient (i.e., the coeff could be a non-zero multiple of divisor).
> +      // TODO: could be extended to detect an affine function for the quotient
> +      // (i.e., the coeff could be a non-zero multiple of divisor).
>        int64_t v = cst.atEq(r, c) * cst.atEq(r, pos);
>        if (v == divisor || v == -divisor) {
>          seenQuotient++;
> @@ -1182,8 +1182,8 @@ static bool detectAsMod(const FlatAffineConstraints &cst, unsigned pos,
>          quotientSign = v > 0 ? 1 : -1;
>        }
>        // The coefficient of the dividend should be +/-1.
> -      // TODO(bondhugula): could be extended to detect an affine function of
> -      // the other identifiers as the dividend.
> +      // TODO: could be extended to detect an affine function of the other
> +      // identifiers as the dividend.
>        else if (v == -1 || v == 1) {
>          seenDividend++;
>          dividendPos = c;
> @@ -1342,8 +1342,8 @@ static bool detectAsFloorDiv(const FlatAffineConstraints &cst, unsigned pos,
>          }
>          // Expression can't be constructed as it depends on a yet unknown
>          // identifier.
> -        // TODO(mlir-team): Visit/compute the identifiers in an order so that
> -        // this doesn't happen. More complex but much more efficient.
> +        // TODO: Visit/compute the identifiers in an order so that this doesn't
> +        // happen. More complex but much more efficient.
>          if (c < f)
>            continue;
>          // Successfully detected the floordiv.
> @@ -1619,9 +1619,9 @@ void FlatAffineConstraints::getSliceBounds(unsigned offset, unsigned num,
>        lbMap = AffineMap::get(numMapDims, numMapSymbols, expr);
>        ubMap = AffineMap::get(numMapDims, numMapSymbols, expr + 1);
>      } else {
> -      // TODO(bondhugula): Whenever there are local identifiers in the
> -      // dependence constraints, we'll conservatively over-approximate, since we
> -      // don't always explicitly compute them above (in the while loop).
> +      // TODO: Whenever there are local identifiers in the dependence
> +      // constraints, we'll conservatively over-approximate, since we don't
> +      // always explicitly compute them above (in the while loop).
>        if (getNumLocalIds() == 0) {
>          // Work on a copy so that we don't update this constraint system.
>          if (!tmpClone) {
> @@ -1636,7 +1636,7 @@ void FlatAffineConstraints::getSliceBounds(unsigned offset, unsigned num,
>
>        // If the above fails, we'll just use the constant lower bound and the
>        // constant upper bound (if they exist) as the slice bounds.
> -      // TODO(b/126426796): being conservative for the moment in cases that
> +      // TODO: being conservative for the moment in cases that
>        // lead to multiple bounds - until getConstDifference in LoopFusion.cpp is
>        // fixed (b/126426796).
>        if (!lbMap || lbMap.getNumResults() > 1) {
> @@ -2356,8 +2356,8 @@ void FlatAffineConstraints::removeTrivialRedundancy() {
>    }
>    inequalities.resize(numReservedCols * pos);
>
> -  // TODO(bondhugula): consider doing this for equalities as well, but probably
> -  // not worth the savings.
> +  // TODO: consider doing this for equalities as well, but probably not worth
> +  // the savings.
>  }
>
>  void FlatAffineConstraints::clearAndCopyFrom(
> @@ -2434,8 +2434,8 @@ getNewNumDimsSymbols(unsigned pos, const FlatAffineConstraints &cst) {
>  /// holes/splinters:                         j = 2
>  ///
>  /// darkShadow = false, isResultIntegerExact = nullptr are default values.
> -// TODO(bondhugula): a slight modification to yield dark shadow version of FM
> -// (tightened), which can prove the existence of a solution if there is one.
> +// TODO: a slight modification to yield dark shadow version of FM (tightened),
> +// which can prove the existence of a solution if there is one.
>  void FlatAffineConstraints::FourierMotzkinEliminate(
>      unsigned pos, bool darkShadow, bool *isResultIntegerExact) {
>    LLVM_DEBUG(llvm::dbgs() << "FM input (eliminate pos " << pos << "):\n");
> @@ -2467,7 +2467,7 @@ void FlatAffineConstraints::FourierMotzkinEliminate(
>    }
>    if (r == getNumInequalities()) {
>      // If it doesn't appear, just remove the column and return.
> -    // TODO(andydavis,bondhugula): refactor removeColumns to use it from here.
> +    // TODO: refactor removeColumns to use it from here.
>      removeId(pos);
>      LLVM_DEBUG(llvm::dbgs() << "FM output:\n");
>      LLVM_DEBUG(dump());
> @@ -2538,7 +2538,7 @@ void FlatAffineConstraints::FourierMotzkinEliminate(
>        // coefficient in the canonical form as the view taken here is that of the
>        // term being moved to the other side of '>='.
>        int64_t ubCoeff = -atIneq(ubPos, pos);
> -      // TODO(bondhugula): refactor this loop to avoid all branches inside.
> +      // TODO: refactor this loop to avoid all branches inside.
>        for (unsigned l = 0, e = getNumCols(); l < e; l++) {
>          if (l == pos)
>            continue;
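
Since Fourier-Motzkin comes up repeatedly in these hunks, a compact reminder
of the core step may help: to eliminate x_pos, every inequality giving a lower
bound on x_pos is paired with every one giving an upper bound, which is what
makes the output quadratic in the worst case and why the dark-shadow variant
in the TODO is attractive for proving integer feasibility. A textbook sketch
over integer-coefficient inequalities c0*x0 + ... + c(n-1)*x(n-1) + c(n) >= 0
(illustrative only, not the FlatAffineConstraints code):

  #include <cstdint>
  #include <vector>

  using Ineq = std::vector<int64_t>; // trailing entry is the constant term

  std::vector<Ineq> eliminate(const std::vector<Ineq> &system, unsigned pos) {
    std::vector<Ineq> lowers, uppers, result;
    for (const Ineq &row : system) {
      if (row[pos] > 0)
        lowers.push_back(row); // row bounds x_pos from below
      else if (row[pos] < 0)
        uppers.push_back(row); // row bounds x_pos from above
      else
        result.push_back(row); // x_pos absent; carried over unchanged
    }
    // Each lower/upper pair combines into an inequality with a zero
    // coefficient at `pos`, projecting x_pos out.
    for (const Ineq &lb : lowers)
      for (const Ineq &ub : uppers) {
        Ineq combined(lb.size());
        for (size_t i = 0; i < lb.size(); ++i)
          combined[i] = lb[pos] * ub[i] - ub[pos] * lb[i];
        result.push_back(combined);
      }
    return result;
  }
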
> @@ -2742,14 +2742,14 @@ FlatAffineConstraints::unionBoundingBox(const FlatAffineConstraints &otherCst) {
>    for (unsigned d = 0, e = getNumDimIds(); d < e; ++d) {
>      auto extent = getConstantBoundOnDimSize(d, &lb, &lbFloorDivisor, &ub);
>      if (!extent.hasValue())
> -      // TODO(bondhugula): symbolic extents when necessary.
> -      // TODO(bondhugula): handle union if a dimension is unbounded.
> +      // TODO: symbolic extents when necessary.
> +      // TODO: handle union if a dimension is unbounded.
>        return failure();
>
>      auto otherExtent = otherAligned.getConstantBoundOnDimSize(
>          d, &otherLb, &otherLbFloorDivisor, &otherUb);
>      if (!otherExtent.hasValue() || lbFloorDivisor != otherLbFloorDivisor)
> -      // TODO(bondhugula): symbolic extents when necessary.
> +      // TODO: symbolic extents when necessary.
>        return failure();
>
>      assert(lbFloorDivisor > 0 && "divisor always expected to be positive");
> @@ -2819,9 +2819,9 @@ FlatAffineConstraints::unionBoundingBox(const FlatAffineConstraints &otherCst) {
>    append(commonCst);
>    removeTrivialRedundancy();
>
> -  // TODO(mlir-team): copy over pure symbolic constraints from this and 'other'
> -  // over to the union (since the above are just the union along dimensions); we
> -  // shouldn't be discarding any other constraints on the symbols.
> +  // TODO: copy over pure symbolic constraints from this and 'other' over to the
> +  // union (since the above are just the union along dimensions); we shouldn't
> +  // be discarding any other constraints on the symbols.
>
>    return success();
>  }
>
> diff --git a/mlir/lib/Analysis/LoopAnalysis.cpp b/mlir/lib/Analysis/LoopAnalysis.cpp
> index 8975a0796356..210b68eae3cb 100644
> --- a/mlir/lib/Analysis/LoopAnalysis.cpp
> +++ b/mlir/lib/Analysis/LoopAnalysis.cpp
> @@ -213,7 +213,7 @@ DenseSet<Value> mlir::getInvariantAccesses(Value iv, ArrayRef<Value> indices) {
>  /// Returns false if the MemRef has a non-identity layoutMap or more than 1
>  /// layoutMap. This is conservative.
>  ///
> -// TODO(ntv): check strides.
> +// TODO: check strides.
>  template <typename LoadOrStoreOp>
>  static bool isContiguousAccess(Value iv, LoadOrStoreOp memoryOp,
>                                 int *memRefDim) {
> @@ -224,8 +224,7 @@ static bool isContiguousAccess(Value iv, LoadOrStoreOp memoryOp,
>    auto memRefType = memoryOp.getMemRefType();
>
>    auto layoutMap = memRefType.getAffineMaps();
> -  // TODO(ntv): remove dependence on Builder once we support non-identity
> -  // layout map.
> +  // TODO: remove dependence on Builder once we support non-identity layout map.
>    Builder b(memoryOp.getContext());
>    if (layoutMap.size() >= 2 ||
>        (layoutMap.size() == 1 &&
> @@ -314,7 +313,7 @@ isVectorizableLoopBodyWithOpCond(AffineForOp loop,
>      auto store = dyn_cast<AffineStoreOp>(op);
>      // Only scalar types are considered vectorizable, all load/store must be
>      // vectorizable for a loop to qualify as vectorizable.
> -    // TODO(ntv): ponder whether we want to be more general here.
> +    // TODO: ponder whether we want to be more general here.
>      bool vector = load ? isVectorElement(load) : isVectorElement(store);
>      if (vector) {
>        return false;
> @@ -345,8 +344,8 @@ bool mlir::isVectorizableLoopBody(AffineForOp loop,
>  /// Checks whether SSA dominance would be violated if a for op's body
>  /// operations are shifted by the specified shifts. This method checks if a
>  /// 'def' and all its uses have the same shift factor.
> -// TODO(mlir-team): extend this to check for memory-based dependence violation
> -// when we have the support.
> +// TODO: extend this to check for memory-based dependence violation when we have
> +// the support.
>  bool mlir::isOpwiseShiftValid(AffineForOp forOp, ArrayRef<uint64_t> shifts) {
>    auto *forBody = forOp.getBody();
>    assert(shifts.size() == forBody->getOperations().size());
>
> diff --git a/mlir/lib/Analysis/NestedMatcher.cpp b/mlir/lib/Analysis/NestedMatcher.cpp
> index e745ac7190e5..7e15ea1094c9 100644
> --- a/mlir/lib/Analysis/NestedMatcher.cpp
> +++ b/mlir/lib/Analysis/NestedMatcher.cpp
> @@ -68,8 +68,8 @@ unsigned NestedPattern::getDepth() const {
>  ///   3. if all is good, recursively matches the nested patterns;
>  ///   4. if all nested match then the single operation matches too and is
>  ///      appended to the list of matches;
> -///   5. TODO(ntv) Optionally applies actions (lambda), in which case we will
> -///      want to traverse in post-order DFS to avoid invalidating iterators.
> +///   5. TODO: Optionally applies actions (lambda), in which case we will want
> +///      to traverse in post-order DFS to avoid invalidating iterators.
>  void NestedPattern::matchOne(Operation *op,
>                               SmallVectorImpl<NestedMatch> *matches) {
>    if (skip == op) {
>
> diff --git a/mlir/lib/Analysis/Utils.cpp b/mlir/lib/Analysis/Utils.cpp
> index ab273f8d95d5..861976567d56 100644
> --- a/mlir/lib/Analysis/Utils.cpp
> +++ b/mlir/lib/Analysis/Utils.cpp
> @@ -191,7 +191,7 @@ LogicalResult MemRefRegion::unionBoundingBox(const MemRefRegion &other) {
>  // region:  {memref = %A, write = false, {%i <= m0 <= %i + 7} }
>  // The last field is a 2-d FlatAffineConstraints symbolic in %i.
>  //
> -// TODO(bondhugula): extend this to any other memref dereferencing ops
> +// TODO: extend this to any other memref dereferencing ops
>  // (dma_start, dma_wait).
>  LogicalResult MemRefRegion::compute(Operation *op, unsigned loopDepth,
>                                      ComputationSliceState *sliceState,
> @@ -258,7 +258,7 @@ LogicalResult MemRefRegion::compute(Operation *op, unsigned loopDepth,
>      if (auto loop = getForInductionVarOwner(operand)) {
>        // Note that cst can now have more dimensions than accessMap if the
>        // bounds expressions involve outer loops or other symbols.
> -      // TODO(bondhugula): rewrite this to use getInstIndexSet; this way
> +      // TODO: rewrite this to use getInstIndexSet; this way
>        // conditionals will be handled when the latter supports it.
>        if (failed(cst.addAffineForOpDomain(loop)))
>          return failure();
> @@ -330,7 +330,7 @@ LogicalResult MemRefRegion::compute(Operation *op, unsigned loopDepth,
>
>    // Add upper/lower bounds for each memref dimension with static size
>    // to guard against potential over-approximation from projection.
> -  // TODO(andydavis) Support dynamic memref dimensions.
> +  // TODO: Support dynamic memref dimensions.
>    if (addMemRefDimBounds) {
>      auto memRefType = memref.getType().cast<MemRefType>();
>      for (unsigned r = 0; r < rank; r++) {
> @@ -390,7 +390,7 @@ Optional<int64_t> MemRefRegion::getRegionSize() {
>  /// Returns the size of memref data in bytes if it's statically shaped, None
>  /// otherwise.  If the element of the memref has vector type, takes into account
>  /// size of the vector as well.
> -//  TODO(mlir-team): improve/complete this when we have target data.
> +//  TODO: improve/complete this when we have target data.
>  Optional<uint64_t> mlir::getMemRefSizeInBytes(MemRefType memRefType) {
>    if (!memRefType.hasStaticShape())
>      return None;
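
For the static case the computation here is elementary: the byte size is the
element size multiplied by the product of all dimensions (a vector element
just multiplies in its own element count). A hedged sketch of that arithmetic,
with the element size taken as a parameter since target data is exactly what
the TODO says is missing:

  #include <cstdint>
  #include <optional>
  #include <vector>

  std::optional<uint64_t>
  memRefSizeInBytes(const std::vector<int64_t> &shape,
                    uint64_t elementSizeInBytes) {
    uint64_t numElements = 1;
    for (int64_t dim : shape) {
      if (dim < 0)           // dynamic dimension (kDynamicSize == -1)
        return std::nullopt; // mirrors the "returns None" contract above
      numElements *= static_cast<uint64_t>(dim);
    }
    return numElements * elementSizeInBytes;
  }
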
> @@ -434,7 +434,7 @@ LogicalResult mlir::boundCheckLoadOrStoreOp(LoadOrStoreOp loadOrStoreOp,
>      // feasible. If it is, there is at least one point out of bounds.
>      SmallVector<int64_t, 4> ineq(rank + 1, 0);
>      int64_t dimSize = loadOrStoreOp.getMemRefType().getDimSize(r);
> -    // TODO(bondhugula): handle dynamic dim sizes.
> +    // TODO: handle dynamic dim sizes.
>      if (dimSize == -1)
>        continue;
>
> @@ -525,7 +525,7 @@ static LogicalResult addMissingLoopIVBounds(SmallPtrSet<Value, 8> &ivs,
>  }
>
>  // Returns the innermost common loop depth for the set of operations in 'ops'.
> -// TODO(andydavis) Move this to LoopUtils.
> +// TODO: Move this to LoopUtils.
>  static unsigned
>  getInnermostCommonLoopDepth(ArrayRef<Operation *> ops,
>                              SmallVectorImpl<AffineForOp> &surroundingLoops) {
> @@ -782,7 +782,7 @@ void mlir::getComputationSliceState(
>    }
>    // Clear all sliced loop bounds beginning at the first sequential loop, or
>    // first loop with a slice fusion barrier attribute.
> -  // TODO(andydavis, bondhugula) Use MemRef read/write regions instead of
> +  // TODO: Use MemRef read/write regions instead of
>    // using 'kSliceFusionBarrierAttrName'.
>    auto getSliceLoop = [&](unsigned i) {
>      return isBackwardSlice ? srcLoopIVs[i] : dstLoopIVs[i];
> @@ -804,10 +804,10 @@ void mlir::getComputationSliceState(
>  /// updates the slice loop bounds with any non-null bound maps specified in
>  /// 'sliceState', and inserts this slice into the loop nest surrounding
>  /// 'dstOpInst' at loop depth 'dstLoopDepth'.
> -// TODO(andydavis,bondhugula): extend the slicing utility to compute slices that
> +// TODO: extend the slicing utility to compute slices that
>  // aren't necessarily a one-to-one relation b/w the source and destination. The
>  // relation between the source and destination could be many-to-many in general.
> -// TODO(andydavis,bondhugula): the slice computation is incorrect in the cases
> +// TODO: the slice computation is incorrect in the cases
>  // where the dependence from the source to the destination does not cover the
>  // entire destination index set. Subtract out the dependent destination
>  // iterations from destination index set and check for emptiness --- this is one
> @@ -832,7 +832,7 @@ mlir::insertBackwardComputationSlice(Operation *srcOpInst, Operation *dstOpInst,
>
>    // Find the op block positions of 'srcOpInst' within 'srcLoopIVs'.
>    SmallVector<unsigned, 4> positions;
> -  // TODO(andydavis): This code is incorrect since srcLoopIVs can be 0-d.
> +  // TODO: This code is incorrect since srcLoopIVs can be 0-d.
>    findInstPosition(srcOpInst, srcLoopIVs[0].getOperation()->getBlock(),
>                     &positions);
>
>
> diff --git a/mlir/lib/Conversion/AVX512ToLLVM/ConvertAVX512ToLLVM.cpp b/mlir/lib/Conversion/AVX512ToLLVM/ConvertAVX512ToLLVM.cpp
> index b65118b72fdf..a8c483430fce 100644
> --- a/mlir/lib/Conversion/AVX512ToLLVM/ConvertAVX512ToLLVM.cpp
> +++ b/mlir/lib/Conversion/AVX512ToLLVM/ConvertAVX512ToLLVM.cpp
> @@ -30,7 +30,7 @@ static Type getSrcVectorElementType(OpTy op) {
>    return op.src().getType().template cast<VectorType>().getElementType();
>  }
>
> -// TODO(ntv, zinenko): Code is currently copy-pasted and adapted from the code
> +// TODO: Code is currently copy-pasted and adapted from the code
>  // of the 1-1 LLVM conversion. It would be better if it were properly exposed in
>  // core and reusable.
>  /// Basic lowering implementation for one-to-one rewriting from AVX512 Ops to
> @@ -76,7 +76,7 @@ matchAndRewriteOneToOne(const ConvertToLLVMPattern &lowering,
>  }
>
>  namespace {
> -// TODO(ntv): Patterns are too verbose due to the fact that we have 1 op (e.g.
> +// TODO: Patterns are too verbose due to the fact that we have 1 op (e.g.
>  // MaskRndScaleOp) and different possible target ops. It would be better to take
>  // a Functor so that all these conversions become 1-liners.
>  struct MaskRndScaleOpPS512Conversion : public ConvertToLLVMPattern {
>
> diff --git a/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp b/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
> index feaa38259e01..afb6d2875866 100644
> --- a/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
> +++ b/mlir/lib/Conversion/GPUToNVVM/LowerGpuOpsToNVVMOps.cpp
> @@ -140,7 +140,7 @@ struct LowerGpuOpsToNVVMOpsPass
>                          LLVM::LogOp, LLVM::Log10Op, LLVM::Log2Op>();
>      target.addIllegalOp<FuncOp>();
>      target.addLegalDialect<NVVM::NVVMDialect>();
> -    // TODO(csigg): Remove once we support replacing non-root ops.
> +    // TODO: Remove once we support replacing non-root ops.
>      target.addLegalOp<gpu::YieldOp, gpu::GPUModuleOp, gpu::ModuleEndOp>();
>      if (failed(applyPartialConversion(m, target, patterns)))
>        signalPassFailure();
>
> diff --git a/mlir/lib/Conversion/GPUToROCDL/LowerGpuOpsToROCDLOps.cpp b/mlir/lib/Conversion/GPUToROCDL/LowerGpuOpsToROCDLOps.cpp
> index 8a1d10f0eb03..697f8078e725 100644
> --- a/mlir/lib/Conversion/GPUToROCDL/LowerGpuOpsToROCDLOps.cpp
> +++ b/mlir/lib/Conversion/GPUToROCDL/LowerGpuOpsToROCDLOps.cpp
> @@ -74,7 +74,7 @@ struct LowerGpuOpsToROCDLOpsPass
>                          LLVM::LogOp, LLVM::Log10Op, LLVM::Log2Op>();
>      target.addIllegalOp<FuncOp>();
>      target.addLegalDialect<ROCDL::ROCDLDialect>();
> -    // TODO(whchung): Remove once we support replacing non-root ops.
> +    // TODO: Remove once we support replacing non-root ops.
>      target.addLegalOp<gpu::YieldOp, gpu::GPUModuleOp, gpu::ModuleEndOp>();
>      if (failed(applyPartialConversion(m, target, patterns)))
>        signalPassFailure();
>
> diff --git a/mlir/lib/Conversion/LinalgToLLVM/LinalgToLLVM.cpp b/mlir/lib/Conversion/LinalgToLLVM/LinalgToLLVM.cpp
> index b92ab13bd513..7b57854dde98 100644
> --- a/mlir/lib/Conversion/LinalgToLLVM/LinalgToLLVM.cpp
> +++ b/mlir/lib/Conversion/LinalgToLLVM/LinalgToLLVM.cpp
> @@ -219,7 +219,7 @@ class SliceOpConversion : public ConvertToLLVMPattern {
>      BaseViewConversionHelper desc(
>          typeConverter.convertType(sliceOp.getShapedType()));
>
> -    // TODO(ntv): extract sizes and emit asserts.
> +    // TODO: extract sizes and emit asserts.
>      SmallVector<Value, 4> strides(memRefType.getRank());
>      for (int i = 0, e = memRefType.getRank(); i < e; ++i)
>        strides[i] = baseDesc.stride(i);
>
> diff --git a/mlir/lib/Conversion/LinalgToSPIRV/LinalgToSPIRV.cpp b/mlir/lib/Conversion/LinalgToSPIRV/LinalgToSPIRV.cpp
> index cf67b96fce1e..0cde4a05ece5 100644
> --- a/mlir/lib/Conversion/LinalgToSPIRV/LinalgToSPIRV.cpp
> +++ b/mlir/lib/Conversion/LinalgToSPIRV/LinalgToSPIRV.cpp
> @@ -92,7 +92,7 @@ SingleWorkgroupReduction::matchAsPerformingReduction(
>    if (genericOp.indexing_maps().getValue().size() != 2)
>      return llvm::None;
>
> -  // TODO(nicolasvasilache): create utility functions for these checks in Linalg
> +  // TODO: create utility functions for these checks in Linalg
>    // and use them.
>    auto inputMap = genericOp.indexing_maps().getValue()[0].cast<AffineMapAttr>();
>    auto outputMap =
> @@ -132,7 +132,7 @@ LogicalResult SingleWorkgroupReduction::matchAndRewrite(
>                     [](const APInt &size) { return !size.isOneValue(); }))
>      return failure();
>
> -  // TODO(antiagainst): Query the target environment to make sure the current
> +  // TODO: Query the target environment to make sure the current
>    // workload fits in a local workgroup.
>
>    Value convertedInput = operands[0], convertedOutput = operands[1];
> @@ -141,7 +141,7 @@ LogicalResult SingleWorkgroupReduction::matchAndRewrite(
>    // Get the invocation ID.
>    Value x = getLocalInvocationDimSize(genericOp, /*dim=*/0, loc, &rewriter);
>
> -  // TODO(antiagainst): Load to Workgroup storage class first.
> +  // TODO: Load to Workgroup storage class first.
>
>    // Get the input element accessed by this invocation.
>    Value inputElementPtr = spirv::getElementPtr(
>
> diff --git a/mlir/lib/Conversion/LinalgToStandard/LinalgToStandard.cpp b/mlir/lib/Conversion/LinalgToStandard/LinalgToStandard.cpp
> index 7f13a7a609e9..75b8466ff7fd 100644
> --- a/mlir/lib/Conversion/LinalgToStandard/LinalgToStandard.cpp
> +++ b/mlir/lib/Conversion/LinalgToStandard/LinalgToStandard.cpp
> @@ -173,7 +173,7 @@ class LinalgOpConversion<IndexedGenericOp>
>      if (!libraryCallName)
>        return failure();
>
> -    // TODO(pifon, ntv): Use induction variable values instead of zeros, when
> +    // TODO: Use induction variable values instead of zeros, when
>      // IndexedGenericOp is tiled.
>      auto zero = rewriter.create<mlir::ConstantOp>(
>          op.getLoc(), rewriter.getIntegerAttr(rewriter.getIndexType(), 0));
> @@ -227,7 +227,7 @@ class CopyTransposeConversion : public OpRewritePattern<CopyOp> {
>  /// Populate the given list with patterns that convert from Linalg to Standard.
>  void mlir::populateLinalgToStandardConversionPatterns(
>      OwningRewritePatternList &patterns, MLIRContext *ctx) {
> -  // TODO(ntv) ConvOp conversion needs to export a descriptor with relevant
> +  // TODO: ConvOp conversion needs to export a descriptor with relevant
>    // attribute values such as kernel striding and dilation.
>    // clang-format off
>    patterns.insert<
>
> diff --git a/mlir/lib/Conversion/SCFToGPU/SCFToGPU.cpp b/mlir/lib/Conversion/SCFToGPU/SCFToGPU.cpp
> index 4acce99ce2dc..0a657e5387b2 100644
> --- a/mlir/lib/Conversion/SCFToGPU/SCFToGPU.cpp
> +++ b/mlir/lib/Conversion/SCFToGPU/SCFToGPU.cpp
> @@ -378,12 +378,12 @@ static LogicalResult processParallelLoop(
>      ParallelOp parallelOp, gpu::LaunchOp launchOp,
>      BlockAndValueMapping &cloningMap, SmallVectorImpl<Operation *> &worklist,
>      DenseMap<gpu::Processor, Value> &bounds, PatternRewriter &rewriter) {
> -  // TODO(herhut): Verify that this is a valid GPU mapping.
> +  // TODO: Verify that this is a valid GPU mapping.
>    // processor ids: 0-2 block [x/y/z], 3-5 -> thread [x/y/z], 6-> sequential
>    ArrayAttr mapping =
>        parallelOp.getAttrOfType<ArrayAttr>(gpu::getMappingAttrName());
>
> -  // TODO(herhut): Support reductions.
> +  // TODO: Support reductions.
>    if (!mapping || parallelOp.getNumResults() != 0)
>      return failure();
>
> @@ -431,7 +431,7 @@ static LogicalResult processParallelLoop(
>            loc, annotation.map().getValue().compose(lowerAndStep),
>            ValueRange{operand, step, lowerBound});
>        // If there was also a bound, insert that, too.
> -      // TODO(herhut): Check that we do not assign bounds twice.
> +      // TODO: Check that we do not assign bounds twice.
>        if (annotation.bound().getValue()) {
>          // We pass as the single operand to the bound-map the number of
>          // iterations, which is (upperBound - lowerBound) ceilDiv step. To
>
> diff --git a/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp b/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
> index ee98bc9166f8..91a4867ad307 100644
> --- a/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
> +++ b/mlir/lib/Conversion/StandardToLLVM/StandardToLLVM.cpp
> @@ -331,7 +331,7 @@ LLVMTypeConverter::convertFunctionTypeCWrapper(FunctionType type) {
>  //   4. a second array containing as many `index`-type integers as the rank of
>  //   the MemRef: the second array represents the "stride" (in tensor abstraction
>  //   sense), i.e. the number of consecutive elements of the underlying buffer.
> -//   TODO(ntv, zinenko): add assertions for the static cases.
> +//   TODO: add assertions for the static cases.
>  //
>  // template <typename Elem, size_t Rank>
>  // struct {
>
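The struct that comment goes on to describe is the standard memref descriptor.
Reconstructed from the enumeration above (the earlier items are elided by the
hunk), it looks roughly like this; the field names follow the commonly
documented layout and may differ cosmetically from the elided comment:

  template <typename Elem, size_t Rank>
  struct MemRefDescriptor {
    Elem *allocatedPtr;    // pointer returned by the allocation
    Elem *alignedPtr;      // pointer that data accesses are based on
    int64_t offset;        // offset from alignedPtr, in elements
    int64_t sizes[Rank];   // one `index`-typed size per dimension
    int64_t strides[Rank]; // consecutive-element counts per dimension
  };

The TODO about asserting the static cases would check that sizes and strides
known at compile time match what lands in these fields at runtime.
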
> diff --git a/mlir/lib/Conversion/StandardToSPIRV/ConvertStandardToSPIRV.cpp b/mlir/lib/Conversion/StandardToSPIRV/ConvertStandardToSPIRV.cpp
> index aac275548891..6bb7a17ae46f 100644
> --- a/mlir/lib/Conversion/StandardToSPIRV/ConvertStandardToSPIRV.cpp
> +++ b/mlir/lib/Conversion/StandardToSPIRV/ConvertStandardToSPIRV.cpp
> @@ -536,7 +536,7 @@ LogicalResult ConstantCompositeOpPattern::matchAndRewrite(
>                                            srcType.getElementType());
>        dstElementsAttr = dstElementsAttr.reshape(dstAttrType);
>      } else {
> -      // TODO(antiagainst): add support for large vectors.
> +      // TODO: add support for large vectors.
>        return failure();
>      }
>    }
>
> diff --git a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
> index 6b43a1e1fdc1..9a66dafc345a 100644
> --- a/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
> +++ b/mlir/lib/Conversion/VectorToLLVM/ConvertVectorToLLVM.cpp
> @@ -110,7 +110,7 @@ static Value extractOne(PatternRewriter &rewriter, Location loc, Value vector,
>  }
>
>  // Helper that returns a subset of `arrayAttr` as a vector of int64_t.
> -// TODO(rriddle): Better support for attribute subtype forwarding + slicing.
> +// TODO: Better support for attribute subtype forwarding + slicing.
>  static SmallVector<int64_t, 4> getI64SubArray(ArrayAttr arrayAttr,
>                                                unsigned dropFront = 0,
>                                                unsigned dropBack = 0) {
> @@ -927,7 +927,7 @@ class VectorTransferConversion : public ConvertToLLVMPattern {
>          loc, toLLVMTy(vectorCmpType), linearIndices);
>
>      // 3. Create offsetVector = [ offset + 0 .. offset + vector_length - 1 ].
> -    // TODO(ntv, ajcbik): when the leaf transfer rank is k > 1 we need the last
> +    // TODO: when the leaf transfer rank is k > 1 we need the last
>      // `k` dimensions here.
>      unsigned lastIndex = llvm::size(xferOp.indices()) - 1;
>      Value offsetIndex = *(xferOp.indices().begin() + lastIndex);
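
Step 3 above is simple to visualize: the splatted scalar offset is added to
the [0, 1, ..., N-1] linear index vector to produce per-lane indices. In plain
scalar C++ terms, a sketch of the value the lowering materializes (not the
LLVM-dialect ops it emits):

  #include <cstdint>
  #include <vector>

  // offsetVector = [ offset + 0, offset + 1, ..., offset + vectorLength - 1 ]
  std::vector<int64_t> makeOffsetVector(int64_t offset, unsigned vectorLength) {
    std::vector<int64_t> offsets(vectorLength);
    for (unsigned i = 0; i < vectorLength; ++i)
      offsets[i] = offset + i; // i plays the role of linearIndices[i]
    return offsets;
  }

The TODO notes that for a leaf transfer of rank k > 1, the last k indices
would have to participate instead of just the innermost one.
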
> @@ -968,7 +968,7 @@ class VectorPrintOpConversion : public ConvertToLLVMPattern {
>    // output of any shaped and dimensioned vector. Due to full unrolling,
>    // this approach is less suited for very large vectors though.
>    //
> -  // TODO(ajcbik): rely solely on libc in future? something else?
> +  // TODO: rely solely on libc in future? something else?
>    //
>    LogicalResult
>    matchAndRewrite(Operation *op, ArrayRef<Value> operands,
>
> diff --git a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
> index c7b4db1d5ce3..24e1d66e3605 100644
> --- a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
> +++ b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
> @@ -86,7 +86,7 @@ class NDTransferOpHelper {
>          scope(std::make_unique<ScopedContext>(rewriter, loc)), xferOp(xferOp),
>          op(xferOp.getOperation()) {
>      vectorType = xferOp.getVectorType();
> -    // TODO(ntv, ajcbik): when we go to k > 1-D vectors adapt minorRank.
> +    // TODO: when we go to k > 1-D vectors adapt minorRank.
>      minorRank = 1;
>      majorRank = vectorType.getRank() - minorRank;
>      leadingRank = xferOp.getMemRefType().getRank() - (majorRank + minorRank);
> @@ -528,8 +528,8 @@ MemRefType VectorTransferRewriter<TransferOpTy>::tmpMemRefType(
>  /// in the presence of data-parallel only operations, we generate code that
>  /// writes the same value multiple times on the edge locations.
>  ///
> -/// TODO(ntv): implement alternatives to clipping.
> -/// TODO(ntv): support non-data-parallel operations.
> +/// TODO: implement alternatives to clipping.
> +/// TODO: support non-data-parallel operations.
>
>  /// Performs the rewrite.
>  template <>
> @@ -603,8 +603,8 @@ LogicalResult VectorTransferRewriter<TransferReadOp>::matchAndRewrite(
>  /// See `Important notes about clipping and full-tiles only abstraction` in the
>  /// description of `readClipped` above.
>  ///
> -/// TODO(ntv): implement alternatives to clipping.
> -/// TODO(ntv): support non-data-parallel operations.
> +/// TODO: implement alternatives to clipping.
> +/// TODO: support non-data-parallel operations.
>  template <>
>  LogicalResult VectorTransferRewriter<TransferWriteOp>::matchAndRewrite(
>      Operation *op, PatternRewriter &rewriter) const {
>
> diff --git a/mlir/lib/Dialect/Affine/IR/AffineOps.cpp b/mlir/lib/Dialect/Affine/IR/AffineOps.cpp
> index b4c1f7aa35a0..3f10e744f419 100644
> --- a/mlir/lib/Dialect/Affine/IR/AffineOps.cpp
> +++ b/mlir/lib/Dialect/Affine/IR/AffineOps.cpp
> @@ -467,7 +467,7 @@ indicesFromAffineApplyOp(ArrayRef<Value> operands) {
>  // of allowing mathematical AffineMap composition.
>  // Returns an AffineMap where symbols that come from an AffineApplyOp have been
>  // rewritten as dims and are ordered after the original dims.
> -// TODO(andydavis,ntv): This promotion makes AffineMap lose track of which
> +// TODO: This promotion makes AffineMap lose track of which
>  // symbols are represented as dims. This loss is static but can still be
>  // recovered dynamically (with `isValidSymbol`). Still this is annoying for the
>  // semi-affine map case. A dynamic canonicalization of all dims that are valid
> @@ -536,7 +536,7 @@ static AffineMap promoteComposedSymbolsAsDims(AffineMap map,
>  /// composition via symbols, which is ambiguous mathematically. This corner case
>  /// is handled by locally rewriting such symbols that come from AffineApplyOp
>  /// into dims and composing through dims.
> -/// TODO(andydavis, ntv): Composition via symbols comes at a significant code
> +/// TODO: Composition via symbols comes at a significant code
>  /// complexity. Alternatively we should investigate whether we want to
>  /// explicitly disallow symbols coming from affine.apply and instead force the
>  /// user to compose symbols beforehand. The annoyances may be small (i.e. 1 or 2
> @@ -647,7 +647,7 @@ AffineApplyNormalizer::AffineApplyNormalizer(AffineMap map,
>    LLVM_DEBUG(auxiliaryMap.print(dbgs() << "\nWith map: "));
>    LLVM_DEBUG(map.compose(auxiliaryMap).print(dbgs() << "\nResult: "));
>
> -  // TODO(andydavis,ntv): Disabling simplification results in major speed gains.
> +  // TODO: Disabling simplification results in major speed gains.
>    // Another option is to cache the results as it is expected a lot of redundant
>    // work is performed in practice.
>    affineMap = simplifyAffineMap(map.compose(auxiliaryMap));
> @@ -928,7 +928,7 @@ static LogicalResult foldMemRefCast(Operation *op) {
>  // AffineDmaStartOp
>  //===----------------------------------------------------------------------===//
>
> -// TODO(b/133776335) Check that map operands are loop IVs or symbols.
> +// TODO: Check that map operands are loop IVs or symbols.
>  void AffineDmaStartOp::build(OpBuilder &builder, OperationState &result,
>                               Value srcMemRef, AffineMap srcMap,
>                               ValueRange srcIndices, Value destMemRef,
> @@ -1098,7 +1098,7 @@ LogicalResult AffineDmaStartOp::fold(ArrayRef<Attribute> cstOperands,
>  // AffineDmaWaitOp
>  //===----------------------------------------------------------------------===//
>
> -// TODO(b/133776335) Check that map operands are loop IVs or symbols.
> +// TODO: Check that map operands are loop IVs or symbols.
>  void AffineDmaWaitOp::build(OpBuilder &builder, OperationState &result,
>                              Value tagMemRef, AffineMap tagMap,
>                              ValueRange tagIndices, Value numElements) {
> @@ -2196,7 +2196,7 @@ static OpFoldResult foldMinMaxOp(T op, ArrayRef<Attribute> operands) {
>                  "expected affine min or max op");
>
>    // Fold the affine map.
> -  // TODO(andydavis, ntv) Fold more cases:
> +  // TODO: Fold more cases:
>    // min(some_affine, some_affine + constant, ...), etc.
>    SmallVector<int64_t, 2> results;
>    auto foldedMap = op.map().partialConstantFold(operands, &results);
>
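For readers tracing foldMinMaxOp: once partialConstantFold reduces every map
result to a constant in `results`, the fold just selects the extreme value.
A sketch of that last step (assuming full folding succeeded; the hunk above
only shows the setup):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  int64_t selectFoldedMinMax(const std::vector<int64_t> &results, bool isMin) {
    // affine.min keeps the smallest folded result, affine.max the largest.
    return isMin ? *std::min_element(results.begin(), results.end())
                 : *std::max_element(results.begin(), results.end());
  }

The TODO's `min(some_affine, some_affine + constant)` case needs symbolic
reasoning instead: both operands fold to the same expression plus an offset,
so the comparison can be decided without knowing the operand values.
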
> diff --git a/mlir/lib/Dialect/Affine/IR/AffineValueMap.cpp b/mlir/lib/Dialect/Affine/IR/AffineValueMap.cpp
> index 792ca379cef4..6f7c2fbc56ff 100644
> --- a/mlir/lib/Dialect/Affine/IR/AffineValueMap.cpp
> +++ b/mlir/lib/Dialect/Affine/IR/AffineValueMap.cpp
> @@ -85,7 +85,7 @@ bool AffineValueMap::isFunctionOf(unsigned idx, Value value) const {
>      return false;
>    }
>    auto expr = const_cast<AffineValueMap *>(this)->getAffineMap().getResult(idx);
> -  // TODO(ntv): this is better implemented on a flattened representation.
> +  // TODO: this is better implemented on a flattened representation.
>    // At least for now it is conservative.
>    return expr.isFunctionOfDim(index);
>  }
>
> diff --git a/mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp b/mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
> index ea66fcb3b090..f438630a6e55 100644
> --- a/mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
> +++ b/mlir/lib/Dialect/Affine/Transforms/AffineDataCopyGeneration.cpp
> @@ -46,7 +46,7 @@ namespace {
>  /// inner levels if necessary to determine at what depth copies need to be
>  /// placed so that the allocated buffers fit within the memory capacity
>  /// provided.
> -// TODO(bondhugula): We currently can't generate copies correctly when stores
> +// TODO: We currently can't generate copies correctly when stores
>  // are strided. Check for strided stores.
>  struct AffineDataCopyGeneration
>      : public AffineDataCopyGenerationBase<AffineDataCopyGeneration> {
> @@ -75,7 +75,7 @@ struct AffineDataCopyGeneration
>  /// Generates copies for memref's living in 'slowMemorySpace' into newly created
>  /// buffers in 'fastMemorySpace', and replaces memory operations to the former
>  /// by the latter. Only load op's handled for now.
> -/// TODO(bondhugula): extend this to store op's.
> +/// TODO: extend this to store op's.
>  std::unique_ptr<OperationPass<FuncOp>> mlir::createAffineDataCopyGenerationPass(
>      unsigned slowMemorySpace, unsigned fastMemorySpace, unsigned tagMemorySpace,
>      int minDmaTransferSize, uint64_t fastMemCapacityBytes) {
> @@ -113,7 +113,7 @@ AffineDataCopyGeneration::runOnBlock(Block *block,
>    // operations excluding AffineForOp's) are always assumed to not exhaust
>    // memory. As a result, this approach is conservative in some cases at the
>    // moment; we do a check later and report an error with location info.
> -  // TODO(bondhugula): An 'affine.if' operation is being treated similar to an
> +  // TODO: An 'affine.if' operation is being treated similar to an
>    // operation. 'affine.if''s could have 'affine.for's in them;
>    // treat them separately.
>
>
> diff --git a/mlir/lib/Dialect/Affine/Transforms/AffineLoopInvariantCodeMotion.cpp b/mlir/lib/Dialect/Affine/Transforms/AffineLoopInvariantCodeMotion.cpp
> index aaa21104e1fd..e76151fde692 100644
> --- a/mlir/lib/Dialect/Affine/Transforms/AffineLoopInvariantCodeMotion.cpp
> +++ b/mlir/lib/Dialect/Affine/Transforms/AffineLoopInvariantCodeMotion.cpp
> @@ -37,8 +37,8 @@ using namespace mlir;
>  namespace {
>
>  /// Loop invariant code motion (LICM) pass.
> -/// TODO(asabne) : The pass is missing zero-trip tests.
> -/// TODO(asabne) : Check for the presence of side effects before hoisting.
> +/// TODO: The pass is missing zero-trip tests.
> +/// TODO: Check for the presence of side effects before hoisting.
>  /// TODO: This code should be removed once the new LICM pass can handle its
>  ///       uses.
>  struct LoopInvariantCodeMotion
> @@ -62,7 +62,7 @@ areAllOpsInTheBlockListInvariant(Region &blockList, Value indVar,
>                                   SmallPtrSetImpl<Operation *> &opsToHoist);
>
>  static bool isMemRefDereferencingOp(Operation &op) {
> -  // TODO(asabne): Support DMA Ops.
> +  // TODO: Support DMA Ops.
>    return isa<AffineLoadOp, AffineStoreOp>(op);
>  }
>
> @@ -81,7 +81,7 @@ bool isOpLoopInvariant(Operation &op, Value indVar,
>      // 'affine.if'.
>      return false;
>    } else if (isa<AffineDmaStartOp, AffineDmaWaitOp>(op)) {
> -    // TODO(asabne): Support DMA ops.
> +    // TODO: Support DMA ops.
>      return false;
>    } else if (!isa<ConstantOp>(op)) {
>      if (isMemRefDereferencingOp(op)) {
>
> diff --git a/mlir/lib/Dialect/Affine/Transforms/LoopTiling.cpp b/mlir/lib/Dialect/Affine/Transforms/LoopTiling.cpp
> index c411a6a548ff..ddb00bdd8f0e 100644
> --- a/mlir/lib/Dialect/Affine/Transforms/LoopTiling.cpp
> +++ b/mlir/lib/Dialect/Affine/Transforms/LoopTiling.cpp
> @@ -159,7 +159,7 @@ constructTiledIndexSetHyperRect(MutableArrayRef<AffineForOp> origLoops,
>
>  /// Tiles the specified band of perfectly nested loops creating tile-space loops
>  /// and intra-tile loops. A band is a contiguous set of loops.
> -//  TODO(bondhugula): handle non hyper-rectangular spaces.
> +//  TODO: handle non hyper-rectangular spaces.
>  LogicalResult
>  mlir::tilePerfectlyNested(MutableArrayRef<AffineForOp> input,
>                            ArrayRef<unsigned> tileSizes,
> @@ -282,7 +282,7 @@ static void adjustToDivisorsOfTripCounts(ArrayRef<AffineForOp> band,
>  // based on a simple model that looks at the memory footprint and determines
>  // tile sizes assuming identity accesses / 1:1 tile size proportional footprint
>  // along each of the dimensions being tiled.
> -// TODO(mlir-team): evolve this model. Tile size determination is a large area
> +// TODO: evolve this model. Tile size determination is a large area
>  // to play with in general.
>  void LoopTiling::getTileSizes(ArrayRef<AffineForOp> band,
>                                SmallVectorImpl<unsigned> *tileSizes) {
> @@ -334,7 +334,7 @@ void LoopTiling::getTileSizes(ArrayRef<AffineForOp> band,
>    }
>
>    // Divide all loops equally in an attempt to reduce footprint.
> -  // TODO(bondhugula): this is approximate. Ideally, obtain reuse factor /
> +  // TODO: this is approximate. Ideally, obtain reuse factor /
>    // profitability along each dimension and weight tile sizes based on that as
>    // one possible approach. Or compute a polynomial in tile sizes and solve for
>    // it.
>
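One way to read that "divide all loops equally" fallback: if the footprint
exceeds capacity by a factor R and d loops are being tiled, each tile size is
shrunk by roughly the d-th root of R so the product shrinks by R. This is my
paraphrase of the comment, not the pass's exact arithmetic:

  #include <cmath>
  #include <cstdint>

  // Per-dimension shrink factor so that footprint / factor^d <= capacity.
  uint64_t equalShrinkFactor(uint64_t footprintBytes, uint64_t capacityBytes,
                             unsigned numTiledLoops) {
    if (footprintBytes <= capacityBytes)
      return 1; // already fits, no shrinking needed
    double excess = static_cast<double>(footprintBytes) /
                    static_cast<double>(capacityBytes);
    return static_cast<uint64_t>(
        std::ceil(std::pow(excess, 1.0 / numTiledLoops)));
  }
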
> diff --git a/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp b/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
> index c47a65683587..1638502508e3 100644
> --- a/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
> +++ b/mlir/lib/Dialect/Affine/Transforms/SuperVectorize.cpp
> @@ -236,7 +236,7 @@ using namespace mlir;
>  ///
>  /// Lowering, unrolling, pipelining:
>  /// ================================
> -/// TODO(ntv): point to the proper places.
> +/// TODO: point to the proper places.
>  ///
>  /// Algorithm:
>  /// ==========
> @@ -248,7 +248,7 @@ using namespace mlir;
>  ///     reduction, vectorizable, ...) as well as b. all contiguous load/store
>  ///     operations along a specified minor dimension (not necessarily the
>  ///     fastest varying) ;
> -///  2. analyzing those patterns for profitability (TODO(ntv): and
> +///  2. analyzing those patterns for profitability (TODO: and
>  ///     interference);
>  ///  3. Then, for each pattern in order:
>  ///    a. applying iterative rewriting of the loop and the load operations in
> @@ -259,7 +259,7 @@ using namespace mlir;
>  ///    c. traversing the use-def chains starting from the roots and iteratively
>  ///       propagating vectorized values. Scalar values that are encountered
>  ///       during this process must come from outside the scope of the current
> -///       pattern (TODO(ntv): enforce this and generalize). Such a scalar value
> +///       pattern (TODO: enforce this and generalize). Such a scalar value
>  ///       is vectorized only if it is a constant (into a vector splat). The
>  ///       non-constant case is not supported for now and results in the pattern
>  ///       failing to vectorize;
> @@ -582,7 +582,7 @@ Vectorize::Vectorize(ArrayRef<int64_t> virtualVectorSize) {
>    vectorSizes = virtualVectorSize;
>  }
>
> -/////// TODO(ntv): Hoist to a VectorizationStrategy.cpp when appropriate.
> +/////// TODO: Hoist to a VectorizationStrategy.cpp when appropriate. /////////
>  namespace {
>
> @@ -614,7 +614,7 @@ static void vectorizeLoopIfProfitable(Operation *loop, unsigned depthInPattern,
>  /// load/store MemRefs, this creates a generic vectorization strategy that works
>  /// for any loop in a hierarchy (outermost, innermost or intermediate).
>  ///
> -/// TODO(ntv): In the future we should additionally increase the power of the
> +/// TODO: In the future we should additionally increase the power of the
>  /// profitability analysis along 3 directions:
>  ///   1. account for loop extents (both static and parametric + annotations);
>  ///   2. account for data layout permutations;
> @@ -636,7 +636,7 @@ static LogicalResult analyzeProfitability(ArrayRef<NestedMatch> matches,
>    return success();
>  }
>
> -///// end TODO(ntv): Hoist to a VectorizationStrategy.cpp when appropriate /////
> +///// end TODO: Hoist to a VectorizationStrategy.cpp when appropriate /////
>
>  namespace {
>
> @@ -741,7 +741,7 @@ static void computeMemoryOpIndices(Operation *op, AffineMap map,
>    }
>  }
>
> -////// TODO(ntv): Hoist to a VectorizationMaterialize.cpp when appropriate. ////
> +////// TODO: Hoist to a VectorizationMaterialize.cpp when appropriate. ////
>
>  /// Handles the vectorization of load and store MLIR operations.
>  ///
> @@ -763,7 +763,7 @@ static LogicalResult vectorizeRootOrTerminal(Value iv,
>    auto memRefType = memoryOp.getMemRef().getType().template cast<MemRefType>();
>
>    auto elementType = memRefType.getElementType();
> -  // TODO(ntv): ponder whether we want to further vectorize a vector value.
> +  // TODO: ponder whether we want to further vectorize a vector value.
>    assert(VectorType::isValidElementType(elementType) &&
>           "Not a valid vector element type");
>    auto vectorType = VectorType::get(state->strategy->vectorSizes, elementType);
> @@ -772,7 +772,7 @@ static LogicalResult vectorizeRootOrTerminal(Value iv,
>    auto *opInst = memoryOp.getOperation();
>    // For now, vector.transfers must be aligned, operate only on indices with an
>    // identity subset of AffineMap and do not change layout.
> -  // TODO(ntv): increase the expressiveness power of vector.transfer operations
> +  // TODO: increase the expressiveness power of vector.transfer operations
>    // as needed by various targets.
>    if (auto load = dyn_cast<AffineLoadOp>(opInst)) {
>      OpBuilder b(opInst);
> @@ -800,7 +800,7 @@ static LogicalResult vectorizeRootOrTerminal(Value iv,
>    }
>    return success();
>  }
> -/// end TODO(ntv): Hoist to a VectorizationMaterialize.cpp when appropriate. ///
> +/// end TODO: Hoist to a VectorizationMaterialize.cpp when appropriate. ///
>
>  /// Coarsens the loops bounds and transforms all remaining load and store
>  /// operations into the appropriate vector.transfer.
> @@ -937,7 +937,7 @@ static Value vectorizeConstant(Operation *op, ConstantOp constant, Type type) {
>  /// Returns an operand that has been vectorized to match `state`'s strategy if
>  /// vectorization is possible with the above logic. Returns nullptr otherwise.
>  ///
> -/// TODO(ntv): handle more complex cases.
> +/// TODO: handle more complex cases.
>  static Value vectorizeOperand(Value operand, Operation *op,
>                                VectorizationState *state) {
>    LLVM_DEBUG(dbgs() << "\n[early-vect]vectorize operand: " << operand);
> @@ -956,7 +956,7 @@ static Value vectorizeOperand(Value operand, Operation *op,
>      LLVM_DEBUG(dbgs() << "-> delayed replacement by: " << res);
>      return res;
>    }
> -  // 2. TODO(ntv): broadcast needed.
> +  // 2. TODO: broadcast needed.
>    if (operand.getType().isa<VectorType>()) {
>      LLVM_DEBUG(dbgs() << "-> non-vectorizable");
>      return nullptr;
> @@ -978,7 +978,7 @@ static Value vectorizeOperand(Value operand, Operation *op,
>  /// particular operation vectorizes. For now we implement the case distinction
>  /// here.
>  /// Returns a vectorized form of an operation or nullptr if vectorization fails.
> -// TODO(ntv): consider adding a trait to Op to describe how it gets vectorized.
> +// TODO: consider adding a trait to Op to describe how it gets vectorized.
>  // Maybe some Ops are not vectorizable or require some tricky logic, we cannot
>  // do one-off logic here; ideally it would be TableGen'd.
>  static Operation *vectorizeOneOperation(Operation *opInst,
> @@ -1044,9 +1044,9 @@ static Operation *vectorizeOneOperation(Operation *opInst,
>    }
>
>    // Create a clone of the op with the proper operands and return types.
> -  // TODO(ntv): The following assumes there is always an op with a fixed
> +  // TODO: The following assumes there is always an op with a fixed
>    // name that works both in scalar mode and vector mode.
> -  // TODO(ntv): Is it worth considering an Operation.clone operation which
> +  // TODO: Is it worth considering an Operation.clone operation which
>    // changes the type so we can promote an Operation with less boilerplate?
>    OpBuilder b(opInst);
>    OperationState newOp(opInst->getLoc(), opInst->getName().getStringRef(),
> @@ -1072,7 +1072,7 @@ static LogicalResult vectorizeNonTerminals(VectorizationState *state) {
>    // Note: we have to exclude terminals because some of their defs may not be
>    // nested under the vectorization pattern (e.g. constants defined in an
>    // encompassing scope).
> -  // TODO(ntv): Use a backward slice for terminals, avoid special casing and
> +  // TODO: Use a backward slice for terminals, avoid special casing and
>    // merge implementations.
>    for (auto *op : state->roots) {
>      getForwardSlice(op, &worklist, [state](Operation *op) {
> @@ -1120,7 +1120,7 @@ static LogicalResult vectorizeRootMatch(NestedMatch m,
>    // pattern matching, from profitability analysis, from application.
>    // As a consequence we must check that each root pattern is still
>    // vectorizable. If a pattern is not vectorizable anymore, we just skip it.
> -  // TODO(ntv): implement a non-greedy profitability analysis that keeps only
> +  // TODO: implement a non-greedy profitability analysis that keeps only
>    // non-intersecting patterns.
>    if (!isVectorizableLoopBody(loop, vectorTransferPattern())) {
>      LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ loop is not vectorizable");
> @@ -1162,7 +1162,7 @@ static LogicalResult vectorizeRootMatch(NestedMatch m,
>
>    // 2. Vectorize operations reached by use-def chains from root except the
>    // terminals (store operations) that need to be post-processed separately.
> -  // TODO(ntv): add more as we expand.
> +  // TODO: add more as we expand.
>    if (failed(vectorizeNonTerminals(&state))) {
>      LLVM_DEBUG(dbgs() << "\n[early-vect]+++++ failed vectorizeNonTerminals");
>      return guard.failure();
> @@ -1172,7 +1172,7 @@ static LogicalResult vectorizeRootMatch(NestedMatch m,
>    // Note: we have to post-process terminals because some of their defs may not
>    // be nested under the vectorization pattern (e.g. constants defined in an
>    // encompassing scope).
> -  // TODO(ntv): Use a backward slice for terminals, avoid special casing and
> +  // TODO: Use a backward slice for terminals, avoid special casing and
>    // merge implementations.
>    for (auto *op : state.terminals) {
>      if (!vectorizeOneOperation(op, &state)) { // nullptr == failure
> @@ -1221,7 +1221,7 @@ void Vectorize::runOnFunction() {
>      // This automatically prunes intersecting matches.
>      for (auto m : matches) {
>        VectorizationStrategy strategy;
> -      // TODO(ntv): depending on profitability, elect to reduce the
> vector size.
> +      // TODO: depending on profitability, elect to reduce the vector
> size.
>        strategy.vectorSizes.assign(vectorSizes.begin(), vectorSizes.end());
>        if (failed(analyzeProfitability(m.getMatchedChildren(), 1,
> patternDepth,
>                                        &strategy))) {
> @@ -1229,10 +1229,10 @@ void Vectorize::runOnFunction() {
>        }
>        vectorizeLoopIfProfitable(m.getMatchedOperation(), 0, patternDepth,
>                                  &strategy);
> -      // TODO(ntv): if pattern does not apply, report it; alter the
> +      // TODO: if pattern does not apply, report it; alter the
>        // cost/benefit.
>        vectorizeRootMatch(m, &strategy);
> -      // TODO(ntv): some diagnostics if failure to vectorize occurs.
> +      // TODO: some diagnostics if failure to vectorize occurs.
>      }
>    }
>    LLVM_DEBUG(dbgs() << "\n");
>
> diff --git a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
> b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
> index 13c5ed835c66..fd0c6245e084 100644
> --- a/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
> +++ b/mlir/lib/Dialect/GPU/IR/GPUDialect.cpp
> @@ -87,7 +87,7 @@ LogicalResult
> GPUDialect::verifyOperationAttribute(Operation *op,
>        return launchOp.emitOpError("kernel function is missing the '")
>               << GPUDialect::getKernelFuncAttrName() << "' attribute";
>
> -    // TODO(ntv,zinenko,herhut): if the kernel function has been
> converted to
> +    // TODO: if the kernel function has been converted to
>      // the LLVM dialect but the caller hasn't (which happens during the
>      // separate compilation), do not check type correspondence as it would
>      // require the verifier to be aware of the LLVM type conversion.
>
> diff --git a/mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp
> b/mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp
> index 1ec0f0cf65af..fcae3114188a 100644
> --- a/mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp
> +++ b/mlir/lib/Dialect/GPU/Transforms/KernelOutlining.cpp
> @@ -156,7 +156,7 @@ static gpu::GPUFuncOp
> outlineKernelFuncImpl(gpu::LaunchOp launchOp,
>      map.map(operand.value(), entryBlock.getArgument(operand.index()));
>
>    // Clone the region of the gpu.launch operation into the gpu.func
> operation.
> -  // TODO(ravishankarm): If cloneInto can be modified such that if a
> mapping for
> +  // TODO: If cloneInto can be modified such that if a mapping for
>    // a block exists, that block will be used to clone operations into (at
> the
>    // end of the block), instead of creating a new block, this would be
> much
>    // cleaner.
>
> diff --git a/mlir/lib/Dialect/GPU/Transforms/ParallelLoopMapper.cpp
> b/mlir/lib/Dialect/GPU/Transforms/ParallelLoopMapper.cpp
> index 85609aa9f052..b42929039a97 100644
> --- a/mlir/lib/Dialect/GPU/Transforms/ParallelLoopMapper.cpp
> +++ b/mlir/lib/Dialect/GPU/Transforms/ParallelLoopMapper.cpp
> @@ -80,7 +80,7 @@ MappingLevel &operator++(MappingLevel &mappingLevel) {
>  /// Computes the hardware id to use for a given mapping level. Will
>  /// assign x,y and z hardware ids for the first 3 dimensions and use
>  /// sequential after.
> -/// TODO(ravishankarm/herhut) : Make this use x for the inner-most loop
> that is
> +/// TODO: Make this use x for the inner-most loop that is
>  /// distributed to map to x, the next innermost to y and the next
> innermost to
>  /// z.
>  static gpu::Processor getHardwareIdForMapping(MappingLevel level,
>
> diff --git a/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
> b/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
> index 6de98ca89ec1..f448427099a4 100644
> --- a/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
> +++ b/mlir/lib/Dialect/LLVMIR/IR/LLVMDialect.cpp
> @@ -978,7 +978,7 @@ static LogicalResult verify(DialectCastOp op) {
>      // * if we allow memrefs to cast from/to bare pointers, some users
> might
>      // alternatively want metadata that only present in the descriptor.
>      //
> -    // TODO(timshen): re-evaluate the memref cast design when it's needed.
> +    // TODO: re-evaluate the memref cast design when it's needed.
>      return op.emitOpError("type must be non-index integer types, float
> types, "
>                            "or vector of mentioned types.");
>    };
>
> diff --git a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
> b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
> index dab441fc26ff..9a694a5e9899 100644
> --- a/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
> +++ b/mlir/lib/Dialect/LLVMIR/IR/NVVMDialect.cpp
> @@ -141,7 +141,7 @@ static LogicalResult verify(MmaOp op) {
>  // NVVMDialect initialization, type parsing, and registration.
>
>  //===----------------------------------------------------------------------===//
>
> -// TODO(herhut): This should be the llvm.nvvm dialect once this is
> supported.
> +// TODO: This should be the llvm.nvvm dialect once this is supported.
>  NVVMDialect::NVVMDialect(MLIRContext *context) : Dialect("nvvm", context)
> {
>    addOperations<
>  #define GET_OP_LIST
>
> diff --git a/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
> b/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
> index aac1727c6be8..f3771dd57719 100644
> --- a/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
> +++ b/mlir/lib/Dialect/LLVMIR/IR/ROCDLDialect.cpp
> @@ -84,7 +84,7 @@ static ParseResult parseROCDLMubufStoreOp(OpAsmParser
> &parser,
>  // ROCDLDialect initialization, type parsing, and registration.
>
>  //===----------------------------------------------------------------------===//
>
> -// TODO(herhut): This should be the llvm.rocdl dialect once this is
> supported.
> +// TODO: This should be the llvm.rocdl dialect once this is supported.
>  ROCDLDialect::ROCDLDialect(MLIRContext *context) : Dialect("rocdl",
> context) {
>    addOperations<
>  #define GET_OP_LIST
>
> diff --git a/mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
> b/mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
> index 4c218503ba17..af15740af2a8 100644
> --- a/mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
> +++ b/mlir/lib/Dialect/Linalg/Analysis/DependenceAnalysis.cpp
> @@ -223,7 +223,7 @@
> LinalgDependenceGraph::findOperationsWithCoveringDependences(
>    SmallVector<Operation *, 8> res;
>    // Consider an intermediate interleaved `interim` op, look for any
> dependence
>    // to an aliasing view on a src -> op -> dst path.
> -  // TODO(ntv) we are not considering paths yet, just interleaved
> positions.
> +  // TODO: we are not considering paths yet, just interleaved positions.
>    for (auto dt : types) {
>      for (auto dependence : getDependencesFrom(src, dt)) {
>        auto interimPos =
> linalgOpPositions.lookup(dependence.dependentOpView.op);
>
> diff --git a/mlir/lib/Dialect/Linalg/EDSC/Builders.cpp
> b/mlir/lib/Dialect/Linalg/EDSC/Builders.cpp
> index 8cfc25d2ff8e..b9ec01d3ec79 100644
> --- a/mlir/lib/Dialect/Linalg/EDSC/Builders.cpp
> +++ b/mlir/lib/Dialect/Linalg/EDSC/Builders.cpp
> @@ -226,7 +226,7 @@ Operation
> *mlir::edsc::ops::linalg_generic_conv_nhwc(Value vI, Value vW,
>                                                       ArrayRef<int>
> strides,
>                                                       ArrayRef<int>
> dilations) {
>    MLIRContext *ctx = ScopedContext::getContext();
> -  // TODO(ntv) some template magic to make everything rank-polymorphic.
> +  // TODO: some template magic to make everything rank-polymorphic.
>    assert((dilations.empty() || dilations.size() == 2) && "only 2-D conv
> atm");
>    assert((strides.empty() || strides.size() == 2) && "only 2-D conv atm");
>
> @@ -259,7 +259,7 @@ Operation
> *mlir::edsc::ops::linalg_generic_dilated_conv_nhwc(
>      Value vI, Value vW, Value vO, int depth_multiplier, ArrayRef<int>
> strides,
>      ArrayRef<int> dilations) {
>    MLIRContext *ctx = ScopedContext::getContext();
> -  // TODO(ntv) some template magic to make everything rank-polymorphic.
> +  // TODO: some template magic to make everything rank-polymorphic.
>    assert((dilations.empty() || dilations.size() == 2) && "only 2-D conv
> atm");
>    assert((strides.empty() || strides.size() == 2) && "only 2-D conv atm");
>
>
> diff --git a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
> b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
> index 8012a1087ee1..18ea31571aa4 100644
> --- a/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
> +++ b/mlir/lib/Dialect/Linalg/IR/LinalgOps.cpp
> @@ -447,7 +447,7 @@ static bool isReshapableDimBand(unsigned dim, unsigned
> extent,
>      // proper symbol in the AffineExpr of a stride.
>      if (ShapedType::isDynamic(sizes[dim + 1]))
>        return false;
> -    // TODO(ntv) Refine this by passing the proper nDims and nSymbols so
> we can
> +    // TODO: Refine this by passing the proper nDims and nSymbols so we
> can
>      // simplify on the fly and catch more reshapable cases.
>      if (strides[idx] != strides[idx + 1] * sizes[idx + 1])
>        return false;
> @@ -520,7 +520,7 @@ computeReshapeCollapsedType(MemRefType type,
>
>  /// Helper functions assert Attribute of the proper type in attr and
> returns the
>  /// corresponding vector.
> -/// TODO(rridle,ntv) this should be evolved into a generic
> +/// TODO: this should be evolved into a generic
>  /// `getRangeOfType<AffineMap>(ArrayAttr attrs)` that does not copy.
>  static SmallVector<AffineMap, 4> getAffineMaps(ArrayAttr attrs) {
>    return llvm::to_vector<8>(llvm::map_range(
> @@ -713,7 +713,7 @@ static LogicalResult verify(TensorReshapeOp op) {
>    if (failed(verifyReshapeLikeTypes(op, expandedType, collapsedType)))
>      return failure();
>    auto maps = getAffineMaps(op.reassociation());
> -  // TODO(ntv): expanding a ? with a non-constant is under-specified.
> Error
> +  // TODO: expanding a ? with a non-constant is under-specified. Error
>    // out.
>    RankedTensorType expectedType =
>        computeTensorReshapeCollapsedType(expandedType, maps);
> @@ -744,7 +744,7 @@ void mlir::linalg::SliceOp::build(OpBuilder &b,
> OperationState &result,
>    (void)res;
>
>    unsigned rank = memRefType.getRank();
> -  // TODO(ntv): propagate static size and stride information when
> available.
> +  // TODO: propagate static size and stride information when available.
>    SmallVector<int64_t, 4> sizes(rank, -1); // -1 encodes dynamic size.
>    result.addTypes({MemRefType::Builder(memRefType)
>                         .setShape(sizes)
> @@ -1075,7 +1075,7 @@ mlir::linalg::weightedPoolingInputIndex(PoolingOp op,
>    SmallVector<AffineExpr, 4> res;
>    res.reserve(outputDims.size());
>    for (unsigned i = 0, e = outputDims.size(); i < e; ++i) {
> -    // TODO(ntv): add a level of indirection to linalg.generic.
> +    // TODO: add a level of indirection to linalg.generic.
>      auto expr = op.getStride(i) * outputDims[i] +
>                  op.getDilation(i) * windowDims[i] - op.getLowPad(i);
>      res.push_back(expr);
> @@ -1137,7 +1137,7 @@ std::string
> mlir::linalg::generateLibraryCallName(Operation *op) {
>    return ss.str();
>  }
>
> -// TODO(ntv, rriddle): Consider making all this boilerplate easy to
> autogenerate
> +// TODO: Consider making all this boilerplate easy to autogenerate
>  // with Tablegen. This seems a desirable property in the context of
> OpInterfaces
>  // where a Linalg "named" op **isa** LinalgOp.
>  LogicalResult ConvOp::fold(ArrayRef<Attribute>,
>
> diff --git a/mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
> b/mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
> index e37146e73954..d67126c21f3e 100644
> --- a/mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
> +++ b/mlir/lib/Dialect/Linalg/Transforms/Fusion.cpp
> @@ -279,12 +279,12 @@ bool mlir::linalg::isFusableInto(const
> LinalgDependenceGraph &graph,
>      return false;
>    }
>    if (auto convOp = dyn_cast<linalg::ConvOp>(producer.getOperation())) {
> -    // TODO(ntv): add a level of indirection to linalg.generic.
> +    // TODO: add a level of indirection to linalg.generic.
>      if (convOp.padding())
>        return false;
>    }
>    if (auto convOp = dyn_cast<linalg::ConvOp>(consumer.getOperation())) {
> -    // TODO(ntv): add a level of indirection to linalg.generic.
> +    // TODO: add a level of indirection to linalg.generic.
>      if (convOp.padding())
>        return false;
>    }
> @@ -403,7 +403,7 @@ static void fuseLinalgOpsGreedily(FuncOp f) {
>        linalgOps.push_back(op);
>    });
>
> -  // TODO(pifon, ntv): LinalgDependenceGraph should be able to update
> itself.
> +  // TODO: LinalgDependenceGraph should be able to update itself.
>    // The current naive and expensive reconstruction of the graph should be
>    // removed.
>    for (auto *op : llvm::reverse(linalgOps)) {
>
> diff --git a/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
> b/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
> index 575115c0fbed..6cbe947657a0 100644
> --- a/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
> +++ b/mlir/lib/Dialect/Linalg/Transforms/Loops.cpp
> @@ -165,7 +165,7 @@ void emitScalarImplementation(ArrayRef<Value> allIvs,
>    SmallVector<Value, 4> indexedValues;
>    indexedValues.reserve(nInputs + nOutputs);
>
> -  // TODO(mravishankar): Avoid the loads if the corresponding argument of
> the
> +  // TODO: Avoid the loads if the corresponding argument of the
>    // region has no uses.
>    // 1.a. Emit load from input views.
>    for (unsigned i = 0; i < nInputs; ++i) {
> @@ -183,7 +183,7 @@ void emitScalarImplementation(ArrayRef<Value> allIvs,
>          IndexedValueType(linalgOp.getOutputBuffer(i))(indexing));
>    }
>
> -  // TODO(ntv): When a region inliner exists, use it.
> +  // TODO: When a region inliner exists, use it.
>    // 2. Inline region, currently only works for a single basic block.
>    // 3. Emit store.
>    SmallVector<SmallVector<Value, 8>, 8> indexing;
> @@ -246,7 +246,7 @@ void emitScalarImplementation(ArrayRef<Value> allIvs,
> DotOp dotOp) {
>  template <typename IndexedValueType>
>  Value getConvOpInput(ConvOp convOp, StdIndexedValue im,
>                       MutableArrayRef<Value> imIdx) {
> -  // TODO(ntv): add a level of indirection to linalg.generic.
> +  // TODO: add a level of indirection to linalg.generic.
>    if (!convOp.padding())
>      return im(imIdx);
>
> @@ -409,7 +409,7 @@ static void emitScalarImplementation(ArrayRef<Value>
> allIvs,
>    for (unsigned i = 0; i < nLoops; ++i)
>      indexedValues.push_back(allIvs[i]);
>
> -  // TODO(mravishankar): Avoid the loads if the corresponding argument of
> the
> +  // TODO: Avoid the loads if the corresponding argument of the
>    // region has no uses.
>    // 1.a. Emit load from input views.
>    for (unsigned i = 0; i < nInputs; ++i) {
> @@ -428,7 +428,7 @@ static void emitScalarImplementation(ArrayRef<Value>
> allIvs,
>          IndexedValueType(indexedGenericOp.getOutputBuffer(i))(indexing));
>    }
>
> -  // TODO(ntv): When a region inliner exists, use it.
> +  // TODO: When a region inliner exists, use it.
>    // 2. Inline region, currently only works for a single basic block.
>    // 3. Emit store.
>    SmallVector<SmallVector<Value, 8>, 8> indexing;
> @@ -560,7 +560,7 @@ static void lowerLinalgToLoopsImpl(FuncOp funcOp,
> MLIRContext *context) {
>    OwningRewritePatternList patterns;
>    // Canonicalization and folding patterns applied greedily allow
> cleaning up
>    // the emitted IR on the fly.
> -  // TODO(ntv) fold view and subview ops?
> +  // TODO: fold view and subview ops?
>    insertPatterns<LoopType,
>  #define GET_OP_LIST
>  #include "mlir/Dialect/Linalg/IR/LinalgStructuredOps.cpp.inc"
>
> diff --git a/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
> b/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
> index aebc33ab541d..a5323f4b7687 100644
> --- a/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
> +++ b/mlir/lib/Dialect/Linalg/Transforms/Promotion.cpp
> @@ -316,7 +316,7 @@ promoteSubViews(OpBuilder &b, LinalgOp op,
>    assert(op.hasBufferSemantics() && "expected linalg op with buffer
> semantics");
>
>    if (auto convOp = dyn_cast<linalg::ConvOp>(op.getOperation())) {
> -    // TODO(ntv): add a level of indirection to linalg.generic.
> +    // TODO: add a level of indirection to linalg.generic.
>      if (convOp.padding())
>        return {};
>    }
>
> diff --git a/mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
> b/mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
> index bde163e3ee72..6ffc181fed67 100644
> --- a/mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
> +++ b/mlir/lib/Dialect/Linalg/Transforms/Tiling.cpp
> @@ -161,7 +161,7 @@ struct TileCheck : public AffineExprVisitor<TileCheck>
> {
>  //   }
>  // }
>  //
> -// TODO(pifon, ntv): Investigate whether mixing implicit and explicit
> indices
> +// TODO: Investigate whether mixing implicit and explicit indices
>  // does not lead to losing information.
>  static void transformIndexedGenericOpIndices(
>      OpBuilder &b, LinalgOp op, SmallVectorImpl<Value> &ivs,
> @@ -176,7 +176,7 @@ static void transformIndexedGenericOpIndices(
>    // that refers to an existing function symbol. The `fun` function call
> will be
>    // inserted in the loop body in that case.
>    //
> -  // TODO(pifon): Add support for `linalg.indexed_generic` with `fun`
> attribute.
> +  // TODO: Add support for `linalg.indexed_generic` with `fun` attribute.
>    auto &region = indexedGenericOp.region();
>    if (region.empty()) {
>      indexedGenericOp.emitOpError("expected a region");
>
> diff --git a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
> b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
> index 07de4952d928..bba7b2a10030 100644
> --- a/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
> +++ b/mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp
> @@ -83,7 +83,7 @@ LogicalResult
> mlir::linalg::vectorizeLinalgOpPrecondition(Operation *op) {
>    if (!genericOp || !::isRowMajorMatmul(genericOp))
>      return failure();
>
> -  // TODO(ntv): non-identity layout.
> +  // TODO: non-identity layout.
>    auto isStaticMemRefWithIdentityLayout = [](Value v) {
>      auto m = v.getType().dyn_cast<MemRefType>();
>      if (!m || !m.hasStaticShape() || !m.getAffineMaps().empty())
>
> diff --git a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
> b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
> index 5bba11420d08..a9d5e2028c22 100644
> --- a/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
> +++ b/mlir/lib/Dialect/Linalg/Utils/Utils.cpp
> @@ -181,7 +181,7 @@ void GenerateLoopNest<AffineForOp>::doit(
>  /// of the innermost loop is populated by `bodyBuilderFn` that accepts a
> range
>  /// of induction variables for all loops. `ivStorage` is used to store the
>  /// partial list of induction variables.
> -// TODO(zinenko,ntv): this function can be made iterative instead.
> However, it
> +// TODO: this function can be made iterative instead. However, it
>  // will have at most as many recursive calls as nested loops, which rarely
>  // exceeds 10.
>  static void
>
> diff --git a/mlir/lib/Dialect/Quant/Utils/FakeQuantSupport.cpp
> b/mlir/lib/Dialect/Quant/Utils/FakeQuantSupport.cpp
> index 71b7e042aa1a..37b72af93424 100644
> --- a/mlir/lib/Dialect/Quant/Utils/FakeQuantSupport.cpp
> +++ b/mlir/lib/Dialect/Quant/Utils/FakeQuantSupport.cpp
> @@ -61,7 +61,7 @@ static bool getDefaultStorageParams(unsigned numBits,
> bool narrowRange,
>  // point is derived from the shifted range, and the scale isn't changed.
> As
>  // a consequence some values, which are supposed in the original [rmin,
> rmax]
>  // range will be outside the shifted range and be clamped during
> quantization.
> -// TODO(fengliuai): we should nudge the scale as well, but that requires
> the
> +// TODO: we should nudge the scale as well, but that requires the
>  // fake quant op used in the training to use the nudged scale as well.
>  static void getNudgedScaleAndZeroPoint(int64_t qmin, int64_t qmax, double
> rmin,
>                                         double rmax, double &scale,
>
> diff --git a/mlir/lib/Dialect/Quant/Utils/QuantizeUtils.cpp
> b/mlir/lib/Dialect/Quant/Utils/QuantizeUtils.cpp
> index 1fbc60962fd1..7b475d31914b 100644
> --- a/mlir/lib/Dialect/Quant/Utils/QuantizeUtils.cpp
> +++ b/mlir/lib/Dialect/Quant/Utils/QuantizeUtils.cpp
> @@ -137,7 +137,7 @@ Attribute mlir::quant::quantizeAttr(Attribute
> realValue,
>
> quantizedElementType.dyn_cast<UniformQuantizedPerAxisType>()) {
>      UniformQuantizedPerAxisValueConverter
> converter(uniformQuantizedPerAxis);
>      auto converted = converter.convert(realValue);
> -    // TODO(fengliuai): why we need this outConvertedType? remove it?
> +    // TODO: why do we need this outConvertedType? remove it?
>      if (converted) {
>        outConvertedType = converted.getType();
>      }
>
> diff --git a/mlir/lib/Dialect/Quant/Utils/UniformSupport.cpp
> b/mlir/lib/Dialect/Quant/Utils/UniformSupport.cpp
> index 991d7c179f90..a79ef0023a7f 100644
> --- a/mlir/lib/Dialect/Quant/Utils/UniformSupport.cpp
> +++ b/mlir/lib/Dialect/Quant/Utils/UniformSupport.cpp
> @@ -68,7 +68,7 @@ UniformQuantizedPerAxisValueConverter::convert(Attribute
> realValue) {
>    if (auto attr = realValue.dyn_cast<DenseFPElementsAttr>()) {
>      return convert(attr);
>    }
> -  // TODO(fengliuai): handles sparse elements attribute
> +  // TODO: handle sparse elements attribute
>    return nullptr;
>  }
>
>
> diff --git a/mlir/lib/Dialect/SDBM/SDBM.cpp
> b/mlir/lib/Dialect/SDBM/SDBM.cpp
> index 77f81fed0322..df24e77bc4f2 100644
> --- a/mlir/lib/Dialect/SDBM/SDBM.cpp
> +++ b/mlir/lib/Dialect/SDBM/SDBM.cpp
> @@ -187,7 +187,7 @@ struct SDBMBuilder : public SDBMVisitor<SDBMBuilder,
> SDBMBuilderResult> {
>  SDBM SDBM::get(ArrayRef<SDBMExpr> inequalities, ArrayRef<SDBMExpr>
> equalities) {
>    SDBM result;
>
> -  // TODO(zinenko): consider detecting equalities in the list of
> inequalities.
> +  // TODO: consider detecting equalities in the list of inequalities.
>    // This is potentially expensive and requires to
>    //   - create a list of negated inequalities (may allocate under lock);
>    //   - perform a pairwise comparison of direct and negated inequalities;
> @@ -289,7 +289,7 @@ SDBM SDBM::get(ArrayRef<SDBMExpr> inequalities,
> ArrayRef<SDBMExpr> equalities) {
>    // value is positive, the set defined by SDBM is trivially empty.  We
> store
>    // this value anyway and continue processing to maintain the
> correspondence
>    // between the matrix form and the list-of-SDBMExpr form.
> -  // TODO(zinenko): we may want to reconsider this once we have
> canonicalization
> +  // TODO: we may want to reconsider this once we have canonicalization
>    // or simplification in place
>    auto updateMatrix = [](SDBM &sdbm, const SDBMBuilderResult &r) {
>      for (auto positivePos : r.positivePos) {
>
> diff --git a/mlir/lib/Dialect/SDBM/SDBMExpr.cpp
> b/mlir/lib/Dialect/SDBM/SDBMExpr.cpp
> index 0428df79a053..5d60158c34e4 100644
> --- a/mlir/lib/Dialect/SDBM/SDBMExpr.cpp
> +++ b/mlir/lib/Dialect/SDBM/SDBMExpr.cpp
> @@ -451,7 +451,7 @@ Optional<SDBMExpr>
> SDBMExpr::tryConvertAffineExpr(AffineExpr affine) {
>        if (pattern.match(expr)) {
>          if (SDBMExpr converted = visit(x.matched())) {
>            if (auto varConverted = converted.dyn_cast<SDBMTermExpr>())
> -            // TODO(ntv): return
> varConverted.stripe(C.getConstantValue());
> +            // TODO: return varConverted.stripe(C.getConstantValue());
>              return SDBMStripeExpr::get(
>                  varConverted,
>                  SDBMConstantExpr::get(dialect,
>
> diff --git a/mlir/lib/Dialect/SPIRV/SPIRVDialect.cpp
> b/mlir/lib/Dialect/SPIRV/SPIRVDialect.cpp
> index 894de3dba377..fbc644d38ae3 100644
> --- a/mlir/lib/Dialect/SPIRV/SPIRVDialect.cpp
> +++ b/mlir/lib/Dialect/SPIRV/SPIRVDialect.cpp
> @@ -69,11 +69,11 @@ struct SPIRVInlinerInterface : public
> DialectInlinerInterface {
>    /// operation registered to the current dialect.
>    bool isLegalToInline(Operation *op, Region *dest,
>                         BlockAndValueMapping &) const final {
> -    // TODO(antiagainst): Enable inlining structured control flows with
> return.
> +    // TODO: Enable inlining structured control flows with return.
>      if ((isa<spirv::SelectionOp, spirv::LoopOp>(op)) &&
>          containsReturn(op->getRegion(0)))
>        return false;
> -    // TODO(antiagainst): we need to filter OpKill here to avoid inlining
> it to
> +    // TODO: we need to filter OpKill here to avoid inlining it to
>      // a loop continue construct:
>      // https://github.com/KhronosGroup/SPIRV-Headers/issues/86
>      // However OpKill is fragment shader specific and we don't support it
> yet.
> @@ -330,7 +330,7 @@ static Type parseCooperativeMatrixType(SPIRVDialect
> const &dialect,
>    return CooperativeMatrixNVType::get(elementTy, scope, dims[0], dims[1]);
>  }
>
> -// TODO(ravishankarm) : Reorder methods to be utilities first and
> parse*Type
> +// TODO: Reorder methods to be utilities first and parse*Type
>  // methods in alphabetical order
>  //
>  // storage-class ::= `UniformConstant`
> @@ -438,7 +438,7 @@ static Optional<ValTy> parseAndVerify(SPIRVDialect
> const &dialect,
>  template <>
>  Optional<Type> parseAndVerify<Type>(SPIRVDialect const &dialect,
>                                      DialectAsmParser &parser) {
> -  // TODO(ravishankarm): Further verify that the element type can be
> sampled
> +  // TODO: Further verify that the element type can be sampled
>    auto ty = parseAndVerifyType(dialect, parser);
>    if (!ty)
>      return llvm::None;
> @@ -1054,7 +1054,7 @@ LogicalResult
> SPIRVDialect::verifyOperationAttribute(Operation *op,
>    StringRef symbol = attribute.first.strref();
>    Attribute attr = attribute.second;
>
> -  // TODO(antiagainst): figure out a way to generate the description from
> the
> +  // TODO: figure out a way to generate the description from the
>    // StructAttr definition.
>    if (symbol == spirv::getEntryPointABIAttrName()) {
>      if (!attr.isa<spirv::EntryPointABIAttr>())
>
> diff --git a/mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp
> b/mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp
> index 6bb07b28d022..d31f9c28362a 100644
> --- a/mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp
> +++ b/mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp
> @@ -89,7 +89,7 @@ static LogicalResult checkCapabilityRequirements(
>  Type SPIRVTypeConverter::getIndexType(MLIRContext *context) {
>    // Convert to 32-bit integers for now. Might need a way to control this
> in
>    // future.
> -  // TODO(ravishankarm): It is probably better to make it 64-bit
> integers. To
> +  // TODO: It is probably better to make it 64-bit integers. To
>    // this some support is needed in SPIR-V dialect for Conversion
>    // instructions. The Vulkan spec requires the builtins like
>    // GlobalInvocationID, etc. to be 32-bit (unsigned) integers which
> should be
> @@ -104,7 +104,7 @@ Type SPIRVTypeConverter::getIndexType(MLIRContext
> *context) {
>  /// behind the number assignments; we try to follow NVVM conventions and
> largely
>  /// give common storage classes a smaller number. The hope is use symbolic
>  /// memory space representation eventually after memref supports it.
> -// TODO(antiagainst): swap Generic and StorageBuffer assignment to be
> more akin
> +// TODO: swap Generic and StorageBuffer assignment to be more akin
>  // to NVVM.
>  #define STORAGE_SPACE_MAP_LIST(MAP_FN)
>      \
>    MAP_FN(spirv::StorageClass::Generic, 1)
>       \
> @@ -155,7 +155,7 @@
> SPIRVTypeConverter::getStorageClassForMemorySpace(unsigned space) {
>
>  #undef STORAGE_SPACE_MAP_LIST
>
> -// TODO(ravishankarm): This is a utility function that should probably be
> +// TODO: This is a utility function that should probably be
>  // exposed by the SPIR-V dialect. Keeping it local till the use case
> arises.
>  static Optional<int64_t> getTypeNumBytes(Type t) {
>    if (t.isa<spirv::ScalarType>()) {
> @@ -239,7 +239,7 @@ convertScalarType(const spirv::TargetEnv &targetEnv,
> spirv::ScalarType type,
>
>    // Otherwise we need to adjust the type, which really means adjusting
> the
>    // bitwidth given this is a scalar type.
> -  // TODO(antiagainst): We are unconditionally converting the bitwidth
> here,
> +  // TODO: We are unconditionally converting the bitwidth here,
>    // this might be okay for non-interface types (i.e., types used in
>    // Private/Function storage classes), but not for interface types (i.e.,
>    // types used in StorageBuffer/Uniform/PushConstant/etc. storage
> classes).
> @@ -263,7 +263,7 @@ static Optional<Type>
>  convertVectorType(const spirv::TargetEnv &targetEnv, VectorType type,
>                    Optional<spirv::StorageClass> storageClass = {}) {
>    if (!spirv::CompositeType::isValid(type)) {
> -    // TODO(antiagainst): One-element vector types can be translated into
> scalar
> +    // TODO: One-element vector types can be translated into scalar
>      // types. Vector types with more than four elements can be translated
> into
>      // array types.
>      LLVM_DEBUG(llvm::dbgs()
> @@ -297,7 +297,7 @@ convertVectorType(const spirv::TargetEnv &targetEnv,
> VectorType type,
>  /// manipulate, like what we do for vectors.
>  static Optional<Type> convertTensorType(const spirv::TargetEnv &targetEnv,
>                                          TensorType type) {
> -  // TODO(ravishankarm) : Handle dynamic shapes.
> +  // TODO: Handle dynamic shapes.
>    if (!type.hasStaticShape()) {
>      LLVM_DEBUG(llvm::dbgs()
>                 << type << " illegal: dynamic shape unimplemented\n");
> @@ -406,7 +406,7 @@
> SPIRVTypeConverter::SPIRVTypeConverter(spirv::TargetEnvAttr targetAttr)
>    // adopted in the SPIR-V dialect (i.e., IntegerType, FloatType,
> VectorType)
>    // were tried before.
>    //
> -  // TODO(antiagainst): this assumes that the SPIR-V types are valid to
> use in
> +  // TODO: this assumes that the SPIR-V types are valid to use in
>    // the given target environment, which should be the case if the whole
>    // pipeline is driven by the same target environment. Still, we
> probably still
>    // want to validate and convert to be safe.
> @@ -462,7 +462,7 @@ LogicalResult
>  FuncOpConversion::matchAndRewrite(FuncOp funcOp, ArrayRef<Value> operands,
>                                    ConversionPatternRewriter &rewriter)
> const {
>    auto fnType = funcOp.getType();
> -  // TODO(antiagainst): support converting functions with one result.
> +  // TODO: support converting functions with one result.
>    if (fnType.getNumResults())
>      return failure();
>
>
> diff --git a/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp
> b/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp
> index 1ac6a1e6d75b..32d13e6afd61 100644
> --- a/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp
> +++ b/mlir/lib/Dialect/SPIRV/SPIRVOps.cpp
> @@ -27,7 +27,7 @@
>
>  using namespace mlir;
>
> -// TODO(antiagainst): generate these strings using ODS.
> +// TODO: generate these strings using ODS.
>  static constexpr const char kAlignmentAttrName[] = "alignment";
>  static constexpr const char kBranchWeightAttrName[] = "branch_weights";
>  static constexpr const char kCallee[] = "callee";
> @@ -313,7 +313,7 @@ static LogicalResult
> verifyLoadStorePtrAndValTypes(LoadStoreOpTy op, Value ptr,
>    // ODS already checks ptr is spirv::PointerType. Just check that the
> pointee
>    // type of the pointer and the type of the value are the same
>    //
> -  // TODO(ravishankarm): Check that the value type satisfies restrictions
> of
> +  // TODO: Check that the value type satisfies restrictions of
>    // SPIR-V OpLoad/OpStore operations
>    if (val.getType() !=
>        ptr.getType().cast<spirv::PointerType>().getPointeeType()) {
> @@ -618,7 +618,7 @@ static LogicalResult
> verifyGroupNonUniformArithmeticOp(Operation *groupOp) {
>      Operation *sizeOp = groupOp->getOperand(1).getDefiningOp();
>      int32_t clusterSize = 0;
>
> -    // TODO(antiagainst): support specialization constant here.
> +    // TODO: support specialization constant here.
>      if (failed(extractValueFromConstOp(sizeOp, clusterSize)))
>        return groupOp->emitOpError(
>            "cluster size operand must come from a constant op");
> @@ -753,7 +753,7 @@ static Type getElementPtrType(Type type, ValueRange
> indices, Location baseLoc) {
>          return nullptr;
>        }
>
> -      // TODO(denis0x0D): this should be relaxed to allow
> +      // TODO: this should be relaxed to allow
>        // integer literals of other bitwidths.
>        if (failed(extractValueFromConstOp(op, index))) {
>          emitError(baseLoc,
> @@ -948,7 +948,7 @@ static LogicalResult
> verify(spirv::AtomicCompareExchangeWeakOp atomOp) {
>                 "as the op result type, but found ")
>             << pointeeType << " vs " << atomOp.getType();
>
> -  // TODO(antiagainst): Unequal cannot be set to Release or Acquire and
> Release.
> +  // TODO: Unequal cannot be set to Release or Acquire and Release.
>    // In addition, Unequal cannot be set to a stronger memory-order then
> Equal.
>
>    return success();
> @@ -1384,7 +1384,7 @@ bool spirv::ConstantOp::isBuildableWith(Type type) {
>
>    if (type.getKind() >= Type::FIRST_SPIRV_TYPE &&
>        type.getKind() <= spirv::TypeKind::LAST_SPIRV_TYPE) {
> -    // TODO(antiagainst): support constant struct
> +    // TODO: support constant struct
>      return type.isa<spirv::ArrayType>();
>    }
>
> @@ -1633,7 +1633,7 @@ LogicalResult spirv::FuncOp::verifyBody() {
>      return WalkResult::advance();
>    });
>
> -  // TODO(antiagainst): verify other bits like linkage type.
> +  // TODO: verify other bits like linkage type.
>
>    return failure(walkResult.wasInterrupted());
>  }
> @@ -1939,7 +1939,7 @@ void spirv::LoopOp::build(OpBuilder &builder,
> OperationState &state) {
>  }
>
>  static ParseResult parseLoopOp(OpAsmParser &parser, OperationState
> &state) {
> -  // TODO(antiagainst): support loop control properly
> +  // TODO: support loop control properly
>    Builder builder = parser.getBuilder();
>    state.addAttribute("loop_control",
>                       builder.getI32IntegerAttr(
> @@ -2222,7 +2222,7 @@ static LogicalResult verify(spirv::ModuleOp
> moduleOp) {
>        if (funcOp.isExternal())
>          return op.emitError("'spv.module' cannot contain external
> functions");
>
> -      // TODO(antiagainst): move this check to spv.func.
> +      // TODO: move this check to spv.func.
>        for (auto &block : funcOp)
>          for (auto &op : block) {
>            if (op.getDialect() != dialect)
> @@ -2302,7 +2302,7 @@ static LogicalResult verify(spirv::SelectOp op) {
>
>  static ParseResult parseSelectionOp(OpAsmParser &parser,
>                                      OperationState &state) {
> -  // TODO(antiagainst): support selection control properly
> +  // TODO: support selection control properly
>    Builder builder = parser.getBuilder();
>    state.addAttribute("selection_control",
>                       builder.getI32IntegerAttr(
> @@ -2536,7 +2536,7 @@ static LogicalResult verify(spirv::UnreachableOp
> unreachableOp) {
>    if (block->hasNoPredecessors())
>      return success();
>
> -  // TODO(antiagainst): further verification needs to analyze
> reachability from
> +  // TODO: further verification needs to analyze reachability from
>    // the entry block.
>
>    return success();
> @@ -2626,7 +2626,7 @@ static LogicalResult verify(spirv::VariableOp varOp)
> {
>                                 "constant or spv.globalVariable op");
>    }
>
> -  // TODO(antiagainst): generate these strings using ODS.
> +  // TODO: generate these strings using ODS.
>    auto *op = varOp.getOperation();
>    auto descriptorSetName = llvm::convertToSnakeFromCamelCase(
>        stringifyDecoration(spirv::Decoration::DescriptorSet));
>
> diff --git a/mlir/lib/Dialect/SPIRV/SPIRVTypes.cpp
> b/mlir/lib/Dialect/SPIRV/SPIRVTypes.cpp
> index 03ce62807a8b..b0396bfc1163 100644
> --- a/mlir/lib/Dialect/SPIRV/SPIRVTypes.cpp
> +++ b/mlir/lib/Dialect/SPIRV/SPIRVTypes.cpp
> @@ -86,7 +86,7 @@ spirv::getRecursiveImpliedCapabilities(Capability cap) {
>    llvm::SetVector<Capability, SmallVector<Capability, 0>> allCaps(
>        directCaps.begin(), directCaps.end());
>
> -  // TODO(antiagainst): This is insufficient; find a better way to handle
> this
> +  // TODO: This is insufficient; find a better way to handle this
>    // (e.g., using static lists) if this turns out to be a bottleneck.
>    for (unsigned i = 0; i < allCaps.size(); ++i)
>      for (Capability c : getDirectImpliedCapabilities(allCaps[i]))
>
> diff --git a/mlir/lib/Dialect/SPIRV/Serialization/Deserializer.cpp
> b/mlir/lib/Dialect/SPIRV/Serialization/Deserializer.cpp
> index 215c5ba373a4..b5fef1477870 100644
> --- a/mlir/lib/Dialect/SPIRV/Serialization/Deserializer.cpp
> +++ b/mlir/lib/Dialect/SPIRV/Serialization/Deserializer.cpp
> @@ -91,7 +91,7 @@ using BlockMergeInfoMap = DenseMap<Block *,
> BlockMergeInfo>;
>  /// higher-order bits. So this deserializer uses that to get instruction
>  /// boundary and parse instructions and build a SPIR-V ModuleOp gradually.
>  ///
> -// TODO(antiagainst): clean up created ops on errors
> +// TODO: clean up created ops on errors
>  class Deserializer {
>  public:
>    /// Creates a deserializer for the given SPIR-V `binary` module.
> @@ -420,7 +420,7 @@ class Deserializer {
>    /// MLIRContext to create SPIR-V ModuleOp into.
>    MLIRContext *context;
>
> -  // TODO(antiagainst): create Location subclass for binary blob
> +  // TODO: create Location subclass for binary blob
>    Location unknownLoc;
>
>    /// The SPIR-V ModuleOp.
> @@ -602,7 +602,7 @@ LogicalResult Deserializer::processHeader() {
>             << majorVersion;
>    }
>
> -  // TODO(antiagainst): generator number, bound, schema
> +  // TODO: generator number, bound, schema
>    curOffset = spirv::kHeaderWordCount;
>    return success();
>  }
> @@ -676,7 +676,7 @@ LogicalResult
> Deserializer::processMemoryModel(ArrayRef<uint32_t> operands) {
>  }
>
>  LogicalResult Deserializer::processDecoration(ArrayRef<uint32_t> words) {
> -  // TODO : This function should also be auto-generated. For now, since
> only a
> +  // TODO: This function should also be auto-generated. For now, since
> only a
>    // few decorations are processed/handled in a meaningful manner, going
> with a
>    // manual implementation.
>    if (words.size() < 2) {
> @@ -804,7 +804,7 @@ LogicalResult
> Deserializer::processFunction(ArrayRef<uint32_t> operands) {
>      return emitError(unknownLoc, "unknown Function Control: ") <<
> operands[2];
>    }
>    if (functionControl.getValue() != spirv::FunctionControl::None) {
> -    /// TODO : Handle different function controls
> +    /// TODO: Handle different function controls
>      return emitError(unknownLoc, "unhandled Function Control: '")
>             << spirv::stringifyFunctionControl(functionControl.getValue())
>             << "'";
> @@ -1197,7 +1197,7 @@ LogicalResult
> Deserializer::processArrayType(ArrayRef<uint32_t> operands) {
>    }
>
>    unsigned count = 0;
> -  // TODO(antiagainst): The count can also come frome a specialization
> constant.
> +  // TODO: The count can also come from a specialization constant.
>    auto countInfo = getConstant(operands[2]);
>    if (!countInfo) {
>      return emitError(unknownLoc, "OpTypeArray count <id> ")
> @@ -1336,7 +1336,7 @@ LogicalResult
> Deserializer::processStructType(ArrayRef<uint32_t> operands) {
>    }
>    typeMap[operands[0]] =
>        spirv::StructType::get(memberTypes, offsetInfo,
> memberDecorationsInfo);
> -  // TODO(ravishankarm): Update StructType to have member name as
> attribute as
> +  // TODO: Update StructType to have member name as attribute as
>    // well.
>    return success();
>  }
> @@ -1823,7 +1823,7 @@ spirv::LoopOp
> ControlFlowStructurizer::createLoopOp() {
>    // merge block so that the newly created LoopOp will be inserted there.
>    OpBuilder builder(&mergeBlock->front());
>
> -  // TODO(antiagainst): handle loop control properly
> +  // TODO: handle loop control properly
>    auto loopOp = builder.create<spirv::LoopOp>(location);
>    loopOp.addEntryAndMergeBlock();
>
> @@ -1966,7 +1966,7 @@ LogicalResult
> ControlFlowStructurizer::structurizeImpl() {
>      // selection/loop. If so, they will be recorded within blockMergeInfo.
>      // We need to update the pointers there to the newly remapped ones so
> we can
>      // continue structurizing them later.
> -    // TODO(antiagainst): The asserts in the following assumes input
> SPIR-V blob
> +    // TODO: The asserts in the following assume input SPIR-V blob
>      // forms correctly nested selection/loop constructs. We should relax
> this
>      // and support error cases better.
>      auto it = blockMergeInfo.find(block);
>
> diff --git a/mlir/lib/Dialect/SPIRV/Serialization/Serializer.cpp
> b/mlir/lib/Dialect/SPIRV/Serialization/Serializer.cpp
> index f8641873fd95..8f6e02de27e7 100644
> --- a/mlir/lib/Dialect/SPIRV/Serialization/Serializer.cpp
> +++ b/mlir/lib/Dialect/SPIRV/Serialization/Serializer.cpp
> @@ -283,7 +283,7 @@ class Serializer {
>    /// iterates the DenseElementsAttr to construct the constant array, and
>    /// returns the result <id>  associated with it. Returns 0 if failed.
> Note
>    /// that the size of `index` must match the rank.
> -  /// TODO(hanchung): Consider to enhance splat elements cases. For splat
> cases,
> +  /// TODO: Consider to enhance splat elements cases. For splat cases,
>    /// we don't need to loop over all elements, especially when the splat
> value
>    /// is zero. We can use OpConstantNull when the value is zero.
>    uint32_t prepareDenseElementsConstant(Location loc, Type constType,
> @@ -511,7 +511,7 @@ LogicalResult Serializer::serialize() {
>    if (failed(module.verify()))
>      return failure();
>
> -  // TODO(antiagainst): handle the other sections
> +  // TODO: handle the other sections
>    processCapability();
>    processExtension();
>    processMemoryModel();
> @@ -773,7 +773,7 @@ LogicalResult Serializer::processFuncOp(spirv::FuncOp
> op) {
>    operands.push_back(resTypeID);
>    auto funcID = getOrCreateFunctionID(op.getName());
>    operands.push_back(funcID);
> -  // TODO : Support other function control options.
> +  // TODO: Support other function control options.
>    operands.push_back(static_cast<uint32_t>(spirv::FunctionControl::None));
>    operands.push_back(fnTypeID);
>    encodeInstructionInto(functionHeader, spirv::Opcode::OpFunction,
> operands);
> @@ -1136,7 +1136,7 @@ Serializer::prepareBasicType(Location loc, Type
> type, uint32_t resultID,
>      return success();
>    }
>
> -  // TODO(ravishankarm) : Handle other types.
> +  // TODO: Handle other types.
>    return emitError(loc, "unhandled type in serialization: ") << type;
>  }
>
> @@ -1229,7 +1229,7 @@ uint32_t Serializer::prepareArrayConstant(Location
> loc, Type constType,
>    return resultID;
>  }
>
> -// TODO(hanchung): Turn the below function into iterative function,
> instead of
> +// TODO: Turn the below function into an iterative function, instead of
>  // recursive function.
>  uint32_t
>  Serializer::prepareDenseElementsConstant(Location loc, Type constType,
> @@ -1572,7 +1572,7 @@ LogicalResult
> Serializer::processSelectionOp(spirv::SelectionOp selectionOp) {
>    auto emitSelectionMerge = [&]() {
>      emitDebugLine(functionBody, loc);
>      lastProcessedWasMergeInst = true;
> -    // TODO(antiagainst): properly support selection control here
> +    // TODO: properly support selection control here
>      encodeInstructionInto(
>          functionBody, spirv::Opcode::OpSelectionMerge,
>          {mergeID, static_cast<uint32_t>(spirv::SelectionControl::None)});
> @@ -1635,7 +1635,7 @@ LogicalResult
> Serializer::processLoopOp(spirv::LoopOp loopOp) {
>    auto emitLoopMerge = [&]() {
>      emitDebugLine(functionBody, loc);
>      lastProcessedWasMergeInst = true;
> -    // TODO(antiagainst): properly support loop control here
> +    // TODO: properly support loop control here
>      encodeInstructionInto(
>          functionBody, spirv::Opcode::OpLoopMerge,
>          {mergeID, continueID,
> static_cast<uint32_t>(spirv::LoopControl::None)});
>
> diff --git a/mlir/lib/Dialect/SPIRV/Transforms/LowerABIAttributesPass.cpp
> b/mlir/lib/Dialect/SPIRV/Transforms/LowerABIAttributesPass.cpp
> index 5bd425ae9107..be1d27141390 100644
> --- a/mlir/lib/Dialect/SPIRV/Transforms/LowerABIAttributesPass.cpp
> +++ b/mlir/lib/Dialect/SPIRV/Transforms/LowerABIAttributesPass.cpp
> @@ -72,13 +72,13 @@ getInterfaceVariables(spirv::FuncOp funcOp,
>    }
>    llvm::SetVector<Operation *> interfaceVarSet;
>
> -  // TODO(ravishankarm) : This should in reality traverse the entry
> function
> +  // TODO: This should in reality traverse the entry function
>    // call graph and collect all the interfaces. For now, just traverse the
>    // instructions in this function.
>    funcOp.walk([&](spirv::AddressOfOp addressOfOp) {
>      auto var =
>
>  module.lookupSymbol<spirv::GlobalVariableOp>(addressOfOp.variable());
> -    // TODO(antiagainst): Per SPIR-V spec: "Before version 1.4, the
> interface’s
> +    // TODO: Per SPIR-V spec: "Before version 1.4, the interface’s
>      // storage classes are limited to the Input and Output storage
> classes.
>      // Starting with version 1.4, the interface’s storage classes are all
>      // storage classes used in declaring all global variables referenced
> by the
> @@ -158,7 +158,7 @@ LogicalResult ProcessInterfaceVarABI::matchAndRewrite(
>      ConversionPatternRewriter &rewriter) const {
>    if (!funcOp.getAttrOfType<spirv::EntryPointABIAttr>(
>            spirv::getEntryPointABIAttrName())) {
> -    // TODO(ravishankarm) : Non-entry point functions are not handled.
> +    // TODO: Non-entry point functions are not handled.
>      return failure();
>    }
>    TypeConverter::SignatureConversion signatureConverter(
> @@ -169,7 +169,7 @@ LogicalResult ProcessInterfaceVarABI::matchAndRewrite(
>      auto abiInfo = funcOp.getArgAttrOfType<spirv::InterfaceVarABIAttr>(
>          argType.index(), attrName);
>      if (!abiInfo) {
> -      // TODO(ravishankarm) : For non-entry point functions, it should be
> legal
> +      // TODO: For non-entry point functions, it should be legal
>        // to pass around scalar/vector values and return a scalar/vector.
> For now
>        // non-entry point functions are not handled in this ABI lowering
> and will
>        // produce an error.
> @@ -187,7 +187,7 @@ LogicalResult ProcessInterfaceVarABI::matchAndRewrite(
>          rewriter.create<spirv::AddressOfOp>(funcOp.getLoc(), var);
>      // Check if the arg is a scalar or vector type. In that case, the
> value
>      // needs to be loaded into registers.
> -    // TODO(ravishankarm) : This is loading value of the scalar into
> registers
> +    // TODO: This is loading value of the scalar into registers
>      // at the start of the function. It is probably better to do the load
> just
>      // before the use. There might be multiple loads and currently there
> is no
>      // easy way to replace all uses with a sequence of operations.
>
> diff --git a/mlir/lib/Dialect/SPIRV/Transforms/UpdateVCEPass.cpp
> b/mlir/lib/Dialect/SPIRV/Transforms/UpdateVCEPass.cpp
> index 8adbc76a3a03..13e04f80c2b8 100644
> --- a/mlir/lib/Dialect/SPIRV/Transforms/UpdateVCEPass.cpp
> +++ b/mlir/lib/Dialect/SPIRV/Transforms/UpdateVCEPass.cpp
> @@ -163,7 +163,7 @@ void UpdateVCEPass::runOnOperation() {
>    if (walkResult.wasInterrupted())
>      return signalPassFailure();
>
> -  // TODO(antiagainst): verify that the deduced version is consistent with
> +  // TODO: verify that the deduced version is consistent with
>    // SPIR-V ops' maximal version requirements.
>
>    auto triple = spirv::VerCapExtAttr::get(
>
> diff --git a/mlir/lib/Dialect/StandardOps/IR/Ops.cpp
> b/mlir/lib/Dialect/StandardOps/IR/Ops.cpp
> index b81f7f4c7387..f792e5d93e4e 100644
> --- a/mlir/lib/Dialect/StandardOps/IR/Ops.cpp
> +++ b/mlir/lib/Dialect/StandardOps/IR/Ops.cpp
> @@ -896,7 +896,7 @@ OpFoldResult CmpFOp::fold(ArrayRef<Attribute>
> operands) {
>    auto lhs = operands.front().dyn_cast_or_null<FloatAttr>();
>    auto rhs = operands.back().dyn_cast_or_null<FloatAttr>();
>
> -  // TODO(gcmn) We could actually do some intelligent things if we know
> only one
> +  // TODO: We could actually do some intelligent things if we know only
> one
>    // of the operands, but it's inf or nan.
>    if (!lhs || !rhs)
>      return {};
>
> diff --git a/mlir/lib/Dialect/Vector/VectorOps.cpp
> b/mlir/lib/Dialect/Vector/VectorOps.cpp
> index 184aed2ee1cd..f97906c2570d 100644
> --- a/mlir/lib/Dialect/Vector/VectorOps.cpp
> +++ b/mlir/lib/Dialect/Vector/VectorOps.cpp
> @@ -159,7 +159,7 @@ static ParseResult parseContractionOp(OpAsmParser
> &parser,
>    Type resultType;
>    auto loc = parser.getCurrentLocation();
>    DictionaryAttr dictAttr;
> -  // TODO(andydavis, ntv) Unify linalg op attribute parsing.
> +  // TODO: Unify linalg op attribute parsing.
>    if (parser.parseAttribute(dictAttr, "_", result.attributes) ||
>        parser.parseOperand(lhsInfo) || parser.parseComma() ||
>        parser.parseOperand(rhsInfo) || parser.parseComma() ||
> @@ -192,7 +192,7 @@ static ParseResult parseContractionOp(OpAsmParser
> &parser,
>  }
>
>  static void print(OpAsmPrinter &p, ContractionOp op) {
> -  // TODO(andydavis, ntv) Unify printing code with linalg ops.
> +  // TODO: Unify printing code with linalg ops.
>    auto attrNames = op.getTraitAttrNames();
>    llvm::StringSet<> traitAttrsSet;
>    traitAttrsSet.insert(attrNames.begin(), attrNames.end());
> @@ -592,7 +592,7 @@ isValidExtractOrInsertSlicesType(Operation *op,
> VectorType vectorType,
>                                   TupleType tupleType, ArrayRef<int64_t>
> sizes,
>                                   ArrayRef<int64_t> strides) {
>    // Check for non-unit strides.
> -  // TODO(b/144845578) Support non-1 strides.
> +  // TODO: Support non-1 strides.
>    if (llvm::any_of(strides, [](int64_t s) { return s != 1; }))
>      return op->emitError("requires unit strides");
>    // Check that 'vectorType' rank matches rank of tuple element vectors.
> @@ -864,7 +864,7 @@ void InsertStridedSliceOp::build(OpBuilder &builder,
> OperationState &result,
>    result.addAttribute(getStridesAttrName(), stridesAttr);
>  }
>
> -// TODO(ntv) Should be moved to Tablegen Confined attributes.
> +// TODO: Should be moved to Tablegen Confined attributes.
>  template <typename OpType>
>  static LogicalResult isIntegerArrayAttrSmallerThanShape(OpType op,
>                                                          ArrayAttr
> arrayAttr,
> @@ -1331,7 +1331,7 @@ static LogicalResult verifyTransferOp(Operation *op,
> MemRefType memrefType,
>      if (memrefVecEltRank > resultVecRank)
>        return op->emitOpError(
>            "requires memref vector element and vector result ranks to
> match.");
> -    // TODO(b/146516564) Move this to isSuffix in Vector/Utils.h.
> +    // TODO: Move this to isSuffix in Vector/Utils.h.
>      unsigned rankOffset = resultVecRank - memrefVecEltRank;
>      auto memrefVecEltShape = memrefVectorElementType.getShape();
>      auto resultVecShape = vectorType.getShape();
>
> diff --git a/mlir/lib/Dialect/Vector/VectorTransforms.cpp
> b/mlir/lib/Dialect/Vector/VectorTransforms.cpp
> index c7cf2937939c..19c5bdcf97f2 100644
> --- a/mlir/lib/Dialect/Vector/VectorTransforms.cpp
> +++ b/mlir/lib/Dialect/Vector/VectorTransforms.cpp
> @@ -101,7 +101,7 @@ static Type adjustType(VectorType tp, int64_t index) {
>  }
>
>  // Helper method to possibly drop a dimension in a load.
> -// TODO(ajcbik): use a reshaping vector load (and share lowering code)
> +// TODO
>  static Value reshapeLoad(Location loc, Value val, VectorType type,
>                           int64_t index, int64_t pos,
>                           PatternRewriter &rewriter) {
> @@ -129,7 +129,7 @@ static Value reshapeLoad(Location loc, Value val,
> VectorType type,
>  }
>
>  // Helper method to possibly drop a dimension in a store.
> -// TODO(ajcbik): use a reshaping vector store (and share lowering code)
> +// TODO
>  static Value reshapeStore(Location loc, Value val, Value result,
>                            VectorType type, int64_t index, int64_t pos,
>                            PatternRewriter &rewriter) {
> @@ -182,7 +182,7 @@ static void getMappedElements(const DenseMap<int64_t,
> int64_t> &indexMap,
>
>  // Returns a tuple type with vector element types for each resulting slice
>  // of 'vectorType' unrolled by 'sizes' and 'strides'.
> -// TODO(andydavis) Move this to a utility function and share it with
> +// TODO: Move this to a utility function and share it with
>  // Extract/InsertSlicesOp verification.
>  static TupleType generateExtractSlicesOpResultType(VectorType vectorType,
>                                                     ArrayRef<int64_t>
> sizes,
> @@ -276,7 +276,7 @@ static Value getOrCreateUnrolledVectorSlice(
>    // Compute slice offsets.
>    SmallVector<int64_t, 4> sliceOffsets(state.unrolledShape.size());
>    getMappedElements(indexMap, offsets, sliceOffsets);
> -  // TODO(b/144845578) Support non-1 strides.
> +  // TODO: Support non-1 strides.
>    SmallVector<int64_t, 4> sliceStrides(state.unrolledShape.size(), 1);
>    // Compute linear index of 'sliceOffsets' w.r.t 'state.basis'.
>    int64_t sliceLinearIndex =
> @@ -347,7 +347,7 @@ struct VectorState {
>  //                           insertslice
>  //                                |
>
> -// TODO(andydavis) Add the following canonicalization/simplification
> patterns:
> +// TODO: Add the following canonicalization/simplification patterns:
>  // *) Add pattern which matches InsertStridedSlice -> StridedSlice and
> forwards
>  //    InsertStridedSlice operand to StridedSlice.
>  // *) Add pattern which matches SourceOp -> StridedSlice -> UserOp which
> checks
> @@ -357,7 +357,7 @@ struct VectorState {
>  //    operation, and leave the duplicate StridedSlice ops with no users
>  //    (removable with DCE).
>
> -// TODO(andydavis) Generalize this to support structured ops beyond
> +// TODO: Generalize this to support structured ops beyond
>  // vector ContractionOp, and merge it with 'unrollSingleResultVectorOp'
>  static Value unrollSingleResultStructuredOp(Operation *op,
>                                              ArrayRef<int64_t>
> iterationBounds,
> @@ -473,7 +473,7 @@ static void getVectorContractionOpUnrollState(
>      vectors.push_back({contractionOp.getRHSVectorMaskType(),
>                         vectors[1].indexMap, accOperandIndex + 2, false});
>    }
> -  // TODO(andydavis) Use linalg style 'args_in'/'args_out' to partition
> +  // TODO: Use linalg style 'args_in'/'args_out' to partition
>    // 'vectors' instead of 'resultIndex'.
>    resultIndex = accOperandIndex;
>  }
> @@ -618,7 +618,7 @@ struct SplitTransferReadOp : public
> OpRewritePattern<vector::TransferReadOp> {
>
>    LogicalResult matchAndRewrite(vector::TransferReadOp xferReadOp,
>                                  PatternRewriter &rewriter) const override {
> -    // TODO(andydavis, ntv) Support splitting TransferReadOp with non-identity
> +    // TODO: Support splitting TransferReadOp with non-identity
>      // permutation maps. Repurpose code from MaterializeVectors transformation.
>      if (!isIdentitySuffix(xferReadOp.permutation_map()))
>        return failure();
> @@ -677,7 +677,7 @@ struct SplitTransferWriteOp : public OpRewritePattern<vector::TransferWriteOp> {
>
>    LogicalResult matchAndRewrite(vector::TransferWriteOp xferWriteOp,
>                                  PatternRewriter &rewriter) const override {
> -    // TODO(andydavis, ntv) Support splitting TransferWriteOp with non-identity
> +    // TODO: Support splitting TransferWriteOp with non-identity
>      // permutation maps. Repurpose code from MaterializeVectors transformation.
>      if (!isIdentitySuffix(xferWriteOp.permutation_map()))
>        return failure();
> @@ -1553,7 +1553,7 @@ namespace mlir {
>  /// the vector.contract op is a row-major matrix multiply.
>  LogicalResult
>  ContractionOpToMatmulOpLowering::match(vector::ContractionOp op) const {
> -  // TODO(ajcbik): implement masks
> +  // TODO: implement masks
>    if (llvm::size(op.masks()) != 0)
>      return failure();
>
> @@ -1619,7 +1619,7 @@ void ContractionOpToMatmulOpLowering::rewrite(vector::ContractionOp op,
>  /// otherwise supports any layout permutation of the matrix-multiply.
>  LogicalResult
>  ContractionOpToOuterProductOpLowering ::match(vector::ContractionOp op) const {
> -  // TODO(ajcbik): implement masks
> +  // TODO: implement masks
>    if (llvm::size(op.masks()) != 0)
>      return failure();
>
> @@ -1728,11 +1728,11 @@ void ContractionOpToOuterProductOpLowering::rewrite(
>  ///
>  /// This only kicks in when VectorTransformsOptions is set to AXPY.
>  //
> -// TODO (ajcbik): this is very similar, but not quite the same as
> -//                the outerproduct lowering above; merge the two?
> +// TODO: this is very similar, but not quite the same as the outerproduct
> +// lowering above; merge the two?
>  LogicalResult
>  ContractionOpToAXPYLowering::match(vector::ContractionOp op) const {
> -  // TODO(ajcbik): implement masks
> +  // TODO: implement masks
>    if (llvm::size(op.masks()) != 0)
>      return failure();
>
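
For readers skimming the patch: AXPY above refers to the BLAS-style update
y += a * x. As an illustrative sketch only (not code from this commit, and
with hypothetical names), the matrix-vector case acc(m) += sum_k A(m, k) * b(k)
can be decomposed into one scaled vector addition per reduction step, which is
roughly the shape such a lowering emits:

    #include <array>

    // Scalar model of the AXPY decomposition for row-major A (M x K).
    template <int M, int K>
    std::array<float, M> matvecAxpy(const std::array<float, M * K> &A,
                                    const std::array<float, K> &b) {
      std::array<float, M> acc{};
      for (int k = 0; k < K; ++k)        // one reduction step per k
        for (int m = 0; m < M; ++m)      // the axpy: acc += A(:, k) * b(k)
          acc[m] += A[m * K + k] * b[k];
      return acc;
    }
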
> @@ -1818,23 +1818,23 @@ void ContractionOpToAXPYLowering::rewrite(vector::ContractionOp op,
>  /// This only kicks in when either VectorTransformsOptions is set
>  /// to DOT or when other contraction patterns fail.
>  //
> -// TODO(ajcbik): break down into transpose/reshape/cast ops
> +// TODO: break down into transpose/reshape/cast ops
>  //               when they become available to avoid code dup
> -// TODO(ajcbik): investigate lowering order impact on performance
> +// TODO: investigate lowering order impact on performance
>  LogicalResult
>  ContractionOpLowering::matchAndRewrite(vector::ContractionOp op,
>                                         PatternRewriter &rewriter) const {
>
> -  // TODO(ajcbik): implement masks.
> +  // TODO: implement masks.
>    if (llvm::size(op.masks()) != 0)
>      return failure();
> -  // TODO(thomasraoux): support mixed mode contract lowering.
> +  // TODO: support mixed mode contract lowering.
>    if (op.getLhsType().getElementType() !=
>            getElementTypeOrSelf(op.getAccType()) ||
>        op.getRhsType().getElementType() != getElementTypeOrSelf(op.getAccType()))
>      return failure();
>
> -  // TODO(ntv, ajcbik): implement benefits, cost models.
> +  // TODO: implement benefits, cost models.
>    MLIRContext *ctx = op.getContext();
>    ContractionOpToMatmulOpLowering pat1(vectorTransformsOptions, ctx);
>    if (succeeded(pat1.match(op)))
> @@ -1895,7 +1895,7 @@ ContractionOpLowering::matchAndRewrite(vector::ContractionOp op,
>  }
>
>  // Lower one parallel dimension.
> -// TODO(ajcbik): consider reusing existing contract unrolling
> +// TODO: consider reusing existing contract unrolling
>  Value ContractionOpLowering::lowerParallel(vector::ContractionOp op,
>                                             int64_t lhsIndex, int64_t rhsIndex,
>                                             PatternRewriter &rewriter) const {
> @@ -1998,8 +1998,8 @@ Value ContractionOpLowering::lowerReduction(vector::ContractionOp op,
>
>  } // namespace mlir
>
> -// TODO(andydavis) Add pattern to rewrite ExtractSlices(ConstantMaskOp).
> -// TODO(andydavis) Add this as DRR pattern.
> +// TODO: Add pattern to rewrite ExtractSlices(ConstantMaskOp).
> +// TODO: Add this as DRR pattern.
>  void mlir::vector::populateVectorToVectorTransformationPatterns(
>      OwningRewritePatternList &patterns, MLIRContext *context) {
>    // clang-format off
>
> diff --git a/mlir/lib/Dialect/Vector/VectorUtils.cpp b/mlir/lib/Dialect/Vector/VectorUtils.cpp
> index ccd243e8a7de..d5beaefc5eac 100644
> --- a/mlir/lib/Dialect/Vector/VectorUtils.cpp
> +++ b/mlir/lib/Dialect/Vector/VectorUtils.cpp
> @@ -208,7 +208,7 @@ static AffineMap makePermutationMap(
>
>  /// Implementation detail that walks up the parents and records the ones with
>  /// the specified type.
> -/// TODO(ntv): could also be implemented as a collect parents followed by a
> +/// TODO: could also be implemented as a collect parents followed by a
>  /// filter and made available outside this file.
>  template <typename T>
>  static SetVector<Operation *> getParentsOfType(Operation *op) {
> @@ -252,7 +252,7 @@ bool matcher::operatesOnSuperVectorsOf(Operation &op,
>    // The ops that *may* lower a super-vector only do so if the super-vector to
>    // sub-vector ratio exists. The ops that *must* lower a super-vector are
>    // explicitly checked for this property.
> -  /// TODO(ntv): there should be a single function for all ops to do this so we
> +  /// TODO: there should be a single function for all ops to do this so we
>    /// do not have to special case. Maybe a trait, or just a method, unclear atm.
>    bool mustDivide = false;
>    (void)mustDivide;
>
> diff --git a/mlir/lib/ExecutionEngine/ExecutionEngine.cpp b/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
> index c64c7d208dec..130ca3c02ad4 100644
> --- a/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
> +++ b/mlir/lib/ExecutionEngine/ExecutionEngine.cpp
> @@ -234,7 +234,7 @@ Expected<std::unique_ptr<ExecutionEngine>> ExecutionEngine::create(
>
>    // Clone module in a new LLVMContext since translateModuleToLLVMIR buries
>    // ownership too deeply.
> -  // TODO(zinenko): Reevaluate model of ownership of LLVMContext in LLVMDialect.
> +  // TODO: Reevaluate model of ownership of LLVMContext in LLVMDialect.
>    std::unique_ptr<Module> deserModule =
>        LLVM::cloneModuleIntoNewContext(ctx.get(), llvmModule.get());
>    auto dataLayout = deserModule->getDataLayout();
>
> diff --git a/mlir/lib/IR/AffineExpr.cpp b/mlir/lib/IR/AffineExpr.cpp
> index 7d9145aaccce..e0c4b6b208f7 100644
> --- a/mlir/lib/IR/AffineExpr.cpp
> +++ b/mlir/lib/IR/AffineExpr.cpp
> @@ -703,7 +703,7 @@ void SimpleAffineExprFlattener::visitModExpr(AffineBinaryOpExpr expr) {
>    auto rhsConst = operandExprStack.back()[getConstantIndex()];
>    operandExprStack.pop_back();
>    auto &lhs = operandExprStack.back();
> -  // TODO(bondhugula): handle modulo by zero case when this issue is fixed
> +  // TODO: handle modulo by zero case when this issue is fixed
>    // at the other places in the IR.
>    assert(rhsConst > 0 && "RHS constant has to be positive");
>
> @@ -791,7 +791,7 @@ void SimpleAffineExprFlattener::visitDivExpr(AffineBinaryOpExpr expr,
>
>    // This is a pure affine expr; the RHS is a positive constant.
>    int64_t rhsConst = operandExprStack.back()[getConstantIndex()];
> -  // TODO(bondhugula): handle division by zero at the same time the issue is
> +  // TODO: handle division by zero at the same time the issue is
>    // fixed at other places.
>    assert(rhsConst > 0 && "RHS constant has to be positive");
>    operandExprStack.pop_back();
> @@ -870,7 +870,7 @@ int SimpleAffineExprFlattener::findLocalId(AffineExpr localExpr) {
>  /// Simplify the affine expression by flattening it and reconstructing it.
>  AffineExpr mlir::simplifyAffineExpr(AffineExpr expr, unsigned numDims,
>                                      unsigned numSymbols) {
> -  // TODO(bondhugula): only pure affine for now. The simplification here can
> +  // TODO: only pure affine for now. The simplification here can
>    // be extended to semi-affine maps in the future.
>    if (!expr.isPureAffine())
>      return expr;
>
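
As context for the pure-affine restriction above, a minimal usage sketch of
simplifyAffineExpr (assuming a live MLIRContext; illustration only, not part
of this patch):

    #include "mlir/IR/AffineExpr.h"
    #include "mlir/IR/MLIRContext.h"

    void simplifyExample(mlir::MLIRContext *ctx) {
      mlir::AffineExpr d0 = mlir::getAffineDimExpr(0, ctx);
      mlir::AffineExpr s0 = mlir::getAffineSymbolExpr(0, ctx);
      // Pure affine: flattened and rebuilt, so d0 + d0 * 2 folds to 3 * d0.
      mlir::AffineExpr folded =
          mlir::simplifyAffineExpr(d0 + d0 * 2, /*numDims=*/1, /*numSymbols=*/0);
      // Semi-affine (a dim times a symbol): returned unchanged, which is the
      // early exit the TODO above would like to remove.
      mlir::AffineExpr same =
          mlir::simplifyAffineExpr(d0 * s0, /*numDims=*/1, /*numSymbols=*/1);
      (void)folded;
      (void)same;
    }
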
> diff --git a/mlir/lib/IR/AffineMap.cpp b/mlir/lib/IR/AffineMap.cpp
> index 07a01f0bf75f..050cb831f7a1 100644
> --- a/mlir/lib/IR/AffineMap.cpp
> +++ b/mlir/lib/IR/AffineMap.cpp
> @@ -444,7 +444,7 @@ bool MutableAffineMap::isMultipleOf(unsigned idx, int64_t factor) const {
>    if (results[idx].isMultipleOf(factor))
>      return true;
>
> -  // TODO(bondhugula): use simplifyAffineExpr and FlatAffineConstraints to
> +  // TODO: use simplifyAffineExpr and FlatAffineConstraints to
>    // complete this (for a more powerful analysis).
>    return false;
>  }
> @@ -453,7 +453,7 @@ bool MutableAffineMap::isMultipleOf(unsigned idx, int64_t factor) const {
>  // be pure for the simplification implemented.
>  void MutableAffineMap::simplify() {
>    // Simplify each of the results if possible.
> -  // TODO(ntv): functional-style map
> +  // TODO: functional-style map
>    for (unsigned i = 0, e = getNumResults(); i < e; i++) {
>      results[i] = simplifyAffineExpr(getResult(i), numDims, numSymbols);
>    }
>
> diff --git a/mlir/lib/IR/AsmPrinter.cpp b/mlir/lib/IR/AsmPrinter.cpp
> index 0636ab59ea50..881f77f6004a 100644
> --- a/mlir/lib/IR/AsmPrinter.cpp
> +++ b/mlir/lib/IR/AsmPrinter.cpp
> @@ -284,7 +284,7 @@ class AliasState {
>  // Utility to generate a function to register a symbol alias.
>  static bool canRegisterAlias(StringRef name, llvm::StringSet<> &usedAliases) {
>    assert(!name.empty() && "expected alias name to be non-empty");
> -  // TODO(riverriddle) Assert that the provided alias name can be lexed as
> +  // TODO: Assert that the provided alias name can be lexed as
>    // an identifier.
>
>    // Check that the alias doesn't contain a '.' character and the name is not
> @@ -431,7 +431,7 @@ void AliasState::recordAttributeReference(Attribute attr) {
>  /// Record a reference to the given type.
>  void AliasState::recordTypeReference(Type ty) { usedTypes.insert(ty); }
>
> -// TODO Support visiting other types/operations when implemented.
> +// TODO: Support visiting other types/operations when implemented.
>  void AliasState::visitType(Type type) {
>    recordTypeReference(type);
>
> @@ -2442,7 +2442,7 @@ void Value::dump() {
>  }
>
>  void Value::printAsOperand(raw_ostream &os, AsmState &state) {
> -  // TODO(riverriddle) This doesn't necessarily capture all potential cases.
> +  // TODO: This doesn't necessarily capture all potential cases.
>    // Currently, region arguments can be shadowed when printing the main
>    // operation. If the IR hasn't been printed, this will produce the old SSA
>    // name and not the shadowed name.
>
> diff --git a/mlir/lib/IR/AttributeDetail.h b/mlir/lib/IR/AttributeDetail.h
> index e6c9ae5ed59c..ad0b302bc78d 100644
> --- a/mlir/lib/IR/AttributeDetail.h
> +++ b/mlir/lib/IR/AttributeDetail.h
> @@ -658,7 +658,7 @@ struct OpaqueElementsAttributeStorage : public AttributeStorage {
>    /// Construct a new storage instance.
>    static OpaqueElementsAttributeStorage *
>    construct(AttributeStorageAllocator &allocator, KeyTy key) {
> -    // TODO(b/131468830): Provide a way to avoid copying content of large opaque
> +    // TODO: Provide a way to avoid copying content of large opaque
>      // tensors This will likely require a new reference attribute kind.
>      return new (allocator.allocate<OpaqueElementsAttributeStorage>())
>          OpaqueElementsAttributeStorage(std::get<0>(key), std::get<1>(key),
>
> diff --git a/mlir/lib/IR/Operation.cpp b/mlir/lib/IR/Operation.cpp
> index 23fb48b4993b..1e2a47639fdb 100644
> --- a/mlir/lib/IR/Operation.cpp
> +++ b/mlir/lib/IR/Operation.cpp
> @@ -280,7 +280,7 @@ InFlightDiagnostic Operation::emitError(const Twine &message) {
>    if (getContext()->shouldPrintOpOnDiagnostic()) {
>      // Print out the operation explicitly here so that we can print the generic
>      // form.
> -    // TODO(riverriddle) It would be nice if we could instead provide the
> +    // TODO: It would be nice if we could instead provide the
>      // specific printing flags when adding the operation as an argument to the
>      // diagnostic.
>      std::string printedOp;
>
> diff --git a/mlir/lib/IR/StandardTypes.cpp b/mlir/lib/IR/StandardTypes.cpp
> index c76ff30d6c79..5a9d22148b76 100644
> --- a/mlir/lib/IR/StandardTypes.cpp
> +++ b/mlir/lib/IR/StandardTypes.cpp
> @@ -550,7 +550,7 @@ LogicalResult mlir::getStridesAndOffset(MemRefType t,
>    // For now strides are only computed on a single affine map with a single
>    // result (i.e. the closed subset of linearization maps that are compatible
>    // with striding semantics).
> -  // TODO(ntv): support more forms on a per-need basis.
> +  // TODO: support more forms on a per-need basis.
>    if (affineMaps.size() > 1)
>      return failure();
>    if (affineMaps.size() == 1 && affineMaps[0].getNumResults() != 1)
> @@ -597,8 +597,8 @@ LogicalResult mlir::getStridesAndOffset(MemRefType t,
>
>    /// In practice, a strided memref must be internally non-aliasing. Test
>    /// against 0 as a proxy.
> -  /// TODO(ntv) static cases can have more advanced checks.
> -  /// TODO(ntv) dynamic cases would require a way to compare symbolic
> +  /// TODO: static cases can have more advanced checks.
> +  /// TODO: dynamic cases would require a way to compare symbolic
>    /// expressions and would probably need an affine set context propagated
>    /// everywhere.
>    if (llvm::any_of(strides, [](AffineExpr e) {
>
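
A small usage sketch of getStridesAndOffset for orientation (assumes an
existing MemRefType `t`; illustration only): for the canonical row-major type
memref<4x8xf32> it is expected to produce strides = [8, 1] and offset = 0,
while symbolic cases come back as dynamic sentinel values.

    llvm::SmallVector<int64_t, 4> strides;
    int64_t offset;
    if (failed(mlir::getStridesAndOffset(t, strides, offset))) {
      // The layout map falls outside the strided subset described above.
    }
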
> diff --git a/mlir/lib/IR/Visitors.cpp b/mlir/lib/IR/Visitors.cpp
> index 40ed0d934838..bbccdcbf7592 100644
> --- a/mlir/lib/IR/Visitors.cpp
> +++ b/mlir/lib/IR/Visitors.cpp
> @@ -14,7 +14,7 @@ using namespace mlir;
>  /// Walk all of the operations nested under and including the given operations.
>  void detail::walkOperations(Operation *op,
>                              function_ref<void(Operation *op)> callback) {
> -  // TODO(b/140235992) This walk should be iterative over the operations.
> +  // TODO: This walk should be iterative over the operations.
>    for (auto &region : op->getRegions())
>      for (auto &block : region)
>        // Early increment here in the case where the operation is erased.
> @@ -29,7 +29,7 @@ void detail::walkOperations(Operation *op,
>  WalkResult
>  detail::walkOperations(Operation *op,
>                         function_ref<WalkResult(Operation *op)> callback) {
> -  // TODO(b/140235992) This walk should be iterative over the operations.
> +  // TODO: This walk should be iterative over the operations.
>    for (auto &region : op->getRegions()) {
>      for (auto &block : region) {
>        // Early increment here in the case where the operation is erased.
>
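
One possible shape of the iterative walk both TODOs above ask for (a sketch
only: it visits pre-order rather than post-order, and it ignores the
erase-safety concern that the early increment in the original handles):

    static void walkIteratively(mlir::Operation *root,
                                llvm::function_ref<void(mlir::Operation *)> fn) {
      llvm::SmallVector<mlir::Operation *, 16> worklist = {root};
      while (!worklist.empty()) {
        mlir::Operation *op = worklist.pop_back_val();
        fn(op);
        // An explicit stack replaces the recursion over nested regions.
        for (mlir::Region &region : op->getRegions())
          for (mlir::Block &block : region)
            for (mlir::Operation &nested : block)
              worklist.push_back(&nested);
      }
    }
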
> diff --git a/mlir/lib/Parser/AttributeParser.cpp b/mlir/lib/Parser/AttributeParser.cpp
> index 609d7ad3f8d2..65613a149ae9 100644
> --- a/mlir/lib/Parser/AttributeParser.cpp
> +++ b/mlir/lib/Parser/AttributeParser.cpp
> @@ -784,7 +784,7 @@ Attribute Parser::parseOpaqueElementsAttr(Type attrType) {
>
>    auto name = getToken().getStringValue();
>    auto *dialect = builder.getContext()->getRegisteredDialect(name);
> -  // TODO(shpeisman): Allow for having an unknown dialect on an opaque
> +  // TODO: Allow for having an unknown dialect on an opaque
>    // attribute. Otherwise, it can't be roundtripped without having the dialect
>    // registered.
>    if (!dialect)
>
> diff --git a/mlir/lib/Parser/DialectSymbolParser.cpp b/mlir/lib/Parser/DialectSymbolParser.cpp
> index 9d14d6f4fa4f..1a7e2c5448c1 100644
> --- a/mlir/lib/Parser/DialectSymbolParser.cpp
> +++ b/mlir/lib/Parser/DialectSymbolParser.cpp
> @@ -76,7 +76,7 @@ class CustomDialectAsmParser : public DialectAsmParser {
>        return success();
>      }
>
> -    // TODO(riverriddle) support hex floating point values.
> +    // TODO: support hex floating point values.
>      return emitError(getCurrentLocation(), "expected floating point literal");
>    }
>
>
> diff --git a/mlir/lib/Parser/Parser.cpp b/mlir/lib/Parser/Parser.cpp
> index 465709d925e0..0e4589a20918 100644
> --- a/mlir/lib/Parser/Parser.cpp
> +++ b/mlir/lib/Parser/Parser.cpp
> @@ -1886,7 +1886,7 @@ OwningModuleRef mlir::parseSourceFile(StringRef filename,
>                                        llvm::SourceMgr &sourceMgr,
>                                        MLIRContext *context) {
>    if (sourceMgr.getNumBuffers() != 0) {
> -    // TODO(b/136086478): Extend to support multiple buffers.
> +    // TODO: Extend to support multiple buffers.
>      emitError(mlir::UnknownLoc::get(context),
>                "only main buffer parsed at the moment");
>      return nullptr;
>
> diff --git a/mlir/lib/Pass/PassRegistry.cpp b/mlir/lib/Pass/PassRegistry.cpp
> index fc1beae3a2d8..442233024bbe 100644
> --- a/mlir/lib/Pass/PassRegistry.cpp
> +++ b/mlir/lib/Pass/PassRegistry.cpp
> @@ -128,7 +128,7 @@ void detail::PassOptions::copyOptionValuesFrom(const PassOptions &other) {
>  }
>
>  LogicalResult detail::PassOptions::parseFromString(StringRef options) {
> -  // TODO(parkers): Handle escaping strings.
> +  // TODO: Handle escaping strings.
>    // NOTE: `options` is modified in place to always refer to the unprocessed
>    // part of the string.
>    while (!options.empty()) {
> @@ -199,7 +199,7 @@ void detail::PassOptions::printHelp(size_t indent, size_t descIndent) const {
>    };
>    llvm::array_pod_sort(orderedOps.begin(), orderedOps.end(), compareOptionArgs);
>    for (OptionBase *option : orderedOps) {
> -    // TODO(riverriddle) printOptionInfo assumes a specific indent and will
> +    // TODO: printOptionInfo assumes a specific indent and will
>      // print options with values with incorrect indentation. We should add
>      // support to llvm::cl::Option for passing in a base indent to use when
>      // printing.
> @@ -328,7 +328,7 @@ LogicalResult TextualPipeline::parsePipelineText(StringRef text,
>        // Skip over everything until the closing '}' and store as options.
>        size_t close = text.find('}');
>
> -      // TODO(parkers): Handle skipping over quoted sub-strings.
> +      // TODO: Handle skipping over quoted sub-strings.
>        if (close == StringRef::npos) {
>          return errorHandler(
>              /*rawLoc=*/text.data() - 1,
>
> diff --git a/mlir/lib/TableGen/OpClass.cpp b/mlir/lib/TableGen/OpClass.cpp
> index 2b9da7355929..09cb6cb9e007 100644
> --- a/mlir/lib/TableGen/OpClass.cpp
> +++ b/mlir/lib/TableGen/OpClass.cpp
> @@ -30,7 +30,7 @@ void tblgen::OpMethodSignature::writeDeclTo(raw_ostream &os) const {
>  void tblgen::OpMethodSignature::writeDefTo(raw_ostream &os,
>                                             StringRef namePrefix) const {
>    // We need to remove the default values for parameters in method definition.
> -  // TODO(antiagainst): We are using '=' and ',' as delimiters for parameter
> +  // TODO: We are using '=' and ',' as delimiters for parameter
>    // initializers. This is incorrect for initializer list with more than one
>    // element. Change to a more robust approach.
>    auto removeParamDefaultValue = [](StringRef params) {
>
> diff --git a/mlir/lib/TableGen/Predicate.cpp b/mlir/lib/TableGen/Predicate.cpp
> index 2ebec1b3b6ee..bf617318e694 100644
> --- a/mlir/lib/TableGen/Predicate.cpp
> +++ b/mlir/lib/TableGen/Predicate.cpp
> @@ -213,7 +213,7 @@ static PredNode *propagateGroundTruth(
>    // still point to the original predicate records.  While the original
>    // predicate may be known to be true or false, it is not necessarily the case
>    // after rewriting.
> -  // TODO(zinenko,jpienaar): we can support ground truth for rewritten
> +  // TODO: we can support ground truth for rewritten
>    // predicates by either (a) having our own unique'ing of the predicates
>    // instead of relying on TableGen record pointers or (b) taking ground truth
>    // values optionally prefixed with a list of substitutions to apply, e.g.
>
> diff --git a/mlir/lib/Target/LLVMIR/DebugTranslation.cpp b/mlir/lib/Target/LLVMIR/DebugTranslation.cpp
> index 74dd0d15f441..f40f44f2fbc6 100644
> --- a/mlir/lib/Target/LLVMIR/DebugTranslation.cpp
> +++ b/mlir/lib/Target/LLVMIR/DebugTranslation.cpp
> @@ -32,7 +32,7 @@ DebugTranslation::DebugTranslation(Operation *module, llvm::Module &llvmModule)
>    if (!module->walk(interruptIfValidLocation).wasInterrupted())
>      return;
>
> -  // TODO(riverriddle) Several parts of this are incorrect. Different source
> +  // TODO: Several parts of this are incorrect. Different source
>    // languages may interpret different parts of the debug information
>    // differently. Frontends will also want to pipe in various information, like
>    // flags. This is fine for now as we only emit line-table information and not
> @@ -75,7 +75,7 @@ void DebugTranslation::translate(LLVMFuncOp func, llvm::Function &llvmFunc) {
>    auto *file = translateFile(fileLoc ? fileLoc.getFilename() : "<unknown>");
>    unsigned line = fileLoc ? fileLoc.getLine() : 0;
>
> -  // TODO(riverriddle) This is the bare essentials for now. We will likely end
> +  // TODO: This is the bare essentials for now. We will likely end
>    // up with wrapper metadata around LLVMs metadata in the future, so this
>    // doesn't need to be smart until then.
>    llvm::DISubroutineType *type =
>
> diff --git a/mlir/lib/Target/LLVMIR/DebugTranslation.h b/mlir/lib/Target/LLVMIR/DebugTranslation.h
> index ae04f2b1d488..062b8820c5a8 100644
> --- a/mlir/lib/Target/LLVMIR/DebugTranslation.h
> +++ b/mlir/lib/Target/LLVMIR/DebugTranslation.h
> @@ -54,7 +54,7 @@ class DebugTranslation {
>        locationToLoc;
>
>    /// A mapping between filename and llvm debug file.
> -  /// TODO(riverriddle) Change this to DenseMap<Identifier, ...> when we can
> +  /// TODO: Change this to DenseMap<Identifier, ...> when we can
>    /// access the Identifier filename in FileLineColLoc.
>    llvm::StringMap<llvm::DIFile *> fileMap;
>
>
> diff --git a/mlir/lib/Target/LLVMIR/ModuleTranslation.cpp b/mlir/lib/Target/LLVMIR/ModuleTranslation.cpp
> index 075ce9f6089f..08150745e80b 100644
> --- a/mlir/lib/Target/LLVMIR/ModuleTranslation.cpp
> +++ b/mlir/lib/Target/LLVMIR/ModuleTranslation.cpp
> @@ -601,7 +601,7 @@ void ModuleTranslation::connectPHINodes(LLVMFuncOp func) {
>    }
>  }
>
> -// TODO(mlir-team): implement an iterative version
> +// TODO: implement an iterative version
>  static void topologicalSortImpl(llvm::SetVector<Block *> &blocks, Block *b) {
>    blocks.insert(b);
>    for (Block *bb : b->getSuccessors()) {
>
> diff --git a/mlir/lib/Transforms/CSE.cpp b/mlir/lib/Transforms/CSE.cpp
> index f7cf7f1c8e95..eb31fd207d79 100644
> --- a/mlir/lib/Transforms/CSE.cpp
> +++ b/mlir/lib/Transforms/CSE.cpp
> @@ -106,7 +106,7 @@ LogicalResult CSE::simplifyOperation(ScopedMapTy &knownValues, Operation *op) {
>    if (op->getNumRegions() != 0)
>      return failure();
>
> -  // TODO(riverriddle) We currently only eliminate non side-effecting
> +  // TODO: We currently only eliminate non side-effecting
>    // operations.
>    if (!MemoryEffectOpInterface::hasNoEffect(op))
>      return failure();
>
> diff --git a/mlir/lib/Transforms/DialectConversion.cpp b/mlir/lib/Transforms/DialectConversion.cpp
> index 60c9e78b7a69..9401121eed96 100644
> --- a/mlir/lib/Transforms/DialectConversion.cpp
> +++ b/mlir/lib/Transforms/DialectConversion.cpp
> @@ -1104,7 +1104,7 @@ Block *ConversionPatternRewriter::splitBlock(Block *block,
>  /// PatternRewriter hook for merging a block into another.
>  void ConversionPatternRewriter::mergeBlocks(Block *source, Block *dest,
>                                              ValueRange argValues) {
> -  // TODO(riverriddle) This requires fixing the implementation of
> +  // TODO: This requires fixing the implementation of
>    // 'replaceUsesOfBlockArgument', which currently isn't undoable.
>    llvm_unreachable("block merging updates are currently not supported");
>  }
> @@ -1381,7 +1381,7 @@ OperationLegalizer::legalize(Operation *op,
>    }
>
>    // If the operation isn't legal, try to fold it in-place.
> -  // TODO(riverriddle) Should we always try to do this, even if the op is
> +  // TODO: Should we always try to do this, even if the op is
>    // already legal?
>    if (succeeded(legalizeWithFold(op, rewriter))) {
>      LLVM_DEBUG({
>
> diff --git a/mlir/lib/Transforms/Inliner.cpp b/mlir/lib/Transforms/Inliner.cpp
> index e17a379d54b8..2ddb10a3a088 100644
> --- a/mlir/lib/Transforms/Inliner.cpp
> +++ b/mlir/lib/Transforms/Inliner.cpp
> @@ -321,7 +321,7 @@ static void collectCallOps(iterator_range<Region::iterator> blocks,
>
>      for (Operation &op : *block) {
>        if (auto call = dyn_cast<CallOpInterface>(op)) {
> -        // TODO(riverriddle) Support inlining nested call references.
> +        // TODO: Support inlining nested call references.
>          CallInterfaceCallable callable = call.getCallableForCallee();
>          if (SymbolRefAttr symRef = callable.dyn_cast<SymbolRefAttr>()) {
>            if (!symRef.isa<FlatSymbolRefAttr>())
>
> diff --git a/mlir/lib/Transforms/LoopFusion.cpp b/mlir/lib/Transforms/LoopFusion.cpp
> index f71ff2aba9e9..c2f30fa3d189 100644
> --- a/mlir/lib/Transforms/LoopFusion.cpp
> +++ b/mlir/lib/Transforms/LoopFusion.cpp
> @@ -42,9 +42,9 @@ namespace {
>  /// which fuses loop nests with single-writer/single-reader memref dependences
>  /// with the goal of improving locality.
>
> -// TODO(andydavis) Support fusion of source loop nests which write to multiple
> +// TODO: Support fusion of source loop nests which write to multiple
>  // memrefs, where each memref can have multiple users (if profitable).
> -// TODO(andydavis) Extend this pass to check for fusion preventing dependences,
> +// TODO: Extend this pass to check for fusion preventing dependences,
>  // and add support for more general loop fusion algorithms.
>
>  struct LoopFusion : public AffineLoopFusionBase<LoopFusion> {
> @@ -68,7 +68,7 @@ mlir::createLoopFusionPass(unsigned fastMemorySpace,
>                                        maximalFusion);
>  }
>
> -// TODO(b/117228571) Replace when this is modeled through side-effects/op traits
> +// TODO: Replace when this is modeled through side-effects/op traits
>  static bool isMemRefDereferencingOp(Operation &op) {
>    return isa<AffineReadOpInterface, AffineWriteOpInterface, AffineDmaStartOp,
>               AffineDmaWaitOp>(op);
> @@ -101,8 +101,8 @@ struct LoopNestStateCollector {
>  // MemRefDependenceGraph is a graph data structure where graph nodes are
>  // top-level operations in a FuncOp which contain load/store ops, and edges
>  // are memref dependences between the nodes.
> -// TODO(andydavis) Add a more flexible dependence graph representation.
> -// TODO(andydavis) Add a depth parameter to dependence graph construction.
> +// TODO: Add a more flexible dependence graph representation.
> +// TODO: Add a depth parameter to dependence graph construction.
>  struct MemRefDependenceGraph {
>  public:
>    // Node represents a node in the graph. A Node is either an entire loop nest
> @@ -628,7 +628,7 @@ struct MemRefDependenceGraph {
>
>  // Initializes the data dependence graph by walking operations in 'f'.
>  // Assigns each node in the graph a node id based on program order in 'f'.
> -// TODO(andydavis) Add support for taking a Block arg to construct the
> +// TODO: Add support for taking a Block arg to construct the
>  // dependence graph at a different depth.
>  bool MemRefDependenceGraph::init(FuncOp f) {
>    DenseMap<Value, SetVector<unsigned>> memrefAccesses;
> @@ -796,7 +796,7 @@ static unsigned getMaxLoopDepth(ArrayRef<Operation *> loadOpInsts,
>            getNumCommonSurroundingLoops(*srcOpInst, *dstOpInst);
>        for (unsigned d = 1; d <= numCommonLoops + 1; ++d) {
>          FlatAffineConstraints dependenceConstraints;
> -        // TODO(andydavis) Cache dependence analysis results, check cache here.
> +        // TODO: Cache dependence analysis results, check cache here.
>          DependenceResult result = checkMemrefAccessDependence(
>              srcAccess, dstAccess, d, &dependenceConstraints,
>              /*dependenceComponents=*/nullptr);
> @@ -823,7 +823,7 @@ static void sinkSequentialLoops(MemRefDependenceGraph::Node *node) {
>    node->op = newRootForOp.getOperation();
>  }
>
> -//  TODO(mlir-team): improve/complete this when we have target data.
> +//  TODO: improve/complete this when we have target data.
>  static unsigned getMemRefEltSizeInBytes(MemRefType memRefType) {
>    auto elementType = memRefType.getElementType();
>
> @@ -841,7 +841,7 @@ static unsigned getMemRefEltSizeInBytes(MemRefType memRefType) {
>  // Creates and returns a private (single-user) memref for fused loop rooted
>  // at 'forOp', with (potentially reduced) memref size based on the
>  // MemRefRegion written to by 'srcStoreOpInst' at depth 'dstLoopDepth'.
> -// TODO(bondhugula): consider refactoring the common code from generateDma and
> +// TODO: consider refactoring the common code from generateDma and
>  // this one.
>  static Value createPrivateMemRef(AffineForOp forOp, Operation *srcStoreOpInst,
>                                   unsigned dstLoopDepth,
> @@ -912,7 +912,7 @@ static Value createPrivateMemRef(AffineForOp forOp, Operation *srcStoreOpInst,
>
>    // Create new private memref for fused loop 'forOp'. 'newShape' is always
>    // a constant shape.
> -  // TODO(andydavis) Create/move alloc ops for private memrefs closer to their
> +  // TODO: Create/move alloc ops for private memrefs closer to their
>    // consumer loop nests to reduce their live range. Currently they are added
>    // at the beginning of the function, because loop nests can be reordered
>    // during the fusion pass.
> @@ -1012,7 +1012,7 @@ static bool hasNonAffineUsersOnThePath(unsigned srcId, unsigned dstId,
>  // 'srcLiveOutStoreOp', has output edges.
>  // Returns true if 'dstNode's read/write region to 'memref' is a super set of
>  // 'srcNode's write region to 'memref' and 'srcId' has only one output edge.
> -// TODO(andydavis) Generalize this to handle more live in/out cases.
> +// TODO: Generalize this to handle more live in/out cases.
>  static bool
>  canFuseSrcWhichWritesToLiveOut(unsigned srcId, unsigned dstId,
>                                 AffineWriteOpInterface srcLiveOutStoreOp,
> @@ -1040,7 +1040,7 @@ canFuseSrcWhichWritesToLiveOut(unsigned srcId, unsigned dstId,
>      return false;
>
>    // Compute MemRefRegion 'dstRegion' for 'dstStore/LoadOpInst' on 'memref'.
> -  // TODO(andydavis) Compute 'unionboundingbox' of all write regions (one for
> +  // TODO: Compute 'unionboundingbox' of all write regions (one for
>    // each store op in 'dstStoreOps').
>    SmallVector<Operation *, 2> dstStoreOps;
>    dstNode->getStoreOpsForMemref(memref, &dstStoreOps);
> @@ -1064,7 +1064,7 @@ canFuseSrcWhichWritesToLiveOut(unsigned srcId, unsigned dstId,
>
>    // Return false if write region is not a superset of 'srcNodes' write
>    // region to 'memref'.
> -  // TODO(andydavis) Check the shape and lower bounds here too.
> +  // TODO: Check the shape and lower bounds here too.
>    if (srcNumElements != dstNumElements)
>      return false;
>
> @@ -1244,7 +1244,7 @@ static bool isFusionProfitable(Operation *srcOpInst, Operation *srcStoreOpInst,
>          maybeSliceWriteRegionSizeBytes.getValue();
>
>      // If we are fusing for reuse, check that write regions remain the same.
> -    // TODO(andydavis) Write region check should check sizes and offsets in
> +    // TODO: Write region check should check sizes and offsets in
>      // each dimension, so that we are sure they are covering the same memref
>      // region. Also, move this out to a isMemRefRegionSuperSet helper function.
>      if (srcOpInst != srcStoreOpInst &&
> @@ -1268,7 +1268,7 @@ static bool isFusionProfitable(Operation *srcOpInst, Operation *srcStoreOpInst,
>        llvm::dbgs() << msg.str();
>      });
>
> -    // TODO(b/123247369): This is a placeholder cost model.
> +    // TODO: This is a placeholder cost model.
>      // Among all choices that add an acceptable amount of redundant computation
>      // (as per computeToleranceThreshold), we will simply pick the one that
>      // reduces the intermediary size the most.
> @@ -1424,9 +1424,10 @@ namespace {
>  // takes O(V) time for initialization, and has runtime O(V + E).
>  //
>  // This greedy algorithm is not 'maximal' due to the current restriction of
> -// fusing along single producer consumer edges, but there is a TODO to fix this.
> +// fusing along single producer consumer edges, but there is a TODO: to fix
> +// this.
>  //
> -// TODO(andydavis) Experiment with other fusion policies.
> +// TODO: Experiment with other fusion policies.
>  struct GreedyFusion {
>  public:
>    // The data dependence graph to traverse during fusion.
> @@ -1457,7 +1458,7 @@ struct GreedyFusion {
>
>    // Initializes 'worklist' with nodes from 'mdg'
>    void init() {
> -    // TODO(andydavis) Add a priority queue for prioritizing nodes by different
> +    // TODO: Add a priority queue for prioritizing nodes by different
>      // metrics (e.g. arithmetic intensity/flops-to-bytes ratio).
>      worklist.clear();
>      worklistSet.clear();
> @@ -1474,7 +1475,7 @@ struct GreedyFusion {
>    // *) Second pass fuses sibling nodes which share no dependence edges.
>    // *) Third pass fuses any remaining producer nodes into their users.
>    void run() {
> -    // TODO(andydavis) Run this repeatedly until a fixed-point is reached.
> +    // TODO: Run this repeatedly until a fixed-point is reached.
>      fuseProducerConsumerNodes(/*maxSrcUserCount=*/1);
>      fuseSiblingNodes();
>      fuseProducerConsumerNodes(
> @@ -1537,7 +1538,7 @@ struct GreedyFusion {
>              continue;
>            // Skip if 'srcNode' has more than one live-out store to a
>            // function-local memref.
> -          // TODO(andydavis) Support more generic multi-output src loop nests
> +          // TODO: Support more generic multi-output src loop nests
>            // fusion.
>            auto srcStoreOp = mdg->getUniqueOutgoingStore(srcNode);
>            if (!srcStoreOp) {
> @@ -1602,7 +1603,7 @@ struct GreedyFusion {
>            unsigned dstLoopDepthTest = getInnermostCommonLoopDepth(dstOps);
>            // Check the feasibility of fusing src loop nest into dst loop nest
>            // at loop depths in range [1, dstLoopDepthTest].
> -          // TODO(andydavis) Use slice union computation and union of memref
> +          // TODO: Use slice union computation and union of memref
>            // read/write regions to cost model and fusion.
>            bool canFuse = false;
>            for (unsigned i = 1; i <= dstLoopDepthTest; ++i) {
> @@ -1663,7 +1664,7 @@ struct GreedyFusion {
>                      memref)
>                    storesForMemref.push_back(storeOpInst);
>                }
> -              // TODO(andydavis) Use union of memref write regions to compute
> +              // TODO: Use union of memref write regions to compute
>                // private memref footprint.
>                auto newMemRef = createPrivateMemRef(
>                    dstAffineForOp, storesForMemref[0], bestDstLoopDepth,
> @@ -1765,7 +1766,7 @@ struct GreedyFusion {
>      while (findSiblingNodeToFuse(dstNode, &visitedSibNodeIds, &idAndMemref)) {
>        unsigned sibId = idAndMemref.first;
>        Value memref = idAndMemref.second;
> -      // TODO(andydavis) Check that 'sibStoreOpInst' post-dominates all other
> +      // TODO: Check that 'sibStoreOpInst' post-dominates all other
>        // stores to the same memref in 'sibNode' loop nest.
>        auto *sibNode = mdg->getNode(sibId);
>        // Compute an operation list insertion point for the fused loop
> @@ -1787,7 +1788,7 @@ struct GreedyFusion {
>        assert(sibLoadOpInsts.size() == 1);
>        Operation *sibLoadOpInst = sibLoadOpInsts[0];
>        assert(!sibNode->stores.empty());
> -      // TODO(andydavis) Choose the store which postdominates all other stores.
> +      // TODO: Choose the store which postdominates all other stores.
>        auto *sibStoreOpInst = sibNode->stores.back();
>
>        // Gather 'dstNode' load ops to 'memref'.
> @@ -1833,7 +1834,7 @@ struct GreedyFusion {
>      // on 'memref'.
>      auto canFuseWithSibNode = [&](Node *sibNode, Value memref) {
>        // Skip if 'outEdge' is not a read-after-write dependence.
> -      // TODO(andydavis) Remove restrict to single load op restriction.
> +      // TODO: Remove restrict to single load op restriction.
>        if (sibNode->getLoadOpCount(memref) != 1)
>          return false;
>        // Skip if there exists a path of dependent edges between
>
> diff --git a/mlir/lib/Transforms/MemRefDataFlowOpt.cpp b/mlir/lib/Transforms/MemRefDataFlowOpt.cpp
> index 75ac0a5197f3..7220fd1f3dc6 100644
> --- a/mlir/lib/Transforms/MemRefDataFlowOpt.cpp
> +++ b/mlir/lib/Transforms/MemRefDataFlowOpt.cpp
> @@ -8,7 +8,7 @@
>  //
>  // This file implements a pass to forward memref stores to loads, thereby
>  // potentially getting rid of intermediate memref's entirely.
> -// TODO(mlir-team): In the future, similar techniques could be used to eliminate
> +// TODO: In the future, similar techniques could be used to eliminate
>  // dead memref store's and perform more complex forwarding when support for
>  // SSA scalars live out of 'affine.for'/'affine.if' statements is available.
>
>  //===----------------------------------------------------------------------===//
> @@ -54,9 +54,9 @@ namespace {
>  // don't reason about loops that are guaranteed to execute at least once or
>  // multiple sources to forward from.
>  //
> -// TODO(mlir-team): more forwarding can be done when support for
> +// TODO: more forwarding can be done when support for
>  // loop/conditional live-out SSA values is available.
> -// TODO(mlir-team): do general dead store elimination for memref's. This
> pass
> +// TODO: do general dead store elimination for memref's. This pass
>  // currently only eliminates the stores only if no other loads/uses (other
>  // than dealloc) remain.
>  //
> @@ -203,7 +203,7 @@ void MemRefDataFlowOpt::runOnFunction() {
>      // If the memref hasn't been alloc'ed in this function, skip.
>      Operation *defOp = memref.getDefiningOp();
>      if (!defOp || !isa<AllocOp>(defOp))
> -      // TODO(mlir-team): if the memref was returned by a 'call' operation, we
> +      // TODO: if the memref was returned by a 'call' operation, we
>        // could still erase it if the call had no side-effects.
>        continue;
>      if (llvm::any_of(memref.getUsers(), [&](Operation *ownerOp) {
>
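
For concreteness, the basic rewrite this pass performs looks roughly like the
following in affine IR (hypothetical example, not taken from this patch):

    affine.for %i = 0 to 16 {
      affine.store %v, %m[%i] : memref<16xf32>
      %x = affine.load %m[%i] : memref<16xf32>  // %x is replaced by %v
    }

Once every load of %m is forwarded and only a dealloc remains, the stores and
the alloc of %m can be erased as well, which is the memref elimination the
surrounding comments describe.
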
> diff --git a/mlir/lib/Transforms/PipelineDataTransfer.cpp b/mlir/lib/Transforms/PipelineDataTransfer.cpp
> index cb2703643d8d..564193e22690 100644
> --- a/mlir/lib/Transforms/PipelineDataTransfer.cpp
> +++ b/mlir/lib/Transforms/PipelineDataTransfer.cpp
> @@ -46,7 +46,7 @@ std::unique_ptr<OperationPass<FuncOp>> mlir::createPipelineDataTransferPass() {
>
>  // Returns the position of the tag memref operand given a DMA operation.
>  // Temporary utility: will be replaced when DmaStart/DmaFinish abstract op's are
> -// added.  TODO(b/117228571)
> +// added.  TODO
>  static unsigned getTagMemRefPos(Operation &dmaOp) {
>    assert((isa<AffineDmaStartOp, AffineDmaWaitOp>(dmaOp)));
>    if (auto dmaStartOp = dyn_cast<AffineDmaStartOp>(dmaOp)) {
> @@ -149,7 +149,7 @@ static bool checkTagMatch(AffineDmaStartOp startOp, AffineDmaWaitOp waitOp) {
>              e = startIndices.end();
>         it != e; ++it, ++wIt) {
>      // Keep it simple for now, just checking if indices match.
> -    // TODO(mlir-team): this would in general need to check if there is no
> +    // TODO: this would in general need to check if there is no
>      // intervening write writing to the same tag location, i.e., memory last
>      // write/data flow analysis. This is however sufficient/powerful enough for
>      // now since the DMA generation pass or the input for it will always have
> @@ -185,12 +185,12 @@ static void findMatchingStartFinishInsts(
>        continue;
>
>      // Only DMAs incoming into higher memory spaces are pipelined for now.
> -    // TODO(bondhugula): handle outgoing DMA pipelining.
> +    // TODO: handle outgoing DMA pipelining.
>      if (!dmaStartOp.isDestMemorySpaceFaster())
>        continue;
>
>      // Check for dependence with outgoing DMAs. Doing this conservatively.
> -    // TODO(andydavis,bondhugula): use the dependence analysis to check for
> +    // TODO: use the dependence analysis to check for
>      // dependences between an incoming and outgoing DMA in the same iteration.
>      auto it = outgoingDmaOps.begin();
>      for (; it != outgoingDmaOps.end(); ++it) {
> @@ -252,8 +252,8 @@ void PipelineDataTransfer::runOnAffineForOp(AffineForOp forOp) {
>    // Identify memref's to replace by scanning through all DMA start
>    // operations. A DMA start operation has two memref's - the one from the
>    // higher level of memory hierarchy is the one to double buffer.
> -  // TODO(bondhugula): check whether double-buffering is even necessary.
> -  // TODO(bondhugula): make this work with different layouts: assuming here that
> +  // TODO: check whether double-buffering is even necessary.
> +  // TODO: make this work with different layouts: assuming here that
>    // the dimension we are adding here for the double buffering is the outermost
>    // dimension.
>    for (auto &pair : startWaitPairs) {
>
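
To make the double-buffering step concrete (shapes and indexing assumed for
illustration only): a fast-memory buffer such as

    %buf = alloc() : memref<256xf32, 1>

is replaced by a two-deep buffer selected by the iteration's parity, roughly

    %buf2 = alloc() : memref<2x256xf32, 1>
    // accesses become %buf2[%iv mod 2, %idx]

so the DMA filling one half can overlap with compute reading the other half.
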
> diff --git a/mlir/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp b/mlir/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp
> index ea420733e5ff..14a4c7417249 100644
> --- a/mlir/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp
> +++ b/mlir/lib/Transforms/Utils/GreedyPatternRewriteDriver.cpp
> @@ -110,7 +110,7 @@ class GreedyPatternRewriteDriver : public PatternRewriter {
>      for (Value operand : operands) {
>        // If the use count of this operand is now < 2, we re-add the defining
>        // operation to the worklist.
> -      // TODO(riverriddle) This is based on the fact that zero use operations
> +      // TODO: This is based on the fact that zero use operations
>        // may be deleted, and that single use values often have more
>        // canonicalization opportunities.
>        if (!operand.use_empty() && !operand.hasOneUse())
>
> diff --git a/mlir/lib/Transforms/Utils/LoopFusionUtils.cpp b/mlir/lib/Transforms/Utils/LoopFusionUtils.cpp
> index 18fc872cdf7f..17dbf8eb166d 100644
> --- a/mlir/lib/Transforms/Utils/LoopFusionUtils.cpp
> +++ b/mlir/lib/Transforms/Utils/LoopFusionUtils.cpp
> @@ -164,7 +164,7 @@ static Operation *getFusedLoopNestInsertionPoint(AffineForOp srcForOp,
>          return nullptr;
>      }
>      // Return insertion point in valid range closest to 'opB'.
> -    // TODO(andydavis) Consider other insertion points in valid range.
> +    // TODO: Consider other insertion points in valid range.
>      return firstDepOpA;
>    }
>    // No dependences from 'opA' to operation in range ('opA', 'opB'), return
> @@ -187,7 +187,7 @@ gatherLoadsAndStores(AffineForOp forOp,
>    return !hasIfOp;
>  }
>
> -// TODO(andydavis) Prevent fusion of loop nests with side-effecting operations.
> +// TODO: Prevent fusion of loop nests with side-effecting operations.
>  FusionResult mlir::canFuseLoops(AffineForOp srcForOp, AffineForOp dstForOp,
>                                  unsigned dstLoopDepth,
>                                  ComputationSliceState *srcSlice) {
> @@ -361,7 +361,7 @@ static int64_t getComputeCostHelper(
>    return tripCount * opCount;
>  }
>
> -// TODO(andydavis,b/126426796): extend this to handle multiple result maps.
> +// TODO: extend this to handle multiple result maps.
>  static Optional<uint64_t> getConstDifference(AffineMap lbMap, AffineMap ubMap) {
>    assert(lbMap.getNumResults() == 1 && "expected single result bound map");
>    assert(ubMap.getNumResults() == 1 && "expected single result bound map");
> @@ -391,7 +391,7 @@ static uint64_t getSliceIterationCount(
>  // nest surrounding represented by slice loop bounds in 'slice'.
>  // Returns true on success, false otherwise (if a non-constant trip count
>  // was encountered).
> -// TODO(andydavis) Make this work with non-unit step loops.
> +// TODO: Make this work with non-unit step loops.
>  static bool buildSliceTripCountMap(
>      ComputationSliceState *slice,
>      llvm::SmallDenseMap<Operation *, uint64_t, 8> *tripCountMap) {
> @@ -457,7 +457,7 @@ bool mlir::getFusionComputeCost(AffineForOp srcForOp, LoopNestStats &srcStats,
>    auto *insertPointParent = slice->insertPoint->getParentOp();
>
>    // The store and loads to this memref will disappear.
> -  // TODO(andydavis) Add load coalescing to memref data flow opt pass.
> +  // TODO: Add load coalescing to memref data flow opt pass.
>    if (storeLoadFwdGuaranteed) {
>      // Subtract from operation count the loads/store we expect load/store
>      // forwarding to remove.
>
> diff --git a/mlir/lib/Transforms/Utils/LoopUtils.cpp b/mlir/lib/Transforms/Utils/LoopUtils.cpp
> index 58807cf72128..249fa1cfdbc2 100644
> --- a/mlir/lib/Transforms/Utils/LoopUtils.cpp
> +++ b/mlir/lib/Transforms/Utils/LoopUtils.cpp
> @@ -149,7 +149,7 @@ static Value ceilDivPositive(OpBuilder &builder, Location loc, Value dividend,
>
>  /// Promotes the loop body of a forOp to its containing block if the forOp
>  /// was known to have a single iteration.
> -// TODO(bondhugula): extend this for arbitrary affine bounds.
> +// TODO: extend this for arbitrary affine bounds.
>  LogicalResult mlir::promoteIfSingleIteration(AffineForOp forOp) {
>    Optional<uint64_t> tripCount = getConstantTripCount(forOp);
>    if (!tripCount || tripCount.getValue() != 1)
> @@ -529,7 +529,7 @@ LogicalResult mlir::loopUnrollByFactor(AffineForOp forOp,
>      return failure();
>
>    // If the trip count is lower than the unroll factor, no unrolled body.
> -  // TODO(bondhugula): option to specify cleanup loop unrolling.
> +  // TODO: option to specify cleanup loop unrolling.
>    Optional<uint64_t> mayBeConstantTripCount = getConstantTripCount(forOp);
>    if (mayBeConstantTripCount.hasValue() &&
>        mayBeConstantTripCount.getValue() < unrollFactor)
> @@ -623,7 +623,7 @@ LogicalResult mlir::loopUnrollByFactor(scf::ForOp forOp,
>              : boundsBuilder.create<ConstantIndexOp>(loc, stepUnrolledCst);
>    } else {
>      // Dynamic loop bounds computation.
> -    // TODO(andydavis) Add dynamic asserts for negative lb/ub/step, or
> +    // TODO: Add dynamic asserts for negative lb/ub/step, or
>      // consider using ceilDiv from AffineApplyExpander.
>      auto lowerBound = forOp.lowerBound();
>      auto upperBound = forOp.upperBound();
> @@ -720,7 +720,7 @@ LogicalResult mlir::loopUnrollJamByFactor(AffineForOp forOp,
>    // Loops where both lower and upper bounds are multi-result maps won't be
>    // unrolled (since the trip can't be expressed as an affine function in
>    // general).
> -  // TODO(mlir-team): this may not be common, but we could support the case
> +  // TODO: this may not be common, but we could support the case
>    // where the lower bound is a multi-result map and the ub is a single result
>    // one.
>    if (forOp.getLowerBoundMap().getNumResults() != 1)
> @@ -1127,7 +1127,7 @@ static Loops stripmineSink(scf::ForOp forOp, Value factor,
>  // Returns the new AffineForOps, nested immediately under `target`.
>  template <typename ForType, typename SizeType>
>  static ForType stripmineSink(ForType forOp, SizeType factor, ForType target) {
> -  // TODO(ntv): Use cheap structural assertions that targets are nested under
> +  // TODO: Use cheap structural assertions that targets are nested under
>    // forOp and that targets are not nested under each other when DominanceInfo
>    // exposes the capability. It seems overkill to construct a whole function
>    // dominance tree at this point.
> @@ -1226,7 +1226,7 @@ static LogicalResult hoistOpsBetween(scf::ForOp outer, scf::ForOp inner) {
>        continue;
>      }
>      // Skip if op has side effects.
> -    // TODO(ntv): loads to immutable memory regions are ok.
> +    // TODO: loads to immutable memory regions are ok.
>      if (!MemoryEffectOpInterface::hasNoEffect(&op)) {
>        status = failure();
>        continue;
> @@ -1294,7 +1294,7 @@ TileLoops mlir::extractFixedOuterLoops(scf::ForOp rootForOp,
>    auto intraTile = tile(forOps, tileSizes, forOps.back());
>    TileLoops tileLoops = std::make_pair(forOps, intraTile);
>
> -  // TODO(ntv, zinenko) for now we just ignore the result of band isolation.
> +  // TODO: for now we just ignore the result of band isolation.
>    // In the future, mapping decisions may be impacted by the ability to
>    // isolate perfectly nested bands.
>    tryIsolateBands(tileLoops);
> @@ -1322,7 +1322,7 @@ static LoopParams normalizeLoop(OpBuilder &boundsBuilder,
>    // Compute the number of iterations the loop executes: ceildiv(ub - lb, step)
>    // assuming the step is strictly positive.  Update the bounds and the step
>    // of the loop to go from 0 to the number of iterations, if necessary.
> -  // TODO(zinenko): introduce support for negative steps or emit dynamic asserts
> +  // TODO: introduce support for negative steps or emit dynamic asserts
>    // on step positivity, whatever gets implemented first.
>    if (isZeroBased && isStepOne)
>      return {/*lowerBound=*/lowerBound, /*upperBound=*/upperBound,
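
A worked instance of the computation above: for lb = 2, ub = 11, step = 3 the
loop executes ceildiv(11 - 2, 3) = 3 iterations, so normalization rewrites it
to run a new induction variable from 0 to 3 with step 1 and recovers the
original one as iv = lb + ivNew * step, visiting {2, 5, 8}.
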
> @@ -1552,7 +1552,7 @@ findHighestBlockForPlacement(const MemRefRegion &region, Block &block,
>    // symbolic/variant.
>    auto it = enclosingFors.rbegin();
>    for (auto e = enclosingFors.rend(); it != e; ++it) {
> -    // TODO(bondhugula): also need to be checking this for regions symbols that
> +    // TODO: also need to be checking this for regions symbols that
>      // aren't loop IVs, whether we are within their resp. defs' dominance scope.
>      if (llvm::is_contained(symbols, it->getInductionVar()))
>        break;
> @@ -1580,7 +1580,7 @@ struct StrideInfo {
>  /// potentially multiple striding levels from outermost to innermost. For an
>  /// n-dimensional region, there can be at most n-1 levels of striding
>  /// successively nested.
> -//  TODO(bondhugula): make this work with non-identity layout maps.
> +//  TODO: make this work with non-identity layout maps.
>  static void getMultiLevelStrides(const MemRefRegion &region,
>                                   ArrayRef<int64_t> bufferShape,
>                                   SmallVectorImpl<StrideInfo> *strideInfos) {
> @@ -1865,7 +1865,7 @@ static LogicalResult generateCopy(
>      SmallVector<StrideInfo, 4> dmaStrideInfos;
>      getMultiLevelStrides(region, fastBufferShape, &dmaStrideInfos);
>
> -    // TODO(bondhugula): use all stride levels once DmaStartOp is extended for
> +    // TODO: use all stride levels once DmaStartOp is extended for
>      // multi-level strides.
>      if (dmaStrideInfos.size() > 1) {
>        LLVM_DEBUG(llvm::dbgs() << "Only up to one level of stride supported\n");
> @@ -2120,7 +2120,7 @@ uint64_t mlir::affineDataCopyGenerate(Block::iterator begin,
>
>      // Each memref has a single buffer associated with it irrespective of how
>      // many load's and store's happen on it.
> -    // TODO(bondhugula): in the future, when regions don't intersect and satisfy
> +    // TODO: in the future, when regions don't intersect and satisfy
>      // other properties (based on load/store regions), we could consider
>      // multiple buffers per memref.
>
>
> diff --git a/mlir/lib/Transforms/Utils/Utils.cpp b/mlir/lib/Transforms/Utils/Utils.cpp
> index 86bf4da806d9..5d6d58fc6939 100644
> --- a/mlir/lib/Transforms/Utils/Utils.cpp
> +++ b/mlir/lib/Transforms/Utils/Utils.cpp
> @@ -28,7 +28,7 @@ using namespace mlir;
>
>  /// Return true if this operation dereferences one or more memref's.
>  // Temporary utility: will be replaced when this is modeled through
> -// side-effects/op traits. TODO(b/117228571)
> +// side-effects/op traits. TODO
>  static bool isMemRefDereferencingOp(Operation &op) {
>    return isa<AffineReadOpInterface, AffineWriteOpInterface, AffineDmaStartOp,
>               AffineDmaWaitOp>(op);
> @@ -83,7 +83,7 @@ LogicalResult mlir::replaceAllMemRefUsesWith(Value oldMemRef, Value newMemRef,
>      return success();
>
>    if (usePositions.size() > 1) {
> -    // TODO(mlir-team): extend it for this case when needed (rare).
> +    // TODO: extend it for this case when needed (rare).
>      assert(false && "multiple dereferencing uses in a single op not supported");
>      return failure();
>    }
> @@ -162,7 +162,7 @@ LogicalResult mlir::replaceAllMemRefUsesWith(Value oldMemRef, Value newMemRef,
>    // Create new fully composed AffineMap for new op to be created.
>    assert(newMapOperands.size() == newMemRefRank);
>    auto newMap = builder.getMultiDimIdentityMap(newMemRefRank);
> -  // TODO(b/136262594) Avoid creating/deleting temporary AffineApplyOps here.
> +  // TODO: Avoid creating/deleting temporary AffineApplyOps here.
>    fullyComposeAffineMapAndOperands(&newMap, &newMapOperands);
>    newMap = simplifyAffineMap(newMap);
>    canonicalizeMapAndOperands(&newMap, &newMapOperands);
>
> diff --git a/mlir/test/Conversion/GPUToNVVM/gpu-to-nvvm.mlir b/mlir/test/Conversion/GPUToNVVM/gpu-to-nvvm.mlir
> index 273b22779d56..fa9a478c1d83 100644
> --- a/mlir/test/Conversion/GPUToNVVM/gpu-to-nvvm.mlir
> +++ b/mlir/test/Conversion/GPUToNVVM/gpu-to-nvvm.mlir
> @@ -77,7 +77,7 @@ gpu.module @test_module {
>    // CHECK-LABEL: func @gpu_all_reduce_op()
>    gpu.func @gpu_all_reduce_op() {
>      %arg0 = constant 1.0 : f32
> -    // TODO(csigg): Check full IR expansion once lowering has settled.
> +    // TODO: Check full IR expansion once lowering has settled.
>      // CHECK: nvvm.shfl.sync.bfly
>      // CHECK: nvvm.barrier0
>      // CHECK: llvm.fadd
> @@ -93,7 +93,7 @@ gpu.module @test_module {
>    // CHECK-LABEL: func @gpu_all_reduce_region()
>    gpu.func @gpu_all_reduce_region() {
>      %arg0 = constant 1 : i32
> -    // TODO(csigg): Check full IR expansion once lowering has settled.
> +    // TODO: Check full IR expansion once lowering has settled.
>      // CHECK: nvvm.shfl.sync.bfly
>      // CHECK: nvvm.barrier0
>      %result = "gpu.all_reduce"(%arg0) ({
>
> diff --git a/mlir/test/Conversion/GPUToSPIRV/if.mlir b/mlir/test/Conversion/GPUToSPIRV/if.mlir
> index 81a7f6d32b91..b7e11d74996b 100644
> --- a/mlir/test/Conversion/GPUToSPIRV/if.mlir
> +++ b/mlir/test/Conversion/GPUToSPIRV/if.mlir
> @@ -131,7 +131,7 @@ module attributes {
>        store %0#1, %arg2[%j] : memref<10xf32>
>        gpu.return
>      }
> -    // TODO(thomasraoux): The transformation should only be legal if
> +    // TODO: The transformation should only be legal if
>      // VariablePointer capability is supported. This test is still useful to
>      // make sure we can handle scf op result with type change.
>      // CHECK-LABEL: @simple_if_yield_type_change
>
> diff --git a/mlir/test/Dialect/Linalg/roundtrip.mlir b/mlir/test/Dialect/Linalg/roundtrip.mlir
> index f210b185331c..9e6c27547930 100644
> --- a/mlir/test/Dialect/Linalg/roundtrip.mlir
> +++ b/mlir/test/Dialect/Linalg/roundtrip.mlir
> @@ -1,7 +1,7 @@
>  // RUN: mlir-opt -split-input-file %s | FileCheck %s
>  // | mlir-opt | FileCheck %s
>
> -// TODO(pifon): Re-enable LLVM lowering test after IndexedGenericOp is lowered.
> +// TODO: Re-enable LLVM lowering test after IndexedGenericOp is lowered.
>  //
>  // Test that we can lower all the way to LLVM without crashing, don't check results here.
>  // DISABLED: mlir-opt %s --convert-linalg-to-llvm -o=/dev/null 2>&1
>
> diff --git a/mlir/test/Dialect/SPIRV/Serialization/constant.mlir b/mlir/test/Dialect/SPIRV/Serialization/constant.mlir
> index a276e4ee9781..520669262e12 100644
> --- a/mlir/test/Dialect/SPIRV/Serialization/constant.mlir
> +++ b/mlir/test/Dialect/SPIRV/Serialization/constant.mlir
> @@ -108,7 +108,7 @@ spv.module Logical GLSL450 requires #spv.vce<v1.0, [Shader], []> {
>
>    // CHECK-LABEL: @double_const
>    spv.func @double_const() -> () "None" {
> -    // TODO(antiagainst): test range boundary values
> +    // TODO: test range boundary values
>      // CHECK: spv.constant 1.024000e+03 : f64
>      %0 = spv.constant 1024. : f64
>      // CHECK: spv.constant -1.024000e+03 : f64
>
> diff --git a/mlir/test/Dialect/SPIRV/canonicalize.mlir
> b/mlir/test/Dialect/SPIRV/canonicalize.mlir
> index 2b719fd7219d..ad129c2f0825 100644
> --- a/mlir/test/Dialect/SPIRV/canonicalize.mlir
> +++ b/mlir/test/Dialect/SPIRV/canonicalize.mlir
> @@ -141,7 +141,7 @@ func @extract_from_not_constant() -> i32 {
>  // spv.constant
>
>  //===----------------------------------------------------------------------===//
>
> -// TODO(antiagainst): test constants in different blocks
> +// TODO: test constants in different blocks
>
>  func @deduplicate_scalar_constant() -> (i32, i32) {
>    // CHECK: %[[CST:.*]] = spv.constant 42 : i32
>
> diff --git a/mlir/test/Dialect/SPIRV/structure-ops.mlir
> b/mlir/test/Dialect/SPIRV/structure-ops.mlir
> index 93df070f0a2a..2d62f64b2479 100644
> --- a/mlir/test/Dialect/SPIRV/structure-ops.mlir
> +++ b/mlir/test/Dialect/SPIRV/structure-ops.mlir
> @@ -175,7 +175,7 @@ spv.module Logical GLSL450 {
>     spv.EntryPoint "GLCompute" @do_something
>  }
>
> -/// TODO(ravishankarm) : Add a test that verifies an error is thrown
> +/// TODO: Add a test that verifies an error is thrown
>  /// when interface entries of EntryPointOp are not
>  /// spv.Variables. There is currently no other op that has a spv.ptr
>  /// return type
>
> diff --git a/mlir/test/Dialect/SPIRV/types.mlir
> b/mlir/test/Dialect/SPIRV/types.mlir
> index d5eb073c9aa5..810e00b5dedd 100644
> --- a/mlir/test/Dialect/SPIRV/types.mlir
> +++ b/mlir/test/Dialect/SPIRV/types.mlir
> @@ -1,6 +1,6 @@
>  // RUN: mlir-opt -split-input-file -verify-diagnostics %s | FileCheck %s
>
> -// TODO(b/133530217): Add more tests after switching to the generic
> parser.
> +// TODO: Add more tests after switching to the generic parser.
>
>
>  //===----------------------------------------------------------------------===//
>  // ArrayType
>
> diff --git a/mlir/test/Dialect/Vector/vector-flat-transforms.mlir
> b/mlir/test/Dialect/Vector/vector-flat-transforms.mlir
> index 6a1e6ee85a7d..c07d651d985e 100644
> --- a/mlir/test/Dialect/Vector/vector-flat-transforms.mlir
> +++ b/mlir/test/Dialect/Vector/vector-flat-transforms.mlir
> @@ -2,10 +2,9 @@
>
>  // Tests for lowering 2-D vector.transpose into vector.flat_transpose.
>  //
> -// TODO(ajcbik,ntv): having ShapeCastOp2DDownCastRewritePattern and
> -//                   ShapeCastOp2DUpCastRewritePattern too early in
> -//                   the greedy rewriting patterns misses opportunities
> -//                   to fold shape casts!
> +// TODO: having ShapeCastOp2DDownCastRewritePattern and
> +//       ShapeCastOp2DUpCastRewritePattern too early in the greedy
> rewriting
> +//       patterns misses opportunities to fold shape casts!
>
>  // No shape cast folding expected.
>  //
>
> diff --git a/mlir/test/Dialect/Vector/vector-transforms.mlir
> b/mlir/test/Dialect/Vector/vector-transforms.mlir
> index 0bd6c3c43b59..a32fa4a05649 100644
> --- a/mlir/test/Dialect/Vector/vector-transforms.mlir
> +++ b/mlir/test/Dialect/Vector/vector-transforms.mlir
> @@ -281,7 +281,7 @@ func @contraction4x4_ikj_xfer_read(%arg0 :
> memref<4x2xf32>,
>    return
>  }
>
> -// TODO(andydavis) Update test with VTR split transform.
> +// TODO: Update test with VTR split transform.
>  // CHECK-LABEL: func @vector_transfers
>  // CHECK-COUNT-8: vector.transfer_read
>  // CHECK-COUNT-4: addf
>
> diff --git a/mlir/test/IR/invalid-affinemap.mlir
> b/mlir/test/IR/invalid-affinemap.mlir
> index 741ad9a3dfdf..9377824f006a 100644
> --- a/mlir/test/IR/invalid-affinemap.mlir
> +++ b/mlir/test/IR/invalid-affinemap.mlir
> @@ -96,7 +96,7 @@
>  // -----
>  #hello_world = affine_map<(i, j) -> (i, 3*d0 + )> // expected-error {{use
> of undeclared identifier}}
>
> -// TODO(bondhugula): Add more tests; coverage of error messages emitted
> not complete
> +// TODO: Add more tests; coverage of error messages emitted not complete
>
>  // -----
>  #ABC = affine_map<(i,j) -> (i+j)>
>
> diff --git a/mlir/test/IR/invalid.mlir b/mlir/test/IR/invalid.mlir
> index f025fdf3ba4d..19bf53783869 100644
> --- a/mlir/test/IR/invalid.mlir
> +++ b/mlir/test/IR/invalid.mlir
> @@ -290,7 +290,7 @@ func @invalid_if_conditional6() {
>  }
>
>  // -----
> -// TODO (support affine.if (1)?
> +// TODO: support affine.if (1)?
>  func @invalid_if_conditional7() {
>    affine.for %i = 1 to 10 {
>      affine.if affine_set<(i) : (1)> // expected-error {{expected '== 0'
> or '>= 0' at end of affine constraint}}
>
> diff --git a/mlir/test/Transforms/loop-fusion.mlir
> b/mlir/test/Transforms/loop-fusion.mlir
> index 51d2fb42a1c1..7d91e780e9ed 100644
> --- a/mlir/test/Transforms/loop-fusion.mlir
> +++ b/mlir/test/Transforms/loop-fusion.mlir
> @@ -1,12 +1,12 @@
>  // RUN: mlir-opt -allow-unregistered-dialect %s -affine-loop-fusion
> -split-input-file | FileCheck %s
>  // RUN: mlir-opt -allow-unregistered-dialect %s
> -affine-loop-fusion="fusion-maximal" -split-input-file | FileCheck %s
> --check-prefix=MAXIMAL
>
> -// TODO(andydavis) Add more tests:
> +// TODO: Add more tests:
>  // *) Add nested fusion test cases when non-constant loop bound support is
>  //    added to iteration domain in dependence check.
>  // *) Add a test w/ floordiv/ceildiv/mod when supported in dependence
> check.
>  // *) Add tests which check fused computation slice indexing and loop
> bounds.
> -// TODO(andydavis) Test clean up: move memref allocs to func args.
> +// TODO: Test clean up: move memref allocs to func args.
>
>  // -----
>
> @@ -317,7 +317,7 @@ func @should_fuse_producer_consumer() {
>    }
>    // Fusing loop %i0 to %i2 would violate the WAW dependence between %i0
> and
>    // %i1, but OK to fuse %i1 into %i2.
> -  // TODO(andydavis) When the fusion pass is run to a fixed-point, it
> should
> +  // TODO: When the fusion pass is run to a fixed-point, it should
>    // fuse all three of these loop nests.
>    // CHECK:      alloc() : memref<1xf32>
>    // CHECK:      affine.for %{{.*}} = 0 to 10 {
>
> diff --git a/mlir/test/Transforms/memref-dependence-check.mlir
> b/mlir/test/Transforms/memref-dependence-check.mlir
> index 65af899c726c..154dcf79c114 100644
> --- a/mlir/test/Transforms/memref-dependence-check.mlir
> +++ b/mlir/test/Transforms/memref-dependence-check.mlir
> @@ -766,7 +766,7 @@ func @delinearize_mod_floordiv() {
> // expected-remark@above {{dependence from 1 to 2 at depth 1 = false}}
> // expected-remark@above {{dependence from 1 to 2 at depth 2 = false}}
> // expected-remark@above {{dependence from 1 to 2 at depth 3 = false}}
> -// TODO(andydavis): the dep tester shouldn't be printing out these
> messages
> +// TODO: the dep tester shouldn't be printing out these messages
>  // below; they are redundant.
>        affine.store %v0, %out[%ii, %jj] : memref<64x9xi32>
> // expected-remark@above {{dependence from 2 to 0 at depth 1 = false}}
> @@ -781,7 +781,7 @@ func @delinearize_mod_floordiv() {
>    return
>  }
>
> -// TODO(bondhugula): add more test cases involving mod's/div's.
> +// TODO: add more test cases involving mod's/div's.
>
>  // -----
>
>
> diff --git
> a/mlir/test/lib/DeclarativeTransforms/TestVectorTransformPatterns.td
> b/mlir/test/lib/DeclarativeTransforms/TestVectorTransformPatterns.td
> index 49a3499b0bad..2c6ca1a05733 100644
> --- a/mlir/test/lib/DeclarativeTransforms/TestVectorTransformPatterns.td
> +++ b/mlir/test/lib/DeclarativeTransforms/TestVectorTransformPatterns.td
> @@ -26,7 +26,7 @@ def : Pat<(AddFOp:$op_results $a, $b),
>            (UnrollVectorOp<[2, 2]> $op_results, $a, $b),
>            [(Constraint<HasShape<[4, 4]>> $a)]>;
>
> -// TODO(andydavis) Add Constraints on lhs/rhs shapes.
> +// TODO: Add Constraints on lhs/rhs shapes.
>  def : Pat<(Vector_ContractionOp:$op_results $a, $b, $c, $masks, $attr0,
> $attr1),
>            (UnrollVectorOp<[2, 2, 2]> $op_results, $a, $b, $c),
>            [(Constraint<HasShape<[4, 4]>> $c)]>;
>
> diff --git a/mlir/test/lib/Dialect/Test/TestPatterns.cpp
> b/mlir/test/lib/Dialect/Test/TestPatterns.cpp
> index f44b987f17cb..c471cd3ead3e 100644
> --- a/mlir/test/lib/Dialect/Test/TestPatterns.cpp
> +++ b/mlir/test/lib/Dialect/Test/TestPatterns.cpp
> @@ -111,7 +111,7 @@ static void
> invokeCreateWithInferredReturnType(Operation *op) {
>                context, llvm::None, values, op->getAttrDictionary(),
>                op->getRegions(), inferredReturnTypes))) {
>          OperationState state(location, OpTy::getOperationName());
> -        // TODO(jpienaar): Expand to regions.
> +        // TODO: Expand to regions.
>          OpTy::build(b, state, values, op->getAttrs());
>          (void)b.createOperation(state);
>        }
>
> diff --git a/mlir/test/lib/Transforms/TestInlining.cpp
> b/mlir/test/lib/Transforms/TestInlining.cpp
> index 1cc762836b3c..27c0d0846269 100644
> --- a/mlir/test/lib/Transforms/TestInlining.cpp
> +++ b/mlir/test/lib/Transforms/TestInlining.cpp
> @@ -6,7 +6,7 @@
>  //
>
>  //===----------------------------------------------------------------------===//
>  //
> -// TODO(riverriddle) This pass is only necessary because the main
> inlining pass
> +// TODO: This pass is only necessary because the main inlining pass
>  // has not abstracted away the call+callee relationship. When the inlining
>  // interface has this support, this pass should be removed.
>  //
>
> diff --git a/mlir/test/lib/Transforms/TestMemRefBoundCheck.cpp
> b/mlir/test/lib/Transforms/TestMemRefBoundCheck.cpp
> index 087ea4fdde94..4ae9e505cab3 100644
> --- a/mlir/test/lib/Transforms/TestMemRefBoundCheck.cpp
> +++ b/mlir/test/lib/Transforms/TestMemRefBoundCheck.cpp
> @@ -41,7 +41,7 @@ void TestMemRefBoundCheck::runOnFunction() {
>          .Case<AffineReadOpInterface, AffineWriteOpInterface>(
>              [](auto op) { boundCheckLoadOrStoreOp(op); });
>
> -    // TODO(bondhugula): do this for DMA ops as well.
> +    // TODO: do this for DMA ops as well.
>    });
>  }
>
>
> diff --git a/mlir/test/lib/Transforms/TestMemRefDependenceCheck.cpp
> b/mlir/test/lib/Transforms/TestMemRefDependenceCheck.cpp
> index 7a67bef93bc2..787c92737daf 100644
> --- a/mlir/test/lib/Transforms/TestMemRefDependenceCheck.cpp
> +++ b/mlir/test/lib/Transforms/TestMemRefDependenceCheck.cpp
> @@ -25,7 +25,7 @@ using namespace mlir;
>
>  namespace {
>
> -// TODO(andydavis) Add common surrounding loop depth-wise dependence
> checks.
> +// TODO: Add common surrounding loop depth-wise dependence checks.
>  /// Checks dependences between all pairs of memref accesses in a Function.
>  struct TestMemRefDependenceCheck
>      : public PassWrapper<TestMemRefDependenceCheck, FunctionPass> {
> @@ -85,7 +85,7 @@ static void checkDependences(ArrayRef<Operation *>
> loadsAndStores) {
>              &dependenceComponents);
>          assert(result.value != DependenceResult::Failure);
>          bool ret = hasDependence(result);
> -        // TODO(andydavis) Print dependence type (i.e. RAW, etc) and print
> +        // TODO: Print dependence type (i.e. RAW, etc) and print
>          // distance vectors as: ([2, 3], [0, 10]). Also, shorten distance
>          // vectors from ([1, 1], [3, 3]) to (1, 3).
>          srcOpInst->emitRemark("dependence from ")
>
> diff --git a/mlir/tools/mlir-rocm-runner/mlir-rocm-runner.cpp
> b/mlir/tools/mlir-rocm-runner/mlir-rocm-runner.cpp
> index 8c23713ad65d..d61cb2d98809 100644
> --- a/mlir/tools/mlir-rocm-runner/mlir-rocm-runner.cpp
> +++ b/mlir/tools/mlir-rocm-runner/mlir-rocm-runner.cpp
> @@ -198,7 +198,7 @@ static LogicalResult createHsaco(const Blob &isaBlob,
> StringRef name,
>
>  static std::unique_ptr<llvm::Module> compileModuleToROCDLIR(Operation *m)
> {
>    auto llvmModule = translateModuleToROCDLIR(m);
> -  // TODO(whchung): Link with ROCm-Device-Libs in case needed (ex: the
> Module
> +  // TODO: Link with ROCm-Device-Libs in case needed (ex: the Module
>    // depends on math functions).
>    return llvmModule;
>  }
>
> diff --git a/mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp
> b/mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp
> index f391f4be9160..dcf40691e17f 100644
> --- a/mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp
> +++ b/mlir/tools/mlir-tblgen/OpDefinitionsGen.cpp
> @@ -319,7 +319,7 @@ class OpEmitter {
>
>  private:
>    // The TableGen record for this op.
> -  // TODO(antiagainst,zinenko): OpEmitter should not have a Record
> directly,
> +  // TODO: OpEmitter should not have a Record directly,
>    // it should rather go through the Operator for better abstraction.
>    const Record &def;
>
> @@ -915,9 +915,9 @@ void OpEmitter::genSeparateArgParamBuilder() {
>
>      if (inferType) {
>        // Generate builder that infers type too.
> -      // TODO(jpienaar): Subsume this with general checking if type can be
> +      // TODO: Subsume this with general checking if type can be
>        // inferred automatically.
> -      // TODO(jpienaar): Expand to handle regions.
> +      // TODO: Expand to handle regions.
>        body << formatv(R"(
>          ::llvm::SmallVector<::mlir::Type, 2> inferredReturnTypes;
>          if (succeeded({0}::inferReturnTypes(odsBuilder.getContext(),
> @@ -1006,7 +1006,7 @@ void
> OpEmitter::genUseOperandAsResultTypeCollectiveParamBuilder() {
>  }
>
>  void OpEmitter::genInferredTypeCollectiveParamBuilder() {
> -  // TODO(jpienaar): Expand to support regions.
> +  // TODO: Expand to support regions.
>    const char *params =
>        "::mlir::OpBuilder &odsBuilder, ::mlir::OperationState &{0}, "
>        "::mlir::ValueRange operands,
> ::llvm::ArrayRef<::mlir::NamedAttribute> "
> @@ -1119,7 +1119,7 @@ void OpEmitter::genUseAttrAsResultTypeBuilder() {
>
>  void OpEmitter::genBuilder() {
>    // Handle custom builders if provided.
> -  // TODO(antiagainst): Create wrapper class for OpBuilder to hide the
> native
> +  // TODO: Create wrapper class for OpBuilder to hide the native
>    // TableGen API calls here.
>    {
>      auto *listInit =
> dyn_cast_or_null<ListInit>(def.getValueInit("builders"));
> @@ -1215,7 +1215,7 @@ void OpEmitter::genCollectiveParamBuilder() {
>    body << "  " << builderOpState << ".addTypes(resultTypes);\n";
>
>    // Generate builder that infers type too.
> -  // TODO(jpienaar): Expand to handle regions and successors.
> +  // TODO: Expand to handle regions and successors.
>    if (canInferType(op) && op.getNumSuccessors() == 0)
>      genInferredTypeCollectiveParamBuilder();
>  }
> @@ -1279,7 +1279,7 @@ void OpEmitter::buildParamList(std::string
> &paramList,
>        // Creating an APInt requires us to provide bitwidth, value, and
>        // signedness, which is complicated compared to others. Similarly
>        // for APFloat.
> -      // TODO(b/144412160) Adjust the 'returnType' field of such
> attributes
> +      // TODO: Adjust the 'returnType' field of such attributes
>        // to support them.
>        StringRef retType = namedAttr->attr.getReturnType();
>        if (retType == "::llvm::APInt" || retType == "::llvm::APFloat")
>
> diff --git a/mlir/tools/mlir-tblgen/OpDocGen.cpp
> b/mlir/tools/mlir-tblgen/OpDocGen.cpp
> index a432b4a2f21c..df78556c1c77 100644
> --- a/mlir/tools/mlir-tblgen/OpDocGen.cpp
> +++ b/mlir/tools/mlir-tblgen/OpDocGen.cpp
> @@ -197,7 +197,7 @@ static void emitDialectDoc(const Dialect &dialect,
> ArrayRef<Operator> ops,
>
>    os << "[TOC]\n\n";
>
> -  // TODO(antiagainst): Add link between use and def for types
> +  // TODO: Add link between use and def for types
>    if (!types.empty()) {
>      os << "## Type definition\n\n";
>      for (const Type &type : types)
>
> diff --git a/mlir/tools/mlir-tblgen/OpFormatGen.cpp
> b/mlir/tools/mlir-tblgen/OpFormatGen.cpp
> index 1cfcf32f8c06..3fcbeeff1e6f 100644
> --- a/mlir/tools/mlir-tblgen/OpFormatGen.cpp
> +++ b/mlir/tools/mlir-tblgen/OpFormatGen.cpp
> @@ -2118,7 +2118,7 @@
> FormatParser::parseTypeDirectiveOperand(std::unique_ptr<Element> &element) {
>
>  //===----------------------------------------------------------------------===//
>
>  void mlir::tblgen::generateOpFormat(const Operator &constOp, OpClass
> &opClass) {
> -  // TODO(riverriddle) Operator doesn't expose all necessary
> functionality via
> +  // TODO: Operator doesn't expose all necessary functionality via
>    // the const interface.
>    Operator &op = const_cast<Operator &>(constOp);
>    if (!op.hasAssemblyFormat())
>
> diff --git a/mlir/tools/mlir-tblgen/RewriterGen.cpp
> b/mlir/tools/mlir-tblgen/RewriterGen.cpp
> index 37af7d222753..fc67ec4b8c17 100644
> --- a/mlir/tools/mlir-tblgen/RewriterGen.cpp
> +++ b/mlir/tools/mlir-tblgen/RewriterGen.cpp
> @@ -204,7 +204,7 @@ std::string
> PatternEmitter::handleConstantAttr(Attribute attr,
>      PrintFatalError(loc, "Attribute " + attr.getAttrDefName() +
>                               " does not have the 'constBuilderCall'
> field");
>
> -  // TODO(jpienaar): Verify the constants here
> +  // TODO: Verify the constants here
>    return std::string(tgfmt(attr.getConstBuilderTemplate(), &fmtCtx,
> value));
>  }
>
> @@ -343,7 +343,7 @@ void PatternEmitter::emitAttributeMatch(DagNode tree,
> int argIndex, int depth,
>        "(void)tblgen_attr;\n",
>        depth, attr.getStorageType(), namedAttr->name);
>
> -  // TODO(antiagainst): This should use getter method to avoid
> duplication.
> +  // TODO: This should use getter method to avoid duplication.
>    if (attr.hasDefaultValue()) {
>      os.indent(indent) << "if (!tblgen_attr) tblgen_attr = "
>                        << std::string(tgfmt(attr.getConstBuilderTemplate(),
> @@ -429,7 +429,7 @@ void PatternEmitter::emitMatchLogic(DagNode tree) {
>        PrintFatalError(
>            loc, "cannot use AttrConstraint in Pattern multi-entity
> constraints");
>      } else {
> -      // TODO(b/138794486): replace formatv arguments with the exact
> specified
> +      // TODO: replace formatv arguments with the exact specified
>        // args.
>        if (entities.size() > 4) {
>          PrintFatalError(loc, "only support up to 4-entity constraints
> now");
> @@ -526,7 +526,7 @@ void PatternEmitter::emit(StringRef rewriteName) {
>      auto &info = symbolInfoPair.getValue();
>      os.indent(4) << info.getVarDecl(symbol);
>    }
> -  // TODO(jpienaar): capture ops with consistent numbering so that it can
> be
> +  // TODO: capture ops with consistent numbering so that it can be
>    // reused for fused loc.
>    os.indent(4) << formatv("::mlir::Operation *tblgen_ops[{0}];\n\n",
>                            pattern.getSourcePattern().getNumOps());
> @@ -619,7 +619,7 @@ void PatternEmitter::emitRewriteLogic() {
>        // `{0}` resolves to an `Operation::result_range` as well as cases
> that
>        // are not iterable (e.g. vector that gets wrapped in additional
> braces by
>        // RewriterGen).
> -      // TODO(b/147096809): Revisit the need for materializing a vector.
> +      // TODO: Revisit the need for materializing a vector.
>        os << symbolInfoMap.getAllRangeUse(
>            val,
>            "    for (auto v : ::llvm::SmallVector<::mlir::Value, 4>{ {0}
> }) {{ "
> @@ -771,7 +771,7 @@ std::string
> PatternEmitter::handleReplaceWithNativeCodeCall(DagNode tree) {
>    LLVM_DEBUG(llvm::dbgs() << '\n');
>
>    auto fmt = tree.getNativeCodeTemplate();
> -  // TODO(b/138794486): replace formatv arguments with the exact
> specified args.
> +  // TODO: replace formatv arguments with the exact specified args.
>    SmallVector<std::string, 8> attrs(8);
>    if (tree.getNumArgs() > 8) {
>      PrintFatalError(loc, "unsupported NativeCodeCall argument numbers: " +
> @@ -797,7 +797,7 @@ int PatternEmitter::getNodeValueCount(DagNode node) {
>      // Otherwise this is an unbound op; we will use all its results.
>      return pattern.getDialectOp(node).getNumResults();
>    }
> -  // TODO(antiagainst): This considers all NativeCodeCall as returning one
> +  // TODO: This considers all NativeCodeCall as returning one
>    // value. Enhance if multi-value ones are needed.
>    return 1;
>  }
> @@ -1026,7 +1026,7 @@ void PatternEmitter::supplyValuesForOpArgs(
>        // The argument in the result DAG pattern.
>        auto patArgName = node.getArgName(argIndex);
>        if (leaf.isConstantAttr() || leaf.isEnumAttrCase()) {
> -        // TODO(jpienaar): Refactor out into map to avoid recomputing
> these.
> +        // TODO: Refactor out into map to avoid recomputing these.
>          if (!opArg.is<NamedAttribute *>())
>            PrintFatalError(loc, Twine("expected attribute ") +
> Twine(argIndex));
>          if (!patArgName.empty())
>
> diff --git a/mlir/tools/mlir-tblgen/SPIRVUtilsGen.cpp
> b/mlir/tools/mlir-tblgen/SPIRVUtilsGen.cpp
> index 37a2dabb0981..21f7349763db 100644
> --- a/mlir/tools/mlir-tblgen/SPIRVUtilsGen.cpp
> +++ b/mlir/tools/mlir-tblgen/SPIRVUtilsGen.cpp
> @@ -1283,7 +1283,7 @@ static void emitAvailabilityImpl(const Operator
> &srcOp, raw_ostream &os) {
>        os << formatv("    auto tblgen_instance =
> {0}::{1}(tblgen_attrVal);\n",
>                      enumAttr->getCppNamespace(), avail.getQueryFnName());
>        os << "    if (tblgen_instance) "
> -         // TODO(antiagainst): use `avail.getMergeCode()` here once ODS
> supports
> +         // TODO: use `avail.getMergeCode()` here once ODS supports
>           // dialect-specific contents so that we can use not implementing
> the
>           // availability interface as indication of no requirements.
>           <<
> std::string(tgfmt(caseSpecs.front().second.getMergeActionCode(),
>
> diff --git a/mlir/tools/mlir-vulkan-runner/VulkanRuntime.cpp
> b/mlir/tools/mlir-vulkan-runner/VulkanRuntime.cpp
> index 70812d2168fc..42f8be45135b 100644
> --- a/mlir/tools/mlir-vulkan-runner/VulkanRuntime.cpp
> +++ b/mlir/tools/mlir-vulkan-runner/VulkanRuntime.cpp
> @@ -15,7 +15,7 @@
>
>  #include <chrono>
>  #include <cstring>
> -// TODO(antiagainst): It's generally bad to access stdout/stderr in a
> library.
> +// TODO: It's generally bad to access stdout/stderr in a library.
>  // Figure out a better way for error reporting.
>  #include <iomanip>
>  #include <iostream>
> @@ -261,7 +261,7 @@ LogicalResult VulkanRuntime::createDevice() {
>    RETURN_ON_VULKAN_ERROR(physicalDeviceCount ? VK_SUCCESS : VK_INCOMPLETE,
>                           "physicalDeviceCount");
>
> -  // TODO(denis0x0D): find the best device.
> +  // TODO: find the best device.
>    physicalDevice = physicalDevices.front();
>    if (failed(getBestComputeQueue()))
>      return failure();
>
> diff --git a/mlir/unittests/Dialect/SPIRV/DeserializationTest.cpp
> b/mlir/unittests/Dialect/SPIRV/DeserializationTest.cpp
> index bfefebeefac3..31fc0e426e24 100644
> --- a/mlir/unittests/Dialect/SPIRV/DeserializationTest.cpp
> +++ b/mlir/unittests/Dialect/SPIRV/DeserializationTest.cpp
> @@ -54,7 +54,7 @@ class DeserializationTest : public ::testing::Test {
>    void expectDiagnostic(StringRef errorMessage) {
>      ASSERT_NE(nullptr, diagnostic.get());
>
> -    // TODO(antiagainst): check error location too.
> +    // TODO: check error location too.
>      EXPECT_THAT(diagnostic->str(), StrEq(std::string(errorMessage)));
>    }
>
>
> diff --git a/mlir/utils/spirv/gen_spirv_dialect.py
> b/mlir/utils/spirv/gen_spirv_dialect.py
> index 19bd7cdfef06..159d8651223b 100755
> --- a/mlir/utils/spirv/gen_spirv_dialect.py
> +++ b/mlir/utils/spirv/gen_spirv_dialect.py
> @@ -255,7 +255,7 @@ def get_availability_spec(enum_case,
> capability_mapping, for_op, for_cap):
>      min_version = ''
>    elif min_version:
>      min_version = 'MinVersion<SPV_V_{}>'.format(min_version.replace('.',
> '_'))
> -  # TODO(antiagainst): delete this once ODS can support dialect-specific
> content
> +  # TODO: delete this once ODS can support dialect-specific content
>    # and we can use omission to mean no requirements.
>    if for_op and not min_version:
>      min_version = DEFAULT_MIN_VERSION
> @@ -263,7 +263,7 @@ def get_availability_spec(enum_case,
> capability_mapping, for_op, for_cap):
>    max_version = enum_case.get('lastVersion', '')
>    if max_version:
>      max_version = 'MaxVersion<SPV_V_{}>'.format(max_version.replace('.',
> '_'))
> -  # TODO(antiagainst): delete this once ODS can support dialect-specific
> content
> +  # TODO: delete this once ODS can support dialect-specific content
>    # and we can use omission to mean no requirements.
>    if for_op and not max_version:
>      max_version = DEFAULT_MAX_VERSION
> @@ -278,7 +278,7 @@ def get_availability_spec(enum_case,
> capability_mapping, for_op, for_cap):
>      # a core symbol since the given version, rather than a minimal version
>      # requirement.
>      min_version = DEFAULT_MIN_VERSION if for_op else ''
> -  # TODO(antiagainst): delete this once ODS can support dialect-specific
> content
> +  # TODO: delete this once ODS can support dialect-specific content
>    # and we can use omission to mean no requirements.
>    if for_op and not exts:
>      exts = DEFAULT_EXT
> @@ -305,7 +305,7 @@ def get_availability_spec(enum_case,
> capability_mapping, for_op, for_cap):
>      else:
>        caps = 'Capability<[{}]>'.format(', '.join(prefixed_caps))
>        implies = ''
> -  # TODO(antiagainst): delete this once ODS can support dialect-specific
> content
> +  # TODO: delete this once ODS can support dialect-specific content
>    # and we can use omission to mean no requirements.
>    if for_op and not caps:
>      caps = DEFAULT_CAP
> @@ -621,7 +621,7 @@ def map_spec_operand_to_ods_argument(operand):
>      else:
>        arg_type = 'Variadic<SPV_Type>'
>    elif kind == 'IdMemorySemantics' or kind == 'IdScope':
> -    # TODO(antiagainst): Need to further constrain 'IdMemorySemantics'
> +    # TODO: Need to further constrain 'IdMemorySemantics'
>      # and 'IdScope' given that they should be generated from OpConstant.
>      assert quantifier == '', ('unexpected to have optional/variadic
> memory '
>                                'semantics or scope <id>')
>
>
>
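For anyone scripting a similar cleanup on another codebase: the one subtle
part is the regex. A greedy pattern like "TODO\(.*\)" will happily swallow a
parenthesized call later on the same line (note the `avail.getMergeCode()`
comment in the SPIRVUtilsGen.cpp hunk above). Below is a minimal sketch in
Python of how to keep the match confined to the tag itself; it is
hypothetical, not the script actually used for this commit.

  #!/usr/bin/env python3
  # Hypothetical sketch of this kind of TODO-tag cleanup; not the actual
  # script used for the commit above.
  #
  #   // TODO(jpienaar): Expand to regions.  ->  // TODO: Expand to regions.
  #   // TODO(b/136262594) Avoid ...         ->  // TODO: Avoid ...
  import re
  import sys

  # Disallow parentheses inside the tag body so a call such as foo() later
  # on the line is never swallowed; use [ \t]* rather than \s* so the
  # trailing newline survives.
  TAG = re.compile(r'\b(TODO|FIXME)\(([^()]*)\):?[ \t]*')

  def clean(line):
      return TAG.sub(r'\1: ', line)

  if __name__ == '__main__':
      for path in sys.argv[1:]:
          with open(path) as f:
              lines = [clean(line) for line in f]
          with open(path, 'w') as f:
              f.writelines(lines)

Even then it is worth eyeballing the resulting diff: multi-line comments and
oddballs like "// TODO (support affine.if (1)?" still need hand edits, as a
few of the hunks above show.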